MicroK8s Multi-Node Installation
Canonical has added high availability to MicroK8s, making it ready for production deployment. Once at least three nodes are deployed, MicroK8s automatically scales the control plane so that the API services run on multiple nodes.
Background
In a typical high-availability (HA) deployment, etcd serves as the key/value database that maintains cluster state. MicroK8s instead uses Dqlite, a distributed, highly available version of SQLite. HA MicroK8s only needs three or more nodes in the cluster; at that point Dqlite automatically becomes highly available. If the cluster has more than three nodes, the additional nodes become standbys for the datastore and are promoted automatically if the datastore loses one of its members. This automatic promotion of standby nodes into the Dqlite quorum makes MicroK8s HA autonomous: quorum is maintained even without any administrative action.
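Once a cluster is running, the HA state can be verified at any time; a minimal check (the field names below are the ones to look for, the values are illustrative):
# On any control-plane node, once three or more non-worker nodes have joined:
microk8s status
# high-availability: yes
# datastore master nodes: <addresses of the three voting nodes>
# datastore standby nodes: <any additional nodes>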
Solution
Creating the instances
Once multipass is installed, create virtual machines to run MicroK8s. At least 4 GB of RAM and 40 GB of storage are recommended per node; here three instances are created (3 GB of RAM each proved sufficient for this test setup):
hanlongjie ~ multipass launch 22.04 --name cloud --cpus 3 --mem 3G --disk 40G
hanlongjie ~ multipass launch 22.04 --name edge1 --cpus 3 --mem 3G --disk 40G
hanlongjie ~ multipass launch 22.04 --name edge2 --cpus 3 --mem 3G --disk 40G
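Before continuing, it can help to confirm the instances are up and note their IPv4 addresses, which are needed later for the hosts file; a quick check from the host (output format is approximate):
multipass list
# Name     State    IPv4            Image
# cloud    Running  192.168.64.21   Ubuntu 22.04 LTS
# edge1    Running  192.168.64.22   Ubuntu 22.04 LTS
# edge2    Running  192.168.64.23   Ubuntu 22.04 LTS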
Change the root password:
ubuntu@cloud:~$ sudo passwd
New password:
Retype new password:
passwd: password updated successfully
ubuntu@cloud:~$ su root
Password:
root@cloud:/home/ubuntu#
Back up and switch the apt sources to a domestic mirror (Huawei Cloud here), then update:
root@cloud:/home/ubuntu# sudo cp -a /etc/apt/sources.list /etc/apt/sources.list.bak
root@cloud:/home/ubuntu# sudo sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list
root@cloud:/home/ubuntu# sudo sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list
root@cloud:/home/ubuntu# sudo apt-get -y update
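The password and mirror steps above have to be repeated on edge1 and edge2 as well. One way to avoid shelling into each VM by hand is to drive all three from the host with multipass exec; a sketch:
# Apply the same mirror switch and update on every instance from the host.
for vm in cloud edge1 edge2; do
  multipass exec "$vm" -- sudo sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list
  multipass exec "$vm" -- sudo sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list
  multipass exec "$vm" -- sudo apt-get -y update
done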
Available versions
hanlongjie ~ multipass shell cloud
ubuntu@cloud:~$
ubuntu@cloud:~$ snap info microk8s
name: microk8s
summary: Kubernetes for workstations and appliances
publisher: Canonical✓
store-url: https://snapcraft.io/microk8s
contact: https://github.com/ubuntu/microk8s
license: Apache-2.0
description: |
MicroK8s is a small, fast, secure, single node Kubernetes that installs on
just about any Linux box. Use it for offline development, prototyping,
testing, or use it on a VM as a small, cheap, reliable k8s for CI/CD. It's
also a great k8s for appliances - develop your IoT apps for k8s and deploy
them to MicroK8s on your boxes.
snap-id: EaXqgt1lyCaxKaQCU349mlodBkDCXRcg
channels:
1.24/stable: v1.24.0 2022-05-13 (3272) 230MB classic
1.24/candidate: v1.24.0 2022-05-13 (3272) 230MB classic
1.24/beta: v1.24.0 2022-05-13 (3272) 230MB classic
1.24/edge: v1.24.1 2022-05-26 (3349) 230MB classic
latest/stable: v1.24.0 2022-05-13 (3272) 230MB classic
latest/candidate: v1.24.0 2022-05-13 (3273) 230MB classic
latest/beta: v1.24.0 2022-05-13 (3273) 230MB classic
latest/edge: v1.24.1 2022-05-27 (3360) 230MB classic
dqlite/stable: –
dqlite/candidate: –
dqlite/beta: –
dqlite/edge: v1.16.2 2019-11-07 (1038) 189MB classic
1.23/stable: v1.23.6 2022-04-29 (3204) 218MB classic
1.23/candidate: v1.23.6 2022-04-28 (3204) 218MB classic
1.23/beta: v1.23.6 2022-04-28 (3204) 218MB classic
1.23/edge: v1.23.7 2022-05-26 (3335) 218MB classic
1.22/stable: v1.22.9 2022-05-06 (3203) 193MB classic
1.22/candidate: v1.22.9 2022-04-28 (3203) 193MB classic
1.22/beta: v1.22.9 2022-04-28 (3203) 193MB classic
1.22/edge: v1.22.10 2022-05-26 (3331) 193MB classic
1.21/stable: v1.21.12 2022-05-06 (3202) 191MB classic
1.21/candidate: v1.21.12 2022-04-29 (3202) 191MB classic
1.21/beta: v1.21.12 2022-04-29 (3202) 191MB classic
1.21/edge: v1.21.13 2022-05-25 (3297) 191MB classic
1.20/stable: v1.20.13 2021-12-08 (2760) 221MB classic
1.20/candidate: v1.20.13 2021-12-07 (2760) 221MB classic
1.20/beta: v1.20.13 2021-12-07 (2760) 221MB classic
1.20/edge: v1.20.14 2022-01-11 (2843) 217MB classic
1.19/stable: v1.19.15 2021-09-30 (2530) 216MB classic
1.19/candidate: v1.19.15 2021-09-29 (2530) 216MB classic
1.19/beta: v1.19.15 2021-09-29 (2530) 216MB classic
1.19/edge: v1.19.16 2022-01-07 (2820) 212MB classic
1.18/stable: v1.18.20 2021-07-12 (2271) 198MB classic
1.18/candidate: v1.18.20 2021-07-12 (2271) 198MB classic
1.18/beta: v1.18.20 2021-07-12 (2271) 198MB classic
1.18/edge: v1.18.20 2021-11-03 (2647) 198MB classic
1.17/stable: v1.17.17 2021-01-15 (1916) 177MB classic
1.17/candidate: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/beta: v1.17.17 2021-01-14 (1916) 177MB classic
1.17/edge: v1.17.17 2021-01-13 (1916) 177MB classic
1.16/stable: v1.16.15 2020-09-12 (1671) 179MB classic
1.16/candidate: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/beta: v1.16.15 2020-09-04 (1671) 179MB classic
1.16/edge: v1.16.15 2020-09-02 (1671) 179MB classic
1.15/stable: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/candidate: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/beta: v1.15.11 2020-03-27 (1301) 171MB classic
1.15/edge: v1.15.11 2020-03-26 (1301) 171MB classic
1.14/stable: v1.14.10 2020-01-06 (1120) 217MB classic
1.14/candidate: ↑
1.14/beta: ↑
1.14/edge: v1.14.10 2020-03-26 (1303) 217MB classic
1.13/stable: v1.13.6 2019-06-06 (581) 237MB classic
1.13/candidate: ↑
1.13/beta: ↑
1.13/edge: ↑
1.12/stable: v1.12.9 2019-06-06 (612) 259MB classic
1.12/candidate: ↑
1.12/beta: ↑
1.12/edge: ↑
1.11/stable: v1.11.10 2019-05-10 (557) 258MB classic
1.11/candidate: ↑
1.11/beta: ↑
1.11/edge: ↑
1.10/stable: v1.10.13 2019-04-22 (546) 222MB classic
1.10/candidate: ↑
1.10/beta: ↑
1.10/edge: ↑
ubuntu@cloud:~$
Installing the chosen version
ubuntu@cloud:~$ sudo snap install microk8s --classic --channel=1.24/stable
microk8s (1.24/stable) v1.24.0 from Canonical✓ installed
ubuntu@cloud:~$
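Before configuring anything it is worth blocking until the snap's services are actually up; MicroK8s provides a flag for this:
# Wait until MicroK8s reports it is running before issuing further commands.
microk8s status --wait-ready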
Simplifying commands
ubuntu@cloud:~$ sudo snap alias microk8s.kubectl kubectl
Added:
- microk8s.kubectl as kubectl
ubuntu@cloud:~$ sudo usermod -a -G microk8s ubuntu
ubuntu@cloud:~$ sudo chown -f -R ubuntu ~/.kube
ubuntu@cloud:~$ newgrp microk8s
ubuntu@cloud:~$ kubectl version
WARNING: This version information is deprecated and will be replaced with the output from kubectl version --short. Use --output=yaml|json to get the full version.
Client Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-2+59bbb3530b6769", GitCommit:"59bbb3530b6769e4935a05ac0e13c9910c79253e", GitTreeState:"clean", BuildDate:"2022-05-13T06:43:45Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v4.5.4
Server Version: version.Info{Major:"1", Minor:"24+", GitVersion:"v1.24.0-2+59bbb3530b6769", GitCommit:"59bbb3530b6769e4935a05ac0e13c9910c79253e", GitTreeState:"clean", BuildDate:"2022-05-13T06:41:13Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@cloud:~$
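If a standalone kubectl is preferred (for example on the host machine), MicroK8s can export a kubeconfig; a minimal sketch:
# Write the cluster's kubeconfig where a stock kubectl will find it.
mkdir -p ~/.kube
microk8s config > ~/.kube/config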
Checking the environment
ubuntu@cloud:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cloud NotReady <none> 67s v1.24.0-2+59bbb3530b6769
ubuntu@cloud:~$
The node status shows NotReady. Inspect the node details:
ubuntu@cloud:~$ kubectl describe node cloud
Name: cloud
Roles: <none>
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=cloud
kubernetes.io/os=linux
microk8s.io/cluster=true
node.kubernetes.io/microk8s-controlplane=microk8s-controlplane
Annotations: node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sun, 29 May 2022 16:53:15 +0800
Taints: node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Lease:
HolderIdentity: cloud
AcquireTime: <unset>
RenewTime: Sun, 29 May 2022 16:55:18 +0800
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Sun, 29 May 2022 16:53:25 +0800 Sun, 29 May 2022 16:53:15 +0800 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sun, 29 May 2022 16:53:25 +0800 Sun, 29 May 2022 16:53:15 +0800 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sun, 29 May 2022 16:53:25 +0800 Sun, 29 May 2022 16:53:15 +0800 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Sun, 29 May 2022 16:53:25 +0800 Sun, 29 May 2022 16:53:15 +0800 KubeletNotReady container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Addresses:
InternalIP: 192.168.64.21
Hostname: cloud
Capacity:
cpu: 3
ephemeral-storage: 40470732Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 3053892Ki
pods: 110
Allocatable:
cpu: 3
ephemeral-storage: 39422156Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 2951492Ki
pods: 110
System Info:
Machine ID: d9b10d26f12a4557942a9b68988d1ae8
System UUID: a1233bfd-0000-0000-bb96-dd576b65bbea
Boot ID: 3e4407bc-fb8b-4f02-a74f-7343e4b817df
Kernel Version: 5.15.0-27-generic
OS Image: Ubuntu 22.04 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: containerd://1.5.11
Kubelet Version: v1.24.0-2+59bbb3530b6769
Kube-Proxy Version: v1.24.0-2+59bbb3530b6769
Non-terminated Pods: (1 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system calico-node-pwrwx 250m (8%) 0 (0%) 0 (0%) 0 (0%) 2m
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 250m (8%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Starting 2m3s kube-proxy
Normal NodeAllocatableEnforced 2m5s kubelet Updated Node Allocatable limit across pods
Warning InvalidDiskCapacity 2m5s kubelet invalid capacity 0 on image filesystem
Normal NodeHasSufficientMemory 2m5s (x2 over 2m5s) kubelet Node cloud status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m5s (x2 over 2m5s) kubelet Node cloud status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m5s (x2 over 2m5s) kubelet Node cloud status is now: NodeHasSufficientPID
Normal Starting 2m5s kubelet Starting kubelet.
Normal RegisteredNode 2m node-controller Node cloud event: Registered Node cloud in Controller
ubuntu@cloud:~$
The output above pinpoints the cause. It can be ignored for now; it will be resolved later:
container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized
Installing dependencies
Installing Docker
ubuntu@cloud:~$ sudo snap install docker
docker 20.10.14 from Canonical✓ installed
ubuntu@cloud:~$
Installing pullk8s
ubuntu@cloud:~$ git clone https://github.com/OpsDocker/pullk8s.git
Cloning into 'pullk8s'...
remote: Enumerating objects: 11, done.
remote: Counting objects: 100% (11/11), done.
remote: Compressing objects: 100% (10/10), done.
remote: Total 11 (delta 1), reused 3 (delta 0), pack-reused 0
Receiving objects: 100% (11/11), 10.49 KiB | 5.24 MiB/s, done.
Resolving deltas: 100% (1/1), done.
ubuntu@cloud:~$ cd pullk8s/
ubuntu@cloud:~/pullk8s$ chmod +x pullk8s.sh
ubuntu@cloud:~/pullk8s$ sudo cp pullk8s.sh /usr/local/bin/pullk8s
Pulling the dependency images
ubuntu@cloud:~/$ sudo pullk8s pull k8s.gcr.io/pause:3.1
ubuntu@cloud:~/$ sudo pullk8s pull coredns/coredns:1.9.0
ubuntu@cloud:~/$ sudo pullk8s pull k8s.gcr.io/metrics-server/metrics-server:v0.5.2
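Note that pullk8s pulls these images into Docker's local store, while MicroK8s runs its own containerd with a separate image store, so Docker-side images are not automatically visible to the cluster (this matters again below). If a node cannot reach the upstream registry, an image pulled through Docker can be sideloaded; a sketch:
# Export the image from Docker, then import it into MicroK8s' containerd.
sudo docker save k8s.gcr.io/pause:3.1 -o pause.tar
sudo microk8s ctr image import pause.tar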
Checking status
The status output shows that the ha-cluster addon is enabled by default.
ubuntu@cloud:~$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
enabled:
ha-cluster # (core) Configure high availability on the current node
disabled:
community # (core) The community addons repository
dashboard # (core) The Kubernetes dashboard
dns # (core) CoreDNS
gpu # (core) Automatic enablement of Nvidia CUDA
helm # (core) Helm 2 - the package manager for Kubernetes
helm3 # (core) Helm 3 - Kubernetes package manager
host-access # (core) Allow Pods connecting to Host services smoothly
hostpath-storage # (core) Storage class; allocates storage from host directory
ingress # (core) Ingress controller for external access
mayastor # (core) OpenEBS MayaStor
metallb # (core) Loadbalancer for your Kubernetes cluster
metrics-server # (core) K8s Metrics Server for API access to service metrics
prometheus # (core) Prometheus operator for monitoring and logging
rbac # (core) Role-Based Access Control for authorisation
registry # (core) Private image registry exposed on localhost:32000
storage # (core) Alias to hostpath-storage add-on, deprecated
ubuntu@cloud:~$
Joining nodes
Initialize the worker nodes edge1 and edge2 by repeating the initialization steps performed on the primary node.
Edit the hosts file on each machine and add the following entries (a one-liner for this follows the list):
192.168.64.21 cloud
192.168.64.22 edge1
192.168.64.23 edge2
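A minimal way to append the entries on a node, assuming they are not already present:
# Append the cluster's name/IP mappings to /etc/hosts.
cat <<'EOF' | sudo tee -a /etc/hosts
192.168.64.21 cloud
192.168.64.22 edge1
192.168.64.23 edge2
EOF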
Primary node
ubuntu@cloud:~$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.64.21:25000/bd84d326e519ddd0e80f42cefa7167e3/83cdfc54a8c9
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.64.21:25000/bd84d326e519ddd0e80f42cefa7167e3/83cdfc54a8c9 --worker
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.64.21:25000/bd84d326e519ddd0e80f42cefa7167e3/83cdfc54a8c9
microk8s join 172.17.0.1:25000/bd84d326e519ddd0e80f42cefa7167e3/83cdfc54a8c9
ubuntu@cloud:~$
Worker node edge1 (the token differs from the one printed above because each add-node token is single-use; microk8s add-node is run on the primary once per joining node):
ubuntu@edge1:~/pullk8s$ microk8s join 192.168.64.21:25000/e198c4c09fcfac48add57ddc2271e4c6/83cdfc54a8c9 --worker
Contacting cluster at 192.168.64.21
The node has joined the cluster and will appear in the nodes list in a few seconds.
Currently this worker node is configured with the following kubernetes API server endpoints:
- 192.168.64.21 and port 16443, this is the cluster node contacted during the join operation.
If the above endpoints are incorrect, incomplete or if the API servers are behind a loadbalancer please update
/var/snap/microk8s/current/args/traefik/provider.yaml
ubuntu@edge1:~/pullk8s$
Worker node edge2:
ubuntu@edge2:~/pullk8s$ microk8s join 192.168.64.21:25000/1c31ca4867e7c34c30c62d052740deaa/83cdfc54a8c9 --worker
Contacting cluster at 192.168.64.21
The node has joined the cluster and will appear in the nodes list in a few seconds.
Currently this worker node is configured with the following kubernetes API server endpoints:
- 192.168.64.21 and port 16443, this is the cluster node contacted during the join operation.
If the above endpoints are incorrect, incomplete or if the API servers are behind a loadbalancer please update
/var/snap/microk8s/current/args/traefik/provider.yaml
ubuntu@edge2:~/pullk8s$
Checking the node status
ubuntu@cloud:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
cloud Ready <none> 43m v1.24.0-2+59bbb3530b6769
edge2 NotReady <none> 77s v1.24.0-2+59bbb3530b6769
edge1 NotReady <none> 16m v1.24.0-2+59bbb3530b6769
ubuntu@cloud:~$
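A caveat on HA: both nodes joined with --worker, so they do not run the control plane and do not count toward the Dqlite quorum, and microk8s status will keep reporting high-availability: no. To actually reach the HA state described in the background section, join at least two more nodes without the --worker flag; a sketch (the token is illustrative):
microk8s add-node                            # on the primary: prints a fresh join command
microk8s join 192.168.64.21:25000/<token>    # on the new node: no --worker flag
microk8s status                              # high-availability flips to yes with 3+ voting nodes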
Enabling addons
Enable the addons on the primary node:
ubuntu@cloud:~$ microk8s.enable dns dashboard rbac
Infer repository core for addon dns
Infer repository core for addon dashboard
Enabling DNS
Applying manifest
serviceaccount/coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
clusterrole.rbac.authorization.k8s.io/coredns created
clusterrolebinding.rbac.authorization.k8s.io/coredns created
Restarting kubelet
Adding argument --cluster-domain to nodes.
Configuring node 192.168.64.23
Configuring node 192.168.64.22
Adding argument --cluster-dns to nodes.
Configuring node 192.168.64.23
Configuring node 192.168.64.22
Restarting nodes.
Configuring node 192.168.64.23
Configuring node 192.168.64.22
DNS is enabled
Enabling Kubernetes Dashboard
Infer repository core for addon metrics-server
Enabling Metrics-Server
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
clusterrolebinding.rbac.authorization.k8s.io/microk8s-admin created
Adding argument --authentication-token-webhook to nodes.
Configuring node 192.168.64.23
Configuring node 192.168.64.22
Restarting nodes.
Configuring node 192.168.64.23
Configuring node 192.168.64.22
Metrics-Server is enabled
Applying manifest
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
If RBAC is not enabled access the dashboard using the default token retrieved with:
token=$(microk8s kubectl -n kube-system get secret | grep default-token | cut -d " " -f1)
microk8s kubectl -n kube-system describe secret $token
In an RBAC enabled setup (microk8s enable RBAC) you need to create a user with restricted
permissions as shown in:
https://github.com/kubernetes/dashboard/blob/master/docs/user/access-control/creating-sample-user.md
ubuntu@cloud:~$
Now check the pods again; the calico pods on the two workers are stuck in Init, so describe one of them:
ubuntu@cloud:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-node-kk5bx 0/1 Init:0/3 0 28m 192.168.64.22 edge1 <none> <none>
kube-system calico-node-89r8m 0/1 Init:0/3 0 13m 192.168.64.23 edge2 <none> <none>
kube-system calico-node-8hhgf 1/1 Running 0 28m 192.168.64.21 cloud <none> <none>
kube-system calico-kube-controllers-9c6566474-h24gd 1/1 Running 0 56m 10.1.41.1 cloud <none> <none>
kube-system coredns-66bcf65bb8-hs8hn 1/1 Running 0 10m 10.1.41.2 cloud <none> <none>
kube-system kubernetes-dashboard-765646474b-jkxjv 1/1 Running 0 9m9s 10.1.41.3 cloud <none> <none>
kube-system metrics-server-5f8f64cb86-5mhl6 1/1 Running 0 9m9s 10.1.41.5 cloud <none> <none>
kube-system dashboard-metrics-scraper-6b6f796c8d-wfdgc 1/1 Running 0 9m9s 10.1.41.4 cloud <none> <none>
ubuntu@cloud:~$ kubectl describe pod calico-node-kk5bx -n kube-system
Name: calico-node-kk5bx
Namespace: kube-system
Priority: 2000001000
Priority Class Name: system-node-critical
Node: edge1/192.168.64.22
Start Time: Sun, 29 May 2022 17:20:50 +0800
Labels: controller-revision-hash=67ccbf755f
k8s-app=calico-node
pod-template-generation=3
Annotations: kubectl.kubernetes.io/restartedAt: 2022-05-29T16:53:16+08:00
Status: Pending
IP: 192.168.64.22
IPs:
IP: 192.168.64.22
Controlled By: DaemonSet/calico-node
Init Containers:
upgrade-ipam:
Container ID:
Image: docker.io/calico/cni:v3.21.4
Image ID:
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/calico-ipam
-upgrade
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
Mounts:
/host/opt/cni/bin from cni-bin-dir (rw)
/var/lib/cni/networks from host-local-net-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxspd (ro)
install-cni:
Container ID:
Image: docker.io/calico/cni:v3.21.4
Image ID:
Port: <none>
Host Port: <none>
Command:
/opt/cni/bin/install
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
CNI_CONF_NAME: 10-calico.conflist
CNI_NETWORK_CONFIG: <set to the key 'cni_network_config' of config map 'calico-config'> Optional: false
KUBERNETES_NODE_NAME: (v1:spec.nodeName)
CNI_MTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
SLEEP: false
CNI_NET_DIR: /var/snap/microk8s/current/args/cni-network
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/host/opt/cni/bin from cni-bin-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxspd (ro)
flexvol-driver:
Container ID:
Image: docker.io/calico/pod2daemon-flexvol:v3.21.4
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/host/driver from flexvol-driver-host (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxspd (ro)
Containers:
calico-node:
Container ID:
Image: docker.io/calico/node:v3.21.4
Image ID:
Port: <none>
Host Port: <none>
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Requests:
cpu: 250m
Liveness: exec [/bin/calico-node -felix-live] delay=10s timeout=10s period=10s #success=1 #failure=6
Readiness: exec [/bin/calico-node -felix-ready] delay=0s timeout=10s period=10s #success=1 #failure=3
Environment Variables from:
kubernetes-services-endpoint ConfigMap Optional: true
Environment:
DATASTORE_TYPE: kubernetes
WAIT_FOR_DATASTORE: true
NODENAME: (v1:spec.nodeName)
CALICO_NETWORKING_BACKEND: <set to the key 'calico_backend' of config map 'calico-config'> Optional: false
CLUSTER_TYPE: k8s,bgp
IP: autodetect
IP_AUTODETECTION_METHOD: can-reach=192.168.64.22
CALICO_IPV4POOL_VXLAN: Always
FELIX_IPINIPMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_VXLANMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
FELIX_WIREGUARDMTU: <set to the key 'veth_mtu' of config map 'calico-config'> Optional: false
CALICO_IPV4POOL_CIDR: 10.1.0.0/16
CALICO_DISABLE_FILE_LOGGING: true
FELIX_DEFAULTENDPOINTTOHOSTACTION: ACCEPT
FELIX_IPV6SUPPORT: false
FELIX_LOGSEVERITYSCREEN: error
FELIX_HEALTHENABLED: true
Mounts:
/host/etc/cni/net.d from cni-net-dir (rw)
/lib/modules from lib-modules (ro)
/run/xtables.lock from xtables-lock (rw)
/var/lib/calico from var-lib-calico (rw)
/var/log/calico/cni from cni-log-dir (ro)
/var/run/calico from var-run-calico (rw)
/var/run/nodeagent from policysync (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wxspd (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
lib-modules:
Type: HostPath (bare host directory volume)
Path: /lib/modules
HostPathType:
var-run-calico:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/var/run/calico
HostPathType:
var-lib-calico:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/var/lib/calico
HostPathType:
xtables-lock:
Type: HostPath (bare host directory volume)
Path: /run/xtables.lock
HostPathType: FileOrCreate
sysfs:
Type: HostPath (bare host directory volume)
Path: /sys/fs/
HostPathType: DirectoryOrCreate
cni-bin-dir:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/opt/cni/bin
HostPathType:
cni-net-dir:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/args/cni-network
HostPathType:
cni-log-dir:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/common/var/log/calico/cni
HostPathType:
host-local-net-dir:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/var/lib/cni/networks
HostPathType:
policysync:
Type: HostPath (bare host directory volume)
Path: /var/snap/microk8s/current/var/run/nodeagent
HostPathType: DirectoryOrCreate
flexvol-driver-host:
Type: HostPath (bare host directory volume)
Path: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds
HostPathType: DirectoryOrCreate
kube-api-access-wxspd:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: Burstable
Node-Selectors: kubernetes.io/os=linux
Tolerations: :NoSchedule op=Exists
:NoExecute op=Exists
CriticalAddonsOnly op=Exists
node.kubernetes.io/disk-pressure:NoSchedule op=Exists
node.kubernetes.io/memory-pressure:NoSchedule op=Exists
node.kubernetes.io/network-unavailable:NoSchedule op=Exists
node.kubernetes.io/not-ready:NoExecute op=Exists
node.kubernetes.io/pid-pressure:NoSchedule op=Exists
node.kubernetes.io/unreachable:NoExecute op=Exists
node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 108s (x11 over 9m5s) kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to pull and unpack image "k8s.gcr.io/pause:3.1": failed to resolve reference "k8s.gcr.io/pause:3.1": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.1": dial tcp 64.233.189.82:443: i/o timeout
Warning FailedCreatePodSandBox 83s kubelet Failed to create pod sandbox: rpc error: code = Unavailable desc = error reading from server: EOF
Warning FailedCreatePodSandBox 28s kubelet Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "k8s.gcr.io/pause:3.1": failed to pull image "k8s.gcr.io/pause:3.1": failed to pull and unpack image "k8s.gcr.io/pause:3.1": failed to resolve reference "k8s.gcr.io/pause:3.1": failed to do request: Head "https://k8s.gcr.io/v2/pause/manifests/3.1": dial tcp 64.233.189.82:443: i/o timeout
The worker nodes fail to pull the sandbox image k8s.gcr.io/pause:3.1. Checking with docker images shows the image does exist locally, but only in Docker's store; MicroK8s' containerd keeps a separate image store and still tries (and fails) to reach k8s.gcr.io:
ubuntu@edge1:~$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
coredns/coredns 1.9.0 0857bcbd38c9 3 months ago 49.5MB
k8s.gcr.io/metrics-server/metrics-server v0.5.2 f73640fb5061 6 months ago 64.3MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 4 years ago 742kB
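The mismatch is visible from containerd's side: MicroK8s' own image store does not have the pause image even though Docker does:
# List the images MicroK8s' containerd actually holds.
sudo microk8s ctr images ls -q | grep pause || echo "pause not present in containerd"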
On both worker nodes, edit /var/snap/microk8s/current/args/containerd-template.toml; the sandbox image is set in the CRI section of this file, which initially looks like this:
# The 'plugins."io.containerd.grpc.v1.cri"' table contains all of the server options.
[plugins."io.containerd.grpc.v1.cri"]

  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_selinux = false
  sandbox_image = "k8s.gcr.io/pause:3.1"
  stats_collect_period = 10
On the Alibaba Cloud container image registry, search for pause and change sandbox_image to the mirrored copy:
# The 'plugins."io.containerd.grpc.v1.cri"' table contains all of the server options.
[plugins."io.containerd.grpc.v1.cri"]

  stream_server_address = "127.0.0.1"
  stream_server_port = "0"
  enable_selinux = false
  sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
  stats_collect_period = 10
Then restart MicroK8s on both worker nodes so the change takes effect:
ubuntu@edge2:~$ sudo snap stop microk8s
Stopped.
ubuntu@edge2:~$ sudo snap start microk8s
Started.
ubuntu@edge2:~$
Checking again, all nodes and pods are now healthy:
ubuntu@cloud:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
edge2 Ready <none> 27m v1.24.0-2+59bbb3530b6769
edge1 Ready <none> 42m v1.24.0-2+59bbb3530b6769
cloud Ready <none> 70m v1.24.0-2+59bbb3530b6769
ubuntu@cloud:~$ kubectl get pods -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system kubernetes-dashboard-765646474b-jkxjv 1/1 Running 0 22m 10.1.41.3 cloud <none> <none>
kube-system dashboard-metrics-scraper-6b6f796c8d-wfdgc 1/1 Running 0 22m 10.1.41.4 cloud <none> <none>
kube-system calico-node-kk5bx 1/1 Running 0 42m 192.168.64.22 edge1 <none> <none>
kube-system metrics-server-5f8f64cb86-5mhl6 1/1 Running 0 22m 10.1.41.5 cloud <none> <none>
kube-system coredns-66bcf65bb8-hs8hn 1/1 Running 0 24m 10.1.41.2 cloud <none> <none>
kube-system calico-node-8hhgf 1/1 Running 0 42m 192.168.64.21 cloud <none> <none>
kube-system calico-kube-controllers-9c6566474-h24gd 1/1 Running 0 69m 10.1.41.1 cloud <none> <none>
kube-system calico-node-89r8m 1/1 Running 0 27m 192.168.64.23 edge2 <none> <none>
ubuntu@cloud:~$
Basic usage
ubuntu@cloud:~$ kubectl run nginx --image=nginx
pod/nginx created
ubuntu@cloud:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 0/1 ContainerCreating 0 6s
ubuntu@cloud:~$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 28s
ubuntu@cloud:~$
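To confirm the pod is reachable inside the cluster, it can be exposed as a ClusterIP service and queried; a minimal sketch (the service name is our choice):
# Expose the nginx pod inside the cluster and fetch its default page.
kubectl expose pod nginx --port=80 --name=nginx
kubectl get svc nginx     # note the CLUSTER-IP column
curl http://$(kubectl get svc nginx -o jsonpath='{.spec.clusterIP}')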
Accessing the dashboard
Network address
Check the machine's network address with the ip command:
ubuntu@cloud:~$ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp0s2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 16:e6:8b:f4:0d:7d brd ff:ff:ff:ff:ff:ff
inet 192.168.64.21/24 metric 100 brd 192.168.64.255 scope global dynamic enp0s2
valid_lft 80415sec preferred_lft 80415sec
inet6 fe80::14e6:8bff:fef4:d7d/64 scope link
valid_lft forever preferred_lft forever
The current machine's address is 192.168.64.21.
Starting the proxy
Enable dashboard access with the microk8s dashboard-proxy command:
ubuntu@cloud:~$ microk8s dashboard-proxy
Checking if Dashboard is running.
Infer repository core for addon dashboard
Waiting for Dashboard to come up.
Create token for accessing the dashboard
secret/microk8s-dashboard-proxy-token created
Waiting for secret token (attempt 0)
Waiting for secret token (attempt 1)
Waiting for secret token (attempt 2)
Waiting for secret token (attempt 3)
Waiting for secret token (attempt 4)
Dashboard will be available at https://127.0.0.1:10443
Use the following token to login:
eyJhbGciOiJSUzI1NiIsImtpZCI6IlhPUDN3S3FtR00wQXgxMllfME1Yc2hNN29ySUhCTjlZNExzMnJHb1NobEkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJtaWNyb2s4cy1kYXNoYm9hcmQtcHJveHktdG9rZW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGVmYXVsdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6Ijg3MWFkZTIyLWUyMjMtNDc1My1hMWY3LWU4NDNmYzhhMmYxOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTpkZWZhdWx0In0.cC-rw672OeIq9qW5Fw3dg3ydV4klBcHHc0qLUNSA5SfKPqcEDwPJhPjXY-6BxNMpLyrQDNT6Nqa4mkGzBtUOAD2rPkRa6_UF0q6t4T_wd1jt9weP-VM_uBo-lh2d4GK5br36DtvV-RQWDScQ3OFoP9y7iOJDz63FnnZ760keVcxOc694VDXXcUo_zHr5BslutlSf5PurN86JnQvRuAsS61sPYYu4S254RfJGxtsLyRrJntfKWZYC6qD4DqTCMTeRUFfkJHm2tsLmgcOmrkQYucQEU150DNwg9oyqZUaV7Z8UOBDVQvVsIlw0lguGntocV8stIHnRci27Ov4T3Ewq9A
Open a browser and visit https://192.168.64.21:10443.
Chrome will warn about the self-signed certificate; with the warning page focused, type the 11 characters thisisunsafe on the keyboard and the page will load.
Enter the token printed above to log in.
Granting permissions
ubuntu@cloud:~$ kubectl create clusterrolebinding kubernetes-dashboard-clusterbingding_kube-system_default --clusterrole=cluster-admin --serviceaccount=kube-system:default
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-clusterbingding_kube-system_default created
ubuntu@cloud:~$
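Granting cluster-admin to the default service account is convenient in a lab but very broad. A slightly cleaner variant, assuming a dedicated account named dashboard-admin (the name is our choice):
# Create a dedicated service account for the dashboard and bind it.
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# kubectl 1.24+ can mint a short-lived login token for it directly:
kubectl -n kube-system create token dashboard-admin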