Single-node k8s deployment
Deploying single-node k8s with kubeadm
Only one machine is needed to deploy a k8s environment.
1. Install docker
The docker environment is already installed; the docker version is 20.10.12:
# docker --version
Docker version 20.10.12, build e91ed57
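Besides docker, kubeadm has a few host prerequisites that this walkthrough assumes are already met: swap disabled and bridged traffic visible to iptables. A minimal prep sketch for CentOS 7 (standard kubeadm prerequisites, not part of the original session):

```shell
# kubelet refuses to run with swap enabled (by default)
swapoff -a
sed -ri 's/^([^#].*\sswap\s)/#\1/' /etc/fstab

# Load br_netfilter and let iptables see bridged traffic (needed by CNI plugins)
modprobe br_netfilter
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system
```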
2. Install kubernetes
1. Create the kubernetes.repo file
# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> # setting gpgcheck=1 causes errors
> gpgcheck=0
> # setting repo_gpgcheck=1 causes verification errors
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF
Note: comments in a yum repo file must sit on their own lines; an inline `# ...` after a value becomes part of the value and triggers the "invalid boolean value" parse error seen in the install output below.
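Since an unpinned `yum install` grabs the newest packages in the repo, it can help to list the available versions first and pin kubelet, kubeadm, and kubectl to the same release:

```shell
# Show the most recent kubeadm builds published in the repo
yum list kubeadm --showduplicates | tail -n 5
# Then install matching versions, e.g.:
# yum install -y kubelet-1.23.4 kubeadm-1.23.4 kubectl-1.23.4
```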
2. Install kubelet, kubeadm, and kubectl
# yum install -y kubelet kubeadm kubectl
Loaded plugins: fastestmirror
Repository 'kubernetes': Error parsing config: Error parsing "repo_gpgcheck = '0 # \xe8\xae\xbe\xe7\xbd\xae1\xe4\xbc\x9a\xe6\xa0\xa1\xe9\xaa\x8c\xe6\x8a\xa5\xe9\x94\x99'": invalid boolean value
Determining fastest mirrors
ai-local | 3.6 kB 00:00:00
foot | 2.9 kB 00:00:00
paas | 2.9 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.5-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.20.5-0.x86_64
--> Processing Dependency: cri-tools >= 1.13.0 for package: kubeadm-1.20.5-0.x86_64
---> Package kubectl.x86_64 0:1.20.5-0 will be installed
---> Package kubelet.x86_64 0:1.20.5-0 will be installed
--> Processing Dependency: socat for package: kubelet-1.20.5-0.x86_64
--> Processing Dependency: conntrack for package: kubelet-1.20.5-0.x86_64
--> Running transaction check
---> Package conntrack-tools.x86_64 0:1.4.4-7.el7 will be installed
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_queue.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cttimeout.so.1()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
--> Processing Dependency: libnetfilter_cthelper.so.0()(64bit) for package: conntrack-tools-1.4.4-7.el7.x86_64
---> Package cri-tools.x86_64 0:1.13.0-0 will be installed
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
---> Package socat.x86_64 0:1.7.3.2-2.el7 will be installed
--> Running transaction check
---> Package libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 will be installed
---> Package libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 will be installed
---> Package libnetfilter_queue.x86_64 0:1.0.2-2.el7_2 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================================
Installing:
kubeadm x86_64 1.20.5-0 paas 8.3 M
kubectl x86_64 1.20.5-0 paas 8.5 M
kubelet x86_64 1.20.5-0 paas 20 M
Installing for dependencies:
conntrack-tools x86_64 1.4.4-7.el7 ai-local 187 k
cri-tools x86_64 1.13.0-0 paas 5.1 M
kubernetes-cni x86_64 0.8.7-0 paas 19 M
libnetfilter_cthelper x86_64 1.0.0-11.el7 ai-local 18 k
libnetfilter_cttimeout x86_64 1.0.0-7.el7 ai-local 18 k
libnetfilter_queue x86_64 1.0.2-2.el7_2 ai-local 23 k
socat x86_64 1.7.3.2-2.el7 ai-local 290 k
Transaction Summary
=========================================================================================================================================================
Install 3 Packages (+7 Dependent packages)
Total download size: 61 M
Installed size: 263 M
Downloading packages:
(1/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm | 5.1 MB 00:00:00
(2/10): c2634321e0d8ebe24ba7c6f025df171f5d1707c75a90e3bdd08199ab47aac565-kubeadm-1.20.5-0.x86_64.rpm | 8.3 MB 00:00:00
(3/10): 8593f28d972a6818131c1a6cd34f52b22a6acd0c4c7dcf3d7447ad53a9f24cc3-kubectl-1.20.5-0.x86_64.rpm | 8.5 MB 00:00:00
(4/10): 356e511f8963b4b68fdf41593e64e92f03f0b58c72aae0613aeff3e770078cf7-kubelet-1.20.5-0.x86_64.rpm | 20 MB 00:00:00
(5/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm | 19 MB 00:00:00
(6/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm | 18 kB 00:00:02
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm | 18 kB 00:00:00
(8/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm | 187 kB 00:00:03
(9/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm | 23 kB 00:00:00
(10/10): socat-1.7.3.2-2.el7.x86_64.rpm | 290 kB 00:00:00
---------------------------------------------------------------------------------------------------------------------------------------------------------
Total 17 MB/s | 61 MB 00:00:03
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : libnetfilter_cthelper-1.0.0-11.el7.x86_64 1/10
Installing : kubectl-1.20.5-0.x86_64 2/10
Installing : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 3/10
Installing : libnetfilter_queue-1.0.2-2.el7_2.x86_64 4/10
Installing : conntrack-tools-1.4.4-7.el7.x86_64 5/10
Installing : cri-tools-1.13.0-0.x86_64 6/10
Installing : socat-1.7.3.2-2.el7.x86_64 7/10
Installing : kubernetes-cni-0.8.7-0.x86_64 8/10
Installing : kubelet-1.20.5-0.x86_64 9/10
Installing : kubeadm-1.20.5-0.x86_64 10/10
Verifying : socat-1.7.3.2-2.el7.x86_64 1/10
Verifying : conntrack-tools-1.4.4-7.el7.x86_64 2/10
Verifying : kubernetes-cni-0.8.7-0.x86_64 3/10
Verifying : kubelet-1.20.5-0.x86_64 4/10
Verifying : cri-tools-1.13.0-0.x86_64 5/10
Verifying : libnetfilter_queue-1.0.2-2.el7_2.x86_64 6/10
Verifying : libnetfilter_cttimeout-1.0.0-7.el7.x86_64 7/10
Verifying : kubectl-1.20.5-0.x86_64 8/10
Verifying : kubeadm-1.20.5-0.x86_64 9/10
Verifying : libnetfilter_cthelper-1.0.0-11.el7.x86_64 10/10
Installed:
kubeadm.x86_64 0:1.20.5-0 kubectl.x86_64 0:1.20.5-0 kubelet.x86_64 0:1.20.5-0
Dependency Installed:
conntrack-tools.x86_64 0:1.4.4-7.el7 cri-tools.x86_64 0:1.13.0-0 kubernetes-cni.x86_64 0:0.8.7-0
libnetfilter_cthelper.x86_64 0:1.0.0-11.el7 libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7 libnetfilter_queue.x86_64 0:1.0.2-2.el7_2
socat.x86_64 0:1.7.3.2-2.el7
Complete!
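The packages only install the kubelet; it still has to be enabled so it survives reboots and is running when kubeadm init starts (until init writes its config, the kubelet restarting in a crash loop is expected):

```shell
# Start kubelet now and on every boot; it will crash-loop until kubeadm init runs
systemctl enable --now kubelet
```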
3. List and pull the required images
# kubeadm config images list
I0608 14:48:32.218259 23578 version.go:254] remote version is much newer: v1.27.2; falling back to: stable-1.20
k8s.gcr.io/kube-apiserver:v1.23.4
k8s.gcr.io/kube-controller-manager:v1.23.4
k8s.gcr.io/kube-scheduler:v1.23.4
k8s.gcr.io/kube-proxy:v1.23.4
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
# docker pull k8s.gcr.io/kube-apiserver:v1.23.4
# docker pull k8s.gcr.io/kube-controller-manager:v1.23.4
# docker pull k8s.gcr.io/kube-scheduler:v1.23.4
# docker pull k8s.gcr.io/kube-proxy:v1.23.4
# docker pull k8s.gcr.io/pause:3.2
# docker pull k8s.gcr.io/etcd:3.4.13-0
# docker pull k8s.gcr.io/coredns:1.7.0
# Pulling directly from k8s.gcr.io fails; the images have to be pulled from the aliyun mirror instead
# kubeadm config images pull --kubernetes-version v1.23.4 --image-repository registry.aliyuncs.com/google_containers
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
# Note that some images (pause, etcd, coredns) were pulled at newer versions than the list above
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.23.4 62930710c963 15 months ago 135MB
registry.aliyuncs.com/google_containers/kube-proxy v1.23.4 2114245ec4d6 15 months ago 112MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.23.4 aceacb6244f9 15 months ago 53.5MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.23.4 25444908517a 15 months ago 125MB
mysql 5.7 c20987f18b13 17 months ago 448MB
registry.aliyuncs.com/google_containers/etcd 3.5.1-0 25f8c7f3da61 19 months ago 293MB
registry.aliyuncs.com/google_containers/coredns v1.8.6 a4ca41631cc7 20 months ago 46.8MB
registry.aliyuncs.com/google_containers/pause 3.6 6270bb605e12 21 months ago 683kB
Error caused by a version mismatch:
# kubeadm init --apiserver-advertise-address=1xxxxxxxxx1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.4 --service-cidr=172.96.0.0/12 --pod-network-cidr=10.244.0.0/16
this version of kubeadm only supports deploying clusters with the control plane version >= 1.26.0. Current version: v1.23.4
To see the stack trace of this error execute with --v=5 or higher
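The message above comes from a kubeadm binary that is newer than the requested control-plane version: the unpinned install pulled 1.27.2, which refuses to deploy anything below 1.26.0. A quick way to confirm the skew before running init:

```shell
# Compare the installed kubeadm binary against the version passed to init
kubeadm version -o short
rpm -q kubeadm kubelet kubectl
```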
4. Remove the mismatched version
# yum remove kubelet.x86_64 kubeadm.x86_64 kubectl.x86_64
Loaded plugins: fastestmirror
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.27.2-0 will be erased
---> Package kubectl.x86_64 0:1.27.2-0 will be erased
---> Package kubelet.x86_64 0:1.27.2-0 will be erased
--> Processing Dependency: kubelet for package: kubernetes-cni-1.2.0-0.x86_64
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:1.2.0-0 will be erased
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================================
Removing:
kubeadm x86_64 1.27.2-0 @kubernetes 46 M
kubectl x86_64 1.27.2-0 @kubernetes 47 M
kubelet x86_64 1.27.2-0 @kubernetes 101 M
Removing for dependencies:
kubernetes-cni x86_64 1.2.0-0 @kubernetes 49 M
Transaction Summary
=========================================================================================================================================================
Remove 3 Packages (+1 Dependent package)
Installed size: 243 M
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : kubeadm-1.27.2-0.x86_64 1/4
Erasing : kubelet-1.27.2-0.x86_64 2/4
Erasing : kubernetes-cni-1.2.0-0.x86_64 3/4
Erasing : kubectl-1.27.2-0.x86_64 4/4
Verifying : kubeadm-1.27.2-0.x86_64 1/4
Verifying : kubectl-1.27.2-0.x86_64 2/4
Verifying : kubernetes-cni-1.2.0-0.x86_64 3/4
Verifying : kubelet-1.27.2-0.x86_64 4/4
Removed:
kubeadm.x86_64 0:1.27.2-0 kubectl.x86_64 0:1.27.2-0 kubelet.x86_64 0:1.27.2-0
Dependency Removed:
kubernetes-cni.x86_64 0:1.2.0-0
Complete!
5. Install the specified version of kubeadm
# yum install -y kubelet-1.23.4 kubeadm-1.23.4 kubectl-1.23.4
Loaded plugins: fastestmirror
Repository epel is listed more than once in the configuration
Repository epel-debuginfo is listed more than once in the configuration
Repository epel-source is listed more than once in the configuration
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* extras: mirrors.aliyun.com
* updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.23.4-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.23.4-0.x86_64
---> Package kubectl.x86_64 0:1.23.4-0 will be installed
---> Package kubelet.x86_64 0:1.23.4-0 will be installed
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:1.2.0-0 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
=========================================================================================================================================================
Package Arch Version Repository Size
=========================================================================================================================================================
Installing:
kubeadm x86_64 1.23.4-0 kubernetes 9.0 M
kubectl x86_64 1.23.4-0 kubernetes 9.5 M
kubelet x86_64 1.23.4-0 kubernetes 21 M
Installing for dependencies:
kubernetes-cni x86_64 1.2.0-0 kubernetes 17 M
Transaction Summary
=========================================================================================================================================================
Install 3 Packages (+1 Dependent package)
Total download size: 56 M
Installed size: 255 M
Downloading packages:
(1/4): c8a17896ac2f24c43770d837f9f751acf161d6c33694b5dad42f5f638c6dd626-kubeadm-1.23.4-0.x86_64.rpm | 9.0 MB 00:00:27
(2/4): ae22dad233f0617861909955e30f527067e6f5535c1d1a9cda7b3a288fe62cd2-kubectl-1.23.4-0.x86_64.rpm | 9.5 MB 00:00:29
(3/4): 0f2a2afd740d476ad77c508847bad1f559afc2425816c1f2ce4432a62dfe0b9d-kubernetes-cni-1.2.0-0.x86_64.rpm | 17 MB 00:00:53
(4/4): 7a0d50ba594f62deddd266db3400d40a3b745be71f10684faa9c16 98% [=================================================== ] 450 kB/s | 55 MB 00:00:02 ETA
(4/4): 7a0d50ba594f62deddd266db3400d40a3b745be71f10684faa9c1632aca50d6b-kubelet-1.23.4-0.x86_64.rpm | 21 MB 00:01:03
---------------------------------------------------------------------------------------------------------------------------------------------------------
Total 628 kB/s | 56 MB 00:01:31
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : kubelet-1.23.4-0.x86_64 1/4
Installing : kubernetes-cni-1.2.0-0.x86_64 2/4
Installing : kubectl-1.23.4-0.x86_64 3/4
Installing : kubeadm-1.23.4-0.x86_64 4/4
Verifying : kubeadm-1.23.4-0.x86_64 1/4
Verifying : kubernetes-cni-1.2.0-0.x86_64 2/4
Verifying : kubelet-1.23.4-0.x86_64 3/4
Verifying : kubectl-1.23.4-0.x86_64 4/4
Installed:
kubeadm.x86_64 0:1.23.4-0 kubectl.x86_64 0:1.23.4-0 kubelet.x86_64 0:1.23.4-0
Dependency Installed:
kubernetes-cni.x86_64 0:1.2.0-0
Complete!
6. Check the current version
# kubeadm config images list
W0608 16:45:43.400361 6901 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
registry.k8s.io/kube-apiserver:v1.27.2
registry.k8s.io/kube-controller-manager:v1.27.2
registry.k8s.io/kube-scheduler:v1.27.2
registry.k8s.io/kube-proxy:v1.27.2
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.7-0
registry.k8s.io/coredns/coredns:v1.10.1
7. kubeadm init
# kubeadm init --apiserver-advertise-address=10.xxxxxxxx1 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.23.4 --service-cidr=172.96.0.0/12 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.23.4
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [host-10-19-83-151 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.96.0.1 10xxxxxxxx1 ]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [host-10xxxxxxxx1 localhost] and IPs [10xxxxxxxx1 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [host-10xxxxxxxx1 localhost] and IPs [10xxxxxxxx1 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 29.005397 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node host-10-19-83-151 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node host-10-19-83-151 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: am8iii.gt090u508378iq3k
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10xxxxxxxx1 :6443 --token am8iii.gt090uww3k \
--discovery-token-ca-cert-hash sha256:b98810xxxxxxxx1 5405c93063c5a0deaedb86e5
[root@host-10xxxxxxxx1 data]#
8. Check the current node
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
host-10-19-83-151 NotReady control-plane,master 28m v1.23.4
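The NotReady reason can be confirmed from the node conditions and the kubelet log; with no network plugin the kubelet typically reports that the CNI plugin is not initialized. A quick check (hostname taken from the output above):

```shell
# Show the node conditions; the Ready condition explains the NotReady state
kubectl describe node host-10-19-83-151 | grep -A 8 'Conditions:'
# The kubelet log usually shows "cni plugin not initialized"
journalctl -u kubelet --no-pager | tail -n 20
```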
9. The node is NotReady because the flannel network plugin has not been deployed yet
# kubectl get ns
NAME STATUS AGE
default Active 38m
kube-node-lease Active 38m
kube-public Active 38m
kube-system Active 38m
# kubectl create ns kube-flannel
namespace/kube-flannel created
# kubectl get ns
NAME STATUS AGE
default Active 39m
kube-flannel Active 3s
kube-node-lease Active 39m
kube-public Active 39m
kube-system Active 39m
# kubectl apply -f kube-flannel.yml -n kube-flannel
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
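After applying the manifest, it is worth waiting for the DaemonSet to roll out before re-checking the node status:

```shell
# Wait for flannel to become ready on every node, then re-check the node
kubectl -n kube-flannel rollout status daemonset/kube-flannel-ds
kubectl get pods -n kube-flannel -o wide
kubectl get node
```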
10. kube-flannel.yml (version: 0.19.2)
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
11. The single node is now in a normal (Ready) state
# kubectl get node
NAME STATUS ROLES AGE VERSION
host-10-19-83-151 Ready control-plane,master 3d21h v1.23.4
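For a true single-machine setup, one more step is usually needed: the control-plane taint applied during init (see the mark-control-plane step in the init output) keeps ordinary pods from being scheduled on this node. Removing it on v1.23:

```shell
# Allow regular workloads on the (only) node; v1.23 still applies the
# node-role.kubernetes.io/master:NoSchedule taint during kubeadm init
kubectl taint nodes host-10-19-83-151 node-role.kubernetes.io/master:NoSchedule-
# Equivalent for all control-plane nodes at once:
# kubectl taint nodes --all node-role.kubernetes.io/master-
```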
12. Join a new node to the master
# kubeadm join 1xxxxxxxxx1:6443 --token am8iii.gt09xxxxxxxxxx3k --discovery-token-ca-cert-hash sha256:bxxxxxxxxxxxxxxxxx63c5a0deaedb86e5
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
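The bootstrap token printed by kubeadm init expires (24 hours by default). If a node joins later, a fresh join command can be generated on the control-plane:

```shell
# Print a ready-to-paste kubeadm join command with a new token
kubeadm token create --print-join-command
```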