Deploying a Kubernetes Cluster on CentOS 7
Reference: https://www.cnblogs.com/caoxb/p/11243472.html
Deployment plan:
192.168.122.152 k8s-master
192.168.122.61 k8s-node
Note: steps 1 through 8 must be performed on every node; steps 9 and 10 only on the Master node; step 11 only on the Node.
If step 9, 10, or 11 fails, run kubeadm reset to clean up the environment and reinstall.
1. Disable the firewall
$ systemctl stop firewalld
$ systemctl disable firewalld
Note: the firewall must be disabled.
2. Disable SELinux
$ setenforce 0    (temporary; to make it permanent, set SELINUX=disabled in /etc/selinux/config)
3. Disable swap
$ swapoff -a        (temporary)
$ free              (verify that swap now shows 0)
$ vim /etc/fstab    (permanent: comment out the swap entry)
#/dev/mapper/centos_k8s--master-swap swap swap defaults 0 0
Note: swap must be disabled.
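The /etc/fstab edit above can also be done non-interactively. A minimal sketch, run against a scratch copy of fstab so nothing on the host is touched (on a real node the target is /etc/fstab itself):

```shell
# Work on a scratch copy for illustration; on a real host, operate on /etc/fstab.
cat > /tmp/fstab.demo << 'EOF'
/dev/mapper/centos-root / xfs defaults 0 0
/dev/mapper/centos_k8s--master-swap swap swap defaults 0 0
EOF
# Prefix any line that mounts a swap filesystem with '#'
sed -i '/ swap / s/^/#/' /tmp/fstab.demo
grep swap /tmp/fstab.demo   # the swap entry is now commented out
```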
4. Map hostnames to IP addresses
$ vim /etc/hosts
Add the following entries:
192.168.122.152 k8s-master
192.168.122.61 k8s-node
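The same entries can be appended without opening an editor, e.g. with a heredoc. The sketch below writes to a scratch file for illustration; on a real node the target is /etc/hosts:

```shell
HOSTS=/tmp/hosts.demo   # use /etc/hosts on a real node
cat >> "$HOSTS" << 'EOF'
192.168.122.152 k8s-master
192.168.122.61 k8s-node
EOF
grep k8s- "$HOSTS"
```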
5. Pass bridged IPv4 traffic to the iptables chains
$ cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
$ sysctl --system
6. Install Docker
1) Download and install
$ wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
$ yum -y install docker-ce-18.06.1.ce-3.el7
2) Enable Docker at boot
$ systemctl enable docker
$ systemctl start docker
3) Check the Docker version
$ docker --version
7. Add the Alibaba Cloud Kubernetes YUM repository
Run the following command:
$ cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8. Install kubeadm, kubelet, and kubectl
When deploying Kubernetes, the master node and worker nodes must run matching versions; a version mismatch leads to strange, hard-to-diagnose problems. This section shows how to install a specific version of Kubernetes on CentOS with yum.
yum -y install kubectl kubelet kubeadm
Running this directly fails with the following error:
Importing GPG key 0xA7317B0F:
Userid : "Google Cloud Packages Automatic Signing Key <gc-team@google.com>"
Fingerprint: d0bc 747f d8ca f711 7500 d6fa 3746 c208 a731 7b0f
From : https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Public key for 26d3e29e517cb0fd27fca12c02bd75ffa306bc5ce78c587d83a0242ba20588f0-kubectl-1.16.2-0.x86_64.rpm is not installed
Failing package is: kubectl-1.16.2-0.x86_64
GPG Keys are configured as: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
The failure is caused by GPG key verification; the following command succeeds:
[root@k8s-master ~]# yum install -y kubelet kubeadm kubectl --nogpgcheck
Then enable kubelet at boot:
$ systemctl enable kubelet
9. Deploy the Kubernetes Master
1) Initialize with kubeadm
kubeadm init \
--apiserver-advertise-address=192.168.122.152 \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.16.1 \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Output like the following indicates that initialization succeeded:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.122.152:6443 --token blp5os.k5jjk54o61txnree \
--discovery-token-ca-cert-hash sha256:e9205bf68357eb190c5a7fda5e782d7533c361d0298093b6f283cc6886ad0b4e
[root@k8s-master ~]#
Check the pulled images:
[root@k8s-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.16.1 0d2430db3cd0 3 weeks ago 86.1MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.16.1 f15aad0426f5 3 weeks ago 217MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.16.1 ba306669806e 3 weeks ago 163MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.16.1 e15192a92182 3 weeks ago 87.3MB
registry.aliyuncs.com/google_containers/etcd 3.3.15-0 b2756210eeab 7 weeks ago 247MB
registry.aliyuncs.com/google_containers/coredns 1.6.2 bf261d157914 2 months ago 44.1MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 22 months ago 742kB
[root@k8s-master ~]#
2) Configure kubectl
Run the following commands:
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
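As an alternative to copying admin.conf into ~/.kube/config, a root shell can point kubectl at it directly via the KUBECONFIG environment variable; note this only affects the current session:

```shell
# Session-scoped alternative to copying admin.conf (root shells only)
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```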
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 2m3s v1.16.2
[root@k8s-master ~]#
[root@k8s-master ~]# kubectl describe node
Name: k8s-master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=k8s-master
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 23 Oct 2019 23:28:29 -0400
Taints: node.kubernetes.io/not-ready:NoExecute
node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoSchedule
Unschedulable: false
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message
---- ------ ----------------- ------------------ ------ -------
MemoryPressure False Wed, 23 Oct 2019 23:35:30 -0400 Wed, 23 Oct 2019 23:28:26 -0400 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Wed, 23 Oct 2019 23:35:30 -0400 Wed, 23 Oct 2019 23:28:26 -0400 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Wed, 23 Oct 2019 23:35:30 -0400 Wed, 23 Oct 2019 23:28:26 -0400 KubeletHasSufficientPID kubelet has sufficient PID available
Ready False Wed, 23 Oct 2019 23:35:30 -0400 Wed, 23 Oct 2019 23:28:26 -0400 KubeletNotReady runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
InternalIP: 192.168.122.152
Hostname: k8s-master
Capacity:
cpu: 2
ephemeral-storage: 17394Mi
hugepages-2Mi: 0
memory: 1014848Ki
pods: 110
Allocatable:
cpu: 2
ephemeral-storage: 16415037823
hugepages-2Mi: 0
memory: 912448Ki
pods: 110
System Info:
Machine ID: f857adac4f1f442f89c3d47da10ad413
System UUID: F857ADAC-4F1F-442F-89C3-D47DA10AD413
Boot ID: 4803ae1d-47f8-410b-b985-76511ec7131f
Kernel Version: 3.10.0-957.el7.x86_64
OS Image: CentOS Linux 7 (Core)
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://18.6.1
Kubelet Version: v1.16.2
Kube-Proxy Version: v1.16.2
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (5 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE
--------- ---- ------------ ---------- --------------- ------------- ---
kube-system etcd-k8s-master 0 (0%) 0 (0%) 0 (0%) 0 (0%) 5m53s
kube-system kube-apiserver-k8s-master 250m (12%) 0 (0%) 0 (0%) 0 (0%) 6m2s
kube-system kube-controller-manager-k8s-master 200m (10%) 0 (0%) 0 (0%) 0 (0%) 5m50s
kube-system kube-proxy-6tchh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6m47s
kube-system kube-scheduler-k8s-master 100m (5%) 0 (0%) 0 (0%) 0 (0%) 5m43s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits
-------- -------- ------
cpu 550m (27%) 0 (0%)
memory 0 (0%) 0 (0%)
ephemeral-storage 0 (0%) 0 (0%)
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal NodeHasSufficientMemory 7m12s (x8 over 7m12s) kubelet, k8s-master Node k8s-master status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 7m12s (x7 over 7m12s) kubelet, k8s-master Node k8s-master status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 7m12s (x8 over 7m12s) kubelet, k8s-master Node k8s-master status is now: NodeHasSufficientPID
Normal Starting 6m46s kube-proxy, k8s-master Starting kube-proxy.
[root@k8s-master ~]#
10. Install a Pod network add-on (CNI)
1) Install the add-on
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2019-10-23 23:38:41-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.76.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.76.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14416 (14K) [text/plain]
Saving to: ‘kube-flannel.yml’
100%[==========================================================================================================================>] 14,416 --.-K/s in 0.08s
2019-10-23 23:38:42 (168 KB/s) - ‘kube-flannel.yml’ saved [14416/14416]
Now apply it:
[root@k8s-master ~]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
2) Verify the deployment
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-58cc8c89f4-s47nn 1/1 Running 0 90m
coredns-58cc8c89f4-tltzh 1/1 Running 0 90m
etcd-k8s-master 1/1 Running 0 89m
kube-apiserver-k8s-master 1/1 Running 0 89m
kube-controller-manager-k8s-master 1/1 Running 1 89m
kube-flannel-ds-amd64-ptgq2 1/1 Running 0 79m
kube-proxy-6tchh 1/1 Running 0 90m
kube-scheduler-k8s-master 1/1 Running 1 88m
Running kubectl describe node again now shows no errors, and the node has become Ready:
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 11m v1.16.2
If the installation fails and you want to clean up the environment and reinstall, a single command does it:
$ kubeadm reset
11. Join the Node to the cluster
To add a new node to the cluster, run the kubeadm join command printed by kubeadm init.
Copy that command and run it on the node:
[root@k8s-node ~]# kubeadm join 192.168.122.152:6443 --token blp5os.k5jjk54o61txnree \
> --discovery-token-ca-cert-hash sha256:e9205bf68357eb190c5a7fda5e782d7533c361d0298093b6f283cc6886ad0b4e
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.16" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@k8s-node ~]#
Output like the above means the node has joined successfully.
If you have lost the token, recover it as follows:
1) List the tokens; if the token has expired, create a new one
$ kubeadm token list
$ kubeadm token create
2) Compute the SHA-256 hash of the CA certificate
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
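The standard pipeline for this hash is openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der | openssl dgst -sha256. Its output format can be sanity-checked without touching the cluster CA: the sketch below runs the same pipeline against a throwaway self-signed certificate (the /tmp paths and demo-ca name are illustrative only):

```shell
# Generate a throwaway cert as a stand-in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.crt 2>/dev/null
# Same pipeline as for the real CA: public key -> DER -> SHA-256 -> bare hex
hash=$(openssl x509 -pubkey -in /tmp/demo-ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "$hash"   # 64 hex characters, used as sha256:<hash> in kubeadm join
```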
3) Join the node to the cluster
$ kubeadm join 192.168.122.152:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
12. Test the Kubernetes cluster
$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=NodePort
$ kubectl get pod,svc
You can see that the pod backing the service is running on k8s-node:
[root@k8s-master .ssh]# kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx-86c57db685-86fjw 1/1 Running 0 3m9s 10.244.1.2 k8s-node <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.1.0.1 <none> 443/TCP 137m <none>
service/nginx NodePort 10.1.184.162 <none> 80:30504/TCP 2m38s app=nginx
Opening http://192.168.122.61:30504 in a browser serves the nginx welcome page.
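For scripting, the NodePort can be pulled out of kubectl get svc output with awk. The sketch below runs against a saved sample line (copied from the output above) so it is self-contained; on a live cluster you would pipe kubectl get svc into the same awk program:

```shell
# Sample line saved from the `kubectl get svc -o wide` output above
cat > /tmp/svc.txt << 'EOF'
service/nginx NodePort 10.1.184.162 <none> 80:30504/TCP 2m38s app=nginx
EOF
# Field 5 is PORT(S), e.g. 80:30504/TCP; split on ':' and '/' to get the NodePort
port=$(awk '$2 == "NodePort" {split($5, a, "[:/]"); print a[2]}' /tmp/svc.txt)
echo "http://192.168.122.61:$port"   # prints http://192.168.122.61:30504
```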