Lab prerequisites:
Three hosts:
172.25.4.111 server1 master
172.25.4.112 server2 node1
172.25.4.113 server3 node2
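
These hostname-to-IP mappings are assumed to be resolvable on every host; a minimal sketch of the matching /etc/hosts entries (append them on all three machines):

172.25.4.111 server1
172.25.4.112 server2
172.25.4.113 server3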
1. Install the docker service on all three hosts, install the kubernetes packages (kubeadm, kubelet, kubectl), and disable the swap partition, including at boot

[root@server1 ~]# cd k8s/
[root@server1 k8s]# ls
coredns.tar                    heapster-influxdb.tar        kubectl-1.12.2-0.x86_64.rpm        kubernetes-dashboard.tar   scope.yaml
cri-tools-1.12.0-0.x86_64.rpm  heapster.tar                 kube-flannel.yml                   kubernetes-dashboard.yaml
etcd.tar                       kubeadm-1.12.2-0.x86_64.rpm  kubelet-1.12.2-0.x86_64.rpm        kube-scheduler.tar
flannel.tar                    kube-apiserver.tar           kube-proxy.tar                     pause.tar
heapster-grafana.tar           kube-controller-manager.tar  kubernetes-cni-0.6.0-0.x86_64.rpm  scope.tar
[root@server1 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm
[root@server1 ~]# systemctl start kubelet.service
[root@server1 ~]# systemctl enable kubelet.service
[root@server2 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm
[root@server2 ~]# systemctl start kubelet.service
[root@server2 ~]# systemctl enable kubelet.service

[root@server3 k8s]# yum install -y kubeadm-1.12.2-0.x86_64.rpm kubelet-1.12.2-0.x86_64.rpm kubectl-1.12.2-0.x86_64.rpm kubernetes-cni-0.6.0-0.x86_64.rpm cri-tools-1.12.0-0.x86_64.rpm
[root@server3 ~]# systemctl start kubelet.service
[root@server3 ~]# systemctl enable kubelet.service
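
To confirm that all three hosts ended up with the same versions, a quick check (a sketch; each tool prints its own version):

[root@server1 ~]# kubeadm version
[root@server1 ~]# kubelet --version
[root@server1 ~]# kubectl version --client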

Disable swap and prevent it from being enabled at boot

[root@server1 k8s]# swapoff -a
[root@server1 k8s]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0

[root@server2 k8s]# swapoff -a
[root@server2 k8s]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0

[root@server3 k8s]# swapoff -a
[root@server3 k8s]# vim /etc/fstab
#/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
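
Instead of editing /etc/fstab by hand, the swap entry can also be commented out non-interactively; a sketch (using the device path shown above), plus a quick check that no swap is active:

[root@server1 ~]# sed -ri 's|^(/dev/mapper/rhel-swap)|#\1|' /etc/fstab
[root@server1 ~]# swapon -s  ## prints nothing once swap is off
[root@server1 ~]# free -m    ## the Swap line should show 0 total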

2. Load the images on all three hosts

docker load -i kube-apiserver.tar
docker load -i kube-controller-manager.tar
docker load -i kube-scheduler.tar
docker load -i kube-proxy.tar
docker load -i pause.tar
docker load -i etcd.tar
docker load -i coredns.tar
docker load -i flannel.tar
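
All of the tarballs in the k8s/ directory can also be loaded in one loop; a minimal sketch, followed by a listing to confirm the images are present:

[root@server1 k8s]# for tar in *.tar; do docker load -i "$tar"; done
[root@server1 k8s]# docker images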

3. Initialize the cluster on the master host

[root@server1 ~]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables 
[root@server1 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.4.111
	[ERROR SystemVerification]: unsupported docker version: 18.09.6
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

The init fails: kubeadm 1.12.2 does not recognize docker 18.09.6 as a supported version.

[root@server1 manifests]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=172.25.4.111 --ignore-preflight-errors=all  ## ignore the version check and initialize again
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
  kubeadm join 172.25.4.111:6443 --token bz70ra.u77yw1bmyt6387rs --discovery-token-ca-cert-hash sha256:178a3be4d041611b87a7212dd7f5ecf5c7a020482063a662bf3ef5601da102c2   ## the generated token and CA cert hash, needed later to join the nodes
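
If this output is lost later, an equivalent join command can be regenerated on the master from a fresh token (a sketch):

[root@server1 ~]# kubeadm token create --print-join-command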

Initialization succeeded; continue by following the printed instructions.
4. Configure the master host

[root@server1 k8s]# vim kube-flannel.yml   ## the Network value must match the --pod-network-cidr passed to kubeadm init
76       "Network": "10.244.0.0/16"
[root@server1 k8s]# useradd k8s
[root@server1 k8s]# id k8s
uid=1001(k8s) gid=1001(k8s) groups=1001(k8s)
[root@server1 k8s]# vim /etc/sudoers  ## /etc/sudoers is read-only by default, so the change cannot be saved yet
[root@server1 k8s]# chmod u+w /etc/sudoers  ## give the owner write permission on the file
[root@server1 k8s]# vim /etc/sudoers
 91 root    ALL=(ALL)       ALL
 92 k8s     ALL=(ALL)       NOPASSWD:ALL  ## let the k8s user run sudo without a password
[root@server1 k8s]# cp kube-flannel.yml /home/k8s
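
An alternative that avoids changing permissions on /etc/sudoers is a drop-in file; a sketch:

[root@server1 k8s]# echo 'k8s ALL=(ALL) NOPASSWD:ALL' > /etc/sudoers.d/k8s
[root@server1 k8s]# chmod 440 /etc/sudoers.d/k8s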

Run the following configuration as the k8s user

[root@server1 k8s]# su - k8s
[k8s@server1 ~]$ mkdir -p $HOME/.kube
[k8s@server1 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@server1 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@server1 k8s]# vim /home/k8s/.bashrc  ## source kubectl's bash completion so the k8s user gets command completion
[root@server1 k8s]# cat /home/k8s/.bashrc 
source <(kubectl completion bash)
[root@server1 k8s]# su - k8s
[k8s@server1 ~]$ kubectl apply -f kube-flannel.yml  ## deploy flannel from the manifest
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[k8s@server1 ~]$ sudo docker ps  ## list the containers now running on the master
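
Whether flannel actually came up can also be verified through kubectl instead of docker; a sketch:

[k8s@server1 ~]$ kubectl get daemonset -n kube-system
[k8s@server1 ~]$ kubectl get pod -n kube-system -o wide  ## the kube-flannel-ds-amd64 pod on server1 should be Running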

5. Join the nodes
Join server2 and server3 to the cluster

[root@server2 k8s]# kubeadm join 172.25.4.111:6443 --token bz70ra.u77yw1bmyt6387rs --discovery-token-ca-cert-hash sha256:178a3be4d041611b87a7212dd7f5ecf5c7a020482063a662bf3ef5601da102c2

W0715 22:58:51.830063    2492 common.go:168] WARNING: could not obtain a bind address for the API Server: no default routes found in "/proc/net/route" or "/proc/net/ipv6_route"; using: 0.0.0.0
cannot use "0.0.0.0" as the bind address for the API Server  ##出现警告,原因是没有添加网关
添加网关以后再次访问
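The default route was presumably added with something like the following (a sketch, assuming eth0 and the 172.25.4.250 gateway seen in the routing table below):

[root@server2 k8s]# ip route add default via 172.25.4.250 dev eth0
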
[root@server2 k8s]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         172.25.4.250    0.0.0.0         UG    0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     1002   0        0 eth0
172.17.0.0      0.0.0.0         255.255.0.0     U     0      0        0 docker0
172.25.4.0      0.0.0.0         255.255.255.0   U     0      0        0 eth0
[root@server2 k8s]# kubeadm join 172.25.4.111:6443 --token bz70ra.u77yw1bmyt6387rs --discovery-token-ca-cert-hash sha256:178a3be4d041611b87a7212dd7f5ecf5c7a020482063a662bf3ef5601da102c2

[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1  ## bridge-nf-call-iptables must be 1
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1  ## ip_forward must be 1
	[ERROR SystemVerification]: unsupported docker version: 18.09.6
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`  ## same unsupported docker version error as on the master

Fixing the errors

[root@server2 k8s]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@server2 k8s]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@server2 k8s]# kubeadm join 172.25.4.111:6443 --token bz70ra.u77yw1bmyt6387rs --discovery-token-ca-cert-hash sha256:178a3be4d041611b87a7212dd7f5ecf5c7a020482063a662bf3ef5601da102c2  --ignore-preflight-errors=all  ## skip the docker version check
This node has joined the cluster:  ## joined successfully
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
[root@server2 k8s]# systemctl start kubelet.service  ## start the service
[root@server2 k8s]# systemctl enable kubelet.service  ## enable it at boot
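
The two echo commands above only last until the next reboot; to make the settings persistent, a sysctl drop-in can be added (a sketch):

[root@server2 k8s]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@server2 k8s]# sysctl --system  ## reload all sysctl configuration files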

Configure server3 the same way; the condensed sequence is shown below.
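
The same sequence on server3, condensed (the token and hash are the ones printed by kubeadm init above):

[root@server3 k8s]# echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
[root@server3 k8s]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@server3 k8s]# kubeadm join 172.25.4.111:6443 --token bz70ra.u77yw1bmyt6387rs --discovery-token-ca-cert-hash sha256:178a3be4d041611b87a7212dd7f5ecf5c7a020482063a662bf3ef5601da102c2 --ignore-preflight-errors=all
[root@server3 k8s]# systemctl start kubelet.service
[root@server3 k8s]# systemctl enable kubelet.service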

Check the node status

[k8s@server1 ~]$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
server1   Ready    master   86m   v1.12.2
server2   Ready    <none>   35m   v1.12.2
server3   Ready    <none>   17m   v1.12.2
[k8s@server1 ~]$ kubectl get pod --all-namespaces  ## check the status of all pods
NAMESPACE     NAME                              READY   STATUS             RESTARTS   AGE
kube-system   coredns-576cbf47c7-m88nl          1/1     Running            0          86m
kube-system   coredns-576cbf47c7-zcfls          1/1     Running            0          86m
kube-system   etcd-server1                      1/1     Running            0          86m
kube-system   kube-apiserver-server1            1/1     Running            9          86m
kube-system   kube-controller-manager-server1   0/1     CrashLoopBackOff   9          86m
kube-system   kube-flannel-ds-amd64-ds2m4       1/1     Running            1          63m
kube-system   kube-flannel-ds-amd64-m9wjt       1/1     Running            6          36m
kube-system   kube-flannel-ds-amd64-xw2jj       1/1     Running            4          15m
kube-system   kube-proxy-57bqm                  1/1     Running            0          36m
kube-system   kube-proxy-dl295                  1/1     Running            0          86m
kube-system   kube-proxy-wfv9x                  1/1     Running            0          15m
kube-system   kube-scheduler-server1            1/1     Running            9          86m

If some pods are still not Running after refreshing a few times, delete them by name and check again; they are recreated automatically, and eventually everything should be Running.

[k8s@server1 ~]$ kubectl delete pod kube-controller-manager-server1 -n kube-system
pod "kube-controller-manager-server1" deleted
[k8s@server1 ~]$ kubectl get pod --all-namespaces
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-m88nl          1/1     Running   0          95m
kube-system   coredns-576cbf47c7-zcfls          1/1     Running   0          95m
kube-system   etcd-server1                      1/1     Running   0          95m
kube-system   kube-apiserver-server1            1/1     Running   9          95m
kube-system   kube-controller-manager-server1   1/1     Running   12         51s
kube-system   kube-flannel-ds-amd64-ds2m4       1/1     Running   1          73m
kube-system   kube-flannel-ds-amd64-m9wjt       1/1     Running   6          45m
kube-system   kube-flannel-ds-amd64-xw2jj       1/1     Running   4          24m
kube-system   kube-proxy-57bqm                  1/1     Running   0          45m
kube-system   kube-proxy-dl295                  1/1     Running   0          95m
kube-system   kube-proxy-wfv9x                  1/1     Running   0          24m
kube-system   kube-scheduler-server1            1/1     Running   10         2m49s

At this point the cluster is set up successfully.
