K8S Deployment

 

1. Preparing the installation environment

1) Environment: CentOS 7.3, one master node and two worker (node) nodes

[root@master yum.repos.d]# cat /etc/redhat-release

CentOS Linux release 7.3.1611 (Core)

2) Add the IP-to-hostname mappings to /etc/hosts

[root@master ~]# cat /etc/hosts

10.100.240.221 master

10.100.240.222 node01

10.100.240.223 node02

 

 

2. Installation

Unless otherwise stated, the following steps are performed on the master node.

 

2.1 Configure the CentOS 7, Docker, and Kubernetes repositories

a. Configure the CentOS 7 repo

cd /etc/yum.repos.d

wget http://mirrors.aliyun.com/repo/Centos-7.repo

b. Configure the Docker repository

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

c. Configure the Kubernetes repository

Create a new file with vi kubernetes.repo and add the following content:

[kubernetes]

name=Kubernetes Repo

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/   

gpgcheck=1

gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

enabled=1
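
Once the three repo files are in place, a quick optional check that yum can see all of them:

yum repolist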

 

 

2.2 Install docker, kubelet, kubeadm, and kubectl

yum install -y docker-ce-18.06* kubelet-1.11.1 kubeadm-1.11.1 kubectl-1.11.1

 

Note 1: The three Kubernetes packages above are pinned to version 1.11.1. To install the latest versions, drop the -1.11.1 suffixes; however, Kubernetes releases move quickly, the newest version may be unstable, and there is far less troubleshooting material online for it, so installing the latest release is not recommended.
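
If you want to see which package versions the repositories actually offer before pinning one (an optional check using standard yum syntax):

yum list --showduplicates kubeadm kubelet kubectl

yum list --showduplicates docker-ce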

 

Note 2: If the install fails because the GPG keys cannot be imported, download them manually:

wget https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg

wget https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

 

Then import both key files manually:

rpm --import yum-key.gpg

rpm --import rpm-package-key.gpg

2.3 Stop the firewall and SELinux (run on every node)

[root@master ~]# systemctl stop firewalld

[root@master ~]# systemctl disable firewalld

[root@master ~]# setenforce 0

[root@master ~]# sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/sysconfig/selinux
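
To confirm both changes took effect (optional):

getenforce               # should print Permissive after setenforce 0

systemctl is-active firewalld    # should print inactive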

2.4 Docker-related configuration

So that traffic crossing the Docker bridge is passed through iptables (which Kubernetes networking relies on), the following two files must both be set to 1:

 

/proc/sys/net/bridge/bridge-nf-call-ip6tables

 

/proc/sys/net/bridge/bridge-nf-call-iptables

 

If both files currently contain 0, change them as follows.

On CentOS, add the following lines to /etc/sysctl.conf:

vim /etc/sysctl.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-arptables = 1

Then reboot, or apply the settings immediately as shown below.
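
If you prefer not to reboot, the same result can usually be achieved immediately; this is a sketch assuming the br_netfilter module is what provides these /proc entries (the case on stock CentOS 7 kernels):

modprobe br_netfilter    # make sure the bridge netfilter module is loaded

sysctl -p                # reload the settings from /etc/sysctl.conf

cat /proc/sys/net/bridge/bridge-nf-call-iptables    # should now print 1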

 

Start Docker and enable it to start on boot:

[root@master yum.repos.d]# systemctl start docker

[root@master yum.repos.d]# systemctl enable docker
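
Optionally, verify that Docker is running and note its cgroup driver; kubeadm expects kubelet and Docker to use the same driver (in this install the defaults are assumed to match already):

systemctl is-active docker

docker info | grep -i 'cgroup driver'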

 

2.5 kubelet configuration

Set up kubelet:

1) rpm -ql kubelet lists the files installed by the kubelet package:

/etc/kubernetes/manifests

/etc/sysconfig/kubelet

/etc/systemd/system/kubelet.service

/usr/bin/kubelet

2) Edit the config file: vi /etc/sysconfig/kubelet

KUBELET_EXTRA_ARGS="--fail-swap-on=false"

The option is empty by default and must be changed so that kubelet can start even though swap is enabled.

3) Enable kubelet to start on boot:

systemctl enable kubelet

 

2.6 Initialize the cluster with kubeadm

Initialize the cluster on the master node.

1) kubeadm init --help shows the help for the init command.

2) Open the kubelet config: vim /etc/sysconfig/kubelet

3) Confirm it contains KUBELET_EXTRA_ARGS="--fail-swap-on=false" (the swap-ignore option set in 2.5)

4) Run the initialization:

kubeadm init --kubernetes-version=v1.11.1 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --apiserver-advertise-address=10.100.240.221 --ignore-preflight-errors=Swap

5) If pulling the images fails, work around it as follows:

images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1

etcd-amd64:3.2.18 coredns:1.1.3 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.9 k8s-dns-kube-dns-amd64:1.14.9

k8s-dns-dnsmasq-nanny-amd64:1.14.9 )

for imageName in ${images[@]} ; do

  docker pull registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName

  docker tag registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName k8s.gcr.io/$imageName

  #docker rmi registry.cn-hangzhou.aliyuncs.com/k8sth/$imageName

done

docker tag da86e6ba6ca1 k8s.gcr.io/pause:3.1

Or:

images=(kube-proxy-amd64:v1.11.1 kube-scheduler-amd64:v1.11.1 kube-controller-manager-amd64:v1.11.1 kube-apiserver-amd64:v1.11.1 etcd-amd64:3.2.18 pause-amd64:3.1)

for imageName in ${images[@]} ; do

  docker pull mirrorgooglecontainers/$imageName  

  docker tag mirrorgooglecontainers/$imageName k8s.gcr.io/$imageName  

  docker rmi mirrorgooglecontainers/$imageName

done

 

This loop pulls each image listed in images from the mirrorgooglecontainers repository, re-tags it as k8s.gcr.io/<image>, and then removes the original tag.

Because k8s.gcr.io is not reachable from inside mainland China, the pulls attempted by kubeadm init generally fail; that is why the images are pulled locally first and re-tagged with docker tag, as shown above.


[root@master yum.repos.d]# docker images

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        13 months ago       97.8MB

k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        13 months ago       187MB

k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        13 months ago       56.8MB

k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        13 months ago       155MB

k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        16 months ago       219MB

k8s.gcr.io/pause-amd64                     3.1                 da86e6ba6ca1        20 months ago       742kB

 

The coredns image and the pause tag still need to be handled separately:

[root@master yum.repos.d]# docker pull coredns/coredns:1.1.3

docker tag coredns/coredns:1.1.3 k8s.gcr.io/coredns:1.1.3

[root@master yum.repos.d]# docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1

[root@master yum.repos.d]# docker rmi k8s.gcr.io/pause-amd64:3.1

 

[root@master yum.repos.d]# docker images

REPOSITORY                                 TAG                 IMAGE ID            CREATED             SIZE

k8s.gcr.io/kube-proxy-amd64                v1.11.1             d5c25579d0ff        13 months ago       97.8MB

k8s.gcr.io/kube-apiserver-amd64            v1.11.1             816332bd9d11        13 months ago       187MB

k8s.gcr.io/kube-scheduler-amd64            v1.11.1             272b3a60cd68        13 months ago       56.8MB

k8s.gcr.io/kube-controller-manager-amd64   v1.11.1             52096ee87d0e        13 months ago       155MB

k8s.gcr.io/coredns                         1.1.3               b3b94275d97c        15 months ago       45.6MB

k8s.gcr.io/etcd-amd64                      3.2.18              b8df3b177be2        16 months ago       219MB

k8s.gcr.io/pause                           3.1                 da86e6ba6ca1        20 months ago       742kB

When initialization completes successfully, output like the following is printed:

[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

 

Your Kubernetes master has initialized successfully!

 

To start using your cluster, you need to run the following as a regular user:

 

  mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config

 

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

  https://kubernetes.io/docs/concepts/cluster-administration/addons/

 

You can now join any number of machines by running the following on each node

as root:

 

  kubeadm join 10.100.240.221:6443 --token jgey3y.agbnzkhtxg5vvrer --discovery-token-ca-cert-hash sha256:e012175071edc2dd7e63026bacfc86abf5d08b0cd093a901b256ed7de8ad992a

Run the commands shown in the output:

 mkdir -p $HOME/.kube

  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

  sudo chown $(id -u):$(id -g) $HOME/.kube/config
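
A quick optional check that kubectl can now reach the API server:

kubectl cluster-info

kubectl version --short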

 

 

6) Run kubectl get cs (cs is short for componentstatus) to check component health:

[root@master ~]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

controller-manager   Healthy   ok                   

scheduler            Healthy   ok                   

etcd-0               Healthy   {"health": "true"}   

[root@master ~]#

 

7) Run kubectl get nodes to list the cluster nodes; the master is NotReady because the flannel network component has not been installed yet.

[root@master ~]# kubectl get node

NAME        STATUS     ROLES    AGE   VERSION

master   NotReady   master   3m    v1.11.1

8) Search GitHub for the coreos/flannel repository; its README gives the following install command (either the pinned v0.10.0 manifest or the one on the master branch):

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Running this command automatically pulls the kube-proxy and kube-flannel images; after a short wait, the node status changes to Ready.

[root@master ~]# kubectl get nodes

NAME        STATUS   ROLES    AGE   VERSION

master   Ready    master   8m    v1.11.1

9) You can also check the system components with kubectl get pods -n kube-system; the argument after -n specifies the namespace, and a STATUS of Running means the component is fine.

[root@master ~]# kubectl get pods -n kube-system

NAME                                READY   STATUS    RESTARTS   AGE

coredns-78fcdf6894-nzttl            1/1     Running   0          9m

coredns-78fcdf6894-ww4jk            1/1     Running   0          9m

etcd-master                      1/1     Running   0          1m

kube-apiserver-master            1/1     Running   0          1m

kube-controller-manager-master   1/1     Running   0          1m

kube-flannel-ds-amd64-hm5bf         1/1     Running   0          2m

kube-proxy-vxcvr                    1/1     Running   0          9m

kube-scheduler-master            1/1     Running   0          1m

At this point the master node deployment is complete.

 

2.7 Install docker, kubelet, kubeadm, and kubectl on each node

a. Next, install docker, kubelet, kubeadm, and kubectl on each node, then run the kubeadm join command to add the node to the cluster.

 

b. Repeat steps 2.1-2.5 on node01 and node02 (the relevant config files can simply be copied to both nodes with scp).

 

c. Run the join command, adding the --ignore-preflight-errors=Swap flag:

 kubeadm join 10.100.240.221:6443 --token jgey3y.agbnzkhtxg5vvrer --discovery-token-ca-cert-hash sha256:e012175071edc2dd7e63026bacfc86abf5d08b0cd093a901b256ed7de8ad992a --ignore-preflight-errors=Swap
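
Note that the bootstrap token printed by kubeadm init expires after 24 hours by default. If it has already expired by the time the nodes are ready, a fresh join command can be generated on the master (assuming the kubeadm release in use supports this subcommand):

kubeadm token create --print-join-command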

Again because the registry is blocked, the join does not fully come up here: kubectl get pods -n kube-system -o wide shows two pods stuck in Init and ContainerCreating.

Use kubectl describe pod kube-flannel-ds-amd64-ddtnx -n kube-system to see the error details; it is again an image pull failure. Copy the flannel, kube-proxy, and pause images from the master to node01 and node02 with docker save and docker load, so that kubeadm uses the local copies and the failed pulls are avoided:

[root@master ~]# docker save quay.io/coreos/flannel:v0.11.0-amd64 k8s.gcr.io/kube-proxy-amd64:v1.11.1 k8s.gcr.io/pause:3.1 -o image.tar
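
The tarball then has to be copied to each node before it can be loaded there, for example with scp (hostnames as defined in /etc/hosts above):

[root@master ~]# scp image.tar root@node01:/root/

[root@master ~]# scp image.tar root@node02:/root/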

[root@k8snode01 ~]# docker load -i image.tar  

 

d. Once both nodes have joined the cluster, use kubectl get node to check that all nodes are Ready:

[root@master soft]#  kubectl get node

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   36m   v1.11.1

node01   Ready    <none>   1m    v1.11.1

node02   Ready    <none>   2m    v1.11.1

Check pod status with kubectl get pods -n kube-system -o wide:

[root@master docker]# kubectl get pods -n kube-system -o wide

NAME                                READY   STATUS    RESTARTS   AGE   IP               NODE

coredns-78fcdf6894-cj4fx            1/1     Running   3          21m   10.244.0.11      master

coredns-78fcdf6894-htf6m            1/1     Running   2          20m   10.244.0.10      master

etcd-master                      1/1     Running   1          34m   10.100.240.221   master

kube-apiserver-master            1/1     Running   1          35m   10.100.240.221   master

kube-controller-manager-master   1/1     Running   1          34m   10.100.240.221   master

kube-flannel-ds-amd64-dpzt7         1/1     Running   0          13m   10.100.240.222   k8snode01

kube-flannel-ds-amd64-f2wl2         1/1     Running   1          34m   10.100.240.221   master

kube-flannel-ds-amd64-vf4k5         1/1     Running   1          3m    10.100.240.223   k8snode02

kube-proxy-g9kwx                    1/1     Running   0          13m   10.100.240.222   k8snode01

kube-proxy-gpmz6                    1/1     Running   1          35m   10.100.240.221   master

kube-proxy-khb4g                    1/1     Running   1          3m    10.100.240.223   k8snode02

kube-scheduler-master            1/1     Running   1          35m   10.100.240.221   master

When all nodes are Ready and all pods are Running, the cluster is working properly.

 

 

2.8 Test

[root@master ~]# kubectl run nginx --image=nginx --dry-run

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx created (dry run)

[root@master ~]# kubectl run nginx --image=nginx

kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.

deployment.apps/nginx created

[root@master ~]# kubectl get pod -o wide

NAME                     READY   STATUS    RESTARTS   AGE   IP           NODE

nginx-64f497f8fd-jvhlb   1/1     Running   0          9m    10.244.2.2   node02

[root@master ~]# ping 10.244.2.2

PING 10.244.2.2 (10.244.2.2) 56(84) bytes of data.

64 bytes from 10.244.2.2: icmp_seq=1 ttl=63 time=0.506 ms

The pod is Running and reachable, so the deployment works.
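
As a further check, the deployment can be exposed as a Service and accessed through its cluster IP (a minimal sketch; exposing on port 80 with the default Service name is my choice, not part of the original setup):

[root@master ~]# kubectl expose deployment nginx --port=80

[root@master ~]# kubectl get svc nginx

Then curl the CLUSTER-IP shown for the nginx Service from any node; an nginx welcome page indicates that Service routing through kube-proxy also works.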

 

3. Appendix

While running the Kubernetes cluster, the kube-dns (CoreDNS) pods kept restarting, with the errors shown below.

This is most likely caused by broken iptables rules; I resolved it with the following commands and record them here:

 

[root@master ~]# kubectl logs  coredns-78fcdf6894-cj4fx   -n kube-system

E0826 15:16:40.867018       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:313: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

E0826 15:16:40.867142       1 reflector.go:205] github.com/coredns/coredns/plugin/kubernetes/controller.go:315: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: connect: no route to host

Fix:

[root@master docker]# systemctl stop kubelet

[root@master docker]# systemctl stop docker


[root@master docker]# iptables --flush

[root@master docker]# iptables -t nat --flush

[root@master docker]# systemctl start docker

[root@master docker]# systemctl start kubelet
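
After Docker and kubelet come back up, the CoreDNS pods should settle into Running within a short time; a quick check:

[root@master ~]# kubectl get pods -n kube-system | grep coredns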

 

 

 
