Background

I'm a second-year grad student, and our lab wants to turn several of its servers into a Kubernetes (K8s) cluster. The lab servers all run Ubuntu, so I first tried building a cluster on Ubuntu 18.04 VMs in VMware Workstation on my laptop. The build got stuck mainly at the `kubeadm init` step on the master node: the versions of the images I had pulled did not match the version `kubeadm init` expected. Many blog posts on K8s cluster setup pull images at a pinned version with `docker pull`, but then run `kubeadm init` without specifying a version. That worked when those posts were written, but as new releases came out, the latest version that `kubeadm init` defaults to no longer matches the image versions those posts pin. Below I walk through the setup with the version pinned everywhere.

Environment

Three Ubuntu 18.04 VMs, networked in bridged mode.
All steps below are run as root:
su root

Preparation

1. Set the master hostname
hostnamectl set-hostname master1

Verify:

k8smaster1@master1:~$ hostname
master1

Set the other two VMs accordingly:
hostnamectl set-hostname worker1
hostnamectl set-hostname worker2

Query each VM's IP address:
ip a s
IPs differ from machine to machine; in my case the three VMs got:
master1: 192.168.1.109, worker1: 192.168.1.121, worker2: 192.168.1.125

Then run sudo vim /etc/hosts on every machine and append the following lines to /etc/hosts:

192.168.1.109 master1
192.168.1.121 worker1
192.168.1.125 worker2
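The append can also be scripted with a here-doc. A minimal sketch (hosts.test is a made-up scratch stand-in for /etc/hosts, so the result can be inspected first; on the real machines, pipe the same here-doc through sudo tee -a /etc/hosts):

```shell
# Sketch: append the cluster entries to a hosts file.
# hosts.test is a scratch stand-in; for the real file, pipe the
# here-doc through "sudo tee -a /etc/hosts" instead.
HOSTS=hosts.test
cp /etc/hosts "$HOSTS" 2>/dev/null || : > "$HOSTS"
cat >> "$HOSTS" <<'EOF'
192.168.1.109 master1
192.168.1.121 worker1
192.168.1.125 worker2
EOF
grep worker2 "$HOSTS"    # confirm the entries landed
```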
2. Disable the firewall

ufw disable
ufw status

3. Disable swap

sudo swapoff -a (this must be re-run after every reboot)
To disable swap permanently instead:
sudo vim /etc/fstab
and comment out the swap line.
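The comment-out edit can be done non-interactively with sed. A sketch of the one-liner, demonstrated on a sample file (fstab.test is a made-up stand-in for /etc/fstab):

```shell
# Sketch: comment out the swap entry with sed, shown on a sample file.
# On the real machine this would be:
#   sudo sed -i.bak '/\sswap\s/ s/^[^#]/#&/' /etc/fstab
printf '%s\n' 'UUID=abcd / ext4 errors=remount-ro 0 1' \
              '/swapfile none swap sw 0 0' > fstab.test
sed -i '/\sswap\s/ s/^[^#]/#&/' fstab.test
grep swap fstab.test    # the swap line now starts with '#'
```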

Install Docker

Install Docker on both the master and the worker nodes:
sudo apt-get update
sudo apt-get remove docker docker-engine docker.io
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
docker --version

My Docker version is 19.03.6.

While you are at it, configure a Docker registry mirror and switch the Docker cgroup driver from "cgroupfs" to "systemd" (the driver the kubelet expects):
sudo vim /etc/docker/daemon.json

{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]   
}

sudo systemctl daemon-reload
sudo systemctl restart docker
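A syntax error in daemon.json will keep the Docker daemon from starting, so it is worth validating the file before the restart. A sketch (daemon.json.new is a made-up scratch name):

```shell
# Sketch: write the config to a scratch file and check that it parses
# as JSON before installing it over /etc/docker/daemon.json.
cat > daemon.json.new <<'EOF'
{
    "registry-mirrors": ["https://registry.docker-cn.com"],
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool daemon.json.new >/dev/null && echo "daemon.json OK"
# Then, as root:
#   install -m 644 daemon.json.new /etc/docker/daemon.json
#   systemctl daemon-reload && systemctl restart docker
```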

Install Kubernetes

Install curl:
sudo apt-get update && sudo apt-get install -y apt-transport-https curl

In China, add the repository key from the Aliyun mirror:
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
Create the kubernetes.list file (note: a plain `sudo cat <<EOF >file` would fail, because the redirection runs as your user rather than root, so pipe into sudo tee instead):

cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
EOF

Or edit the file directly:
sudo vim /etc/apt/sources.list.d/kubernetes.list
and add the line
deb http://mirrors.ustc.edu.cn/kubernetes/apt kubernetes-xenial main
Then update and install the prerequisites:
sudo apt-get update
sudo apt-get install software-properties-common

Install the K8s trio: kubelet, kubectl, and kubeadm.
To match the later kubeadm init, be sure to pin the version!
I use 1.18.0-00 here; run in order:
apt-get install -y kubelet=1.18.0-00
apt-get install -y kubectl=1.18.0-00
apt-get install -y kubeadm=1.18.0-00
sudo apt-mark hold kubelet kubeadm kubectl
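Since the three packages share one pinned version string, the install commands can be generated with a small loop to avoid typos. A sketch (install-k8s.sh is a made-up script name; review it, then run it as root):

```shell
# Sketch: generate the pinned install commands into a small script.
# K8S_VERSION matches the 1.18.0-00 used above.
K8S_VERSION=1.18.0-00
{
  for pkg in kubelet kubectl kubeadm; do
    echo "apt-get install -y $pkg=$K8S_VERSION"
  done
  echo "apt-mark hold kubelet kubeadm kubectl"
} > install-k8s.sh
cat install-k8s.sh    # review, then: sudo sh install-k8s.sh
```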

If you have installed other versions before, remove those packages and their configuration files first:
apt-get --purge remove kubelet
apt-get --purge remove kubectl
apt-get --purge remove kubeadm
then reinstall.

Verify the installation:
kubeadm version

Configure the Master Node

By default, kubeadm init pulls its images from k8s.gcr.io, which is unreachable from mainland China. Instead, we pull the images from the Aliyun mirror and re-tag them into the k8s.gcr.io/<name>:<version> form.

First, determine which image versions kubeadm needs:
kubeadm config images list

k8smaster1@master1:~$ kubeadm config images list
W0820 20:17:02.926297   58046 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.8
k8s.gcr.io/kube-controller-manager:v1.18.8
k8s.gcr.io/kube-scheduler:v1.18.8
k8s.gcr.io/kube-proxy:v1.18.8
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7

Pull every image on the list. Note that the list above shows v1.18.8 tags, because kubeadm defaults to the newest patch release of its minor version; since we will pass --kubernetes-version=v1.18.0 to kubeadm init below, we pull the v1.18.0 tags instead:

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0         
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0  
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0        
docker pull registry.aliyuncs.com/google_containers/etcd:3.4.3-0         
docker pull registry.aliyuncs.com/google_containers/coredns:1.6.7                                                  
docker pull registry.aliyuncs.com/google_containers/pause:3.2 

All of these images are now named registry.aliyuncs.com/google_containers/..., which is not what kubeadm expects. Re-tag each one with docker tag <old-name>:<tag> <new-name>:<tag>:

docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0 k8s.gcr.io/kube-apiserver:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0 k8s.gcr.io/kube-proxy:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0 k8s.gcr.io/kube-controller-manager:v1.18.0
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0 k8s.gcr.io/kube-scheduler:v1.18.0
docker tag registry.aliyuncs.com/google_containers/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag registry.aliyuncs.com/google_containers/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7
docker tag registry.aliyuncs.com/google_containers/pause:3.2 k8s.gcr.io/pause:3.2

Then delete the old tags:

docker rmi registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.0         
docker rmi registry.aliyuncs.com/google_containers/kube-proxy:v1.18.0
docker rmi registry.aliyuncs.com/google_containers/kube-controller-manager:v1.18.0  
docker rmi registry.aliyuncs.com/google_containers/kube-scheduler:v1.18.0        
docker rmi registry.aliyuncs.com/google_containers/etcd:3.4.3-0         
docker rmi registry.aliyuncs.com/google_containers/coredns:1.6.7                                                  
docker rmi registry.aliyuncs.com/google_containers/pause:3.2
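The 21 pull/tag/rmi commands above follow a single pattern, so they can be generated with a loop. A sketch (mirror-images.sh is a made-up name; the image list is hard-coded to the v1.18.0 tags used in this post, though on a live machine it could be derived from kubeadm config images list):

```shell
# Sketch: emit the pull/tag/rmi triple for each required image.
# MIRROR is the Aliyun mirror used above.
MIRROR=registry.aliyuncs.com/google_containers
for img in kube-apiserver:v1.18.0 kube-controller-manager:v1.18.0 \
           kube-scheduler:v1.18.0 kube-proxy:v1.18.0 \
           pause:3.2 etcd:3.4.3-0 coredns:1.6.7; do
  echo "docker pull $MIRROR/$img"
  echo "docker tag $MIRROR/$img k8s.gcr.io/$img"
  echo "docker rmi $MIRROR/$img"
done > mirror-images.sh
cat mirror-images.sh    # review, then: sh mirror-images.sh
```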

Finally, sudo docker images shows the local images (the quay.io/coreos/flannel images below come from a later step):

k8smaster1@master1:~$ sudo docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.18.0             43940c34f24f        4 months ago        117MB
k8s.gcr.io/kube-scheduler            v1.18.0             a31f78c7c8ce        4 months ago        95.3MB
k8s.gcr.io/kube-controller-manager   v1.18.0             d3e55153f52f        4 months ago        162MB
k8s.gcr.io/kube-apiserver            v1.18.0             74060cea7f70        4 months ago        173MB
quay.io/coreos/flannel               v0.12.0-amd64       4e9f801d2217        5 months ago        52.8MB
k8s.gcr.io/pause                     3.2                 80d28bedfe5d        6 months ago        683kB
k8s.gcr.io/coredns                   1.6.7               67da37a9a360        6 months ago        43.8MB
k8s.gcr.io/etcd                      3.4.3-0             303ce5db0e90        9 months ago        288MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        18 months ago       52.6MB

Initialize the cluster:

sudo kubeadm init --kubernetes-version=v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap

(The pod network CIDR must not overlap the service CIDR, and 10.244.0.0/16 is the subnet that flannel's default kube-flannel.yml expects.)

We pinned v1.18.0 here, and the local Docker repository contains every image that kubeadm v1.18.0 needs at the matching version. If initialization fails, see the appendix. On success, kubeadm prints output like the following; save the kubeadm join line, which is needed to add worker nodes later.
If you lose it, regenerate it with kubeadm token create --print-join-command.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.20.91.84:6443 --token ko3ba2.aj8t33vg32m7jkdm \
    --discovery-token-ca-cert-hash sha256:961e0744d2ba21b945f93cd8054526559fe54d7fa2778d58bf5d6095a2d7bdf0

Exit from root and, as a regular user, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Setting up the pod network requires downloading the quay.io/coreos/flannel image.
To avoid nodes stuck in NotReady because of slow downloads, in China you can pull through the USTC mirror:

docker pull quay.mirrors.ustc.edu.cn/coreos/flannel:v0.11.0-amd64
docker tag quay.mirrors.ustc.edu.cn/coreos/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64
docker rmi quay.mirrors.ustc.edu.cn/coreos/flannel:v0.11.0-amd64
docker images

Configure the pod network:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
If the download fails in the terminal, open the URL in a browser, paste the content into pastebin, take the raw-mode link, and curl it onto the Ubuntu machine:

touch kube-flannel.yml
curl -o kube-flannel.yml https://pastebin.com/raw/tyZNGNK4
kubectl apply -f kube-flannel.yml

On success it prints:

podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Verify:
kubectl get pods --all-namespaces
which outputs:

k8smaster1@master1:~$ su root
Password: 
root@master1:/home/k8smaster1# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY   STATUS              RESTARTS   AGE
kube-system   coredns-66bff467f8-msmv7          0/1     ContainerCreating   0          4h47m
kube-system   coredns-66bff467f8-r94d5          0/1     ContainerCreating   0          4h47m
kube-system   etcd-master1                      1/1     Running             12         5h34m
kube-system   kube-apiserver-master1            1/1     Running             10         5h34m
kube-system   kube-controller-manager-master1   1/1     Running             23         5h34m
kube-system   kube-flannel-ds-amd64-cpczq       0/1     CrashLoopBackOff    59         4h47m
kube-system   kube-flannel-ds-amd64-f4vfw       0/1     Init:0/1            0          4h9m
kube-system   kube-flannel-ds-amd64-tlms4       0/1     Init:0/1            0          4h46m
kube-system   kube-proxy-hbl28                  0/1     ContainerCreating   0          4h9m
kube-system   kube-proxy-hfhq7                  1/1     Running             2          4h47m
kube-system   kube-proxy-n7fpz                  0/1     ContainerCreating   0          4h46m
kube-system   kube-scheduler-master1            1/1     Running             16         5h34m

Configure the Worker Nodes

On each worker node, run the kubeadm join command printed at the end of kubeadm init (copy the token line verbatim):

sudo kubeadm join 172.20.91.84:6443 --token ko3ba2.aj8t33vg32m7jkdm \
    --discovery-token-ca-cert-hash sha256:961e0744d2ba21b945f93cd8054526559fe54d7fa2778d58bf5d6095a2d7bdf0

It prints:

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Back on the master node, run:
kubectl get nodes

which outputs:

root@master1:/home/k8smaster1# kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
master1   Ready      master   5h44m   v1.18.0
worker1   NotReady   <none>   4h56m   v1.18.0
worker2   NotReady   <none>   4h19m   v1.18.0

To see the details of a particular node:
kubectl describe node <node-name>
Setup complete.
END

Appendix

1. When kubeadm init fails

If a kubeadm init run fails, execute kubeadm reset before the next kubeadm init attempt. This command resets the node; think of it as cleaning up the environment left behind by the failed initialization.

kubeadm reset
sudo kubeadm init --kubernetes-version=v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=Swap

Always run kubeadm reset before re-running kubeadm init; skipping it leads to extra errors such as "xxx already exists" or "Port xxx is in use".

2. Reference blogs

https://blog.csdn.net/algzjh/article/details/102850510
https://www.cnblogs.com/zliW/p/12603536.html
https://blog.csdn.net/curry10086/article/details/107579113
https://blog.csdn.net/qq_39115567/article/details/97926351
https://docs.nvidia.com/datacenter/kubernetes/kubernetes-upstream/index.html
