About K8S:

Kubernetes is a container cluster management system open-sourced by Google. Built on top of Docker, it provides a complete set of capabilities for containerized applications: resource scheduling, deployment and operation, service discovery, and scaling up and down. In essence, it can be seen as a mini-PaaS platform built on container technology.


Readers of my blog may recall that back in 2014 I published a post titled "Docker容器管理之Kubernetes" (Kubernetes for Docker container management). Docker was just taking off in China at the time and seemed to become popular overnight, which left a deep impression on me; I happened to come across K8S then, so I wrote a brief introduction to it and to deploying it from source. Things are very different today: K8S is now the front runner in the container world. Re-reading that old post inspired me to look into installing K8S with kubeadm and to preview the Dashboard.

Environment:

CentOS 7.4 minimal, Docker 1.13, kubeadm 1.10.0, etcd 3.0, Kubernetes 1.10.0

We use three nodes to build a test environment:

10.0.100.202 k8smaster

10.0.100.203 k8snode1

10.0.100.204 k8snode2

Prepare the environment (a sketch of the commands for steps 1-4 follows the sysctl snippet below):

1. Configure the hosts file on each node

2. Disable the system firewall

3. Disable SELinux

4. Disable swap

5. Configure kernel parameters so that traffic passing through the bridge also goes through the iptables/netfilter framework, by adding the following to /etc/sysctl.conf:


net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

sysctl -p
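For reference, here is a rough sketch of the commands behind steps 1-4 above; the host names and IPs match this demo environment, so adjust them to your own.

# 1. hosts entries on every node
cat >> /etc/hosts <<EOF
10.0.100.202 k8smaster
10.0.100.203 k8snode1
10.0.100.204 k8snode2
EOF

# 2. disable the firewall
systemctl stop firewalld && systemctl disable firewalld

# 3. disable SELinux (the sed makes it permanent after a reboot)
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

# 4. disable swap
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab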

Install with kubeadm:

1. First, configure the Aliyun K8S YUM repository


cat <<EOF > /etc/yum.repos.d/kubernetes.repo

[kubernetes]

name=Kubernetes

baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64

enabled=1

gpgcheck=0

EOF

yum -y install epel-release

yum clean all

yum makecache

2. Install kubeadm and related packages


yum -y install docker kubelet kubeadm kubectl kubernetes-cni

3. Start the Docker and kubelet services


systemctl enable docker && systemctl start docker

systemctl enable kubelet && systemctl start kubelet

Note: at this point the kubelet service is in an abnormal state because its main configuration file kubelet.conf is missing. You can ignore this for now; the file is only generated once the Master node has been initialized.
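If you want to confirm this, a quick check (the exact error text may vary slightly by version):

systemctl status kubelet    # shows activating (auto-restart) or failed before kubeadm init has run
journalctl -xeu kubelet     # the complaint about the missing kubelet.conf is expected at this stage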

4. Configure an image accelerator

Because gcr.io cannot be reached directly to pull images, you need to configure a domestic container registry mirror (accelerator).

To configure an Aliyun accelerator:

Log in to https://cr.console.aliyun.com/

Find and click the image accelerator button on the page to get your personal accelerator URL; select the CentOS tab to see the configuration instructions.

Note: when using Docker on Aliyun together with the Aliyun accelerator, you may run into a problem where daemon.json prevents the docker daemon from starting. It can be worked around as follows.


Edit the Docker sysconfig file:

vim /etc/sysconfig/docker

and set

OPTIONS='--selinux-enabled --log-driver=journald --registry-mirror=http://xxxx.mirror.aliyuncs.com'

where --registry-mirror is your own accelerator address. Finally, restart the daemon with service docker restart; then ps aux | grep docker will show the mirror option among the startup parameters.
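If you prefer to configure the accelerator through daemon.json instead, a minimal sketch looks like this (the mirror address is a placeholder for your own accelerator URL; on Docker 1.13 make sure it does not conflict with a --registry-mirror already set in /etc/sysconfig/docker, which is what causes the startup failure mentioned above):

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://xxxx.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker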

5. Download the K8S images

With the accelerator sorted out, start downloading the K8S images. After downloading, re-tag them with names beginning with k8s.gcr.io/ so that kubeadm can recognize and use them.


#!/bin/bash

images=(kube-proxy-amd64:v1.10.0 kube-scheduler-amd64:v1.10.0 kube-controller-manager-amd64:v1.10.0 kube-apiserver-amd64:v1.10.0

etcd-amd64:3.1.12 pause-amd64:3.1 kubernetes-dashboard-amd64:v1.8.3 k8s-dns-sidecar-amd64:1.14.8 k8s-dns-kube-dns-amd64:1.14.8

k8s-dns-dnsmasq-nanny-amd64:1.14.8)

for imageName in ${images[@]} ; do

docker pull keveon/$imageName

docker tag keveon/$imageName k8s.gcr.io/$imageName

docker rmi keveon/$imageName

done

The shell script above does three things: it pulls the required container images, re-tags them with the names and versions that match the k8s naming convention, and removes the old image tags.

Note: the image versions must match the version installed by kubeadm, otherwise you will run into a time out.
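A quick sanity check before running kubeadm init (optional, but cheap):

kubeadm version                      # confirm it reports v1.10.0
docker images | grep k8s.gcr.io      # image tags should match, e.g. kube-apiserver-amd64:v1.10.0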

6. Initialize the K8S Master

Run the shell script above and wait for the downloads to finish, then run kubeadm init:


[root@k8smaster ~]# kubeadm init --kubernetes-version=v1.10.0 --pod-network-cidr=10.244.0.0/16

[init] Using Kubernetes version: v1.10.0

[init] Using Authorization modes: [Node RBAC]

[preflight] Running pre-flight checks.

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

[WARNING FileExisting-crictl]: crictl not found in system path

Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

[preflight] Starting the kubelet service

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.100.202]

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated etcd/ca certificate and key.

[certificates] Generated etcd/server certificate and key.

[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]

[certificates] Generated etcd/peer certificate and key.

[certificates] etcd/peer serving cert is signed for DNS names [k8smaster] and IPs [10.0.100.202]

[certificates] Generated etcd/healthcheck-client certificate and key.

[certificates] Generated apiserver-etcd-client certificate and key.

[certificates] Generated sa key and public key.

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"

[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"

[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".

[init] This might take a minute or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 21.001790 seconds

[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[markmaster] Will mark node k8smaster as master by adding a label and a taint

[markmaster] Master k8smaster tainted and labelled with key/value: node-role.kubernetes.io/master=""

[bootstraptoken] Using token: thczis.64adx0imeuhu23xv

[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: kube-dns

[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.

Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node

as root:

kubeadm join 10.0.100.202:6443 --token thczis.64adx0imeuhu23xv --discovery-token-ca-cert-hash sha256:fa7b11bb569493fd44554aab0afe55a4c051cccc492dbdfafae6efeb6ffa80e6

Note: the --kubernetes-version=v1.10.0 option is required; without it kubeadm tries to look up the version from a Google site, which is blocked, and the command fails. We use v1.10.0 here; as mentioned earlier, the downloaded image versions must match the K8S version or you will get a time out.

The command takes roughly a minute; while it runs you can watch tail -f /var/log/messages to follow the progress. Save a copy of the last block of output above; you will need it later when adding worker nodes.
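If you lose that output, or the token expires (bootstrap tokens are valid for 24 hours by default), you can regenerate a complete join command on the Master:

kubeadm token create --print-join-command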

7. Configure kubectl credentials


# For non-root users

mkdir -p $HOME/.kube

sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

sudo chown $(id -u):$(id -g) $HOME/.kube/config

# For the root user

export KUBECONFIG=/etc/kubernetes/admin.conf

You can also add it directly to ~/.bash_profile:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

8. Install the flannel network


mkdir -p /etc/cni/net.d/

cat <<EOF> /etc/cni/net.d/10-flannel.conf

{

"name": "cbr0",

"type": "flannel",

"delegate": {

"isDefaultGateway": true

}

}

EOF

mkdir /usr/share/oci-umount/oci-umount.d -p

mkdir /run/flannel/

cat <<EOF> /run/flannel/subnet.env

FLANNEL_NETWORK=10.244.0.0/16

FLANNEL_SUBNET=10.244.1.0/24

FLANNEL_MTU=1450

FLANNEL_IPMASQ=true

EOF

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
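Once the manifest is applied, you can verify that flannel is up before joining the other nodes:

kubectl get pods -n kube-system -o wide    # the kube-flannel-ds pod should reach Running
kubectl get nodes                          # the master should move from NotReady to Ready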

9. Join node1 and node2 to the cluster

Run the kubeadm join command on node1 and node2 respectively to join them to the cluster:


[root@k8snode1 ~]# kubeadm join 10.0.100.202:6443 --token thczis.64adx0imeuhu23xv --discovery-token-ca-cert-hash sha256:fa7b11bb569493fd44554aab0afe55a4c051cccc492dbdfafae6efeb6ffa80e6

[preflight] Running pre-flight checks.

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

[WARNING FileExisting-crictl]: crictl not found in system path

Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

[discovery] Trying to connect to API Server "10.0.100.202:6443"

[discovery] Created cluster-info discovery client, requesting info from "https://10.0.100.202:6443"

[discovery] Requesting info from "https://10.0.100.202:6443" again to validate TLS against the pinned public key

[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "10.0.100.202:6443"

[discovery] Successfully established connection with API Server "10.0.100.202:6443"

This node has joined the cluster:

* Certificate signing request was sent to master and a response

was received.

* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Note: careful readers will have noticed that this is exactly the command I asked you to save after the K8S Master was initialized.

By default the Master node does not take on workloads, but if you want an All-In-One k8s environment you can run the following command to let the Master also act as a Node:


kubectl taint nodes --all node-role.kubernetes.io/master-
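Should you want to restore the default behaviour later, you can re-apply the taint (k8smaster is the node name used in this demo):

kubectl taint nodes k8smaster node-role.kubernetes.io/master=:NoSchedule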

10. Verify that the K8S Master was set up successfully


# Check node status

kubectl get nodes

# Check pod status

kubectl get pods --all-namespaces

# Check K8S cluster component status

kubectl get cs

Common errors

The most common error during installation is a time out, because the K8S images are hosted abroad. That is why we downloaded them in advance; alternatively, you can use a machine outside the firewall to set up a private registry with Harbor and pull all the images into it.
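If you go the private registry route, the workflow is roughly the following sketch (registry.example.com stands in for your own Harbor address; repeat for each image in the list from step 5):

# on a machine that can reach k8s.gcr.io
docker pull k8s.gcr.io/kube-apiserver-amd64:v1.10.0
docker tag k8s.gcr.io/kube-apiserver-amd64:v1.10.0 registry.example.com/k8s/kube-apiserver-amd64:v1.10.0
docker push registry.example.com/k8s/kube-apiserver-amd64:v1.10.0

# on each cluster node, pull from the private registry and tag back to the name kubeadm expects
docker pull registry.example.com/k8s/kube-apiserver-amd64:v1.10.0
docker tag registry.example.com/k8s/kube-apiserver-amd64:v1.10.0 k8s.gcr.io/kube-apiserver-amd64:v1.10.0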


[root@k8smaster ~]# kubeadm init

[init] Using Kubernetes version: v1.10.0

[init] Using Authorization modes: [Node RBAC]

[preflight] Running pre-flight checks.

[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'

[WARNING FileExisting-crictl]: crictl not found in system path

Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl

[preflight] Starting the kubelet service

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [k8smaster kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.0.100.202]

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated etcd/ca certificate and key.

[certificates] Generated etcd/server certificate and key.

[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]

[certificates] Generated etcd/peer certificate and key.

[certificates] etcd/peer serving cert is signed for DNS names [k8smaster] and IPs [10.0.100.202]

[certificates] Generated etcd/healthcheck-client certificate and key.

[certificates] Generated apiserver-etcd-client certificate and key.

[certificates] Generated sa key and public key.

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"

[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"

[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".

[init] This might take a minute or longer if the control plane images have to be pulled.

Unfortunately, an error has occurred:

timed out waiting for the condition

This error is likely caused by:

- The kubelet is not running

- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

- Either there is no internet connection, or imagePullPolicy is set to "Never",

so the kubelet cannot pull or find the following control plane images:

- k8s.gcr.io/kube-apiserver-amd64:v1.10.0

- k8s.gcr.io/kube-controller-manager-amd64:v1.10.0

- k8s.gcr.io/kube-scheduler-amd64:v1.10.0

- k8s.gcr.io/etcd-amd64:3.1.12 (only if no external etcd endpoints are configured)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

- 'systemctl status kubelet'

- 'journalctl -xeu kubelet'

couldn't initialize a Kubernetes cluster

This problem is mostly caused by a mismatch between the installed K8S version and the versions of the K8S images it depends on. To troubleshoot it, check /var/log/messages; as mentioned at the beginning of the installation, keep an eye on the logs.

Some readers may ask: if my installation fails, how do I clean up the environment and reinstall? Here is the one command you need:


kubeadm reset
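kubeadm reset does not necessarily clean up everything. If you also created the flannel configuration above, the following optional extra cleanup is commonly used before reinstalling; this is a hedged sketch, so skip anything you did not configure:

rm -rf /etc/cni/net.d /run/flannel $HOME/.kube
iptables -F && iptables -t nat -F
systemctl restart docker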

That completes the installation and deployment of the three-node K8S cluster.

This article was reposted from the Kubernetes Chinese community: 使用kubeadm安装Kubernetes v1.10以及常见问题解答 (Installing Kubernetes v1.10 with kubeadm, with answers to common problems).
