
01 - Initializing a K8S Cluster with Kubeadm

This article walks through initializing a Kubernetes cluster with kubeadm (Kubernetes version v1.18.2).

I. Kubernetes

1.1 Architecture Diagram

(Figure: Kubernetes architecture diagram)

1.2 Overall Cluster Deployment Architecture

(Figure: cluster deployment architecture diagram)

II. K8S Installation Methods

The Kubernetes project is hosted on GitHub at https://github.com/kubernetes
A K8S cluster is usually installed in one of two ways: binary deployment & kubeadm.

2.1 Binary Deployment

  • Suited for production. You must install every control-plane component on the Master yourself, likewise install the core components on each Node, and manually configure multiple sets of CA certificates. The process is very tedious; a later article will lay out the detailed steps.

2.2 Kubeadm, the Official K8S Cluster Management Tool

  • It pulls the components you would otherwise deploy by hand as container images and runs them locally. The certificates it generates expire after a fixed period (one year by default), so kubeadm is best suited for learning K8S. You only need to install kubelet and docker, then install kubeadm on every Master and Node host; kubeadm init turns the first host into a Master, and kubeadm join initializes the remaining hosts as Nodes and joins them to the cluster.
  • GitHub project: https://github.com/kubernetes/kubeadm
2.2.1 What Is Kubeadm
  • Kubeadm is a tool built to provide a best-practice "fast path" for creating Kubernetes clusters. It performs the actions necessary to get a minimum viable, secure cluster up and running in a user-friendly way. Kubeadm's scope is limited to the local node filesystem and the Kubernetes API, and it is intended to be a composable building block for higher-level tools.
  • GitHub design doc: https://github.com/kubernetes/kubeadm/blob/master/docs/design/design_v1.10.md
2.2.2 Initializing K8S with Kubeadm

Only three hosts are used for this demo; the kubeadm version is 1.18.2.

Host / IP                Hostname
master / 192.168.20.236  k8s.master1
node1  / 192.168.20.212  k8s.node1
node2  / 192.168.20.214  k8s.node2
2.2.2.1 Environment Initialization
I. Server initialization
1. Initialize the hosts file on each node (note: underscores are not valid in hostnames)
    master
    ~]# hostnamectl set-hostname k8s.master1
    ~]# cat /etc/hosts
        192.168.20.236 k8s.master1 k8s.master.apiserver
        192.168.20.212 k8s.node1
        192.168.20.214 k8s.node2
 
     
    node1
    ~]# hostnamectl set-hostname k8s.node1
 
    node2
    ~]# hostnamectl set-hostname k8s.node2
 
 
2. Disable the firewall
    ~]# chkconfig iptables off  && iptables -F
    ~]# systemctl stop firewalld
    ~]# systemctl disable firewalld
 
 
3. Disable SELinux
    ~]# setenforce 0
    ~]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
 
4. Disable swap
    ~]# swapoff -a
    ~]# yes | cp /etc/fstab /etc/fstab_bak
    ~]# cat /etc/fstab_bak |grep -v swap > /etc/fstab
 
5. Time synchronization
    ~]# yum install ntpdate -y
    ~]# systemctl enable ntpdate.service
    ~]# echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
    ~]# crontab /tmp/crontab2.tmp
    ~]# systemctl start ntpdate.service
    ~]# ntpdate -u ntp.api.bz
 
6. File descriptor limits
    ~]# echo "* soft nofile 65536" >> /etc/security/limits.conf
    ~]# echo "* hard nofile 65536" >> /etc/security/limits.conf
    ~]# echo "* soft nproc 65536"  >> /etc/security/limits.conf
    ~]# echo "* hard nproc 65536"  >> /etc/security/limits.conf
    ~]# echo "* soft  memlock  unlimited"  >> /etc/security/limits.conf
    ~]# echo "* hard memlock  unlimited"  >> /etc/security/limits.conf
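The six echo appends above can be collected into one small script and verified. This is an illustrative sketch: LIMITS points at a temporary file here so it is safe to run anywhere; on a real host, point it at /etc/security/limits.conf instead.

```shell
# Sketch: write the step-6 ulimit settings and verify them.
# LIMITS targets a temp file for illustration; on a real host use
# /etc/security/limits.conf instead.
LIMITS=$(mktemp)

cat >> "$LIMITS" <<'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF

# All six entries (each starting with '*') should now be present:
grep -c '^\*' "$LIMITS"   # prints 6
```

Note that limits.conf changes only take effect for new login sessions; check with `ulimit -n` after logging in again.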
 
7. Kernel parameter tuning (optional)
~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same name in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
vm.swappiness = 0
net.ipv4.neigh.default.gc_stale_time=120
net.ipv4.ip_forward = 1
# see details in https://help.aliyun.com/knowledge_detail/39428.html
net.ipv4.conf.all.rp_filter=0
net.ipv4.conf.default.rp_filter=0
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_announce=2
net.ipv4.conf.all.arp_announce=2
# # see details in https://help.aliyun.com/knowledge_detail/41334.html
net.ipv4.tcp_max_tw_buckets = 5000
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 1024
net.ipv4.tcp_synack_retries = 2
kernel.sysrq = 1
# # make traffic crossing Linux bridges visible to iptables
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
 
~]# modprobe br_netfilter
~]# sysctl -p
2.2.2.2 Install kubelet, kubeadm, docker, and kubectl on All Nodes
  • Aliyun mirror instructions for installing kubelet/kubeadm/kubectl: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11auUzVs
II. Master and Node hosts: install kubelet, kubeadm, docker, and kubectl (kubectl is the apiserver command-line client and may be skipped on Nodes)
    1. Install docker on every node
    yum.repos.d]# pwd
        /etc/yum.repos.d
    yum.repos.d]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo     # docker yum repo from the Aliyun mirror
    yum.repos.d]# yum install docker-ce -y
 
2. Modify the docker config file (registry mirrors are needed because Docker Hub can be slow or unreachable from some networks; the cgroup driver is set to systemd to match kubelet)
~]# cat /etc/docker/daemon.json
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn"]
}
# Note: JSON does not allow inline comments, so keep daemon.json free of them.
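Since daemon.json must be strictly valid JSON, it is worth validating it before restarting docker. A quick sketch using Python's json.tool (CONF is a temp copy for illustration; on a real host run `python3 -m json.tool /etc/docker/daemon.json` directly):

```shell
# Validate a daemon.json candidate before installing it.
# CONF is a temp file for illustration only.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": { "max-size": "100m" },
    "storage-driver": "overlay2",
    "registry-mirrors": ["https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn"]
}
EOF

# json.tool exits non-zero (and prints the error) on invalid JSON
python3 -m json.tool "$CONF" > /dev/null && echo "daemon.json OK"
```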
 
3. Restart docker and verify
   yum.repos.d]# systemctl restart docker
   yum.repos.d]# docker info     # confirm the settings took effect
   Aliyun Kubernetes repo mirror: https://developer.aliyun.com/mirror/kubernetes?spm=a2c6h.13651102.0.0.3e221b11xaCXyk
 
4. Configure the Aliyun K8S yum repo on each node and install K8S
yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

    # kubectl: the apiserver command-line client; optional on worker nodes
    # install the latest version:     yum.repos.d]# yum install -y kubelet kubeadm kubectl
    # or pin the version used here:   yum install -y kubelet-1.18.2 kubeadm-1.18.2 kubectl-1.18.2
 
 
5. Make sure the bridge-to-iptables flags are enabled on the master
 ~]# cat /proc/sys/net/bridge/bridge-nf-call-iptables
    1
 ~]# cat /proc/sys/net/bridge/bridge-nf-call-ip6tables
    1
 
 
6. Inspect the files installed by the kubelet package and edit its config
~]# rpm -ql kubelet
    /etc/kubernetes/manifests    # static pod manifest directory
    /etc/sysconfig/kubelet       # config file
    /usr/bin/kubelet             # binary
    /usr/lib/systemd/system/kubelet.service  # systemd unit file
  
    ~]# vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf    (# otherwise init fails with: kubelet 1.18.2 node_container_manager_linux.go:57] Failed to create ["kubepods"] cgroup)
    ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --feature-gates SupportPodPidsLimit=false --feature-gates SupportNodePidsLimit=false

--------------------------------------------------------
# To avoid pitfalls, you can simply copy the kubelet drop-in below verbatim:
~]# vi /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf

[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
EnvironmentFile=-/etc/sysconfig/kubelet
ExecStart=
#ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --feature-gates SupportPodPidsLimit=false --feature-gates SupportNodePidsLimit=false
Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=systemd"
~]# systemctl daemon-reload  # reload the service unit
~]# systemctl restart kubelet # restart the service
--------------------------------------------------------
 
7. Enable kubelet and docker at boot on the master
    ~]# systemctl start kubelet         # errors out until the cluster is initialized; that is expected
    ~]# systemctl start docker 
    ~]# systemctl enable kubelet   
    ~]# systemctl enable docker
2.2.2.3 Initialize the Cluster on the Master: kubeadm init
  • Initialization breaks down into three steps:
    • Initialize the first control-plane node
    • Set up the admin kubeconfig for kubectl
    • Deploy a network add-on
  • Run kubeadm init with the parameters below

Notes on the parameters used during initialization:
--kubernetes-version: the K8S version to deploy;
--apiserver-advertise-address: the IP address kube-apiserver advertises, i.e. the master's own IP;
--pod-network-cidr: the Pod network range, here 10.244.0.0/16 (flannel's default);
--service-cidr: the Service (SVC) network range;
--image-repository: the Aliyun image registry to pull the control-plane images from

III. Master node: initialize the cluster with kubeadm init
	1. kubeadm init parameters
		kubeadm init --help    # show help
			--apiserver-advertise-address string  # the address the apiserver advertises; defaults to listening on 0.0.0.0
			--apiserver-bind-port  string         # the port the apiserver listens on, default 6443
			--cert-dir	string					  # directory to load certificates from, default /etc/kubernetes/pki
			--ignore-preflight-errors  string     # errors to ignore during preflight checks
										swap     # ignore swap
										all      # ignore everything
			--kubernetes-version  string	     # K8S version (default "stable-1")
			--pod-network-cidr string            # network range used by pods
			--service-cidr string  				 # network range used by services


	2. Initialize
 		docker]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.2 --apiserver-advertise-address 0.0.0.0 --pod-network-cidr=10.244.0.0/16 --token-ttl 0

		# Success -- save the kubeadm join token below; the worker nodes need it
			Your Kubernetes control-plane has initialized successfully!
			To start using your cluster, you need to run the following as a regular user:
  				mkdir -p $HOME/.kube
  				sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  				sudo chown $(id -u):$(id -g) $HOME/.kube/config
			You should now deploy a pod network to the cluster.
			Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  				https://kubernetes.io/docs/concepts/cluster-administration/addons/
			Then you can join any number of worker nodes by running the following on each as root:
				kubeadm join 192.168.20.236:6443 --token m7yu3o.iceuvk7tynjcsl11 \
   				 --discovery-token-ca-cert-hash sha256:d4efbb35b4910b27aedff1c7dcca2a064db218641c24578242620ccd8e625a44 
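The join command above is printed only once. If it is lost (or, without --token-ttl 0 as used here, once the token expires after 24 hours), a fresh join command can be generated on the master with a standard kubeadm subcommand:

```shell
# On the master: print a complete, fresh "kubeadm join" command,
# including a new bootstrap token and the CA certificate hash.
kubeadm token create --print-join-command

# List the bootstrap tokens that currently exist
kubeadm token list
```

Both subcommands require a running control plane, so run them only after a successful kubeadm init.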

	4. Generate the kubectl admin config
		# It is recommended to run subsequent commands as a regular Linux user (create the user, set a password, and grant sudo:  echo "username ALL=(ALL:ALL) NOPASSWD:ALL" > /etc/sudoers.d/username)
		[root@k8s ~]# mkdir -p $HOME/.kube
		[root@k8s ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
		[root@k8s ~]# chown $(id -u):$(id -g) $HOME/.kube/config
		# Verify by listing the cluster nodes
		~]# kubectl get nodes     
			NAME          STATUS     ROLES    AGE   VERSION
			k8s.master1   NotReady   master   14h   v1.18.2

	5. Deploy the flannel network add-on
	# Kubernetes supports many third-party pod network implementations as add-ons; well-known ones include flannel and calico. flannel is used here as the example.
	# To simplify flannel deployment (and to work around images that cannot be pulled for the usual unspeakable reasons), I have prepared the manifest on GitHub; download the yaml below to the master node
	# https://github.com/Dz6666/Kubernetes_summary/blob/master/01-Kubeadm%E9%83%A8%E7%BD%B2K8S/kube-flannel-mage.yaml
	~]# wget https://raw.githubusercontent.com/Dz6666/Kubernetes_summary/master/01-Kubeadm%E9%83%A8%E7%BD%B2K8S/kube-flannel.yaml     # note: fetch the raw file; the github.com/.../blob/... URL returns an HTML page, not YAML
	~]# kubectl apply -f kube-flannel.yaml

	6. Fix coredns not running
# If the coredns pods fail to run after kubeadm init, fix them as follows
				# Note: in my case the coredns pods were Pending, so I simply edited the coredns config to remove the loop plugin so the pods could resolve upstream. If the status is CrashLoopBackOff, first remove loop, then delete the stuck pods with kubectl delete pod coredns-xxx-xxxx     -n kube-system; the Deployment recreates them automatically
				k8s]# kubectl edit cm coredns -n kube-system
					# delete the loop line (line 22 in my file)
				# Restart the coredns pods (by deleting them first)
					k8s]# kubectl get pods -n kube-system
						NAME                                  READY   STATUS    RESTARTS   AGE
						coredns-7ff77c879f-2sn2f              1/1     Running   0          14h
						coredns-7ff77c879f-8qqdb              1/1     Running   0          14h
						etcd-k8s.master1                      1/1     Running   0          14h
						kube-apiserver-k8s.master1            1/1     Running   0          14h
						kube-controller-manager-k8s.master1   1/1     Running   0          14h
						kube-flannel-ds-amd64-9ck78           1/1     Running   0          6m6s
						kube-proxy-4p84s                      1/1     Running   0          14h
						kube-scheduler-k8s.master1            1/1     Running   0          14h
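For reference, the loop plugin being removed lives in the Corefile key of the coredns ConfigMap. A typical v1.18-era default Corefile looks roughly like this (the exact contents vary by version); the marked line is the one to delete:

```
.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
    }
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop         # <- delete this line
    reload
    loadbalance
}
```

The loop plugin halts coredns when it detects a forwarding loop (commonly caused by /etc/resolv.conf on the host pointing back at coredns itself); removing it hides the symptom, which is acceptable in a lab setup.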
2.2.2.4 Join the Worker Nodes to the Master's Cluster: kubeadm join
IV. Join the worker nodes to the cluster with kubeadm join
	# The kubeadm join command below comes from the success output of the master's init


1. Join the worker nodes to the cluster
	 ~]# systemctl enable docker && systemctl enable kubelet.service
	~]# kubeadm join 192.168.20.236:6443 --token m7yu3o.iceuvk7tynjcsl11 \
>     --discovery-token-ca-cert-hash sha256:d4efbb35b4910b27aedff1c7dcca2a064db218641c24578242620ccd8e625a44 
W0425 15:43:49.626329   12176 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


2. On the master, list the nodes to make sure every node has joined the cluster
	k8s]# kubectl get nodes
		NAME          STATUS     ROLES    AGE   VERSION
		k8s.master1   Ready      master   14h   v1.18.2
		k8s.node1     NotReady   <none>   87s   v1.18.2
		k8s.node2     NotReady   <none>   11s   v1.18.2


3. Master --> fix worker nodes stuck in NotReady (they would not go Ready for quite a while)
	# Check the system pods in the kube-system namespace
		k8s]# kubectl get pod -n kube-system
			NAME                                  READY   STATUS    RESTARTS   AGE
			coredns-7ff77c879f-2sn2f              1/1     Running   0          15h
			coredns-7ff77c879f-8qqdb              1/1     Running   0          15h
			etcd-k8s.master1                      1/1     Running   0          15h
			kube-apiserver-k8s.master1            1/1     Running   0          15h
			kube-controller-manager-k8s.master1   1/1     Running   0          15h
			kube-flannel-ds-amd64-9ck78           1/1     Running   0          26m
			kube-flannel-ds-amd64-hblzk           0/1     Pending   0          11m
			kube-flannel-ds-amd64-vm82d           0/1     Pending   0          12m
			kube-proxy-4p84s                      1/1     Running   0          15h
			kube-proxy-kxw2v                      0/1     Pending   0          12m
			kube-proxy-sptb7                      0/1     Pending   0          11m
			kube-scheduler-k8s.master1            1/1     Running   0          15h


 		# On each worker node, create the kubeconfig directory first
 		~]# mkdir /root/.kube
 		# Then copy the master's admin.conf from the master to each node
		k8s]# scp /etc/kubernetes/admin.conf root@192.168.20.214:/root/.kube/config
		k8s]# scp /etc/kubernetes/admin.conf root@192.168.20.212:/root/.kube/config
		
		# Check the running pods again
			k8s]# kubectl get pod -n kube-system
				NAME                                  READY   STATUS    RESTARTS   AGE
				coredns-7ff77c879f-2sn2f              1/1     Running   0          15h
				coredns-7ff77c879f-8qqdb              1/1     Running   0          15h
				etcd-k8s.master1                      1/1     Running   0          15h
				kube-apiserver-k8s.master1            1/1     Running   0          15h
				kube-controller-manager-k8s.master1   1/1     Running   0          15h
				kube-flannel-ds-amd64-9ck78           1/1     Running   0          46m
				kube-flannel-ds-amd64-hblzk           1/1     Running   0          31m
				kube-flannel-ds-amd64-vm82d           1/1     Running   0          32m
				kube-proxy-4p84s                      1/1     Running   0          15h
				kube-proxy-kxw2v                      1/1     Running   0          32m
				kube-proxy-sptb7                      1/1     Running   0          31m
				kube-scheduler-k8s.master1            1/1     Running   0          15h

 		# Check the node status again
			k8s]# kubectl get nodes
				NAME          STATUS   ROLES    AGE   VERSION
				k8s.master1   Ready    master   15h   v1.18.2
				k8s.node1     Ready    <none>   33m   v1.18.2
				k8s.node2     Ready    <none>   31m   v1.18.2
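With all nodes Ready, a quick smoke test confirms the cluster can actually schedule and run a workload. This is a generic check, not part of the original walkthrough; nginx-test is an arbitrary name chosen here:

```shell
# Smoke test: run a throwaway nginx deployment and expose it in-cluster
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80

# The pod should be scheduled onto one of the worker nodes
kubectl get pods -l app=nginx-test -o wide

# Clean up when done
kubectl delete service nginx-test
kubectl delete deployment nginx-test
```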