I. Environment Preparation

1 Three machines, plus a Docker image registry server (here the registry shares the master host)

 	master   192.168.100.89  
	node2    192.168.100.91   
	node3    192.168.100.92  
	registry 192.168.100.89

2 Disable SELinux on all machines

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

3 Set the hostname on all three machines

hostnamectl set-hostname master    # use node2 / node3 on the other machines
echo "192.168.100.89 master" >> /etc/hosts
echo "192.168.100.91 node2" >> /etc/hosts
echo "192.168.100.92 node3" >> /etc/hosts

Also map the registry addresses that kubeadm init looks up to the local Docker registry, so that images can still be pulled locally even when the upstream registries are blocked:

echo "192.168.100.89 quay.io k8s.gcr.io gcr.io"  >> /etc/hosts

4 Disable swap

swapoff -a
Edit /etc/fstab and comment out the line containing swap:
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=20ca01ff-c5eb-47bc-99a0-6527b8cb246e /boot                   xfs     defaults        0 0
#/dev/mapper/centos-swap swap 

Check the result with the top command; the swap total should show 0.
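If you prefer explicit commands over top, the following optional check also confirms that no swap is active:

swapon -s    # should print nothing (no active swap devices)
free -m      # the Swap line should show 0 total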

5 Configure the yum repository

wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo 
yum makecache

6 Configure the Docker CE yum repository (yum-config-manager is provided by the yum-utils package)

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache

7 Install and start Docker

yum install docker-ce -y
systemctl start docker && systemctl enable docker

8 Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service

9 Create the local Docker registry on the registry host
Here the registry and the master share the same server.

docker pull registry
docker run --restart=always -d -p 80:5000 --hostname=my-registry --name my-registry -v /mnt/data/registry:/var/lib/registry registry
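An optional way to confirm the registry is reachable is to query the standard registry v2 HTTP API; on a fresh registry the catalog is empty:

curl http://192.168.100.89:80/v2/_catalog    # expect {"repositories":[]}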

10 Configure kernel parameters on every node so that traffic crossing the bridge also goes through the iptables/netfilter framework

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
vm.swappiness                       = 0
EOF

Apply the settings:

sysctl -p /etc/sysctl.d/k8s.conf
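Note that the net.bridge.* keys only exist when the br_netfilter kernel module is loaded; if sysctl reports that the keys are missing, load the module first (a commonly needed extra step, sketched here):

modprobe br_netfilter
lsmod | grep br_netfilter                                       # verify the module is loaded
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf     # load it automatically at boot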

11 Verify the iptables FORWARD policy
At some point Docker started setting the default policy of the iptables filter FORWARD chain to DROP:

[root@CentOS-7-2 ~]# iptables -vnL | grep FORWARD
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
[root@CentOS-7-2 ~]# systemctl start docker
[root@CentOS-7-2 ~]# iptables -vnL | grep FORWARD
Chain FORWARD (policy DROP 0 packets, 0 bytes)

Set the FORWARD policy to ACCEPT:

iptables -P FORWARD ACCEPT

To make this persistent across reboots, and since it has to run after the docker service starts, add a systemd unit that runs at boot:

cat > /usr/lib/systemd/system/forward-accept.service <<EOF
[Unit]
Description=set forward accept
After=docker.service
 
[Service]
ExecStart=/usr/sbin/iptables -P FORWARD ACCEPT
 
[Install]
WantedBy=multi-user.target
EOF
systemctl enable forward-accept && systemctl start forward-accept

12 Install and start the ntp service
Keep the clocks consistent across the cluster; otherwise all kinds of hard-to-diagnose problems can appear.

yum install -y ntp
systemctl start ntpd;systemctl enable ntpd
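An optional check that ntpd is actually synchronizing:

ntpq -p    # lists the peers in use; an asterisk marks the currently selected time source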

13 Make the cgroup driver of Docker match the kubelet's, and configure the local registry plus the registries kubeadm reads from by default
The kubelet defaults to systemd, while Docker defaults to cgroupfs.

Check Docker's cgroup driver:

docker info | grep "Cgroup Driver"
Cgroup Driver: cgroupfs

cat << EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries":["192.168.100.89:80", "quay.io", "k8s.gcr.io", "gcr.io"]
}
EOF
systemctl restart docker
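After the restart, a quick optional check confirms that both settings took effect (the exact docker info labels may vary slightly between Docker versions):

docker info | grep "Cgroup Driver"               # should now print: Cgroup Driver: systemd
docker info | grep -A 4 "Insecure Registries"    # should list 192.168.100.89:80, quay.io, k8s.gcr.io, gcr.io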

Once everything above is configured, it is best to reboot the machine.

II. Prepare the Images
kubeadm init downloads images from the default registries, so prepare the images for the matching version in advance.
The following steps only need to be run on one machine.

1 List the image dependencies for the target version
The version used here is v1.15.2.
List the required images:

kubeadm may not be installed yet at this point; if you want to run this check, first complete step III-1 below to install kubeadm, then run it.

kubeadm config images list --kubernetes-version=v1.15.2

Output:

k8s.gcr.io/kube-apiserver:v1.15.2
k8s.gcr.io/kube-controller-manager:v1.15.2
k8s.gcr.io/kube-scheduler:v1.15.2
k8s.gcr.io/kube-proxy:v1.15.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.3.10
k8s.gcr.io/coredns:1.3.1

2 Pull the images

docker pull mirrorgooglecontainers/kube-apiserver:v1.15.2
docker pull mirrorgooglecontainers/kube-proxy:v1.15.2
docker pull yonh/kube-controller-manager:v1.15.2
docker pull aiotceo/kube-scheduler:v1.15.2
docker pull coredns/coredns:1.3.1
docker pull mirrorgooglecontainers/etcd:3.3.10
docker pull mirrorgooglecontainers/pause:3.1
Pull the flannel image:
docker pull quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64

3 Re-tag the images for the local registry

docker tag mirrorgooglecontainers/kube-apiserver:v1.15.2 192.168.100.89:80/kube-apiserver:v1.15.2
docker tag mirrorgooglecontainers/kube-proxy:v1.15.2 192.168.100.89:80/kube-proxy:v1.15.2
docker tag yonh/kube-controller-manager:v1.15.2 192.168.100.89:80/kube-controller-manager:v1.15.2
docker tag aiotceo/kube-scheduler:v1.15.2 192.168.100.89:80/kube-scheduler:v1.15.2
docker tag coredns/coredns:1.3.1 192.168.100.89:80/coredns:1.3.1
docker tag mirrorgooglecontainers/etcd:3.3.10 192.168.100.89:80/etcd:3.3.10
docker tag mirrorgooglecontainers/pause:3.1 192.168.100.89:80/pause:3.1
docker tag quay-mirror.qiniu.com/coreos/flannel:v0.11.0-amd64 192.168.100.89:80/coreos/flannel:v0.11.0-amd64

4 Push the images to the local registry

docker push 192.168.100.89:80/kube-apiserver:v1.15.2
docker push 192.168.100.89:80/kube-proxy:v1.15.2
docker push 192.168.100.89:80/kube-controller-manager:v1.15.2
docker push 192.168.100.89:80/kube-scheduler:v1.15.2
docker push 192.168.100.89:80/coredns:1.3.1
docker push 192.168.100.89:80/etcd:3.3.10
docker push 192.168.100.89:80/pause:3.1
docker push 192.168.100.89:80/coreos/flannel:v0.11.0-amd64
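Optionally, verify that the images actually landed in the local registry using the registry v2 API:

curl http://192.168.100.89:80/v2/_catalog                    # should list all the repositories pushed above
curl http://192.168.100.89:80/v2/kube-apiserver/tags/list    # expect {"name":"kube-apiserver","tags":["v1.15.2"]}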

III. Install kubelet, kubeadm, and kubectl

1 Install from the Aliyun yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet
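The repository installs the latest version by default, while the images were prepared for v1.15.2, so you may want to pin the package versions and verify afterwards (a sketch; the versioned package names follow the usual yum name-version convention):

yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2   # optional: pin to match the prepared images
kubeadm version -o short                                      # e.g. v1.15.2
kubectl version --client --short
kubelet --version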

2 Initialize the cluster on the master node

Important: if the cluster uses flannel for pod networking, you must pass --pod-network-cidr, and the CIDR must match the one defined in the flannel YAML used in a later step.

kubeadm init --pod-network-cidr 10.244.0.0/16

Output after a successful run:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.89:6443 --token w5yoxp.a4g7fokmf4co1otq \
--discovery-token-ca-cert-hash sha256:351e1e5113e9b2c672280c4bc4f57a6c2defb6d289d03c94590d0710d2033873  

Copy the kubeadm join command printed at the end; it will be needed later.
To deploy a specific Kubernetes version, add --kubernetes-version=v1.15.2 to kubeadm init.
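If the join command is lost or the token expires (bootstrap tokens are valid for 24 hours by default), a new one can be generated on the master:

kubeadm token create --print-join-command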

3 Configure kubectl for a non-root user

 mkdir -p $HOME/.kube
 sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 sudo chown $(id -u):$(id -g) $HOME/.kube/config

4 Join the other two nodes to the cluster

kubeadm join 192.168.100.89:6443 --token w5yoxp.a4g7fokmf4co1otq \
--discovery-token-ca-cert-hash sha256:351e1e5113e9b2c672280c4bc4f57a6c2defb6d289d03c94590d0710d2033873 

Output:

[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5 Deploy the flannel network add-on
Check the node status on the master:

kubectl get nodes

NAME     STATUS     ROLES    AGE    VERSION
node2    NotReady   <none>   14s    v1.15.2
master   NotReady   master   172m   v1.15.2
node3    NotReady   <none>   11s    v1.15.2

The nodes are NotReady; apply the flannel manifest:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/62e44c867a2846fefb68bd5f178daf4da3095ccb/Documentation/kube-flannel.yml

You can also download the file first and make sure the network CIDR matches the --pod-network-cidr passed to kubeadm init:

  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }

After a short while the node status changes to Ready.
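An optional way to watch the cluster come up is to check the flannel pods and the node status:

kubectl get pods -n kube-system -o wide | grep flannel   # one kube-flannel-ds pod per node, all Running
kubectl get nodes                                        # all nodes should eventually show Ready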

IV. Common Issues

1 Reset a node (clean up kubeadm and kubelet state)

kubeadm reset
rm -rf $HOME/.kube/
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ip link delete cni0
ip link delete flannel.1
systemctl restart docker

2 Mark a node as unschedulable

kubectl cordon k8s-node-1
kubectl uncordon k8s-node-1       # undo

3 Drain the pods from a node

kubectl drain --ignore-daemonsets --delete-local-data k8s-node-1

4 Delete a node

kubectl delete node k8s-node-1

5 Allow the master to schedule pods

kubectl taint node xxx-nodename node-role.kubernetes.io/master-                # allow the master to run workloads like a regular node
kubectl taint node xxx-nodename node-role.kubernetes.io/master="":NoSchedule   # restore the master to master-only
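A quick way to confirm the current taints on the master (using the same placeholder node name as above):

kubectl describe node xxx-nodename | grep Taints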

6 "cannot allocate memory" errors caused by frequent pod creation
Follow option 2 of the article below and modify grub:
https://blog.csdn.net/qq_39382769/article/details/124812543

Edit /etc/default/grub and add cgroup.memory=nokmem to GRUB_CMDLINE_LINUX; the complete file looks like this:

GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=saved
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet cgroup.memory=nokmem"
GRUB_DISABLE_RECOVERY="true"

Regenerate the grub configuration:

/usr/sbin/grub2-mkconfig -o /boot/grub2/grub.cfg

Reboot the machine:

reboot 
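After the reboot, an optional check confirms the kernel picked up the new parameter:

grep cgroup.memory /proc/cmdline    # the boot command line should contain cgroup.memory=nokmem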