Preface

  • Deployed with kubeadm
  • Docker as the container runtime
  • nginx reverse-proxying the apiservers for control-plane high availability
  • Ubuntu 20.04
  • External etcd cluster

1. Environment Preparation

Eight Ubuntu VMs were prepared for this deployment: 3 master nodes, 3 worker nodes, 1 load-balancer node, and 1 harbor node *(normally you would add a second load-balancer node and use keepalived for HA, but the cloud hosts I used this time cannot run keepalived)*.
The detailed preparation steps are covered in an earlier post; to make passwordless SSH convenient, here is a small script as well:

root@master1:~# cat ssh-ssh.sh 
#!/bin/bash
ssh-keygen -t rsa -P "" -f /root/.ssh/id_rsa -q
for host in  `awk '{print $1}' /etc/hosts`
do
    expect -c "
    spawn ssh-copy-id -i /root/.ssh/id_rsa.pub root@${host}
        expect {
                *yes/no* {send -- \"yes\r\"; exp_continue}
                *assword* {send zettakit\r; exp_continue}
               }"
done
root@master1:~# sh ssh-ssh.sh

2. Installing the etcd Cluster

The etcd cluster reuses the 3 master nodes.

root@master1:~# cat  /etc/hosts 
10.10.21.170	master1
10.10.21.172	master2
10.10.21.175	master3
10.10.21.171	node1
10.10.21.173	node2
10.10.21.176	node3
10.10.21.178	kubeapi
10.10.21.174	harbor

Download the etcd release tarball:

root@master1:~# wget https://github.com/etcd-io/etcd/releases/download/v3.3.5/etcd-v3.3.5-linux-amd64.tar.gz
root@master1:~#  mkdir -p /etc/etcd/pki 
root@master1:~# tar -xf etcd-v3.3.5-linux-amd64.tar.gz  &&  mv etcd-v3.3.5-linux-amd64 /etc/etcd/

Use the cfssl tool to create private certificates in the /etc/etcd/pki directory.
For the detailed procedure, see the certificate section of the harbor post.
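For reference, here is a minimal sketch of the CA setup with cfssl. The profile names (`server`, `peer`, `client`) and the expiry values are assumptions, chosen only to match the certificate files referenced later in this post:

```shell
cd /etc/etcd/pki

# CA signing policy: one profile per certificate type (profile names are assumptions)
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "server": { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] },
      "peer":   { "expiry": "87600h", "usages": ["signing", "key encipherment", "server auth", "client auth"] },
      "client": { "expiry": "87600h", "usages": ["signing", "key encipherment", "client auth"] }
    }
  }
}
EOF

# CA CSR; the subject fields mirror the etcd-csr.json below
cat > ca-csr.json <<EOF
{
  "CN": "etcd-ca",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "ST": "HuBei", "L": "WuHan", "O": "etcd", "OU": "org" }]
}
EOF

# generate ca.pem and ca-key.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
```

cfssl and cfssljson must already be installed for this to work.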

root@master1:~ # cd /etc/etcd/pki
root@master1:/etc/etcd/pki # cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "10.10.21.170",
    "10.10.21.172",
    "10.10.21.175"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "HuBei",
      "L": "WuHan",
      "O": "etcd",
      "OU": "org"
    }
  ]
}
EOF

root@master1:/etc/etcd/pki # cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=peer etcd-csr.json | cfssljson -bare peer		#generate the peer certificate; run the same command again with -profile=server ... -bare server and -profile=client ... -bare client to produce the server.pem and client.pem used below
  • the hosts list must include the addresses of all etcd nodes

Edit the etcd systemd service file. To help avoid problems, I have annotated the error-prone lines; remove the comments when actually deploying.

root@master1:~# cat /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
ExecStart=/etc/etcd/etcd-v3.3.5-linux-amd64/etcd \   #path to the directory containing the etcd binary
  --name=master1 \							# must match this node's name, otherwise etcd errors out
  --cert-file=/etc/etcd/pki/server.pem \	# certificate paths must match the files on disk, or etcd won't start
  --key-file=/etc/etcd/pki/server-key.pem \
  --peer-cert-file=/etc/etcd/pki/peer.pem \
  --peer-key-file=/etc/etcd/pki/peer-key.pem \
  --trusted-ca-file=/etc/etcd/pki/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/pki/ca.pem \
  --initial-advertise-peer-urls=https://10.10.21.170:2380 \  #this node's IP address
  --listen-peer-urls=https://10.10.21.170:2380 \			#this node's IP address
  --listen-client-urls=https://10.10.21.170:2379 \			#this node's IP address
  --advertise-client-urls=https://10.10.21.170:2379 \		#this node's IP address
  --initial-cluster-token=etcd-cluster-0 \					#cluster token, any name you like
  --initial-cluster=master1=https://10.10.21.170:2380,master2=https://10.10.21.172:2380,master3=https://10.10.21.175:2380 \					#again, node names must correspond to their IPs
  --initial-cluster-state=new \
  --data-dir=/data/etcd \
  --snapshot-count=50000 \
  --auto-compaction-retention=1 \
  --max-request-bytes=10485760 \
  --quota-backend-bytes=8589934592
Restart=always
RestartSec=15
LimitNOFILE=65536
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

Copy the files to the other nodes:

root@master1:~# for i in master2 master3;do scp -r /etc/etcd $i:/etc/ ;scp /lib/systemd/system/etcd.service $i:/lib/systemd/system/etcd.service ;done

After adjusting the configuration on each node as required, start etcd:

root@master1:~# systemctl daemon-reload && systemctl enable --now etcd
root@master1:~# for i in master2 master3;do ssh $i systemctl daemon-reload ;ssh $i systemctl enable --now etcd ;done

Query the etcd cluster status:

root@master1:~#  export NODE_IPS="10.10.21.170 10.10.21.172 10.10.21.175"
root@master1:~# for ip in ${NODE_IPS};do ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --write-out=table endpoint status --endpoints=https://${ip}:2379 --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem;done
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.170:2379 | 3f5dcb4f9728903b |   3.3.5 |  3.0 MB |     false |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.172:2379 | 13dde2c0d8695730 |   3.3.5 |  3.0 MB |      true |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
| https://10.10.21.175:2379 | 6acd32f3e7cb1ab7 |   3.3.5 |  3.0 MB |     false |        32 |    1239646 |
+---------------------------+------------------+---------+---------+-----------+-----------+------------+
root@master1:/opt# ETCDCTL_API=3  /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem --endpoints="https://10.10.21.170:2379,https://10.10.21.172:2379,https://10.10.21.175:2379" endpoint health --write-out=table
https://10.10.21.172:2379 is healthy: successfully committed proposal: took = 577.956µs
https://10.10.21.175:2379 is healthy: successfully committed proposal: took = 1.122021ms
https://10.10.21.170:2379 is healthy: successfully committed proposal: took = 1.013689ms
# make sure the etcdctl path, the certificate paths, and the etcd endpoints are all correct here
root@master1:~# ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl --endpoints="10.10.21.170:2379,10.10.21.172:2379,10.10.21.175:2379" --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem  endpoint status --write-out=table
+-------------------+------------------+---------+---------+-----------+-----------+------------+
|     ENDPOINT      |        ID        | VERSION | DB SIZE | IS LEADER | RAFT TERM | RAFT INDEX |
+-------------------+------------------+---------+---------+-----------+-----------+------------+
| 10.10.21.170:2379 | 3f5dcb4f9728903b |   3.3.5 |  7.5 MB |     false |       112 |    5704640 |
| 10.10.21.172:2379 | 13dde2c0d8695730 |   3.3.5 |  7.5 MB |     false |       112 |    5704640 |
| 10.10.21.175:2379 | 6acd32f3e7cb1ab7 |   3.3.5 |  7.5 MB |      true |       112 |    5704640 |
+-------------------+------------------+---------+---------+-----------+-----------+------------+

etcd deployment is now complete.
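As an extra sanity check, a write/read round trip through two different endpoints confirms the cluster actually commits data (paths match this deployment):

```shell
# write a key through one endpoint...
ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl \
  --endpoints=https://10.10.21.170:2379 \
  --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem \
  put /smoketest ok

# ...and read it back through another
ETCDCTL_API=3 /etc/etcd/etcd-v3.3.5-linux-amd64/etcdctl \
  --endpoints=https://10.10.21.172:2379 \
  --cacert=/etc/etcd/pki/ca.pem --cert=/etc/etcd/pki/server.pem --key=/etc/etcd/pki/server-key.pem \
  get /smoketest
```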

3. Configuring the nginx Layer-4 Proxy

Install nginx and edit the configuration file:

root@lb:~# apt-get install nginx -y
root@lb:~# egrep -v "^#|^$" /etc/nginx/nginx.conf 
user www-data;
worker_processes 2;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
events {
	worker_connections 768;
	# multi_accept on;
}
stream {
    upstream backend {
        hash $remote_addr consistent;
        server 10.10.21.170:6443        max_fails=3 fail_timeout=30s;  #these three lines proxy the apiservers; after 3 failed connects a backend is taken out for 30s
        server 10.10.21.172:6443        max_fails=3 fail_timeout=30s;
        server 10.10.21.175:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 6443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
    upstream dashboard {
        server 10.10.21.170:40000       max_fails=3 fail_timeout=30s;  #these six lines proxy the dashboard; delete this upstream if you don't need it
        server 10.10.21.172:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.175:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.171:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.173:40000       max_fails=3 fail_timeout=30s;
        server 10.10.21.176:40000       max_fails=3 fail_timeout=30s;
    }
    server {
        listen 40000;
        proxy_connect_timeout 1s;
        proxy_pass dashboard;
    }
}
root@lb:~# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
root@lb:~# systemctl restart nginx		#restart nginx to apply the configuration once the syntax check passes
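A quick way to confirm the proxy is actually listening on both ports (the backends will only answer once the apiservers are up later):

```shell
root@lb:~# ss -lntp | grep -E ':(6443|40000)'
```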

A note on the Red Hat family

The nginx installed directly with yum on CentOS does not include the stream module by default; to use the same approach there, you can build nginx from source:

yum -y install pcre-devel zlib-devel gcc gcc-c++ make
useradd -r nginx -M -s /sbin/nologin
wget http://nginx.org/download/nginx-1.16.1.tar.gz
tar xf nginx-1.16.1.tar.gz  && cd /nginx-1.16.1/
./configure  --prefix=/opt \	#nginx install prefix
--user=nginx \					#run-as user
--group=nginx \					#run-as group
--with-stream \					#build the stream module
--without-http \
--without-http_uwsgi_module \	
--with-http_stub_status_module	#enable http_stub_status_module for status statistics
make && make install

Alternatively, install the module packages:

yum -y install nginx
yum -y install nginx-all-modules

4. Install Docker on All k8s Nodes and Adjust the Configuration

#Ubuntu 20.04 can install docker from the built-in repositories

root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt update;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt -y install docker.io;done

If that fails, you can also install docker by following the Aliyun mirror guide.
After installation, configure registry mirrors and set the cgroup driver to systemd (remove the # comments before saving; JSON does not allow comments):

root@master1:~# cat > /etc/docker/daemon.json <<EOF
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",			#这几个是国内的docker仓库地址,填一个即可
"https://hub-mirror.c.163.com",
"https://reg-mirror.qiniu.com",
"https://registry.docker-cn.com"
],
"insecure-registries":["10.10.21.174:443"],    #这个是我的harbor仓库地址
"exec-opts": ["native.cgroupdriver=systemd"]   #配置cgroup driver
}
EOF
root@master1:~# for i in master2 master3 node1 node2 node3;do scp /etc/docker/daemon.json $i:/etc/docker/daemon.json ;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i systemctl daemon-reload ;ssh $i systemctl restart docker; done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i docker info |grep Cgroup ;echo $i;done
WARNING: No swap limit support			#warns that there is no swap limit support; swap is already disabled, so this can be ignored
 Cgroup Driver: systemd
 Cgroup Version: 1
master2
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
master3
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
node1
 Cgroup Driver: systemd
 Cgroup Version: 1
WARNING: No swap limit support
node2
WARNING: No swap limit support
 Cgroup Driver: systemd
 Cgroup Version: 1
node3

5. Starting the Installation

Install kubeadm, kubectl, and kubelet on all k8s nodes

You can follow the Aliyun installation guide.

root@master1:~# apt-get update && apt-get install -y apt-transport-https	#install dependencies
root@master1:~# curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 	#import the signing key
root@master1:~# cat <<EOF >/etc/apt/sources.list.d/kubernetes.list	#add the Aliyun kubernetes repository
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
root@master1:~# for i in  master2 master3 node1 node2 node3;do scp /etc/apt/sources.list.d/kubernetes.list $i:/etc/apt/sources.list.d/;done	#copy to the other nodes
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt update;done
root@master1:~# for i in master2 master3 node1 node2 node3;do ssh $i apt -y install kubelet kubeadm kubectl;done #install the latest version

To install a specific version instead, do the following:

root@master1:~# apt-cache madison kubeadm|head	#show the first 10 entries; apt-cache show kubeadm |grep 1.25 works too
   kubeadm |  1.25.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.25.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.25.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.6-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.5-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.4-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.3-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.2-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.1-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
   kubeadm |  1.24.0-00 | https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
root@master1:~# apt install -y  kubeadm=1.25.2-00 kubelet=1.25.2-00 kubectl=1.25.2-00

Install cri-dockerd on all k8s nodes

Kubernetes removed dockershim support in v1.24, and Docker Engine does not implement the CRI natively, so the two can no longer be integrated directly. Mirantis and Docker therefore jointly created the cri-dockerd project, a shim that gives Docker Engine a CRI-conformant endpoint so that Kubernetes can control Docker through the CRI.
Project address: https://github.com/Mirantis/cri-dockerd

root@master1:~#  wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.2.6/cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb	#the GitHub release offers binary tarballs as well as deb and rpm packages; the deb package is the most convenient here
root@master1:~# for i in  master2 master3 node1 node2 node3;do scp cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb $i:/root/;done	#copy to the other nodes
root@master1:~# for i in  master2 master3 node1 node2 node3;do ssh $i dpkg -i cri-dockerd_0.2.6.3-0.ubuntu-focal_amd64.deb ;done

Modify the cri-dockerd service file, otherwise it will report errors; to save trouble you can copy the file below directly.

root@master1:~# cat /lib/systemd/system/cri-docker.service 
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
#ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infracontainer-image registry.aliyuncs.com/google_containers/pause:3.7  
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.7 #the service won't start without this change; pointing the pause image at Aliyun also speeds up the pull
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
root@master1:~# for i in  master2 master3 node1 node2 node3;do scp /lib/systemd/system/cri-docker.service $i:/lib/systemd/system/cri-docker.service ;done	#copy to the other nodes
root@master1:~# for i in  master2 master3 node1 node2 node3;do ssh $i systemctl daemon-reload;ssh $i systemctl enable --now cri-docker.service ;done

Initializing on master1

root@master1:~#  kubeadm config print init-defaults > init.yaml 		#generate the default init configuration so the parameters can be edited
root@master1:~# cat init.yaml 
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.21.178		#the nginx IP; if you use another reverse proxy, use that proxy's address
  bindPort: 6443
nodeRegistration:
  criSocket:  unix:///var/run/cri-dockerd.sock	#must be changed since we use docker, otherwise init reports an error
  imagePullPolicy: IfNotPresent
  name: master1							# must match the node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  external:							#defaults to local; we use an external etcd cluster, so this must be changed
    endpoints:
      - https://10.10.21.170:2379
      - https://10.10.21.172:2379
      - https://10.10.21.175:2379
    #CA certificate generated when setting up the etcd cluster
    caFile: /etc/etcd/pki/ca.pem
    #client certificate generated when setting up the etcd cluster
    certFile: /etc/etcd/pki/client.pem
    #client key generated when setting up the etcd cluster
    keyFile: /etc/etcd/pki/client-key.pem
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.25.2		#must match the installed package version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.200.0.0/16			#pod CIDR; must not overlap with any other range
  serviceSubnet: 10.100.0.0/16		#service CIDR; must not overlap with any other range
scheduler: {}	
root@master1:~# kubeadm init --config=init.yaml #initialize using the edited file

Since the images are pulled from the Aliyun mirror this step does not take long; a success message from kubeadm means initialization is complete.

root@master1:~# export KUBECONFIG=/etc/kubernetes/admin.conf #run this as root; a regular user should run the commands kubeadm prints instead. If you scp this file to other nodes, they can run kubectl too
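For reference, the commands kubeadm prints for a regular (non-root) user are:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```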

Install the flannel CNI plugin

root@master1:~#  wget  https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
root@master1:~#  cat kube-flannel.yml 
---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.200.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.2 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.2
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
# if the download fails you can copy my manifest, but remember to change the CIDR to match the pod CIDR defined earlier, otherwise the coredns pods will fail to start later
root@master1:~# kubectl apply -f kube-flannel.yml
root@master1:~# kubectl get pod -n kube-system  #check whether coredns is Running; if so, move on to the next step, otherwise the node stays NotReady
root@master1:~# kubectl get node
NAME             STATUS   ROLES           AGE     VERSION
master1   Ready    control-plane   6d4h    v1.25.2
# the node should look like this at this point

Adding the worker nodes

A successful init prints a ready-to-use kubeadm join command. If you did not save it, or the token has expired, you can generate a new one:

root@master1:~# kubeadm token create --print-join-command 
kubeadm join 10.10.21.178:6443 --token x4mb27.hidqr7ao758eafcx --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9

Normally you would just run that command on each worker node in turn, but our CRI socket is cri-dockerd rather than containerd, so the actual command has to be:

kubeadm join 10.10.21.178:6443 --token bnxg1m.6y89w1fsz73ztc34 --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9 --cri-socket unix:///run/cri-dockerd.sock

Be sure to append *--cri-socket unix:///run/cri-dockerd.sock* at the end.
After it finishes, check the node list again:

root@master1:~# kubectl get node
NAME             STATUS   ROLES           AGE     VERSION
master1  		 Ready    control-plane   6d4h    v1.25.2
node1			 Ready    <none>          5d12h   v1.25.2
node2    		 Ready    <none>          5d12h   v1.25.2
node3  		     Ready    <none>          5d12h   v1.25.2

Adding the control-plane nodes

Adding a control-plane node requires uploading the certificates again. The usual command would be:

root@master1:~# kubeadm init phase upload-certs --upload-certs 
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

As you can see, it complains about multiple CRI endpoints on the host, so I tried specifying one:

root@master1:~# kubeadm init phase upload-certs --upload-certs --cri-socket  unix:///var/run/cri-dockerd.sock
unknown flag: --cri-socket
To see the stack trace of this error execute with --v=5 or higher

Still an error.
It turned out the root cause is the external etcd cluster: the certificates have to be re-uploaded using the original configuration file (the init.yaml used during kubeadm init). This stumped me for quite a while, so give it a try if you hit the same problem:

root@master1:~# kubeadm init phase upload-certs --upload-certs --config init.yaml 
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
a618b3a050dcc91d7304ca7f3ddf2f02f2825528d9801f9d1bd8d3e742898744

The rest is straightforward: combine the previously generated token with the certificate key generated here:

root@master2:~# kubeadm join 10.10.21.178:6443 --token q5k1so.r9r6vg7lsj3zzec1 --discovery-token-ca-cert-hash sha256:44aba1ef82b6b34c40fe748a9c2cd321be91aa3c22dd23e706001b65affb9dc9  --control-plane --certificate-key 509228b6886d23b3f5d7e64d8d6d2b74429e9bf136494838e296a4f1b0e89c46 --cri-socket unix:///run/cri-dockerd.sock
#make sure the token and key here match your own; mine may not correspond exactly because I copied them around several times
#also, --cri-socket unix:///run/cri-dockerd.sock must be appended to specify the CRI endpoint, otherwise the join reports an error

If you see an error like
unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address
it means the kubeadm-config configmap does not declare controlPlaneEndpoint. Fix it as follows:

kubectl edit cm kubeadm-config -n kube-system

On the matching line, add controlPlaneEndpoint set to the VIP (the load-balancer address).
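After the edit, the relevant part of the ClusterConfiguration in the configmap should look roughly like this (a sketch, using the kubeapi address from this deployment):

```shell
kubectl -n kube-system edit cm kubeadm-config
# under data.ClusterConfiguration, add at the top level:
#   controlPlaneEndpoint: "10.10.21.178:6443"   # the nginx/VIP address
```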

Run the join command on the other two master nodes in turn and wait for them to finish; the control-plane scale-out is then complete.

root@master2:~# kubectl get nodes,cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                  STATUS   ROLES           AGE     VERSION
node/master1   Ready    control-plane   6d4h    v1.25.2
node/master2   Ready    control-plane   5d2h    v1.25.2
node/master3   Ready    control-plane   5d2h    v1.25.2
node/node1     Ready    <none>          5d12h   v1.25.2
node/node2     Ready    <none>          5d12h   v1.25.2
node/node3     Ready    <none>          5d12h   v1.25.2

NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok                  
componentstatus/controller-manager   Healthy   ok                  
componentstatus/etcd-2               Healthy   {"health":"true"}   
componentstatus/etcd-1               Healthy   {"health":"true"}   
componentstatus/etcd-0               Healthy   {"health":"true"}   
root@master2:~# kubectl cluster-info 
Kubernetes control plane is running at https://10.10.21.178:6443
CoreDNS is running at https://10.10.21.178:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

With that, the three-master three-worker k8s cluster is fully deployed. If you also need the dashboard, see my earlier post.

Changing the NodePort range of k8s services

In a Kubernetes cluster the default NodePort range is 30000-32767, but it can be changed:

root@master2:~# cat /etc/kubernetes/manifests/kube-apiserver.yaml 
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 10.10.21.172:6443
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=10.10.21.172
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --enable-admission-plugins=NodeRestriction
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/etc/etcd/pki/ca.pem
    - --etcd-certfile=/etc/etcd/pki/client.pem
    - --etcd-keyfile=/etc/etcd/pki/client-key.pem
    - --etcd-servers=https://10.10.21.170:2379,https://10.10.21.172:2379,https://10.10.21.175:2379
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=6443
    - --service-account-issuer=https://kubernetes.default.svc.cluster.local
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
    - --service-cluster-ip-range=10.100.0.0/16
    - --service-node-port-range=30000-50000			#this flag defines the port range; add it if it is not already present
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.25.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 10.10.21.172
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    name: kube-apiserver
    readinessProbe:
      failureThreshold: 3
      httpGet:
        host: 10.10.21.172
        path: /readyz
        port: 6443
        scheme: HTTPS
      periodSeconds: 1
      timeoutSeconds: 15
    resources:
      requests:
        cpu: 250m
    startupProbe:
      failureThreshold: 24
      httpGet:
        host: 10.10.21.172
        path: /livez
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 10
      periodSeconds: 10
      timeoutSeconds: 15
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /etc/etcd/pki
      name: etcd-certs-0
      readOnly: true
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-node-critical
  securityContext:
    seccompProfile:
      type: RuntimeDefault
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /etc/etcd/pki
      type: DirectoryOrCreate
    name: etcd-certs-0
  - hostPath:
      path: /etc/kubernetes/pki
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}

After editing, copy this file to the remaining master nodes and delete the existing apiserver pods so they are recreated with the new flag:

root@master1:~# for i in  master2 master3;do scp /etc/kubernetes/manifests/kube-apiserver.yaml  $i:/etc/kubernetes/manifests/kube-apiserver.yaml;done	#copy to the other nodes
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master2
pod "kube-apiserver-master2" deleted
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master3
pod "kube-apiserver-master3" deleted
root@master1:~# kubectl -n kube-system delete pod kube-apiserver-master1
pod "kube-apiserver-master1" deleted
root@master1:~# kubectl -n kube-system describe pod kube-apiserver-master1 |grep -i service
      --service-account-issuer=https://kubernetes.default.svc.cluster.local
      --service-account-key-file=/etc/kubernetes/pki/sa.pub
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key
      --service-cluster-ip-range=10.100.0.0/16
      --service-node-port-range=30000-50000
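To confirm the extended range actually works, a throwaway test service (the names here are made up for illustration) can request a nodePort above the old 32767 limit:

```shell
# create a test deployment and expose it as a NodePort service
kubectl create deployment np-test --image=nginx
kubectl expose deployment np-test --type=NodePort --port=80
# pin the nodePort to a value only valid with the extended range
kubectl patch svc np-test -p '{"spec":{"ports":[{"port":80,"nodePort":45000}]}}'
kubectl get svc np-test
# clean up afterwards
kubectl delete svc,deployment np-test
```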

Configuring k8s to use ipvs for traffic forwarding

kube-proxy supports both iptables and ipvs modes; the default is iptables.

root@master1:~# apt-get update
root@master1:~# apt-get install -y ipvsadm ipset sysstat conntrack libseccomp-dev #install ipvs tooling
root@master1:~# uname -r
5.4.0-126-generic
root@master1:~# modprobe -- ip_vs
root@master1:~# modprobe -- ip_vs_rr
root@master1:~# modprobe -- ip_vs_wrr
root@master1:~# modprobe -- ip_vs_sh
root@master1:~# modprobe -- nf_conntrack
root@master1:~# cat >> /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
# on kernels older than 4.18, replace nf_conntrack below with nf_conntrack_ipv4
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
root@master1:~# systemctl enable --now systemd-modules-load.service
The unit files have no installation config (WantedBy=, RequiredBy=, Also=,
Alias= settings in the [Install] section, and DefaultInstance= for template
units). This means they are not meant to be enabled using systemctl.
 
Possible reasons for having this kind of units are:
• A unit may be statically enabled by being symlinked from another unit's
  .wants/ or .requires/ directory.
• A unit's purpose may be to act as a helper for some other unit which has
  a requirement dependency on it.
• A unit may be started when needed via activation (socket, path, timer,
  D-Bus, udev, scripted systemctl call, ...).
• In case of template units, the unit is meant to be enabled with some
  instance name specified.
root@master1:~# systemctl status systemd-modules-load.service
● systemd-modules-load.service - Load Kernel Modules
     Loaded: loaded (/lib/systemd/system/systemd-modules-load.service; static; vendor preset: enabled)
     Active: active (exited) since Wed 2022-10-12 03:53:34 UTC; 5 days ago
       Docs: man:systemd-modules-load.service(8)
             man:modules-load.d(5)
   Main PID: 377 (code=exited, status=0/SUCCESS)
      Tasks: 0 (limit: 4640)
     Memory: 0B
     CGroup: /system.slice/systemd-modules-load.service
root@master1:~# lsmod |grep ip_vs
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  10
ip_vs                 155648  16 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  6 xt_conntrack,nf_nat,xt_nat,nf_conntrack_netlink,xt_MASQUERADE,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
libcrc32c              16384  6 nf_conntrack,nf_nat,btrfs,xfs,raid456,ip_vs
# confirms the ipvs modules are loaded
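The modules must be loaded on every node, not just master1. A quick sketch to check them across the cluster over ssh, using the hostnames from /etc/hosts above:

```shell
#!/bin/bash
# Verify the ipvs-related kernel modules are loaded on every node.
for host in master1 master2 master3 node1 node2 node3; do
    echo "== $host =="
    ssh root@"$host" 'lsmod | grep -E "^(ip_vs|nf_conntrack)" || echo "ipvs modules NOT loaded"'
done
```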

Modify the kube-proxy configuration

root@master1:~# kubectl edit configmaps kube-proxy -n kube-system
# find the mode line and set it to:   mode: "ipvs"

Then delete the kube-proxy pods so they are recreated with the new configuration; once they are back up, check the logs to confirm ipvs is in use.
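`kubectl edit` is interactive; the same change plus the pod restart can be scripted. A sketch, assuming the configmap still has the default empty `mode: ""` value (kubeadm labels the kube-proxy pods with `k8s-app=kube-proxy`):

```shell
#!/bin/bash
# Non-interactive equivalent of `kubectl edit`: rewrite mode and re-apply.
kubectl -n kube-system get configmap kube-proxy -o yaml \
    | sed 's/mode: ""/mode: "ipvs"/' \
    | kubectl apply -f -
# Delete the kube-proxy pods so the DaemonSet recreates them with the new configmap.
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
```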

root@master1:~# kubectl -n kube-system logs kube-proxy-xkkhp 
I1013 03:09:02.586507       1 node.go:163] Successfully retrieved node IP: 10.10.21.173
I1013 03:09:02.586554       1 server_others.go:138] "Detected node IP" address="10.10.21.173"
I1013 03:09:02.602042       1 server_others.go:269] "Using ipvs Proxier"   # shows ipvs is in use
I1013 03:09:02.602175       1 server_others.go:271] "Creating dualStackProxier for ipvs"
I1013 03:09:02.602192       1 server_others.go:501] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I1013 03:09:02.602873       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603004       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603023       1 ipset.go:113] "Ipset name truncated" ipSetName="KUBE-6-LOAD-BALANCER-SOURCE-CIDR" truncatedName="KUBE-6-LOAD-BALANCER-SOURCE-CID"
I1013 03:09:02.603030       1 ipset.go:113] "Ipset name truncated" ipSetName="KUBE-6-NODE-PORT-LOCAL-SCTP-HASH" truncatedName="KUBE-6-NODE-PORT-LOCAL-SCTP-HAS"
I1013 03:09:02.603166       1 server.go:661] "Version info" version="v1.25.2"
I1013 03:09:02.603177       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I1013 03:09:02.608838       1 conntrack.go:52] "Setting nf_conntrack_max" nf_conntrack_max=131072
I1013 03:09:02.609416       1 config.go:317] "Starting service config controller"
I1013 03:09:02.609429       1 shared_informer.go:255] Waiting for caches to sync for service config
I1013 03:09:02.609461       1 config.go:444] "Starting node config controller"
I1013 03:09:02.609464       1 shared_informer.go:255] Waiting for caches to sync for node config
I1013 03:09:02.609479       1 config.go:226] "Starting endpoint slice config controller"
I1013 03:09:02.609482       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
I1013 03:09:02.709868       1 shared_informer.go:262] Caches are synced for endpoint slice config
I1013 03:09:02.709923       1 shared_informer.go:262] Caches are synced for node config
I1013 03:09:02.711053       1 shared_informer.go:262] Caches are synced for service config
I1017 15:06:11.940342       1 graceful_termination.go:102] "Removed real server from graceful delete real server list" realServer="10.100.0.1:443/TCP/10.10.21.172:6443"
root@master1:~# kubectl -n kube-system logs kube-proxy-xkkhp  |grep -i ipvs
I1013 03:09:02.602042       1 server_others.go:269] "Using ipvs Proxier"
I1013 03:09:02.602175       1 server_others.go:271] "Creating dualStackProxier for ipvs"
I1013 03:09:02.602873       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
I1013 03:09:02.603004       1 proxier.go:449] "IPVS scheduler not specified, use rr by default"
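Besides the logs, the ipvs rule table itself can be inspected with ipvsadm. The `kubernetes` service ClusterIP should be the first address of the `--service-cluster-ip-range` configured above (so 10.100.0.1 here, an assumption based on that range), with the apiserver endpoints listed as real servers:

```shell
#!/bin/bash
# Show the ipvs virtual server for the kubernetes service and a few lines of
# its real servers (the master apiservers on port 6443).
ipvsadm -Ln | grep -A 3 '10.100.0.1:443'
```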