CentOS 7 is no longer maintained, so the kernel-upgrade subsection (2.10) will fail; skip it, or use a Linux distribution that already ships a newer kernel.

1. Cluster planning

1.1 Host IP plan

| Host IP | Hostname | Role | Specs | Software installed |
| --- | --- | --- | --- | --- |
| 192.168.3.254 | ha | LB | 2C-1G 10G | nginx (single node; an HA setup with a keepalived VIP is straightforward and not covered here) |
| 192.168.3.51 | k8s-master1 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 192.168.3.52 | k8s-master2 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 192.168.3.53 | k8s-master3 | master | 4C-2G 50G | kube-apiserver, kube-controller-manager, kube-scheduler, etcd |
| 192.168.3.54 | k8s-work1 | work | 8C-8G 100G | kubelet, kube-proxy, containerd, runc |
| 192.168.3.55 | k8s-work2 | work | 8C-8G 100G | kubelet, kube-proxy, containerd, runc |

Strictly speaking, etcd is not part of k8s; it provides a superset of what k8s needs and is usually deployed on three separate machines with SSD disks.

1.2 Software versions

| Software | Version | Notes |
| --- | --- | --- |
| CentOS | 7.9, kernel 6.0.12 | k8s supports newer kernels better than older ones |
| Kubernetes | v1.23.14 | a relatively stable release at the time of writing |
| etcd | latest, v3.5.6 | v3.5.5 fixed a data-corruption issue |
| calico | latest, v3.24.5 | network plugin |
| coredns | latest, v1.10.0 | in-cluster DNS resolver |
| containerd | latest, v1.6.12 | manages the container lifecycle |
| runc | latest, v1.1.4 | the runc bundled with containerd has issues (see 6.1.3) |
| nginx | v1.22.1 | stock version from the yum repository |

1.3 Network allocation

| Network | CIDR | Notes |
| --- | --- | --- |
| node network | 192.168.3.0/24 | work-node network |
| service network | 10.96.0.0/16 | allocated by kube-apiserver (service-cluster-ip-range) |
| pod network | 10.244.0.0/16 | allocated by the network plugin |

***** Replace every IP and CIDR above throughout this guide to match your actual environment *****

2. Host preparation

2.1 Set the hostname

Why: gives each host a distinct name, making later steps easier to follow.

hostnamectl set-hostname xxx

2.2 Map hostnames to local IP addresses

Why: lets the nodes reach each other by hostname, making later steps easier.

cat >> /etc/hosts << EOF
192.168.3.254 ha
192.168.3.51 k8s-master1
192.168.3.52 k8s-master2
192.168.3.53 k8s-master3
192.168.3.54 k8s-work1
192.168.3.55 k8s-work2
EOF
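
A quick sanity check that every hostname resolves locally (a minimal loop over the hosts from the table above):

for h in ha k8s-master1 k8s-master2 k8s-master3 k8s-work1 k8s-work2; do getent hosts $h; done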

2.3 Disable the firewall

Why: avoids the nftables-backend compatibility problem, which produces duplicate firewall rules. From the Kubernetes documentation:

The iptables tooling can act as a compatibility layer, behaving like iptables but actually configuring nftables. This nftables backend is not compatible with the current kubeadm packages: it causes duplicated firewall rules and breaks kube-proxy.

systemctl stop firewalld
systemctl disable firewalld

Check that it is stopped: systemctl status firewalld

2.4 Disable SELinux

Why: allows containers to access the host filesystem. From the Kubernetes documentation:

Setting SELinux in permissive mode by running setenforce 0 and sed ... effectively disables it. This is required to allow containers to access the host filesystem, which is needed by pod networks for example. You have to do this until SELinux support is improved in the kubelet.

setenforce 0
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config

Check the status: sestatus

2.5 Disable the swap partition

Why: with swap enabled, k8s may not work correctly.

A PR for swap support reportedly exists, but whether it has been merged upstream is unclear; see https://github.com/kubernetes/kubernetes/issues/53533

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
echo "vm.swappiness=0" >> /etc/sysctl.conf
sysctl -p
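
Verify that swap is fully off; swapon lists no entries and the Swap line in free reads 0:

swapon -s
free -h | grep -i swap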

2.6 Synchronize the system time

Why: clock skew between nodes breaks certificate validation and other cluster behavior.

timedatectl set-local-rtc 1
timedatectl set-timezone Asia/Shanghai

yum -y install ntpdate
crontab -e
#add this line:
0 */1 * * * ntpdate time1.aliyun.com

View the scheduled job: crontab -l
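
The cron job only fires hourly, so it is worth syncing once by hand and confirming the clock:

ntpdate time1.aliyun.com
timedatectl status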

2.7 Host system tuning

Why: raises the default resource limits, which are too low for k8s components.

cat <<EOF >> /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

2.8 Install the IPVS tools and load the modules

Why: lets kube-proxy run in IPVS mode, which performs better than iptables mode.

Install on the k8s cluster nodes (the ha node can skip this). Kernels 4.18 and earlier use nf_conntrack_ipv4; kernels 4.19+ use nf_conntrack.

yum -y install ipvsadm ipset sysstat conntrack libseccomp
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
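
systemd-modules-load reads this file at boot; to load the modules immediately instead of waiting for the reboot in 2.13 (note: some ip_vs scheduler modules may be unavailable on the stock 3.10 kernel until after the upgrade in 2.10):

systemctl restart systemd-modules-load.service
lsmod | grep ip_vs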

2.9 Load containerd-related modules

Why: containerd needs the overlay and br_netfilter modules to manage the container lifecycle.

cat > /etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF

Enable the module-load service now and at boot:

systemctl enable --now systemd-modules-load.service

2.10 Kernel upgrade

Why: k8s works better on a newer kernel. (Per the note at the top, this step now fails on EOL CentOS 7; skip it in that case.)

yum -y install perl
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
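
After the reboot in 2.13, confirm the new kernel is active:

uname -r    #should print a 6.x version, e.g. 6.0.12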

2.11 Kernel tuning

Why: tunes kernel parameters so k8s works better.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory = 1
vm.panic_on_oom = 0
fs.inotify.max_user_watches = 89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time=600
net.ipv4.tcp_keepalive_probes=3
net.ipv4.tcp_keepalive_intvl=15
net.ipv4.tcp_max_tw_buckets=36000
net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_max_orphans=327680
net.ipv4.tcp_orphan_retries=3
net.ipv4.tcp_syncookies=1
net.ipv4.tcp_max_syn_backlog=16384
net.ipv4.tcp_timestamps=0
net.core.somaxconn=16384
EOF

Apply: sysctl --system
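
Spot-check a few values to confirm they took effect (the net.bridge keys require the br_netfilter module from 2.9 to be loaded):

sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables net.netfilter.nf_conntrack_max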

2.12 Configure passwordless SSH

Why: makes the later cross-machine file copies convenient.

Generate the key pair on k8s-master1, since the vast majority of later operations run there: ssh-keygen with an empty passphrase.

ssh-keygen
ssh-copy-id root@k8s-master1
ssh-copy-id root@k8s-master2
ssh-copy-id root@k8s-master3
ssh-copy-id root@k8s-work1
ssh-copy-id root@k8s-work2
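
Verify passwordless login to every node; each command should print the remote hostname without prompting for a password:

for h in k8s-master1 k8s-master2 k8s-master3 k8s-work1 k8s-work2; do ssh root@$h hostname; done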

2.13 Reboot and verify

After finishing all of the configuration above, reboot so every setting takes effect: reboot

lsmod | grep --color=auto -e ip_vs -e nf_conntrack
lsmod | egrep 'br_netfilter|overlay'

2.14 Install other tools (optional)

yum -y install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git lrzsz

2.15 Install nginx on the ha node

A dual-node keepalived + VIP setup is straightforward and not covered here.

cat > /etc/yum.repos.d/nginx.repo <<"EOF"
[nginx-stable]
name=nginx stable repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=1
enabled=1
gpgkey=https://nginx.org/keys/nginx_signing.key

[nginx-mainline]
name=nginx mainline repo
baseurl=http://nginx.org/packages/mainline/centos/$releasever/$basearch/
gpgcheck=1
enabled=0
gpgkey=https://nginx.org/keys/nginx_signing.key
EOF
yum -y install nginx
systemctl enable nginx
systemctl start nginx
cat >> /etc/nginx/nginx.conf <<"EOF"
stream {
	log_format proxy '[$time_local] $remote_addr '
                 '$protocol $status $bytes_sent $bytes_received '
                 '$session_time "$upstream_addr" '
                 '"$upstream_bytes_sent" "$upstream_bytes_received" "$upstream_connect_time"';

	access_log /var/log/nginx/tcp_access.log proxy;
	error_log /var/log/nginx/tcp_error.log;

	upstream HA {
	hash $remote_addr consistent;
	server 192.168.3.51:6443 weight=5 max_fails=1 fail_timeout=3s;
	server 192.168.3.52:6443 weight=5 max_fails=1 fail_timeout=3s;
	server 192.168.3.53:6443 weight=5 max_fails=1 fail_timeout=3s;
	}

	server {
	listen 6443;
	proxy_connect_timeout 3s;
	proxy_timeout 30s;
	proxy_pass HA;
	}
}
EOF
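
Check the syntax, reload, and confirm nginx listens on 6443 (the upstream servers will refuse connections until the apiservers come up in section 5):

nginx -t
systemctl reload nginx
ss -lntp | grep 6443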

3. Certificate creation

Run on k8s-master1.

3.1 Create the k8s working directory

mkdir -p /data/k8s

3.2 Get the cfssl certificate tools

The installation involves a lot of certificate work; cfssl (cfssl, cfssljson, cfssl-certinfo) is used to generate the certificates.

wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl-certinfo_1.6.3_linux_amd64

chmod +x cfssl*
mv cfssl_1.6.3_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.6.3_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.6.3_linux_amd64 /usr/local/bin/cfssl-certinfo
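
Confirm the tools are on the PATH:

cfssl version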

3.3 Self-signed CA certificate

3.3.1 CA certificate request file

cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "shanjie",
            "OU":"pingtaibu"
        }
    ],
    "ca": {
        "expiry":"876000h"
    }
}
EOF

3.3.2 Create the CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca
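
This produces ca.pem, ca-key.pem, and ca.csr; the new certificate can be inspected with cfssl-certinfo:

cfssl-certinfo -cert ca.pem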

3.3.3 Configure the CA policy

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
         "expiry": "87600h",
         "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ]
      }
    }
  }
}
EOF

server auth means a client can use this CA to verify the certificate a server presents.

client auth means a server can use this CA to verify the certificate a client presents.

4. etcd cluster deployment

4.1 Get etcd

Download the latest etcd release binaries:

wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
tar xvf etcd-v3.5.6-linux-amd64.tar.gz
chmod +x etcd-v3.5.6-linux-amd64/etcd*

4.2 etcd certificate request file

Strictly speaking etcd is not part of k8s: k8s only uses it to store cluster metadata, and etcd provides a superset of what k8s needs.

Because machines are limited here, etcd is co-located with the k8s-master nodes; it is usually deployed on three separate machines with SSD disks.

cat > etcd-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
    "127.0.0.1",
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "shanjie",
            "OU":"pingtaibu"
        }
    ]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
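
Before distributing the certificate, it is worth confirming the SANs cover 127.0.0.1 and all three etcd node IPs (the output is JSON; the grep is just a convenience):

cfssl-certinfo -cert etcd.pem | grep -A 5 '"sans"'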

4.3 Configuration and service files

cat > etcd.conf << EOF
name: etcd1
data-dir: /var/lib/etcd

listen-client-urls: https://192.168.3.51:2379,http://127.0.0.1:2379
listen-peer-urls: https://192.168.3.51:2380

advertise-client-urls: https://192.168.3.51:2379
initial-advertise-peer-urls: https://192.168.3.51:2380

initial-cluster: etcd1=https://192.168.3.51:2380,etcd2=https://192.168.3.52:2380,etcd3=https://192.168.3.53:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new

client-transport-security:
  cert-file: /etc/etcd/ssl/etcd.pem
  key-file: /etc/etcd/ssl/etcd-key.pem
  trusted-ca-file: /etc/etcd/ssl/ca.pem
  client-cert-auth: true
peer-transport-security:
  cert-file: /etc/etcd/ssl/etcd.pem
  key-file: /etc/etcd/ssl/etcd-key.pem
  trusted-ca-file: /etc/etcd/ssl/ca.pem
  client-cert-auth: true
EOF
cat > etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.conf
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

4.4 Distribute the etcd files

Create on all 3 master nodes: mkdir -p /etc/etcd/ssl

Note: on each master node, adjust name and the URLs in etcd.conf.

scp etcd-v3.5.6-linux-amd64/etcd* k8s-master1:/usr/local/bin/
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master2:/usr/local/bin/
scp etcd-v3.5.6-linux-amd64/etcd* k8s-master3:/usr/local/bin/

scp etcd.pem etcd-key.pem ca.pem k8s-master1:/etc/etcd/ssl/
scp etcd.pem etcd-key.pem ca.pem k8s-master2:/etc/etcd/ssl/
scp etcd.pem etcd-key.pem ca.pem k8s-master3:/etc/etcd/ssl/

scp etcd.conf k8s-master1:/etc/etcd/
scp etcd.conf k8s-master2:/etc/etcd/
scp etcd.conf k8s-master3:/etc/etcd/

scp etcd.service k8s-master1:/usr/lib/systemd/system/
scp etcd.service k8s-master2:/usr/lib/systemd/system/
scp etcd.service k8s-master3:/usr/lib/systemd/system/

4.5 Start the etcd service

systemctl daemon-reload
systemctl enable etcd
systemctl start etcd

4.6 Check the cluster status

etcdctl --endpoints="https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379" --cacert=/etc/etcd/ssl/ca.pem --key=/etc/etcd/ssl/etcd-key.pem  --cert=/etc/etcd/ssl/etcd.pem  endpoint status --write-out=table
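
A health-check variant of the same command; all three endpoints should report healthy:

etcdctl --endpoints="https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379" --cacert=/etc/etcd/ssl/ca.pem --key=/etc/etcd/ssl/etcd-key.pem --cert=/etc/etcd/ssl/etcd.pem endpoint health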

5. Kubernetes cluster deployment (control-plane nodes)

Operations for the k8s-master nodes.

5.1 Get Kubernetes

wget https://dl.k8s.io/v1.23.14/kubernetes-server-linux-amd64.tar.gz
tar xvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl kubelet kube-proxy

scp  kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master1:/usr/local/bin/
scp  kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master2:/usr/local/bin/
scp  kube-apiserver kube-controller-manager kube-scheduler kubectl k8s-master3:/usr/local/bin/
scp  kubelet kube-proxy k8s-work1:/usr/local/bin/
scp  kubelet kube-proxy k8s-work2:/usr/local/bin/
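
Spot-check one of the copies to confirm the binaries run and match the planned version:

ssh k8s-master1 "kube-apiserver --version"    #expect Kubernetes v1.23.14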

5.2 Install kube-apiserver

5.2.1 apiserver certificate request file

Return to the working directory: cd /data/k8s/

192.168.3.56~70 are reserved IPs; adjust the reservation to your actual needs.

cat > kube-apiserver-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
    "127.0.0.1",
    "192.168.3.254",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local",
    
    "192.168.3.51",
    "192.168.3.52",
    "192.168.3.53",
    "192.168.3.54",
    "192.168.3.55",

    "192.168.3.56",
    "192.168.3.57",
    "192.168.3.58",
    "192.168.3.59",
    "192.168.3.60",
    "192.168.3.61",
    "192.168.3.62",
    "192.168.3.63",
    "192.168.3.64",
    "192.168.3.65",
    "192.168.3.66",
    "192.168.3.67",
    "192.168.3.68",
    "192.168.3.69",
    "192.168.3.70"
    
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "shanjie",
            "OU":"pingtaibu"
        }
    ]
}
EOF

5.2.2 Generate the apiserver certificate and token file

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver
echo "`head -c 16 /dev/urandom | od -An -t x | tr -d ' '`,kubelet-bootstrap,10001,\"system:kubelet-bootstrap\"" > bootstrap-token.csv

The bootstrap token is used by work nodes to request certificates from the apiserver; kubelet certificates are signed dynamically via the apiserver.

5.2.3 apiserver configuration and service files

cat > kube-apiserver.conf << EOF
KUBE_API_ARGS="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook \
--anonymous-auth=false \
--bind-address=192.168.3.51 \
--secure-port=6443 \
--advertise-address=192.168.3.51 \
--insecure-port=0 \
--authorization-mode=Node,RBAC \
--runtime-config=api/all=true \
--enable-bootstrap-token-auth \
--service-cluster-ip-range=10.96.0.0/16 \
--token-auth-file=/etc/kubernetes/bootstrap-token.csv \
--service-node-port-range=10000-60000 \
--tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--client-ca-file=/etc/kubernetes/ssl/ca.pem \
--kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \
--kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \
--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--service-account-issuer=api \
--etcd-cafile=/etc/etcd/ssl/ca.pem \
--etcd-certfile=/etc/etcd/ssl/etcd.pem \
--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \
--etcd-servers=https://192.168.3.51:2379,https://192.168.3.52:2379,https://192.168.3.53:2379 \
--enable-swagger-ui=true \
--allow-privileged=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/log/kube-apiserver-audit.log \
--event-ttl=1h \
--logtostderr=false \
--alsologtostderr=true \
--v=4 \
--log-dir=/var/log/kubernetes"
EOF
cat > kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
Type=notify
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/local/bin/kube-apiserver \$KUBE_API_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5.2.4 Distribute the files

Create on all 3 master nodes:

mkdir -p /etc/kubernetes/ssl

mkdir -p /var/log/kubernetes

Note: on each master node, adjust the IPs in kube-apiserver.conf.

scp ca*.pem kube-apiserver*.pem k8s-master1:/etc/kubernetes/ssl/
scp ca*.pem kube-apiserver*.pem k8s-master2:/etc/kubernetes/ssl/
scp ca*.pem kube-apiserver*.pem k8s-master3:/etc/kubernetes/ssl/

scp bootstrap-token.csv kube-apiserver.conf k8s-master1:/etc/kubernetes/
scp bootstrap-token.csv kube-apiserver.conf k8s-master2:/etc/kubernetes/
scp bootstrap-token.csv kube-apiserver.conf k8s-master3:/etc/kubernetes/

scp kube-apiserver.service k8s-master1:/usr/lib/systemd/system/
scp kube-apiserver.service k8s-master2:/usr/lib/systemd/system/
scp kube-apiserver.service k8s-master3:/usr/lib/systemd/system/

5.2.5 Start the kube-apiserver service

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
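
Quick checks on each master: the unit should be active, and because anonymous auth is disabled, an unauthenticated request answered with 401 Unauthorized is the expected sign that TLS serving works:

systemctl status kube-apiserver
curl -sk https://192.168.3.51:6443/healthz    #an Unauthorized response is expected here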

5.3 Install the kubectl client

5.3.1 kubectl certificate request file

The admin certificate is used to generate the administrator kubeconfig; it must have "O": "system:masters".

cat > admin-csr.json << EOF
{
    "CN": "admin",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "system:masters",
            "OU":"system"
        }
    ]
}
EOF

5.3.2 Generate the kubectl certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

5.3.3 Generate the kube.config file

kube.config is kubectl's kubeconfig; it contains everything needed to reach the apiserver (apiserver address, CA certificate, and the client's own certificate).

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube.config

kubectl config set-credentials admin --client-certificate=admin.pem --client-key=admin-key.pem --embed-certs=true --kubeconfig=kube.config

kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=kube.config

kubectl config use-context kubernetes --kubeconfig=kube.config

5.3.4 Role binding

Create on all 3 master nodes: mkdir /root/.kube

scp kube.config k8s-master1:/root/.kube/config
scp kube.config k8s-master2:/root/.kube/config
scp kube.config k8s-master3:/root/.kube/config
kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config

echo "export KUBECONFIG=/root/.kube/config" >> /etc/profile
source /etc/profile

Verify that kubectl works:

kubectl cluster-info

kubectl get componentstatuses

5.4 Install kube-controller-manager

5.4.1 kube-controller-manager certificate request file

hosts contains all the nodes that run kube-controller-manager.

cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "hosts": [
      "127.0.0.1",
      "192.168.3.51",
      "192.168.3.52",
      "192.168.3.53"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "system:kube-controller-manager",
            "OU":"system"
        }
    ]
}
EOF

5.4.2 Generate the kube-controller-manager certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

5.4.3 Generate the kube-controller-manager.config file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-controller-manager.config

kubectl config set-credentials system:kube-controller-manager --client-certificate=kube-controller-manager.pem --client-key=kube-controller-manager-key.pem --embed-certs=true --kubeconfig=kube-controller-manager.config

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kube-controller-manager.config

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.config

5.4.4 kube-controller-manager configuration and service files

cat > kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_ARGS="--secure-port=10257 \
--bind-address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-controller-manager.config \
--service-cluster-ip-range=10.96.0.0/16 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \
--allocate-node-cidrs=true \
--cluster-cidr=10.244.0.0/16 \
--cluster-signing-duration=876000h \
--root-ca-file=/etc/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \
--leader-elect=true \
--feature-gates=RotateKubeletServerCertificate=true \
--controllers=*,bootstrapsigner,tokencleaner \
--horizontal-pod-autoscaler-sync-period=10s \
--tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \
--tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \
--use-service-account-credentials=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5.4.5 Distribute the files

scp kube-controller-manager*.pem k8s-master1:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/ssl/

scp kube-controller-manager.config kube-controller-manager.conf k8s-master1:/etc/kubernetes/
scp kube-controller-manager.config kube-controller-manager.conf k8s-master2:/etc/kubernetes/
scp kube-controller-manager.config kube-controller-manager.conf k8s-master3:/etc/kubernetes/

scp kube-controller-manager.service k8s-master1:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master2:/usr/lib/systemd/system/
scp kube-controller-manager.service k8s-master3:/usr/lib/systemd/system/

5.4.6 Start the kube-controller-manager service

systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager

5.5 Install kube-scheduler

5.5.1 kube-scheduler certificate request file

cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.3.51",
      "192.168.3.52",
      "192.168.3.53"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "system:kube-scheduler",
            "OU":"system"
        }
    ]
}
EOF

5.5.2 Generate the kube-scheduler certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

5.5.3 Generate the kube-scheduler.config file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-scheduler.config

kubectl config set-credentials system:kube-scheduler --client-certificate=kube-scheduler.pem --client-key=kube-scheduler-key.pem --embed-certs=true --kubeconfig=kube-scheduler.config

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=kube-scheduler.config

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.config

5.5.4 kube-scheduler configuration and service files

cat > kube-scheduler.conf << EOF
KUBE_SCHEDULE_ARGS="--address=127.0.0.1 \
--kubeconfig=/etc/kubernetes/kube-scheduler.config \
--leader-elect=true \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/local/bin/kube-scheduler \$KUBE_SCHEDULE_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

5.5.5 Distribute the files

scp kube-scheduler*.pem k8s-master1:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master2:/etc/kubernetes/ssl/
scp kube-scheduler*.pem k8s-master3:/etc/kubernetes/ssl/

scp kube-scheduler.config kube-scheduler.conf k8s-master1:/etc/kubernetes/
scp kube-scheduler.config kube-scheduler.conf k8s-master2:/etc/kubernetes/
scp kube-scheduler.config kube-scheduler.conf k8s-master3:/etc/kubernetes/

scp  kube-scheduler.service k8s-master1:/usr/lib/systemd/system/
scp  kube-scheduler.service k8s-master2:/usr/lib/systemd/system/
scp  kube-scheduler.service k8s-master3:/usr/lib/systemd/system/

5.5.6 Start the kube-scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl start kube-scheduler
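
With all three control-plane components now running, the check from 5.3.4 should report everything Healthy:

kubectl get componentstatuses    #scheduler, controller-manager and etcd-0/1/2 all Healthy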

6. Kubernetes cluster deployment (worker nodes)

Operations for the k8s-work nodes.

6.1 Install containerd

6.1.1 Get containerd

wget https://github.com/containerd/containerd/releases/download/v1.6.12/cri-containerd-cni-1.6.12-linux-amd64.tar.gz

scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-work1:/root/
scp cri-containerd-cni-1.6.12-linux-amd64.tar.gz k8s-work2:/root/

#install on both k8s-work1 and k8s-work2
tar xvf cri-containerd-cni-1.6.12-linux-amd64.tar.gz -C /

6.1.2 containerd configuration file

mkdir /etc/containerd/
containerd config default > /etc/containerd/config.toml

#make sure config.toml contains the following settings
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"

[plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
          endpoint = ["http://hub-mirror.c.163.com", "https://mirror.ccs.tencentyun.com", "https://registry.cn-hangzhou.aliyuncs.com"]
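
One more setting worth checking: section 6.2 configures kubelet with the systemd cgroup driver, so containerd's runc runtime must match or pods will fail to start. The default config.toml generated above sets the key to false (the path, valid for containerd 1.6, is plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options):

sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep SystemdCgroup /etc/containerd/config.toml    #should now print: SystemdCgroup = true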

6.1.3 Replace runc

wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64
chmod +x runc.amd64
scp runc.amd64 k8s-work1:/usr/local/sbin/runc
scp runc.amd64 k8s-work2:/usr/local/sbin/runc

6.1.4 Start the containerd service

systemctl daemon-reload
systemctl enable containerd
systemctl start containerd
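
Confirm the daemon answers on its socket (crictl ships in the cri-containerd-cni tarball):

ctr version
crictl --runtime-endpoint unix:///run/containerd/containerd.sock info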

6.2 Install kubelet

6.2.1 Distribute the kubelet-bootstrap.config related files

Create on the work nodes: mkdir -p /etc/kubernetes/ssl

scp bootstrap-token.csv k8s-work1:/etc/kubernetes/
scp bootstrap-token.csv k8s-work2:/etc/kubernetes/

scp ca.pem k8s-work1:/etc/kubernetes/ssl/
scp ca.pem k8s-work2:/etc/kubernetes/ssl/

6.2.2 Generate the kubelet-bootstrap.config file

BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/bootstrap-token.csv)
kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kubelet-bootstrap.config

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=kubelet-bootstrap.config

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.config

kubectl config use-context default --kubeconfig=kubelet-bootstrap.config

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=kubelet-bootstrap.config

6.2.3 Generate the kubelet.json file

cat > kubelet.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/ssl/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.3.54",
  "port": 10250,
  "readOnlyPort": 10255,
  "cgroupDriver": "systemd",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.96.0.2"]
}
EOF

6.2.4 kubelet configuration and service files

cat >  kubelet.conf <<EOF
KUBELET_ARGS="--bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.config \
--cert-dir=/etc/kubernetes/ssl \
--kubeconfig=/etc/kubernetes/kubelet.config \
--config=/etc/kubernetes/kubelet.json \
--cni-bin-dir=/opt/cni/bin \
--cni-conf-dir=/etc/cni/net.d \
--container-runtime=remote \
--cgroup-driver=systemd \
--container-runtime-endpoint=unix:///run/containerd/containerd.sock \
--network-plugin=cni \
--rotate-certificates=true \
--pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6 \
--max-pods=1500 \
--root-dir=/var/lib/kubelet \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2"
EOF
cat > kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/local/bin/kubelet \$KUBELET_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

6.2.5 Distribute the files

On each work node, remember to adjust address in kubelet.json.

Create the directories:

1. mkdir -p /var/log/kubernetes

2. mkdir -p /var/lib/kubelet

scp kubelet-bootstrap.config kubelet.json kubelet.conf k8s-work1:/etc/kubernetes/
scp kubelet-bootstrap.config kubelet.json kubelet.conf k8s-work2:/etc/kubernetes/

scp kubelet.service k8s-work1:/usr/lib/systemd/system/
scp kubelet.service k8s-work2:/usr/lib/systemd/system/

6.2.6 Start the kubelet service

systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
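
If a node does not register, its bootstrap CSR is most likely pending; nothing in this setup auto-approves node client CSRs. Approve them on k8s-master1 (the CSR name is whatever kubectl get csr shows):

kubectl get csr
kubectl certificate approve <csr-name>
kubectl get nodes    #nodes appear, but stay NotReady until calico is installed in 6.4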

6.3 Install kube-proxy

6.3.1 kube-proxy certificate request file

cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "ST": "Beijing",
            "L": "Beijing",
            "O": "shanjie",
            "OU":"pingtaibu"
        }
    ]
}
EOF

6.3.2 Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

6.3.3 Generate the kube-proxy.config file

kubectl config set-cluster kubernetes --certificate-authority=ca.pem --embed-certs=true --server=https://192.168.3.254:6443 --kubeconfig=kube-proxy.config

kubectl config set-credentials kube-proxy --client-certificate=kube-proxy.pem --client-key=kube-proxy-key.pem --embed-certs=true --kubeconfig=kube-proxy.config

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=kube-proxy.config

kubectl config use-context default --kubeconfig=kube-proxy.config

6.3.4 kube-proxy configuration and service files

cat > kube-proxy.yml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.3.54
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.config
clusterCIDR: 10.244.0.0/16
healthzBindAddress: 192.168.3.54:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.3.54:10249
mode: "ipvs"
EOF
cat > kube-proxy.service << EOF
[Unit]
Description=Kubernetes KubeProxy
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
--config=/etc/kubernetes/kube-proxy.yml \
--alsologtostderr=true \
--logtostderr=false \
--log-dir=/var/log/kubernetes \
--v=2

Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

6.3.5 Distribute the files

On each work node, remember to adjust the addresses in kube-proxy.yml.

Create the directory: mkdir -p /var/lib/kube-proxy

scp kube-proxy*.pem  k8s-work1:/etc/kubernetes/ssl/
scp kube-proxy*.pem  k8s-work2:/etc/kubernetes/ssl/

scp kube-proxy.config kube-proxy.yml k8s-work1:/etc/kubernetes/
scp kube-proxy.config kube-proxy.yml k8s-work2:/etc/kubernetes/

scp kube-proxy.service k8s-work1:/usr/lib/systemd/system/
scp kube-proxy.service k8s-work2:/usr/lib/systemd/system/

6.3.6 Start the kube-proxy service

systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
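
Confirm IPVS mode took effect; the kubernetes service VIP should appear as a virtual server:

ipvsadm -Ln    #expect a TCP 10.96.0.1:443 entry forwarding to the master IPs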

6.4 Install the calico network plugin

wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml

#uncomment and set to the pod network CIDR
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

kubectl apply -f calico.yaml
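
Watch the rollout; once the calico pods are Running, the nodes switch to Ready:

kubectl get pods -n kube-system -o wide
kubectl get nodes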

6.5 Install CoreDNS (in-cluster DNS)

The manifest below is based on the upstream template: https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/dns/coredns/coredns.yaml.base

#replacements applied to the template
__DNS__DOMAIN__ --> cluster.local
image: registry.k8s.io/coredns/coredns:vXX --> image: coredns/coredns:XX
__DNS__MEMORY__LIMIT__ --> 200Mi
__DNS__SERVER__ --> 10.96.0.2

Create the already-modified manifest below, then apply it: kubectl apply -f coredns.yaml

cat >  coredns.yaml << "EOF"
# __MACHINE_GENERATED_WARNING__

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                  - key: k8s-app
                    operator: In
                    values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns:1.10.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
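
Once the coredns pod is Running, verify in-cluster resolution with a throwaway pod (busybox:1.28 is used because its nslookup is known to behave well):

kubectl get pods -n kube-system -l k8s-app=kube-dns
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local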

6.6 Deploy nginx to verify the cluster

Create nginx.yaml as below, then apply it:

kubectl apply -f nginx.yaml

#verify the nginx welcome page at
192.168.3.54:30001
192.168.3.55:30001

cat >  nginx.yaml  << "EOF"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-web
spec:
  replicas: 2
  selector:
    name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.19.6
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-nodeport
spec:
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
      protocol: TCP
  type: NodePort
  selector:
    name: nginx
EOF
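
A quick check from any host that can reach the work nodes; both should return the nginx welcome page:

curl -s http://192.168.3.54:30001 | grep -i '<title>'
curl -s http://192.168.3.55:30001 | grep -i '<title>'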