Preface

These notes assume Docker, a container runtime, kubectl, kubeadm, and kubelet are already installed. There is no k8s installation tutorial here (I had written one up in markdown but accidentally deleted it); for installing kubectl, kubeadm, kubelet, etc., see the links below.

https://zhuanlan.zhihu.com/p/620460418?utm_id=0

https://blog.csdn.net/qq_33958966/article/details/136219254

Official docs

The official docs are already quite detailed in my opinion; this just records my own installation process.

https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/ha-topology/

https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/setup-ha-etcd-with-kubeadm/

Hosts

My machine cannot handle many VMs, so this is only a small test of the external etcd topology to get familiar with the workflow.

https://etcd.io/docs/v3.3/faq/#why-an-odd-number-of-cluster-members

  • master1: 192.168.44.135
  • master2: 192.168.44.136
  • etcd: 192.168.44.137

VIP: 192.168.44.100
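Before generating the kube-vip manifest, it is worth confirming the NIC name and making sure nothing already answers on the VIP. A minimal check, assuming the interface and addresses above:

# list interfaces and their addresses to confirm the NIC name (ens33 here)
ip -br addr show
# the VIP should not respond yet; a timeout here is the expected result
ping -c 1 -W 1 192.168.44.100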

Installation

kube-vip static pod for the virtual IP (run on every control-plane node)

https://kube-vip.io/docs/installation/static/

https://kubernetes.io/zh-cn/docs/tasks/configure-pod-container/static-pod/

# virtual IP
export VIP=192.168.44.100
# network interface
export INTERFACE=ens33
# pick the kube-vip version
export KVVERSION=v0.6.3
# make the kube-vip CLI available on the command line
alias kube-vip="docker run --network host --rm ghcr.io/kube-vip/kube-vip:$KVVERSION"
# create the static pod manifest directory
mkdir -p /etc/kubernetes/manifests
# generate the static pod manifest
kube-vip manifest pod \
    --interface $INTERFACE \
    --address $VIP \
    --controlplane \
    --services \
    --arp \
    --leaderElection | tee /etc/kubernetes/manifests/kube-vip.yaml

You can change kube-vip's image pull policy from Always to IfNotPresent:

image: ghcr.io/kube-vip/kube-vip:v0.6.3
imagePullPolicy: Always
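A one-line way to make that change in the generated manifest, assuming the path used in the step above:

sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' /etc/kubernetes/manifests/kube-vip.yaml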

(See also: a CSDN post on using local images in k8s with imagePullPolicy: Never / IfNotPresent.)

Once kubelet starts later on, it will load the static pod manifests automatically.
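When kubelet is actually running on a control-plane node (after kubeadm init below), a rough way to confirm kube-vip came up and the VIP is bound, assuming the interface and VIP above:

crictl ps | grep kube-vip
ip addr show ens33 | grep 192.168.44.100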

Installing etcd
1. Configure kubelet to be the service manager for etcd (run on every etcd server)
cat << EOF > /etc/systemd/system/kubelet.service.d/kubelet.conf
# Replace "systemd" below with the cgroup driver used by your container runtime.
# The kubelet default is "cgroupfs".
# If needed, replace the value of "containerRuntimeEndpoint" with a different container runtime socket.
#
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow
cgroupDriver: systemd
address: 127.0.0.1
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
cat << EOF > /etc/systemd/system/kubelet.service.d/20-etcd-service-manager.conf
[Service]
ExecStart=
ExecStart=/usr/bin/kubelet --config=/etc/systemd/system/kubelet.service.d/kubelet.conf
Restart=always
EOF
systemctl daemon-reload
systemctl restart kubelet
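A simple way to confirm the drop-in took effect (not part of the official steps): systemctl cat shows the unit together with its drop-ins, and the service should be active, although kubelet keeps logging errors until a static pod manifest exists.

systemctl cat kubelet
systemctl status kubelet --no-pager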
2. Create configuration files for kubeadm (only needs to be run on one etcd node)
export HOST0=192.168.44.137

export NAME0="master03"

mkdir -p /tmp/${HOST0}

HOSTS=(${HOST0})

NAMES=(${NAME0})


for i in "${!HOSTS[@]}"; do
HOST=${HOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: InitConfiguration
nodeRegistration:
  name: ${NAME}
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent 
  taints: null
localAPIEndpoint:
  advertiseAddress: ${HOST}
---
apiVersion: "kubeadm.k8s.io/v1beta3"
kind: ClusterConfiguration
etcd:
  local:
    serverCertSANs:
    - "${HOST}"
    peerCertSANs:
    - "${HOST}"
    extraArgs:
      initial-cluster: ${NAME}=https://${HOST}:2380
      initial-cluster-state: new
      name: ${NAME}
      listen-peer-urls: https://${HOST}:2380
      listen-client-urls: https://${HOST}:2379
      advertise-client-urls: https://${HOST}:2379
      initial-advertise-peer-urls: https://${HOST}:2380
EOF
done
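If this is later grown to the recommended three etcd members, the same loop can be reused once the arrays are extended. A sketch with hypothetical addresses and names; note that in the official three-host example initial-cluster lists every member, not just the local one:

# hypothetical additional etcd hosts
export HOST1=192.168.44.138
export HOST2=192.168.44.139
export NAME1="etcd02"
export NAME2="etcd03"
mkdir -p /tmp/${HOST1} /tmp/${HOST2}
HOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=(${NAME0} ${NAME1} ${NAME2})
# in each generated kubeadmcfg.yaml, initial-cluster then becomes:
#   initial-cluster: ${NAMES[0]}=https://${HOSTS[0]}:2380,${NAMES[1]}=https://${HOSTS[1]}:2380,${NAMES[2]}=https://${HOSTS[2]}:2380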
3. Generate the certificate authority (one machine only)

If you already have a CA, the only step is to copy the CA's crt and key files to /etc/kubernetes/pki/etcd/ca.crt and /etc/kubernetes/pki/etcd/ca.key. After copying these files, continue with the next step, "Create certificates for each member".

If you do not have a CA yet, run this command on $HOST0 (where you generated the configuration files for kubeadm):

kubeadm init phase certs etcd-ca

This creates the following two files:

  • /etc/kubernetes/pki/etcd/ca.crt
  • /etc/kubernetes/pki/etcd/ca.key
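A quick sanity check of the generated CA, if wanted:

ls -l /etc/kubernetes/pki/etcd/
openssl x509 -in /etc/kubernetes/pki/etcd/ca.crt -noout -subject -dates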
4. Create certificates for each member (one machine only)

With a single etcd node only the ${HOST0} commands below are strictly needed; the ${HOST2} and ${HOST1} blocks come from the official three-host example.
kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# clean up non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# no need to move the certs because they are for HOST0

# clean up certificates that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete
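A rough way to see what ended up where: HOST0's certificates stay under /etc/kubernetes/pki, while anything staged for other hosts sits under the matching /tmp/<host> directory.

find /etc/kubernetes/pki -type f | sort
ls -R /tmp/${HOST0}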
5. Copy the certificates and kubeadm configs (needed for every etcd node except the first one)

I only have one etcd node, so there is nothing to copy; the copying can be done with scp or similar commands.

# official example
USER=ubuntu
HOST=${HOST1}
scp -r /tmp/${HOST}/* ${USER}@${HOST}:
ssh ${USER}@${HOST}
USER@HOST $ sudo -Es
root@HOST $ chown -R root:root pki
root@HOST $ mv pki /etc/kubernetes/
6. Create the static pod manifests (every etcd machine)
# node 0
kubeadm init phase etcd local --config=/tmp/${HOST0}/kubeadmcfg.yaml
# the remaining nodes
kubeadm init phase etcd local --config=$HOME/kubeadmcfg.yaml
7. Check that the installation succeeded (can be run on any etcd node)
# Command 1: list running containers
crictl ps

# CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
# 915e96067b659       86b6af7dd652c       15 minutes ago      Running             etcd                0                   253c0a803ce00       etcd-master03

# Command 2: exec into the etcd container
crictl exec -it 915e96067b659 /bin/sh

# Command 3: check the etcd endpoint health (run inside the container)
ETCDCTL_API=3 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://192.168.44.137:2379 endpoint health
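Beyond endpoint health, a similar check of the member list can be run from inside the same container; the flags mirror command 3 above:

ETCDCTL_API=3 etcdctl \
--cert /etc/kubernetes/pki/etcd/peer.crt \
--key /etc/kubernetes/pki/etcd/peer.key \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--endpoints https://192.168.44.137:2379 member list -w table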
8. Copy certificates (from any etcd node to the first control-plane node)
export CONTROL_PLANE="root@192.168.44.135"
scp /etc/kubernetes/pki/etcd/ca.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.crt "${CONTROL_PLANE}":
scp /etc/kubernetes/pki/apiserver-etcd-client.key "${CONTROL_PLANE}":
9. Move the certificates into place (first control-plane node)
# cd /root
# make sure the target directory exists on a fresh control-plane node
mkdir -p /etc/kubernetes/pki/etcd
mv ./ca.crt /etc/kubernetes/pki/etcd/
mv ./apiserver-etcd-client.crt /etc/kubernetes/pki/apiserver-etcd-client.crt
mv ./apiserver-etcd-client.key /etc/kubernetes/pki/apiserver-etcd-client.key
10. kubeadm init config (first control-plane node)
vim external-etcd-config.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.44.135
  bindPort: 6443
nodeRegistration: 
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent 
  taints: null
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
kubernetesVersion: 1.27.3
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16
controlPlaneEndpoint: "192.168.44.100:6443"
etcd:
  external:
    endpoints:
      # list of etcd cluster endpoints; I only have one
      - https://192.168.44.137:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
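As an optional extra step (not in the original flow), the control-plane images can be pre-pulled with the same config before running init:

kubeadm config images pull --config ./external-etcd-config.yaml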
11. kubeadm init (first control-plane node)
sudo kubeadm init --config ./external-etcd-config.yaml --upload-certs
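If init succeeds, kubeadm prints the usual kubectl setup, roughly the standard steps below, after which a CNI plugin (flannel, calico, etc.) still needs to be installed:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config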
Run on the remaining master nodes:
# Everything before --cri-socket is printed by kubeadm after the first control-plane node
# initializes successfully; replace --cri-socket with your own container runtime socket.
kubeadm join 192.168.44.100:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:38ffda8132b1336259118e99cbbc6941c55592f0af71c8ccb33eb6df595adb38 \
    --control-plane --certificate-key 998a9ffe696e3bd1a9ec2d387e535c937cc7276e0e6d59bcc88334142f5a953e \
    --cri-socket unix:///var/run/cri-dockerd.sock
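Once the other masters have joined, a quick sanity check from the first control-plane node (assuming kubectl was set up as above):

kubectl get nodes -o wide
kubectl get pods -n kube-system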

Afterword

This is just a quick write-up; there are many details involved, so read the documentation to understand the concepts.

This is not truly highly available: if the single etcd node goes down, the apiserver becomes unavailable. But as long as etcd remains available, the cluster keeps working even with only one master.

So the fault tolerance of a k8s HA cluster mainly depends on the etcd cluster.
