In the previous two chapters we covered how to initialize the first master node and how to add more master nodes to an existing cluster. In this chapter we continue building the k8s cluster, focusing on how to add worker nodes.

Series recap:
Manually building a k8s cluster from 0 to 1 - initializing the master node
Manually building a k8s cluster from 0 to 1 - adding master nodes

Let's now walk through adding the node1 node; node2 is added in exactly the same way:

1. Machine information

  • node1: 192.168.56.13

2. Environment initialization

  • Disable the firewall, swap, and selinux
# Disable the firewall
sudo systemctl stop firewalld && sudo systemctl disable firewalld
sudo systemctl stop ufw && sudo systemctl disable ufw
# Disable swap (comment out the swap entry in fstab)
sudo swapoff -a
sudo sed -i '/^[^#]*swap/s/^/#/g' /etc/fstab
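The bullet above also mentions selinux. On Ubuntu it is normally absent, but on distributions where it is enabled it can be turned off along these lines (a sketch, only applicable if /etc/selinux/config exists):
# Switch selinux to permissive for the current boot and disable it permanently
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config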
  • Configure /etc/hosts
192.168.56.10 master1
192.168.56.11 master2
192.168.56.12 master3
192.168.56.13 node1
192.168.56.14 node2
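One way to apply these entries (assuming they are not already present) is to append them in one shot:
cat <<EOF | sudo tee -a /etc/hosts
192.168.56.10 master1
192.168.56.11 master2
192.168.56.12 master3
192.168.56.13 node1
192.168.56.14 node2
EOF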
  • Deploy docker
# One-step docker installation
curl -fsSL https://get.docker.com | sudo bash -s docker --mirror Aliyun
sudo systemctl enable docker && sudo systemctl restart docker
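Since the JoinConfiguration later in this article sets the kubelet cgroup driver to cgroupfs, it is worth confirming that docker reports the same driver; if it shows systemd instead, adjust either docker's daemon.json or the kubelet setting so they match:
sudo docker info | grep -i "cgroup driver"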
  • Install required packages
sudo apt-get install socat conntrack ebtables ipset ipvsadm
  • Set the hostname
sudo hostnamectl set-hostname node1

3. Deploy the k8s binaries

  • Copy kubelet, kubectl, and kubeadm to /usr/local/bin and make them executable
curl -L https://dl.k8s.io/v1.20.4/kubernetes-node-linux-amd64.tar.gz -o ./kubernetes-node-linux-amd64.tar.gz
tar -zxvf kubernetes-node-linux-amd64.tar.gz -C ./
# Deploy kubeadm
sudo cp ./kubernetes/node/bin/kubeadm /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubeadm
# Deploy kubectl
sudo cp ./kubernetes/node/bin/kubectl /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubectl
# Deploy kubelet
sudo cp ./kubernetes/node/bin/kubelet /usr/local/bin/ && sudo chmod +x /usr/local/bin/kubelet
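A quick sanity check that the binaries are in place and on the expected version:
kubeadm version
kubectl version --client
kubelet --version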
  • Create the kubelet service unit /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
CPUAccounting=true
MemoryAccounting=true
ExecStart=/usr/local/bin/kubelet
Restart=always
StartLimitInterval=0
RestartSec=10

[Install]
WantedBy=multi-user.target
  • Create the kubelet drop-in config /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generate at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.56.11 --hostname-override=master2"
ExecStart=
ExecStart=/usr/local/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS
  • Enable kubelet
# Reload systemd so the new unit files are picked up
sudo systemctl daemon-reload
sudo systemctl disable kubelet
sudo systemctl enable kubelet
sudo ln -snf /usr/local/bin/kubelet /usr/bin/kubelet

4. Join node1 to the cluster

  • Generate a token and certificate key (run on master1)
# Generate a token
sudo kubeadm token create
# Upload the certificates and print the certificate key (only needed when joining additional master nodes)
sudo kubeadm init phase upload-certs --upload-certs
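As an aside, for a plain worker join kubeadm can also print a ready-to-run join command together with a fresh token (run on master1):
sudo kubeadm token create --print-join-command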
  • Create /etc/kubernetes/kubeadm-config.yaml, replacing ${TOKEN} with the token obtained in the previous step
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 192.168.56.10:6443
    token: "${TOKEN}"
    unsafeSkipCAVerification: true
  tlsBootstrapToken: "${TOKEN}"
nodeRegistration:
  kubeletExtraArgs:
    cgroup-driver: cgroupfs
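The walkthrough implies but does not show the join step itself; assuming the file above is saved as /etc/kubernetes/kubeadm-config.yaml, node1 can then be joined with:
sudo kubeadm join --config /etc/kubernetes/kubeadm-config.yaml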
  • Copy the kubeconfig so kubectl can be used on this node (admin.conf can be copied over from master1)
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown $(id -u):$(id -g) ~/.kube/config
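With the kubeconfig in place you can verify that node1 has registered:
kubectl get nodes -o wide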
  • Add the worker label
kubectl label --overwrite node node1 node-role.kubernetes.io/worker=

5. Configure haproxy

The purpose of haproxy here is to proxy the apiservers of all master nodes and load-balance across them.

  • Create the configuration file /etc/haproxy/haproxy.cfg
global
    maxconn                 4000
    log                     127.0.0.1 local0

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  redispatch
    retries                 5
    timeout http-request    5m
    timeout queue           5m
    timeout connect         30s
    timeout client          30s
    timeout server          15m
    timeout http-keep-alive 30s
    timeout check           30s
    maxconn                 4000

frontend healthz
  bind *:8081
  mode http
  monitor-uri /healthz

frontend kube_api_frontend
  bind 127.0.0.1:6443
  mode tcp
  option tcplog
  default_backend kube_api_backend

backend kube_api_backend
  mode tcp
  balance leastconn
  default-server inter 15s downinter 15s rise 2 fall 2 slowstart 60s maxconn 1000 maxqueue 256 weight 100
  option httpchk GET /healthz
  http-check expect status 200
  server master1 192.168.56.10:6443 check check-ssl verify none
  server master2 192.168.56.11:6443 check check-ssl verify none
  server master3 192.168.56.12:6443 check check-ssl verify none
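Before wiring this into a static pod you can sanity-check the file; one way (assuming docker is available, which it is on this node) is to run haproxy's built-in config check against it:
docker run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy:2.3 haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg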
  • Create the static pod manifest /etc/kubernetes/manifests/haproxy.yaml

Note: this must be a static pod, not a daemonset or any other workload type. A daemonset pod can only be scheduled once the node is up, but kubelet in turn depends on haproxy to reach the apiserver, which creates a circular dependency: after a node restart, neither would come up properly.

apiVersion: v1
kind: Pod
metadata:
  name: haproxy
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    k8s-app: kube-haproxy
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
  nodeSelector:
    kubernetes.io/os: linux
  priorityClassName: system-node-critical
  containers:
  - name: haproxy
    image: haproxy:2.3
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 25m
        memory: 32M
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8081
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8081
    volumeMounts:
    - mountPath: /usr/local/etc/haproxy/
      name: etc-haproxy
      readOnly: true
  volumes:
  - name: etc-haproxy
    hostPath:
      path: /etc/haproxy
  • Confirm that the haproxy pod starts successfully
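A quick way to check (the k8s-app label comes from the manifest above, and the healthz frontend listens on port 8081):
kubectl get pod -n kube-system -l k8s-app=kube-haproxy -o wide
curl http://127.0.0.1:8081/healthz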

6. Update the apiserver address to point to the local haproxy

  • Update the kubelet kubeconfig
sudo sed -i 's#server:.*#server: https://127.0.0.1:6443#g' /etc/kubernetes/kubelet.conf
sudo systemctl daemon-reload && sudo systemctl restart kubelet
  • Update kube-proxy's apiserver address
set -o pipefail && sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get configmap kube-proxy -n kube-system -o yaml | sed 's#server:.*#server: https://127.0.0.1:6443#g' | sudo kubectl --kubeconfig /etc/kubernetes/admin.conf replace -f -
sudo kubectl --kubeconfig /etc/kubernetes/admin.conf delete pod -n kube-system -l k8s-app=kube-proxy --force --grace-period=0
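Afterwards, confirm that the kube-proxy pods were recreated and that the node stays Ready while talking to the apiserver through the local haproxy:
kubectl get pod -n kube-system -l k8s-app=kube-proxy
kubectl get nodes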