Deploying Kubernetes 1.29 on Debian 12 with kubeadm

Preparation

  1. Prepare three Debian 12 virtual machines with the following configuration:

    Hostname      IP             Configuration
    k8s-master1   192.168.31.60  4 vCPU, 8 GiB RAM, 50 GiB disk
    k8s-worker1   192.168.31.61  4 vCPU, 8 GiB RAM, 50 GiB disk
    k8s-worker2   192.168.31.62  4 vCPU, 8 GiB RAM, 50 GiB disk
  2. Allow the root account to log in remotely over SSH

    • Open the /etc/ssh/sshd_config file and find the following line:

      #PermitRootLogin prohibit-password
      
    • Remove the leading # and change prohibit-password to yes to allow root to log in remotely. The modified line should look like this:

      PermitRootLogin yes
      
    • After saving the change, close the editor and restart the SSH service to apply it:

      sudo systemctl restart ssh
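
    • To verify, you can try logging in as root from another machine (a quick sanity check, not required; the IP is the master's address from the table above):

      ssh root@192.168.31.60 'echo root login ok'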
      
  3. Install the required tools with the following command

    apt-get install -y vim curl sudo net-tools telnet chrony ipvsadm
    
  4. Disable swap on all three machines

    swapoff -a
    sed -i 's/.*swap.*/#&/' /etc/fstab
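
    # To confirm swap is fully off: swapon prints nothing, free shows 0B
    swapon --show
    free -h | grep -i swap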
    
  5. Disable the firewall

    iptables -F
    systemctl stop iptables nftables
    systemctl disable iptables nftables
    
  6. Set up passwordless SSH login between the three hosts

    • First run ssh-keygen on each of the three hosts, pressing Enter at every prompt until it finishes

    • Then append the following three entries to the end of /etc/hosts on each of the three VMs

      192.168.31.60 k8s-master1
      192.168.31.61 k8s-worker1
      192.168.31.62 k8s-worker2
      
    • Finally, run the following commands on each of the three hosts

    ssh-copy-id k8s-master1
    ssh-copy-id k8s-worker1
    ssh-copy-id k8s-worker2
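
    # A quick check: each command should print the remote hostname
    # without prompting for a password
    ssh k8s-worker1 hostname
    ssh k8s-worker2 hostname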
    
  7. Adjust the kernel parameters; run the following on each of the three machines

    # Load the br_netfilter module
    modprobe br_netfilter
    
    # Create the configuration file
    cat > /etc/sysctl.d/k8s.conf <<EOF
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward = 1
    EOF
    
    # Apply the kernel settings
    sysctl -p /etc/sysctl.d/k8s.conf
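
    # Note: modprobe lasts only until reboot. A minimal sketch to load the
    # module on every boot (the file name k8s.conf is an arbitrary choice)
    echo br_netfilter > /etc/modules-load.d/k8s.conf

    # Verify the settings are active
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward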
    
  8. Install docker, containerd, and crictl on all three hosts

    # Remove leftover packages to avoid install conflicts
    for pkg in docker.io docker-doc docker-compose podman-docker containerd runc; do sudo apt-get remove $pkg; done
    
    # Update prerequisites before installing
    # Add Docker's official GPG key:
    sudo apt-get update
    sudo apt-get install ca-certificates curl
    sudo install -m 0755 -d /etc/apt/keyrings
    sudo curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc
    sudo chmod a+r /etc/apt/keyrings/docker.asc
    
    # Add the repository to Apt sources:
    echo \
      "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
      $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
      sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
    sudo apt-get update
    
    # Install the container packages
    sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
    
    # Install crictl
    VERSION="v1.29.0"
    wget https://github.com/kubernetes-sigs/cri-tools/releases/download/$VERSION/crictl-$VERSION-linux-amd64.tar.gz
    sudo tar zxvf crictl-$VERSION-linux-amd64.tar.gz -C /usr/local/bin
    rm -f crictl-$VERSION-linux-amd64.tar.gz
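
    # Quick sanity check that all three tools are installed
    docker --version
    containerd --version
    crictl --version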
    
  9. Adjust the containerd configuration

    • Run containerd config default > /etc/containerd/config.toml to generate the default config. Open /etc/containerd/config.toml and change SystemdCgroup = false to SystemdCgroup = true (see the sed sketch below), then run systemctl enable containerd --now && systemctl restart containerd
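
      For example, the SystemdCgroup edit can be made non-interactively with sed, a sketch that assumes the default config generated above:

      sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml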

    • Generate the /etc/crictl.yaml configuration file as follows

      cat > /etc/crictl.yaml <<EOF
      runtime-endpoint: unix:///run/containerd/containerd.sock
      image-endpoint: unix:///run/containerd/containerd.sock
      timeout: 10
      debug: false
      EOF
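
      # To verify that crictl can reach containerd, this should print
      # the runtime status as JSON
      crictl info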
      
  10. Configure the time-sync server on all three hosts by running the following

    echo 'server ntp.aliyun.com iburst' > /etc/chrony/sources.d/local-ntp-server.sources
    chronyc reload sources
    
    # Check the clock status
    chronyc tracking
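
    # ntp.aliyun.com should appear in the source list; the line marked
    # with ^* is the currently selected server
    chronyc sources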
    

Install kubeadm, kubelet, and kubectl

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

# If the directory `/etc/apt/keyrings` does not exist, it should be created before the curl command, read the note below.
# sudo mkdir -p -m 755 /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg

# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

sudo systemctl enable --now kubelet
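
# Sanity check of the installed versions. Note that kubelet will restart
# in a loop until `kubeadm init` runs in the next section; that is expected.
kubeadm version -o short
kubectl version --client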

Initialize the cluster with kubeadm

  1. On all three VMs, set containerd as the crictl runtime endpoint

    crictl config runtime-endpoint unix:///run/containerd/containerd.sock
    
  2. Generate the kubeadm initialization config file and edit it

    • Generate the config file kubeadm.yaml

      kubeadm config print init-defaults > kubeadm.yaml
      
    • Change advertiseAddress to the master node's IP, and change the control-plane node name to k8s-master1, as in the fragment below
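
      After the edits, the relevant part of kubeadm.yaml should look roughly like this (IP and hostname taken from the table above):

      localAPIEndpoint:
        advertiseAddress: 192.168.31.60 # master node IP
        bindPort: 6443
      nodeRegistration:
        criSocket: unix:///var/run/containerd/containerd.sock
        name: k8s-master1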

    • Add the podSubnet field

      kind: ClusterConfiguration
      kubernetesVersion: 1.29.0
      networking:
        dnsDomain: cluster.local
        podSubnet: 10.244.0.0/16 # pod CIDR; this line must be added
        serviceSubnet: 10.96.0.0/12
      scheduler: {}
      
    • Append the kube-proxy and kubelet configurations; the --- separators must not be omitted

      ---
      apiVersion: kubeproxy.config.k8s.io/v1alpha1
      kind: KubeProxyConfiguration
      mode: ipvs
      ---
      apiVersion: kubelet.config.k8s.io/v1beta1
      kind: KubeletConfiguration
      cgroupDriver: systemd
      
  3. Run the initialization: kubeadm init --config=kubeadm.yaml

  4. Grant kubectl access so it can manage the cluster

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    # Copy the credentials to the worker nodes
    scp -r /root/.kube k8s-worker1:/root
    scp -r /root/.kube k8s-worker2:/root
    
  5. Test the kubectl command

    kubectl get nodes
    
  6. If anything goes wrong, run the following to reset, then troubleshoot

    kubeadm reset
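
    # kubeadm reset does not remove kubeconfig files or CNI state; as its
    # output notes, clean them up manually before re-initializing
    rm -rf $HOME/.kube
    rm -rf /etc/cni/net.d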
    
  7. Join k8s-worker1 and k8s-worker2 to the cluster

    # Generate the join command (run on the master)
    kubeadm token create --print-join-command
    
    # Run the printed command on each worker to join the cluster
    kubeadm join 192.168.31.60:6443 --token k1biqb.7vbcgtguh54ju81c --discovery-token-ca-cert-hash sha256:82b02d429821cc106a540a9507d1066a3fe8103d7b79a6581adfdd405744079d
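
    # Back on the master, all three nodes should now be listed; they will
    # show NotReady until a CNI plugin is installed (next step)
    kubectl get nodes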
    
  8. Install Calico to bring up the cluster network

    • Download the v3.27 manifests: fetch the two files tigera-operator.yaml and custom-resources.yaml from https://github.com/projectcalico/calico/tree/release-v3.27/manifests

    • Run kubectl create -f tigera-operator.yaml to install the Calico images and base configuration

    • Edit custom-resources.yaml: change cidr to 10.244.0.0/16 and add the nodeAddressAutodetectionV4 field

      # This section includes base Calico installation configuration.
      # For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
      apiVersion: operator.tigera.io/v1
      kind: Installation
      metadata:
        name: default
      spec:
        # Configures Calico networking.
        calicoNetwork:
          # Note: The ipPools section cannot be modified post-install.
          ipPools:
          - blockSize: 26
            # Set this to your own pod CIDR
            cidr: 10.244.0.0/16
            encapsulation: VXLANCrossSubnet
            natOutgoing: Enabled
            nodeSelector: all()
          # Bind to your own NIC; by default the first interface is matched
          nodeAddressAutodetectionV4:
            interface: ens* 
      ---
      
      # This section configures the Calico API server.
      # For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
      apiVersion: operator.tigera.io/v1
      kind: APIServer
      metadata:
        name: default
      spec: {}
      
    • Run watch kubectl get pods -n calico-system and wait until the network build-out completes

    • Run kubectl get nodes to confirm the network is up and all nodes are Ready
