This tutorial uses three Linux machines (Ubuntu 20.04 LTS in this example): one serves as the master and the other two are nodes that will join it. All three machines must be able to reach each other on the local network.

hostname IP system notes
ubuntu-118 192.168.3.118 Ubuntu 20.04 LTS master
ubuntu-119 192.168.3.119 Ubuntu 20.04 LTS node
ubuntu-120 192.168.3.120 Ubuntu 20.04 LTS node
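
Before starting, it is worth confirming that the machines can reach each other. A minimal check using the IPs from the table above (adjust to your own addresses):

# Run on each machine; every peer should respond.
ping -c 2 192.168.3.118
ping -c 2 192.168.3.119
ping -c 2 192.168.3.120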

Setting up Kubernetes with kubeadm

  1. Install containerd, kubeadm, kubelet and kubectl
    Save the shell script below to a file, for example master.sh, and copy it to all three machines.

Then run the script on each of the three machines with sudo sh master.sh.

To use a different Kubernetes version, change the last line of the script below; this tutorial uses 1.28.0. Available versions can be listed with apt list -a kubeadm.
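
For example, once the Kubernetes apt repository has been added ([TASK 6] in the script), you can list the versions the mirror offers and adjust the pinned version on the script's last line accordingly; 1.28.1-00 below is only an illustration:

apt list -a kubeadm
# then pin the version you want in the script's last line, e.g.
# apt install -qq -y kubeadm=1.28.1-00 kubelet=1.28.1-00 kubectl=1.28.1-00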

The following script targets Ubuntu.

#!/bin/bash

# setup timezone
echo "[TASK 0] Set timezone"
timedatectl set-timezone Asia/Shanghai
apt-get install -y ntpdate >/dev/null 2>&1
ntpdate ntp.aliyun.com


echo "[TASK 1] Disable and turn off SWAP"
sed -i '/swap/d' /etc/fstab
swapoff -a

echo "[TASK 2] Stop and Disable firewall"
systemctl disable --now ufw >/dev/null 2>&1

echo "[TASK 3] Enable and Load Kernel modules"
cat >/etc/modules-load.d/containerd.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

echo "[TASK 4] Add Kernel settings"
cat >/etc/sysctl.d/kubernetes.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
EOF
sysctl --system >/dev/null 2>&1

echo "[TASK 5] Install containerd runtime"
mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo   "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
apt -qq update >/dev/null 2>&1
apt install -qq -y containerd.io >/dev/null 2>&1
containerd config default >/etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's/registry.k8s.io\/pause:3.6/registry.aliyuncs.com\/google_containers\/pause:3.9/g' /etc/containerd/config.toml 
systemctl restart containerd
systemctl enable containerd >/dev/null 2>&1


echo "[TASK 6] Add apt repo for kubernetes"
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add - > /dev/null 2>&1
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list > /dev/null 2>&1
apt-get update >/dev/null 2>&1

echo "[TASK 7] Install Kubernetes components (kubeadm, kubelet and kubectl)"
apt install -qq -y kubeadm=1.28.0-00 kubelet=1.28.0-00 kubectl=1.28.0-00 >/dev/null 2>&1
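
Optionally, once the script has finished you can hold the packages so routine apt upgrades do not move the cluster to an unintended version (run on each machine; this is a separate step, not part of master.sh):

sudo apt-mark hold kubelet kubeadm kubectl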
  2. After the script completes, check for errors. If there are none, verify the versions of the three components:
kubeadm version
kubelet --version
kubectl version --client
  3. Initialize the cluster with kubeadm

--image-repository registry.aliyuncs.com/google_containers pulls the control-plane images through the Alibaba Cloud mirror

--apiserver-advertise-address is the local IP address used to communicate with the other nodes

--pod-network-cidr is the pod network address space

sudo kubeadm init --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address=192.168.3.118  --pod-network-cidr=10.244.0.0/16

  4. Follow the printed instructions to configure kubectl and save the kubeadm join parameters

Output of a successful initialization:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.118:6443 --token j681q2.mj5g0chbvax6uv3a \
        --discovery-token-ca-cert-hash sha256:16b8f444918be2f57c6297ec8deb6a3f8c73248a53f484e2f5c3a17c638a3c39

Configure .kube:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the cluster status:

kubectl get nodes
kubectl get pods -A
  5. Shell autocompletion (Bash)
    Enable kubectl command autocompletion with the following commands:
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
  6. Deploy a pod network add-on

Pick a network add-on from https://kubernetes.io/docs/concepts/cluster-administration/addons/ and follow the linked instructions to deploy it. Here we choose the overlay solution flannel; the steps are as follows.

Download https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml and make the following changes:
curl -LO https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
Make sure Network matches the --pod-network-cidr we configured, 10.244.0.0/16:

net-conf.json: |
  {
    "Network": "10.244.0.0/16",
    "Backend": {
      "Type": "vxlan"
    }
  }

In the args of the kube-flannel container, make sure --iface=enp0s8 is present, where enp0s8 is the name of the interface carrying our --apiserver-advertise-address=192.168.3.118 (a sed sketch follows the snippet below):

- name: kube-flannel
  #image: flannelcni/flannel:v0.18.0 for ppc64le and mips64le (dockerhub limitations may apply)
  image: rancher/mirrored-flannelcni-flannel:v0.18.0
  command:
  - /opt/bin/flanneld
  args:
  - --ip-masq
  - --kube-subnet-mgr
  - --iface=enp0s8
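
If you prefer not to edit the manifest by hand, the --iface argument can be patched in with sed. A minimal sketch assuming GNU sed (the default on Ubuntu) and that enp0s8 is your interface name:

# Insert "- --iface=enp0s8" right after the --kube-subnet-mgr argument, keeping the same indentation.
sed -i 's/^\( *\)- --kube-subnet-mgr$/&\n\1- --iface=enp0s8/' kube-flannel.yml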
  

Save the modified file as kube-flannel.yml, upload it to the master node, and run:

kubectl apply -f kube-flannel.yml

Check the result. If the output looks like the following, with all pods (especially coredns and flannel) in Running status, the network add-on has been deployed successfully.

kewen@ubuntu-118:~/kubernetesProject$  kubectl get pods -A
NAMESPACE      NAME                                 READY   STATUS    RESTARTS      AGE
default        web                                  1/1     Running   0             18h
kube-flannel   kube-flannel-ds-8dlms                1/1     Running   2 (21h ago)   12d
kube-flannel   kube-flannel-ds-gpzfk                1/1     Running   2 (21h ago)   12d
kube-flannel   kube-flannel-ds-jdqpr                1/1     Running   0             19h
kube-system    coredns-66f779496c-5rf9j             1/1     Running   1             12d
kube-system    coredns-66f779496c-7zwhx             1/1     Running   1 (22h ago)   12d
kube-system    etcd-ubuntu-118                      1/1     Running   2 (22h ago)   12d
kube-system    kube-apiserver-ubuntu-118            1/1     Running   2 (22h ago)   12d
kube-system    kube-controller-manager-ubuntu-118   1/1     Running   3 (22h ago)   12d
kube-system    kube-proxy-295td                     1/1     Running   1 (22h ago)   12d
kube-system    kube-proxy-q6pmn                     1/1     Running   1 (21h ago)   12d
kube-system    kube-proxy-vv9qx                     1/1     Running   1 (21h ago)   12d
kube-system    kube-scheduler-ubuntu-118            1/1     Running   3 (22h ago)   12d
  7. Add the worker nodes
    Run the kubeadm join command saved earlier on each worker node as root:
kubeadm join 192.168.3.118:6443 --token j681q2.mj5g0chbvax6uv3a \
        --discovery-token-ca-cert-hash sha256:16b8f444918be2f57c6297ec8deb6a3f8c73248a53f484e2f5c3a17c638a3c39
  8. Check the join status on the master node
    When the joined nodes show a STATUS of Ready, the cluster setup is complete:
kewen@ubuntu-118:~/kubernetesProject$ kubectl get nodes
NAME         STATUS   ROLES           AGE   VERSION
ubuntu-118   Ready    control-plane   11d   v1.28.0
ubuntu-119   Ready    <none>          11d   v1.28.0
ubuntu-120   Ready    <none>          11d   v1.28.0

Common issues after setting up Kubernetes with kubeadm

The internal IP is displayed incorrectly

If a node's internal IP is not what you expect, for example when it should be the address of enp0s8, fix it as follows.

  1. Edit /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and append a line that sets KUBELET_EXTRA_ARGS, specifying --node-ip as this machine's enp0s8 address, then save and exit. For example, on the master:
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=192.168.3.118"' | sudo tee -a /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
  2. Reload systemd and restart kubelet; the node's internal IP will now be displayed correctly:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
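
Verify from the master; the INTERNAL-IP column should now show the expected address:

kubectl get nodes -o wide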

What should you do if you forget the join token or the discovery-token-ca-cert-hash?

The token can be listed with kubeadm token list:

kubeadm token list

If there is no output, the token has expired. Run the following command to generate a new token and print the full join command; --ttl=0 creates a token that never expires:

kubeadm token create --print-join-command --ttl=0

The discovery-token-ca-cert-hash can be computed as follows:

openssl x509 -in /etc/kubernetes/pki/ca.crt -pubkey -noout |
openssl pkey -pubin -outform DER |
openssl dgst -sha256
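
openssl dgst prints a prefix (such as "(stdin)= ") before the hex digest, and kubeadm join expects the value in the form sha256:<hex>. A small sketch that strips the prefix and prints a value ready to paste after --discovery-token-ca-cert-hash:

echo "sha256:$(openssl x509 -in /etc/kubernetes/pki/ca.crt -pubkey -noout |
  openssl pkey -pubin -outform DER |
  openssl dgst -sha256 | sed 's/^.* //')"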

kubeadm init reports an error after "Initial timeout of 40s passed"

With this setup, the error is typically caused by the kubelet failing to pull the sandbox (pause) image or by a cgroup driver mismatch; the SystemdCgroup and pause-image edits in [TASK 5] of the script above address both.
