Setting up a Kubernetes 1.27+ cluster (containerd)
Create the virtual machines
# Install the VM manager
$ brew install multipass
# Create the virtual nodes
$ multipass launch -n node1 -m 4G -c 4 -d 40G
$ multipass launch -n node2 -m 4G -c 4 -d 40G
$ multipass launch -n node3 -m 4G -c 4 -d 40G
# List the nodes
$ multipass list
Name     State      IPv4            Image
node1    Running    192.168.64.4    Ubuntu 22.04 LTS
node2    Running    192.168.64.5    Ubuntu 22.04 LTS
node3    Running    192.168.64.6    Ubuntu 22.04 LTS
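The steps below copy several local files (./source.list, ./kubernetes.list, ./k8s.conf, ./10-network-security.conf) into system directories, so those files have to exist inside each VM first. A small sketch, assuming they are prepared on the host, using multipass transfer:
# copy the prepared config files from the host into every node's home directory
for node in node1 node2 node3; do
  for f in source.list kubernetes.list k8s.conf 10-network-security.conf; do
    multipass transfer "$f" "${node}:${f}"
  done
done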
Enter a virtual machine node
$ multipass shell node1
Commands to run on every node
$ sudo -i
# 1. Install Docker (the commands below can be put into a single shell script and run together)
sudo cp ./source.list /etc/apt/sources.list
sudo apt-get update
sudo apt-get install -y \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
sudo add-apt-repository \
    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
    $(lsb_release -cs) \
    stable"
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io
#execute docker without sudo
sudo apt install acl
sudo usermod -aG docker $USER
sudo setfacl -m user:$USER:rw /var/run/docker.sock
#test docker install ok
sudo docker run hello-world
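The source.list copied at the top of this script is not included in the post. A minimal sketch, assuming it simply points apt at the Aliyun mirror for Ubuntu 22.04 (jammy) on amd64:
# /etc/apt/sources.list (example contents, Aliyun mirror)
deb https://mirrors.aliyun.com/ubuntu/ jammy main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ jammy-updates main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ jammy-backports main restricted universe multiverse
deb https://mirrors.aliyun.com/ubuntu/ jammy-security main restricted universe multiverse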
# 2. Switch the kubelet container runtime (CRI) to containerd
# containerd was installed together with Docker above, so nothing extra needs to be installed; only a few parameters have to be configured
# Dump the default configuration into config.toml
containerd config default > /etc/containerd/config.toml
# Edit the configuration file
vim /etc/containerd/config.toml
# Change SystemdCgroup = false to SystemdCgroup = true
# Replace the sandbox image:
# sandbox_image = "k8s.gcr.io/pause:3.6"
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"
# Add a mirror endpoint so containerd downloads docker.io images through the accelerator
# Find the line [plugins."io.containerd.grpc.v1.cri".registry.mirrors] and make the block read:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
    endpoint = ["https://ttxrrkr1.mirror.aliyuncs.com"]
# Reload systemd and restart containerd
systemctl daemon-reload && systemctl restart containerd
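If you would rather script the config.toml edits than open vim, roughly equivalent sed calls look like this (a sketch; inspect the file afterwards, and the docker.io mirror block still has to be added by hand):
# change the cgroup driver to systemd
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# point the sandbox (pause) image at the Aliyun registry
sudo sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml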
# 3. Install the Kubernetes components (these can also go into a shell script and be run together)
sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
sudo cp ./kubernetes.list /etc/apt/sources.list.d/
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
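The kubernetes.list copied above is not shown in the post either. With the Aliyun apt key added in the previous command, it would typically contain a single line such as (an assumption based on the Aliyun mirror of the legacy Kubernetes apt repository):
# /etc/apt/sources.list.d/kubernetes.list (example contents)
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main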
# 4. Configure the environment (these can also go into a shell script and be run together)
sudo swapoff -a
sudo timedatectl set-timezone Asia/Shanghai
sudo systemctl restart rsyslog
sudo mkdir -p /etc/sysctl.d
sudo cp ./k8s.conf /etc/sysctl.d/k8s.conf
sudo sysctl --system
sudo cp ./10-network-security.conf /etc/sysctl.d/10-network-security.conf
sudo sysctl --system
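The k8s.conf used here is not shown in the post. For kubeadm it usually carries the bridge-netfilter and IP-forwarding settings below (a sketch; the settings only take effect once the br_netfilter module is loaded, e.g. with sudo modprobe br_netfilter); 10-network-security.conf is site specific and left out. Also note that swapoff -a only lasts until the next reboot.
# /etc/sysctl.d/k8s.conf (typical contents)
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
# optional: make the swap change permanent by commenting out the swap entry in /etc/fstab
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab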
Run on the master node only
# I use node1 as the master node, so the Kubernetes control-plane components are installed inside node1
# 1. Generate a default configuration as a starting point for editing
kubeadm config print init-defaults > kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.64.4 # IP of the master node
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock
  imagePullPolicy: IfNotPresent
  name: node1
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # use the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: 1.27.0 # pin the exact version here
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.1.0.0/16 # added: pod network CIDR
scheduler: {}
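Optionally, pre-pull the control-plane images through the Aliyun registry configured above before initializing; kubeadm can do this from the same config file:
$ kubeadm config images pull --config ./kubeadm.yaml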
# 2. Initialize the master node
$ kubeadm init --config ./kubeadm.yaml
# If initialization fails partway or anything else goes wrong, it can be reset
$ kubeadm reset
# Once initialization succeeds, copy the printed join command and run it on every node that should become a worker
$ kubeadm join 192.168.64.4:6443 --token rsgn8y.1q19abj5ovvlbmds --discovery-token-ca-cert-hash sha256:34a1537511cfc908225d352a4b1547eeaabbf87cc48ab89df92396f02ab099e9
# If you can no longer find that command, don't worry; a new one can be generated
$ kubeadm token create --print-join-command
# 3. Check the node information
$ kubectl get node
# If the command above fails, configure kubectl access first
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl get nodes
# Check the cluster status and confirm that every component is healthy
$ kubectl get cs
$ kubectl get pod -n kube-system
# If a node is not Ready, the kubelet's PLEG check is failing; look at the logs for the exact reason
$ journalctl -f -u kubelet
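If the kubelet logs point at the container runtime rather than the network, crictl (pulled in as a dependency of kubeadm on most distributions) can show what containerd is actually running; a quick sketch:
$ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock pods
$ crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a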
# 4. If the problem is a missing network plugin, install one
$ wget https://docs.projectcalico.org/v3.14/manifests/calico.yaml --no-check-certificate
# Change the pod CIDR in calico.yaml to the subnet that kubeadm init was given (podSubnet above, i.e. the --pod-network-cidr value).
# Open the file with vim, search for 192, and edit the block marked below:
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
# Remove the two leading "# " markers and change 192.168.0.0/16 to 10.1.0.0/16 (the podSubnet configured earlier), so the block becomes:
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.1.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
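The same edit can also be scripted; a sketch (the indentation inside calico.yaml may differ from what the patterns below assume, so verify the result before applying):
sed -i 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' calico.yaml
sed -i 's|#   value: "192.168.0.0/16"|  value: "10.1.0.0/16"|' calico.yaml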
Commands to run on every node
# 1. Install nerdctl, a command-line client for the containerd runtime
$ wget https://github.com/containerd/nerdctl/releases/download/v1.4.0/nerdctl-1.4.0-linux-amd64.tar.gz
$ tar -zxvf nerdctl-1.4.0-linux-amd64.tar.gz -C /usr/local/bin
# 2. Check which images the manifest needs, then pull them on every node:
$ grep image calico.yaml
image: calico/cni:v3.14.0
image: calico/cni:v3.14.0
image: calico/pod2daemon-flexvol:v3.14.0
image: calico/node:v3.14.0
image: calico/kube-controllers:v3.14.0
$ nerdctl pull calico/cni:v3.14.0
$ nerdctl pull calico/pod2daemon-flexvol:v3.14.0
$ nerdctl pull calico/node:v3.14.0
$ nerdctl pull calico/kube-controllers:v3.14.0
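Instead of pulling each image by hand, they can be pulled in a loop straight from the manifest. One caveat worth verifying: kubelet looks for images in containerd's k8s.io namespace, while nerdctl uses its own default namespace unless told otherwise, so -n k8s.io is passed below. Once the images are on every node, apply the manifest on the master node:
# pull every image referenced in calico.yaml (run on each node)
for img in $(grep 'image:' calico.yaml | awk '{print $2}' | sort -u); do
  nerdctl -n k8s.io pull "$img"
done
# then, on the master node only, install Calico
kubectl apply -f calico.yaml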