Deploying a Kubernetes 1.29.x Cluster from Scratch
1. Environment Preparation
The lab environment for this walkthrough runs on VMware Workstation 16.
1.1 Lab Node Planning
Server | Hostname | IP address | OS | Role | Hardware | Notes |
---|---|---|---|---|---|---|
Jump server | jumpserver.shiyan.com | 172.172.8.11 | CentOS 7 | NTP server | RAM: 4 GB, CPU: 4 cores, Disk: 20 GB (no partitioning requirements) | Optional VM; due to the author's environment constraints the other nodes can only be reached through this jump server |
k8s-master01 | k8s-master01.shiyan.com | 172.172.8.61 | CentOS 7 | k8s-master | RAM: 4 GB, CPU: 4 cores, Disk: 50 GB (no partitioning requirements) | Kernel 3.10 or later required; disabling swap, the firewall, and SELinux is recommended |
k8s-node1 | k8s-node1.shiyan.com | 172.172.8.65 | CentOS 7 | k8s-node | RAM: 4 GB, CPU: 4 cores, Disk: 50 GB (no partitioning requirements) | Kernel 3.10 or later required; disabling swap, the firewall, and SELinux is recommended |
k8s-node2 | k8s-node2.shiyan.com | 172.172.8.66 | CentOS 7 | k8s-node | RAM: 4 GB, CPU: 4 cores, Disk: 50 GB (no partitioning requirements) | Kernel 3.10 or later required; disabling swap, the firewall, and SELinux is recommended |
1.2 OS and Component Versions
Component | Version |
---|---|
Kubernetes (core) | 1.29.3 |
Docker CE (container engine) | 25.0.4 |
cri-dockerd (CRI adapter) | 0.3.8 |
Calico (network plugin) | v3.27.2 |
Operating system | CentOS 7.5, kernel 5.4.272 |
1.3 Pre-deployment Preparation
1.3.1 Create the working directory
[root@jumpserver ~]# mkdir /root/k8s-cluster ; cd /root/k8s-cluster
1.3.2 Configure Ansible
[root@jumpserver k8s-cluster]# vim ansible.cfg
[defaults]
inventory = iplist
host_key_checking = False
remote_user = root
[root@jumpserver k8s-cluster]# vim iplist
[k8s]
172.172.8.61 hostname=k8s-master01.shiyan.com ansible_ssh_pass=123456
172.172.8.65 hostname=k8s-node1.shiyan.com ansible_ssh_pass=123456
172.172.8.66 hostname=k8s-node2.shiyan.com ansible_ssh_pass=123456
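Before going further, you can optionally verify that the inventory and the passwords work (a quick sketch; password-based SSH in Ansible requires sshpass on the jump server):
[root@jumpserver k8s-cluster]# yum -y install sshpass
[root@jumpserver k8s-cluster]# ansible all -m ping # every host should answer "pong"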
1.3.3 Configure passwordless SSH from the jump server to the other hosts
[root@jumpserver k8s-cluster]# ssh-keygen # press Enter through all the prompts
[root@jumpserver k8s-cluster]# ansible all -m authorized_key -a "user=root state=present key='{{ lookup('file', '/root/.ssh/id_rsa.pub') }}'"
1.3.4 Set the hostnames of the lab nodes
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'hostnamectl set-hostname {{hostname}}'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'hostname'
172.172.8.61 | CHANGED | rc=0 >>
k8s-master01.shiyan.com
172.172.8.66 | CHANGED | rc=0 >>
k8s-node2.shiyan.com
172.172.8.65 | CHANGED | rc=0 >>
k8s-node1.shiyan.com
1.3.5 Configure /etc/hosts and push it to all hosts
[root@jumpserver k8s-cluster]# vim hosts.yaml
---
- name: Configure /etc/hosts
  hosts: all
  become: yes
  tasks:
    - name: Configure /etc/hosts
      blockinfile:
        path: /etc/hosts
        block: |
          172.172.8.61 k8s-master01.shiyan.com k8s-master01
          172.172.8.65 k8s-node1.shiyan.com k8s-node1
          172.172.8.66 k8s-node2.shiyan.com k8s-node2
[root@jumpserver k8s-cluster]# ansible-playbook hosts.yaml
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/hosts'
1.3.6 Configure the NTP server
The jump server acts as the NTP server and the other nodes are its clients; this lab uses chrony.
# Server-side configuration
[root@jumpserver k8s-cluster]# yum -y install chrony ntpdate
[root@jumpserver k8s-cluster]# ntpdate ntp.aliyun.com # sync the time once manually
[root@jumpserver k8s-cluster]# vim /etc/chrony.conf
[root@jumpserver k8s-cluster]# cat /etc/chrony.conf
server ntp.aliyun.com
rtcsync
allow 172.172.8.0/24
local stratum 10
logdir /var/log/chrony
[root@jumpserver k8s-cluster]# systemctl restart chronyd
[root@jumpserver k8s-cluster]# systemctl enable chronyd
# Client-side configuration
## Create the configuration template
[root@jumpserver k8s-cluster]# vim chrony.conf.template
[root@jumpserver k8s-cluster]# cat chrony.conf.template
server 172.172.8.11
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
[root@jumpserver k8s-cluster]# vim chrony.yaml
---
- hosts: all
  gather_facts: no
  tasks:
    - name: Install chrony ntpdate
      yum:
        name:
          - chrony
          - ntpdate
        state: present
    - name: Copy file chrony.conf.template to server
      copy:
        src: chrony.conf.template
        dest: /etc/chrony.conf
    - name: ntpdate 172.172.8.11
      shell: ntpdate 172.172.8.11
    - name: Restart service chronyd
      service:
        name: chronyd
        state: restarted
        enabled: true
[root@jumpserver k8s-cluster]# ansible-playbook chrony.yaml
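To confirm the clients are actually syncing from 172.172.8.11, an optional check (chronyc ships with chrony):
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'chronyc sources'
# 172.172.8.11 should appear with a ^* marker once each node has synchronized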
1.3.7 Check machine-id
Because the lab servers were cloned from the same template, they all share the same machine-id, which must be regenerated.
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/machine-id'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'rm -f /etc/machine-id && systemd-machine-id-setup'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'cat /etc/machine-id'
1.3.8 Disable the firewall and SELinux
[root@jumpserver k8s-cluster]# ansible all -m shell -a "systemctl disable firewalld --now"
# disable permanently in the config file && disable immediately for the current session
[root@jumpserver k8s-cluster]# ansible all -m shell -a "sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config && setenforce 0"
1.3.9 Disable swap
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'swapon -s'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'swapoff -a'
[root@jumpserver k8s-cluster]# ansible all -m shell -a "cp /etc/fstab /etc/fstab_bak_`date +%Y%m%d%H%M%S` "
[root@jumpserver k8s-cluster]# ansible all -m shell -a "sed -i 's/.*swap.*/#&/g' /etc/fstab"
Once everything above is done, it is recommended to shut the VMs down and take a snapshot.
1.3.10 Tune kernel parameters
[root@jumpserver k8s-cluster]# cat k8s.conf
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=k8s.conf dest=/etc/sysctl.d/k8s.conf'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'modprobe br_netfilter; modprobe overlay'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'sysctl -p /etc/sysctl.d/k8s.conf '
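These two modules are loaded only for the current boot, and the nodes are rebooted in the next step. A minimal sketch to load them persistently via systemd's modules-load.d mechanism (the local file name k8s-modules.conf is arbitrary):
cat > k8s-modules.conf <<EOF
br_netfilter
overlay
EOF
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=k8s-modules.conf dest=/etc/modules-load.d/k8s.conf'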
1.3.11 Upgrade the system kernel
# Import the GPG key
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org'
# Add the ELRepo yum repository
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm'
# Install the long-term maintenance (lt) kernel
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum --enablerepo="elrepo-kernel" -y install kernel-lt.x86_64'
# Set the default grub2 boot entry to 0 (the newly installed kernel)
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'grub2-set-default 0'
# Regenerate the grub2 configuration
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'grub2-mkconfig -o /boot/grub2/grub.cfg'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'reboot'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'uname -r'
172.172.8.66 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64
172.172.8.65 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64
172.172.8.61 | CHANGED | rc=0 >>
5.4.272-1.el7.elrepo.x86_64
1.3.12 Install the IPVS kernel modules
# Create the module-loading script
cat > ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=ipvs.modules dest=/etc/sysconfig/modules/ipvs.modules mode=755'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'bash /etc/sysconfig/modules/ipvs.modules '
# Verify the modules are loaded
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'lsmod | grep -e ip_vs -e nf_conntrack'
172.172.8.65 | CHANGED | rc=0 >>
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 147456 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
172.172.8.61 | CHANGED | rc=0 >>
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 147456 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
172.172.8.66 | CHANGED | rc=0 >>
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 155648 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 147456 1 ip_vs
nf_defrag_ipv6 24576 2 nf_conntrack,ip_vs
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 3 nf_conntrack,xfs,ip_vs
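Optionally install the user-space ipset/ipvsadm tools now; kube-proxy does not require them, but they make it easy to inspect the IPVS rules later (see section 3.7):
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install ipset ipvsadm'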
2. Install docker-ce and cri-dockerd
Starting with version 1.24, Kubernetes defaults to containerd as its container runtime and no longer ships the built-in dockershim adapter. To keep using Docker as the container runtime for Kubernetes, cri-dockerd must be installed.
2.1 Install docker-ce
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install docker-ce'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'docker -v '
172.172.8.66 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
172.172.8.65 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
172.172.8.61 | CHANGED | rc=0 >>
Docker version 25.0.4, build 1a576c5
# Add the Docker daemon configuration file
cat > daemon.json << EOF
{
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn"
],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=daemon.json dest=/etc/docker/daemon.json'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable --now docker'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'docker info | grep Registry -A1'
172.172.8.61 | CHANGED | rc=0 >>
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
172.172.8.65 | CHANGED | rc=0 >>
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
172.172.8.66 | CHANGED | rc=0 >>
Registry Mirrors:
https://docker.mirrors.ustc.edu.cn/
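It is also worth confirming that the cgroup driver switched to systemd, since kubelet will be configured with the same driver in section 3.2:
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'docker info | grep -i "cgroup driver"'
# expected on every node: Cgroup Driver: systemd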
2.2 Install cri-dockerd
[root@jumpserver k8s-cluster]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.x86_64.rpm
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=cri-dockerd-0.3.8-3.el7.x86_64.rpm dest=/root/'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum -y install /root/cri-dockerd-0.3.8-3.el7.x86_64.rpm'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable --now cri-docker'
# Fetch the systemd unit file that needs to be modified
[root@jumpserver k8s-cluster]# scp 172.172.8.61:/usr/lib/systemd/system/cri-docker.service ./
# Append a pause-image flag after ExecStart=/usr/bin/cri-dockerd. If your network can reach it, use the upstream image: --pod-infra-container-image=registry.k8s.io/pause:3.9
# Otherwise use the Aliyun mirror: --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
# The resulting ExecStart line:
ExecStart=/usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=cri-docker.service dest=/usr/lib/systemd/system/cri-docker.service'
# Reload systemd and restart cri-dockerd
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl daemon-reload && systemctl restart cri-docker'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'ps -ef | grep cri-dockerd'
172.172.8.61 | CHANGED | rc=0 >>
root 4288 1 0 14:59 ? 00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root 4454 4453 0 15:00 pts/0 00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root 4456 4454 0 15:00 pts/0 00:00:00 grep cri-dockerd
172.172.8.66 | CHANGED | rc=0 >>
root 4100 1 0 14:59 ? 00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root 4263 4262 0 15:00 pts/0 00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root 4265 4263 0 15:00 pts/0 00:00:00 grep cri-dockerd
172.172.8.65 | CHANGED | rc=0 >>
root 4095 1 0 14:59 ? 00:00:00 /usr/bin/cri-dockerd --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9 --container-runtime-endpoint fd://
root 4259 4258 0 15:00 pts/0 00:00:00 /bin/sh -c ps -ef | grep cri-dockerd
root 4261 4259 0 15:00 pts/0 00:00:00 grep cri-dockerd
3. Kubernetes Cluster Deployment
3.1 Install kubelet, kubeadm, and kubectl
cat > kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.29/rpm/repodata/repomd.xml.key
EOF
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubernetes.repo dest=/etc/yum.repos.d/'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'yum install -y kubelet kubeadm kubectl'
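A quick sanity check that all three nodes received the same 1.29.x packages:
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'kubeadm version -o short ; kubelet --version'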
3.2 Configure kubelet
[root@jumpserver k8s-cluster]# scp 172.172.8.61:/etc/sysconfig/kubelet ./
[root@jumpserver k8s-cluster]# vim kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubelet dest=/etc/sysconfig/kubelet'
# Append --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml after ExecStart in the unit file
[root@jumpserver k8s-cluster]# scp 172.172.8.61:/lib/systemd/system/kubelet.service ./
[root@jumpserver k8s-cluster]# cat kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=https://kubernetes.io/docs/
Wants=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
Restart=always
StartLimitInterval=0
RestartSec=10
[Install]
WantedBy=multi-user.target
[root@jumpserver k8s-cluster]# ansible all -m copy -a 'src=kubelet.service dest=/lib/systemd/system/kubelet.service'
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl enable kubelet && systemctl start kubelet'
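At this point kubelet will keep restarting because /var/lib/kubelet/config.yaml does not exist yet; this is expected and resolves itself after kubeadm init / kubeadm join. You can still confirm the unit is enabled:
[root@jumpserver k8s-cluster]# ansible all -m shell -a 'systemctl is-enabled kubelet'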
3.3 Pull images (run on the master node)
# List the required images. These point to registry.k8s.io, which is hard to reach from mainland China, so the Aliyun mirror is used instead.
[root@k8s-master01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.29.3
registry.k8s.io/kube-controller-manager:v1.29.3
registry.k8s.io/kube-scheduler:v1.29.3
registry.k8s.io/kube-proxy:v1.29.3
registry.k8s.io/coredns/coredns:v1.11.1
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.12-0
# Pull the images from the Aliyun mirror
[root@k8s-master01 ~]# kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.29.3
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.11.1
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.12-0
[root@k8s-master01 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-apiserver v1.29.3 39f995c9f199 4 days ago 127MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.29.3 6052a25da3f9 4 days ago 122MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.29.3 8c390d98f50c 4 days ago 59.6MB
registry.aliyuncs.com/google_containers/kube-proxy v1.29.3 a1d263b5dc5b 4 days ago 82.4MB
registry.aliyuncs.com/google_containers/etcd 3.5.12-0 3861cfcd7c04 6 weeks ago 149MB
registry.aliyuncs.com/google_containers/coredns v1.11.1 cbb01a7bd410 7 months ago 59.8MB
registry.aliyuncs.com/google_containers/pause 3.9 e6f181688397 17 months ago 744kB
3.4 Initialize the cluster
[root@k8s-master01 ~]# kubeadm init --kubernetes-version=v1.29.3 --pod-network-cidr=100.100.0.0/16 --apiserver-advertise-address=172.172.8.61 --cri-socket unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers
If the init phase fails with errors, clean up the environment before retrying:
[root@k8s-master01 ~]# systemctl stop kubelet
[root@k8s-master01 ~]# for i in `crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a | awk 'NR == 1 {next} {print $1}'`
do
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock stop $i
crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock rm $i
done
[root@k8s-master01 ~]# for i in `docker ps -a | awk 'NR == 1 {next} {print $1}'`
do
docker stop $i
docker rm $i
done
[root@k8s-master01 ~]# mv /etc/kubernetes/ /etc/kubernetes_old_`date +%Y%m%d%H%M%S`
[root@k8s-master01 ~]# mv /var/lib/kubelet /var/lib/kubelet_old_`date +%Y%m%d%H%M%S`
[root@k8s-master01 ~]# rm -rf /var/lib/etcd/*
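Alternatively, kubeadm can perform roughly the same cleanup itself; a sketch of the reset command (it stops the running control-plane containers and cleans up /etc/kubernetes/manifests, /var/lib/kubelet, and /var/lib/etcd):
[root@k8s-master01 ~]# kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock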
3.4.1 Initialization complete
The init log ends with "Your Kubernetes control-plane has initialized successfully!"; the tail of the output:
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc \
--discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.shiyan.com NotReady control-plane 3m32s v1.29.3
3.5 Add the worker nodes
[root@k8s-node1 ~]# kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc --discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-node2 ~]# kubeadm join 172.172.8.61:6443 --token noq9po.4098vmtkwa8v53pc --discovery-token-ca-cert-hash sha256:683c779a951e125caedcc15a3ce59bd51f917a4e1c629788fd4f9f4ff6369c22 --cri-socket unix:///var/run/cri-dockerd.sock
[root@k8s-master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master01.shiyan.com NotReady control-plane 11m v1.29.3
k8s-node1.shiyan.com NotReady <none> 2m13s v1.29.3
k8s-node2.shiyan.com NotReady <none> 57s v1.29.3
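If the bootstrap token has expired by the time a node joins (tokens are valid for 24 hours by default), a new join command can be generated on the master; remember to append --cri-socket unix:///var/run/cri-dockerd.sock before running it on a node:
[root@k8s-master01 ~]# kubeadm token create --print-join-command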
3.6 Network plugin: Calico
Calico only needs to be installed from the master node.
https://github.com/projectcalico/calico/releases
The latest version at the time of writing is v3.27.2.
[root@k8s-master01 ~]# wget https://github.com/projectcalico/calico/releases/download/v3.27.2/release-v3.27.2.tgz
[root@k8s-master01 ~]# tar xf release-v3.27.2.tgz
[root@k8s-master01 ~]# mv release-v3.27.2 Calico-v3.27.2
[root@k8s-master01 ~]# ls Calico-v3.27.2/
bin images manifests
[root@k8s-master01 ~]# cd Calico-v3.27.2/images/
[root@k8s-master01 images]# ls
calico-cni.tar calico-dikastes.tar calico-flannel-migration-controller.tar calico-kube-controllers.tar calico-node.tar calico-pod2daemon.tar calico-typha.tar
[root@k8s-master01 images]# for file in *.tar ; do docker load -i "$file"; done
[root@k8s-master01 manifests]# pwd
/root/Calico-v3.27.2/manifests
[root@k8s-master01 manifests]# kubectl create -f tigera-operator.yaml # apply as-is, no changes needed
namespace/tigera-operator created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created
[root@k8s-master01 manifests]# cat custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 100.100.0.0/16 # Pod network CIDR; must match --pod-network-cidr used at kubeadm init
      encapsulation: VXLANCrossSubnet
      natOutgoing: Enabled
      nodeSelector: all()
    nodeAddressAutodetectionV4:
      interface: eth* # NIC name pattern of the cluster nodes
---
# This section configures the Calico API server.
# For more information, see: https://docs.tigera.io/calico/latest/reference/installation/api#operator.tigera.io/v1.APIServer
apiVersion: operator.tigera.io/v1
kind: APIServer
metadata:
  name: default
spec: {}
[root@k8s-master01 manifests]# kubectl create -f custom-resources.yaml
installation.operator.tigera.io/default created
apiserver.operator.tigera.io/default created
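The operator takes a few minutes to roll out all the Calico components; an optional way to watch progress, using the tigerastatus resource created above, until everything reports Available:
[root@k8s-master01 manifests]# kubectl get tigerastatus
[root@k8s-master01 manifests]# kubectl get pods -n calico-system -w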
[root@k8s-master01 ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-apiserver calico-apiserver-98db6cd5d-b8g29 1/1 Running 0 8m18s 100.100.121.66 k8s-node1.shiyan.com <none> <none>
calico-apiserver calico-apiserver-98db6cd5d-sslcw 1/1 Running 0 8m18s 100.100.81.194 k8s-node2.shiyan.com <none> <none>
calico-system calico-kube-controllers-5d9774f4d9-ggrxh 1/1 Running 0 58m 100.100.224.130 k8s-master01.shiyan.com <none> <none>
calico-system calico-node-5nl2k 1/1 Running 0 26m 172.172.8.66 k8s-node2.shiyan.com <none> <none>
calico-system calico-node-sgfq8 1/1 Running 0 58m 172.172.8.61 k8s-master01.shiyan.com <none> <none>
calico-system calico-node-vkdsw 1/1 Running 0 28m 172.172.8.65 k8s-node1.shiyan.com <none> <none>
calico-system calico-typha-77875d84b6-5ngt9 1/1 Running 0 58m 172.172.8.66 k8s-node2.shiyan.com <none> <none>
calico-system calico-typha-77875d84b6-w8vb4 1/1 Running 0 58m 172.172.8.65 k8s-node1.shiyan.com <none> <none>
calico-system csi-node-driver-89dck 2/2 Running 0 58m 100.100.121.65 k8s-node1.shiyan.com <none> <none>
calico-system csi-node-driver-mdms7 2/2 Running 0 58m 100.100.81.193 k8s-node2.shiyan.com <none> <none>
calico-system csi-node-driver-s6jzx 2/2 Running 0 58m 100.100.224.129 k8s-master01.shiyan.com <none> <none>
kube-system coredns-857d9ff4c9-pjsql 1/1 Running 0 46h 100.100.224.132 k8s-master01.shiyan.com <none> <none>
kube-system coredns-857d9ff4c9-stbqh 1/1 Running 0 46h 100.100.224.131 k8s-master01.shiyan.com <none> <none>
kube-system etcd-k8s-master01.shiyan.com 1/1 Running 1 (43h ago) 46h 172.172.8.61 k8s-master01.shiyan.com <none> <none>
kube-system kube-apiserver-k8s-master01.shiyan.com 1/1 Running 1 (43h ago) 46h 172.172.8.61 k8s-master01.shiyan.com <none> <none>
kube-system kube-controller-manager-k8s-master01.shiyan.com 1/1 Running 1 (43h ago) 46h 172.172.8.61 k8s-master01.shiyan.com <none> <none>
kube-system kube-proxy-74g69 1/1 Running 1 (43h ago) 46h 172.172.8.66 k8s-node2.shiyan.com <none> <none>
kube-system kube-proxy-bhqnc 1/1 Running 1 (43h ago) 46h 172.172.8.65 k8s-node1.shiyan.com <none> <none>
kube-system kube-proxy-fxc47 1/1 Running 1 (43h ago) 46h 172.172.8.61 k8s-master01.shiyan.com <none> <none>
kube-system kube-scheduler-k8s-master01.shiyan.com 1/1 Running 1 (43h ago) 46h 172.172.8.61 k8s-master01.shiyan.com <none> <none>
tigera-operator tigera-operator-748c69cf45-74qk2 1/1 Running 5 (60m ago) 62m 172.172.8.65 k8s-node1.shiyan.com <none> <none>
[root@k8s-master01 ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master01.shiyan.com Ready control-plane 46h v1.29.3 172.172.8.61 <none> CentOS Linux 7 (Core) 5.4.272-1.el7.elrepo.x86_64 docker://25.0.4
k8s-node1.shiyan.com Ready <none> 46h v1.29.3 172.172.8.65 <none> CentOS Linux 7 (Core) 5.4.272-1.el7.elrepo.x86_64 docker://25.0.4
k8s-node2.shiyan.com Ready <none> 46h v1.29.3 172.172.8.66 <none> CentOS Linux 7 (Core) 5.4.272-1.el7.elrepo.x86_64 docker://25.0.4
3.7 Set the kube-proxy mode to IPVS
kube-proxy uses iptables as its proxy backend by default; iptables performance is limited and not well suited to production, so it is switched to IPVS mode here.
[root@k8s-master01 ~]# kubectl get pods -n kube-system | grep proxy
kube-proxy-74g69 1/1 Running 1 (43h ago) 46h
kube-proxy-bhqnc 1/1 Running 1 (43h ago) 46h
kube-proxy-fxc47 1/1 Running 1 (43h ago) 46h
[root@k8s-master01 ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
mode: ""
[root@k8s-master01 ~]# kubectl edit configmap kube-proxy -n kube-system # find the mode line and change it to mode: "ipvs"
[root@k8s-master01 ~]# kubectl get configmap kube-proxy -n kube-system -o yaml | grep mode
mode: "ipvs"
# Restart the kube-proxy DaemonSet
[root@k8s-master01 ~]# kubectl rollout restart daemonset kube-proxy -n kube-system
# Check the logs of one of the new kube-proxy pods (the pod name will differ in your cluster)
[root@k8s-master01 ~]# kubectl logs -n kube-system kube-proxy-74kf7 | grep ipvs
I0321 07:35:14.711375 1 server_others.go:236] "Using ipvs Proxier"
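If ipvsadm is installed (yum -y install ipvsadm), the generated IPVS virtual servers can also be inspected directly on any node, for example:
[root@k8s-master01 ~]# ipvsadm -Ln | head
# the kubernetes service ClusterIP (10.96.0.1:443 by default) should be listed with the rr scheduler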