Deploying K8s + CRI-O + flannel (via yum) on CentOS 7.6
I. Preface
1. When I first set out to install CRI-O, yum was the obvious choice. But after configuring the officially recommended yum repo, the install simply would not succeed. Later, by chance, I noticed a colleague had installed it through those same yum repos; comparing our steps, I suddenly realized that when configuring the official repo I had skipped setting two variables, so the repo URLs never resolved. I am writing this article to record and share the fix (material on this topic is scarce, and full of pitfalls).
2. Official installation instructions:
https://cri-o.io/
II. Installing CRI-O
1. First, the usual host preparation for a K8s environment:
# disable the firewall and SELinux (standard for a lab K8s setup)
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
# turn swap off now and keep it off across reboots
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
hostnamectl set-hostname k8s-master
# kernel modules needed by CRI-O storage and bridged pod traffic
modprobe overlay
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
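The swap-commenting `sed` above can be sanity-checked on a sample string before touching the real /etc/fstab. The device names below are hypothetical, chosen only for illustration:

```shell
# Demo of the fstab swap-commenting sed on an inline sample (hypothetical device names)
fstab_sample='/dev/mapper/centos-root / xfs defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0'
commented=$(printf '%s\n' "$fstab_sample" | sed -r 's/.*swap.*/#&/')
printf '%s\n' "$commented"
```

The swap line comes back prefixed with `#` while the root line is untouched, which is exactly what happens to /etc/fstab.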
2. This step is critical; it is exactly where I got stuck at first. (My environment is CentOS 7; adjust for other distributions. Since I planned to install CRI-O 1.21, VERSION is set to 1.21.)
VERSION=1.21
OS=CentOS_7
3. Following the official steps, download the yum repo files:
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/$OS/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo
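To see why those two variables matter: without them, `$VERSION` and `$OS` in the curl URLs expand to empty strings and curl fetches a nonexistent path, which is why the repo never worked for me. A quick sketch of the expansion:

```shell
# With the variables set, the repo URL expands to a real path
VERSION=1.21
OS=CentOS_7
url="https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo"
echo "$url"
# Simulate the mistake: with the variables unset, the path is garbage
unset VERSION OS
bad="https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/$VERSION/$OS/devel:kubic:libcontainers:stable:cri-o:$VERSION.repo"
echo "$bad"
```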
4. After downloading, the official repo files look like this:
[root@k8s-master yum.repos.d]# ll devel\:kubic\:libcontainers\:stable*
-rw-r--r-- 1 root root 381 Nov 23 17:52 devel:kubic:libcontainers:stable:cri-o:1.21.repo
-rw-r--r-- 1 root root 359 Nov 23 17:52 devel:kubic:libcontainers:stable.repo
5. Their contents:
[root@k8s-master yum.repos.d]# cat devel\:kubic\:libcontainers\:stable*
[devel_kubic_libcontainers_stable_cri-o_1.21]
name=devel:kubic:libcontainers:stable:cri-o:1.21 (CentOS_7)
type=rpm-md
baseurl=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.21/CentOS_7/
gpgcheck=1
gpgkey=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/1.21/CentOS_7/repodata/repomd.xml.key
enabled=1
[devel_kubic_libcontainers_stable]
name=Stable Releases of Upstream github.com/containers packages (CentOS_7)
type=rpm-md
baseurl=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/
gpgcheck=1
gpgkey=https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/CentOS_7/repodata/repomd.xml.key
enabled=1
6. These two repos alone are not enough: during installation the runc dependency could not be resolved, so one more repo is needed. Mine looks like this:
[root@k8s-master yum.repos.d]# cat Centos-Mirro.repo
[Centos Mirro]
name=Centos Mirro
baseurl=http://mirror.centos.org/centos/7/extras/x86_64/
enabled=1
gpgcheck=0
proxy=_none_
7. Now install CRI-O via yum; this finally just works:
yum install -y cri-o
# the systemd unit is named crio, not cri-o
systemctl start crio
systemctl enable crio
# verify the unit exists and is enabled
systemctl list-unit-files | grep crio
III. Installing kubelet, kubectl, cri-tools and kubeadm
1. As usual, configure the Kubernetes yum repo first:
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. When installing, pin the versions, and keep them in line with the CRI-O version. (I do not know the exact reason, but this is received wisdom worth following.) The install order of the three packages also matters because of dependencies: at the time of writing, the newest version in the CentOS repo is v1.22, and if you install kubeadm first, it pulls in kubelet and kubectl along with it at v1.22 rather than the v1.21 you actually want:
yum install -y kubelet-1.21.2-0.x86_64 --nogpgcheck
yum install -y kubectl-1.21.2-0.x86_64 --nogpgcheck
yum install -y kubeadm-1.21.2-0.x86_64 --nogpgcheck
yum install -y cri-tools-1.21.0
systemctl enable kubelet
3. This step is critical: tell kubelet to use CRI-O for creating containers:
[root@k8s-master mwt]# cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
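With kubelet pointed at CRI-O's socket as above, it is convenient to point crictl at the same endpoint for manual inspection. crictl reads /etc/crictl.yaml; a sketch, with values matching the kubelet flag above:

```yaml
# /etc/crictl.yaml — point crictl at the same CRI-O socket kubelet uses
runtime-endpoint: unix:///var/run/crio/crio.sock
image-endpoint: unix:///var/run/crio/crio.sock
timeout: 10
```

With this in place, `crictl ps` and `crictl images` talk to CRI-O directly, which helps when debugging pod creation later.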
4. Install the CNI plugins. This just means placing the prebuilt binaries under /opt/cni/bin, so that kubelet can invoke the plugin named by the type field in the config files under /etc/cni/net.d/, which then creates the pod's network interface and assigns its IP address. (This is my rough understanding rather than a rigorous description; treat it as a reference.)
Download link (this is an archive I compressed and uploaded from my own environment):
https://wws.lanzoui.com/i1NPlwwcu8j
Password: 3n6s
After downloading, unpack the archive and copy the binaries into /opt/cni/bin, creating that directory first if it does not exist.
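The unpack-and-copy step can be sketched as follows. Everything here runs under a temp directory so it is safe to try; a fake one-file tarball stands in for the real CNI plugins archive:

```shell
# Simulate unpacking a CNI plugins tarball into /opt/cni/bin (temp dir stands in for /)
workdir=$(mktemp -d)
bindir="$workdir/opt/cni/bin"       # stand-in for the real /opt/cni/bin
mkdir -p "$bindir"                  # create the target dir if it does not exist
# build a fake tarball with one plugin binary named "bridge"
mkdir -p "$workdir/stage"
printf '#!/bin/sh\n' > "$workdir/stage/bridge"
chmod +x "$workdir/stage/bridge"
tar -czf "$workdir/cni-plugins.tgz" -C "$workdir/stage" bridge
# the actual step: extract the archive into the bin directory
tar -xzf "$workdir/cni-plugins.tgz" -C "$bindir"
ls "$bindir"
```

Against the real archive, the same two commands are `mkdir -p /opt/cni/bin` followed by `tar -xzf <archive> -C /opt/cni/bin`.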
IV. Deploying the K8s cluster
1. Configure the pause image source in crio.conf; otherwise pod creation fails, because the default registry abroad is unreachable. (Even though yum already generated this file when installing CRI-O, I still recommend regenerating it.) Steps:
crio config --default > /etc/crio/crio.conf
Then add or modify the following:
registries = ['4v2510z7.mirror.aliyuncs.com:443/library']
pause_image = "registry.aliyuncs.com/google_containers/pause:3.2"
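For context, to the best of my understanding of the CRI-O 1.21 config layout, both keys live in the [crio.image] section of the regenerated crio.conf; a sketch (the mirror address is the one used in this article):

```toml
[crio.image]
# mirror consulted for unqualified image pulls (address from this setup)
registries = ['4v2510z7.mirror.aliyuncs.com:443/library']
# pause image pulled from a registry reachable without a proxy
pause_image = "registry.aliyuncs.com/google_containers/pause:3.2"
```

After editing, restart CRI-O (`systemctl restart crio`) so the new values take effect.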
2. Generate the kubeadm init file. Deploying without a config file seems problematic: Kubernetes defaults its CRI to Docker, so you need to generate the file and then edit it. The podSubnet field must be set, otherwise flannel's subnet allocation breaks when it is deployed later, IPs cannot be assigned, and pods fail to start:
kubeadm config print init-defaults > kubeadm-config.yaml
The edited file:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
- system:bootstrappers:kubeadm:default-node-token
token: abcdef.0123456789abcdef
ttl: 24h0m0s
usages:
- signing
- authentication
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: 10.0.2.93
bindPort: 6443
nodeRegistration:
# criSocket: /var/run/dockershim.sock
criSocket: /var/run/crio/crio.sock
# name: node
taints:
- effect: PreferNoSchedule
key: node-role.kubernetes.io/master
---
apiServer:
timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
type: CoreDNS
etcd:
local:
dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: 1.21.0
networking:
dnsDomain: cluster.local
podSubnet: 10.85.0.0/16
serviceSubnet: 10.96.0.0/12
scheduler: {}
3. Pre-pull the K8s component images to speed up init, pulling from the Aliyun mirror:
kubeadm config images pull --config kubeadm-config.yaml --image-repository registry.aliyuncs.com/google_containers
4. Pre-pull the flannel image on every machine. (In hindsight this may be unnecessary, since the image seems downloadable without a proxy; treat this step as optional.)
curl -O https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
crictl pull quay.io/coreos/flannel:v0.14.0
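One thing worth checking before applying the manifest: the Network field in kube-flannel.yml's net-conf.json defaults to 10.244.0.0/16 and needs to match the podSubnet in kubeadm-config.yaml, or flannel hands out addresses from the wrong range. A self-contained sketch of that check (the sample net-conf below is flannel's default, inlined rather than read from the file):

```shell
# Compare kubeadm's podSubnet with the Network value in flannel's net-conf.json
podSubnet="10.85.0.0/16"   # value from kubeadm-config.yaml in this article
netconf='{ "Network": "10.244.0.0/16", "Backend": { "Type": "vxlan" } }'  # flannel default
flannel_net=$(printf '%s' "$netconf" | sed -n 's/.*"Network": *"\([^"]*\)".*/\1/p')
if [ "$flannel_net" != "$podSubnet" ]; then
  echo "mismatch: change Network in kube-flannel.yml to $podSubnet (or adjust podSubnet)"
fi
```

In a real run you would grep the downloaded kube-flannel.yml instead of the inline sample.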
5. Remove all config files under /etc/cni/net.d/ on every machine. By default, kubelet reads the config files in this directory, determines the CNI plugin type, and calls that plugin to create the pod network and assign IP addresses. Since we plan to use flannel as the network plugin, leaving the old files in place means that when coredns is created during init, CRI-O's bridge plugin gets used for networking and IP assignment, and the flannel deployed afterwards never takes effect.
My inferred explanation: after kubeadm init completes, the kubelet service starts automatically, reads /etc/cni/net.d/, and from then on creates every pod with CRI-O's bridge plugin. The pods on each host all land in the same subnet and cannot reach pods on other hosts, which defeats the purpose of deploying a cluster at all. With the directory cleared instead, coredns remains uncreated right after init, because kubelet does not yet know which CNI plugin should assign IP addresses; once the flannel components are created, the problem resolves itself. Pods scheduled onto different hosts receive addresses from different subnets, each host gets its own flannel interface, address range, and routes, and pod-to-pod communication across hosts works.
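For reference, the file being removed on a CRI-O host is typically its default bridge config; from memory it looks roughly like this (note the 10.85.0.0/16 range, which explains the pod IPs seen later in this article):

```json
{
  "cniVersion": "0.3.1",
  "name": "crio",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "routes": [{ "dst": "0.0.0.0/0" }],
    "ranges": [[{ "subnet": "10.85.0.0/16" }]]
  }
}
```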
6. Enough talk; let's get to it.
1) On the master, run:
kubeadm init --config kubeadm-config.yaml
2) Output after a successful init:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.0.2.120:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3db371d75d6029e5527233b9ec8400cdc6826a4cb88d626216432f0943232eba
3) On the master, run:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
4) On each node, run kubeadm join:
kubeadm join 10.0.2.93:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3db371d75d6029e5527233b9ec8400cdc6826a4cb88d626216432f0943232eba
5) Deploy the flannel plugin on the master:
kubectl apply -f kube-flannel.yml
7. Check the results (so many pitfalls along the way):
1) A small issue to fix first.
kubectl get cs reports unhealthy components. (This seems to be a long-standing K8s quirk; edit the manifests and restart kubelet to fix it.)
[root@cri-2 crio-v1.19.0]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
The cause is that kube-scheduler.yaml and kube-controller-manager.yaml set their port to 0 by default; just comment that line out in both files.
On every master node:
vim /etc/kubernetes/manifests/kube-scheduler.yaml
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
# then comment out this line in each file:
# - --port=0
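Instead of editing by hand, the same change can be sketched with sed. The demo below runs on an inline sample rather than the real manifests (back up the real files before trying anything similar):

```shell
# Demo: comment out "- --port=0" in a manifest-like sample (inline, not the real file)
manifest='    - --bind-address=127.0.0.1
    - --port=0
    - --leader-elect=true'
patched=$(printf '%s\n' "$manifest" | sed 's/^\( *\)- --port=0$/\1# - --port=0/')
printf '%s\n' "$patched"
```

Only the `- --port=0` line is commented; indentation and the other flags are preserved, which matters since the manifests are YAML.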
The result after restarting kubelet:
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
2) If, like me, you are building a test environment with only two machines and want pods to run on both the master and the node to make full use of the hardware, you can remove the taint so that pods can also be scheduled onto the master. This is not recommended in production: the K8s control-plane components run on the master, and business pods deployed there can compete with them for resources and interfere.
kubectl taint nodes --all node-role.kubernetes.io/master-
3) All right, let's look at the final deployment:
[root@k8s-master ~]# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 34h v1.21.2 10.0.2.93 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 cri-o://1.21.4
k8s-node1 Ready <none> 34h v1.21.2 10.0.2.94 <none> CentOS Linux 7 (Core) 3.10.0-957.el7.x86_64 cri-o://1.21.4
[root@k8s-master ~]# kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-59d64cd4d4-5llk7 1/1 Running 0 34h 10.85.0.2 k8s-master <none> <none>
coredns-59d64cd4d4-5vmk6 1/1 Running 0 34h 10.85.0.3 k8s-master <none> <none>
etcd-k8s-master 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
kube-apiserver-k8s-master 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
kube-controller-manager-k8s-master 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
kube-flannel-ds-f7k6k 1/1 Running 0 34h 10.0.2.94 k8s-node1 <none> <none>
kube-flannel-ds-pqwsg 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
kube-proxy-6jz7q 1/1 Running 0 34h 10.0.2.94 k8s-node1 <none> <none>
kube-proxy-j6hhl 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
kube-scheduler-k8s-master 1/1 Running 0 34h 10.0.2.93 k8s-master <none> <none>
[root@k8s-master ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
mysql-4pgv4 1/1 Running 0 4h51m 10.85.0.8 k8s-master <none> <none>
mysql-jptwn 1/1 Running 0 4h51m 10.85.0.9 k8s-master <none> <none>
myweb1-ddttd 1/1 Running 0 4h50m 10.85.1.4 k8s-node1 <none> <none>
myweb1-ngk9r 1/1 Running 0 4h50m 10.85.1.5 k8s-node1 <none> <none>
[root@k8s-master ~]# kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 34h <none>
mysql NodePort 10.107.163.221 <none> 3306:30060/TCP 4h45m app=mysql-crio
myweb NodePort 10.100.252.36 <none> 8080:30001/TCP 4h45m app=myweb-crio
[root@k8s-master ~]# kubectl get svc -o wide -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 34h k8s-app=kube-dns
Finally, we're done! Special thanks to the author of the article below, who hit most of these pitfalls first; about 80% of the material here comes from that post, so go give it a like and a bookmark. Questions and discussion are welcome — my own knowledge is still limited:
https://blog.csdn.net/weixin_42072280/article/details/120088219