Installing a Kubernetes 1.26 Cluster with kubeadm
Environment planning --> environment initialization --> cluster setup --> network plugin for pod communication
k8s environment plan:
podSubnet (pod network): 10.244.0.0/16
serviceSubnet (service network): 10.96.0.0/12
VM network: 172.18.0.0/20
Lab environment:
OS: CentOS 7.9
Specs: 4 CPUs / 4 GB RAM
Initialize the environment for installing the k8s cluster
The following steps must be performed on every machine:
Give the NIC a static IP; the config file path is:
/etc/sysconfig/network-scripts/ifcfg-ens33
Disable SELinux
[root@localhost ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
[root@localhost ~]# setenforce 0
Set the hostname (run on the corresponding host):
[root@localhost ~]# hostnamectl set-hostname zwlmaster1 && bash
[root@zwlmaster1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.18.0.4 zwlmaster1
172.18.0.11 zwlnode1
[root@zwlmaster1 ~]# scp -r /etc/hosts zwlnode1:/etc/hosts
Set up passwordless SSH between the nodes, for example:
[root@zwlmaster1 ~]# ssh-keygen
[root@zwlmaster1 ~]# ssh-copy-id zwlnode1
Disable the swap partition
[root@zwlmaster1 ~]# swapoff -a
To disable swap permanently, edit /etc/fstab and comment out the swap entry.
Why disable swap?
Swap is the swap partition: when the machine runs low on memory it spills to swap, but swap is much slower than RAM, so for performance Kubernetes disallows it by default. kubeadm checks during init whether swap is off and fails if it is not. If you really want to keep swap enabled, pass --ignore-preflight-errors=Swap when installing k8s.
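The permanent change can be scripted instead of hand-editing /etc/fstab; a minimal sketch (run as root; the sed pattern assumes whitespace-delimited fstab fields):

```shell
# Turn swap off now and comment out every swap entry in /etc/fstab
# so the change survives reboots. Back up fstab first.
swapoff -a
cp /etc/fstab /etc/fstab.bak
sed -i '/\sswap\s/s/^#*/#/' /etc/fstab
```

The `s/^#*/#/` form is idempotent: running it a second time does not stack extra `#` characters on already-commented lines.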
Tune the kernel parameters
[root@zwlmaster1 ~]# modprobe br_netfilter
[root@zwlmaster1 ~]# echo "modprobe br_netfilter" >> /etc/profile
[root@zwlmaster1 ~]# cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
[root@zwlmaster1 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
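Appending `modprobe br_netfilter` to /etc/profile works, but it only runs at interactive login; systemd's modules-load.d is the more conventional place to make the module load at every boot. A sketch (the overlay module is included here too, since containerd uses it):

```shell
# Load br_netfilter (and overlay, used by containerd) at every boot.
cat > /etc/modules-load.d/k8s.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
```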
Disable the firewall
[root@zwlmaster1 ~]# systemctl stop firewalld && systemctl disable firewalld
Configure the Aliyun repo mirrors
# Back up the stock repo files
[root@zwlmaster1 ~]# mkdir /etc/yum.repos.d/bak
[root@zwlmaster1 ~]# mv /etc/yum.repos.d/* /etc/yum.repos.d/bak/
Add the Aliyun base repo:
[root@zwlmaster1 bak]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
Configure a domestic (Aliyun) Docker yum repo
[root@zwlmaster1 bak]# yum install yum-utils -y
[root@zwlmaster1 bak]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Configure the Aliyun repo needed to install the k8s components
[root@zwlmaster1 bak]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
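The same repo file can be written non-interactively with a heredoc instead of vim, which is handy when preparing several nodes:

```shell
# Aliyun mirror of the el7 Kubernetes packages; gpgcheck is disabled
# to match the repo definition shown above.
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF
```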
Configure time synchronization
[root@zwlmaster1 bak]# yum install ntpdate -y
[root@zwlmaster1 bak]# ntpdate cn.pool.ntp.org
Cron job (resync once an hour):
[root@zwlmaster1 bak]# crontab -e
0 */1 * * * /usr/sbin/ntpdate cn.pool.ntp.org
[root@zwlmaster1 bak]# systemctl restart crond
Install the base packages
[root@zwlmaster1 ~]# yum install -y device-mapper-persistent-data lvm2 wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack telnet
Install the containerd runtime
[root@zwlmaster1 ~]# yum install containerd.io-1.6.6 -y
Next, generate containerd's configuration file:
[root@zwlmaster1 ~]# mkdir -p /etc/containerd
[root@zwlmaster1 ~]# containerd config default > /etc/containerd/config.toml
[root@zwlmaster1 ~]# vim /etc/containerd/config.toml
Change SystemdCgroup = false to SystemdCgroup = true
Change sandbox_image = "k8s.gcr.io/pause:3.6" to sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"
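These two edits can also be applied with sed instead of vim; a sketch assuming the stock containerd 1.6.x config.toml (verify the exact strings in your file first):

```shell
# Switch the runtime to the systemd cgroup driver and point the
# sandbox (pause) image at the Aliyun mirror.
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sed -i 's#sandbox_image = "k8s.gcr.io/pause:3.6"#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7"#' /etc/containerd/config.toml
```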
Enable containerd at boot and start it now:
[root@zwlmaster1 ~]# systemctl enable containerd --now
Create the /etc/crictl.yaml file:
cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
[root@zwlmaster1 ~]# systemctl restart containerd
Configure a containerd registry mirror; apply this on every k8s node:
Edit the /etc/containerd/config.toml file
Find config_path = "" and change it to:
config_path = "/etc/containerd/certs.d"
# save and exit
[root@zwlmaster1 ~]# mkdir /etc/containerd/certs.d/docker.io/ -p
[root@zwlmaster1 ~]# vim /etc/containerd/certs.d/docker.io/hosts.toml
# Write the following (one [host] table per mirror):

[host."https://rsbud4vc.mirror.aliyuncs.com"]
  capabilities = ["pull"]

[host."https://registry.docker-cn.com"]
  capabilities = ["pull"]
Restart containerd:
[root@zwlmaster1 ~]# systemctl restart containerd
Note: Docker is installed as well; it does not conflict with containerd. We install Docker so we can still build images from a Dockerfile.
Install Docker
[root@zwlmaster1 ~]# yum install docker-ce-20.10.6 -y
Start the daemon
[root@zwlmaster1 ~]# systemctl start docker && systemctl enable docker && systemctl status docker
Add registry mirror addresses
[root@zwlnode1 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://rsbud4vc.mirror.aliyuncs.com","https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn","https://dockerhub.azk8s.cn","http://hub-mirror.c.163.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Apply the changes:
# Set Docker's cgroup driver to systemd (the default is cgroupfs); kubelet uses systemd by default and the two must match.
[root@zwlmaster1 ~]# systemctl daemon-reload&& systemctl restart docker
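Before restarting Docker it is worth checking that daemon.json is valid JSON, since a syntax error there stops the daemon from starting at all. A sketch, assuming python3 is available (yum install python3 on CentOS 7):

```shell
# Fail loudly if /etc/docker/daemon.json is malformed JSON.
python3 -c 'import json,sys; json.load(open(sys.argv[1]))' /etc/docker/daemon.json \
  && echo "daemon.json OK"
```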
Install the packages needed to initialize k8s
[root@zwlmaster1 ~]# yum install -y kubelet-1.26.0 kubeadm-1.26.0 kubectl-1.26.0
[root@zwlmaster1 ~]# systemctl enable kubelet
The base environment is now in place on every machine. If all of the steps above completed without errors, continue with the next stage.
Initialize the cluster with kubeadm
# Point crictl at the containerd runtime endpoint
[root@zwlmaster1 ~]# crictl config runtime-endpoint unix:///run/containerd/containerd.sock
# Use kubeadm to initialize the k8s cluster
Generate a default config and tailor it to our needs: change imageRepository to a reachable mirror, set the kube-proxy mode to ipvs, and, because we use containerd as the runtime, set cgroupDriver to systemd.
Create the kubeadm.yaml file on zwlmaster1:
[root@zwlmaster1 ~]# kubeadm config print init-defaults > kubeadm.yaml
[root@zwlmaster1 ~]# cat kubeadm.yaml
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.18.0.4   # control-plane node IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/containerd/containerd.sock   # container runtime socket
  imagePullPolicy: IfNotPresent
  name: zwlmaster1   # control-plane node name
  taints: null
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers   # image mirror; the default registry is hosted abroad and pulls would time out
kind: ClusterConfiguration
kubernetesVersion: 1.26.0   # version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # pod network segment; add this line, it is not in the defaults
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# Append the following at the end of the file
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
# Initialize k8s from the kubeadm.yaml file
[root@zwlmaster1 ~]# kubeadm init --config=kubeadm.yaml --ignore-preflight-errors=SystemVerification
Since images must be pulled, this can take a while.
Once initialization finishes, run the commands from its output:
[root@zwlmaster1 ~]# mkdir -p $HOME/.kube
[root@zwlmaster1 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@zwlmaster1 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@zwlmaster1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
zwlmaster1 NotReady control-plane 60s v1.26.0
The cluster is still NotReady at this point because no network plugin has been installed.
The control-plane node is now set up; you can add more nodes as needed.
Print the join command on zwlmaster1:
[root@zwlmaster1 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.152.131:6443 --token xqp5t6.ln9dysqy49ic1ew5 --discovery-token-ca-cert-hash sha256:138929bee1e7a0d90912049e53efeb4f442e28d4f351c81d2f6354c60ed7326d
Run the generated command on a machine whose environment has been prepared as above. Useful flags:
--ignore-preflight-errors=SystemVerification   ignore preflight failures such as an incompatible container runtime version
--control-plane   join the node as a control-plane node; without this flag it joins as a worker node
Note: to add another control-plane node, first copy the certificate files from the existing control plane:
scp /etc/kubernetes/pki/ca.crt zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/ca.key zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.key zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.pub zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.crt zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.key zwlmaster2:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.crt zwlmaster2:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/pki/etcd/ca.key zwlmaster2:/etc/kubernetes/pki/etcd/
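The eight scp commands above can be collapsed into a loop; zwlmaster2 here is the hypothetical new control-plane host, and /etc/kubernetes/pki (plus its etcd subdirectory) must already exist there:

```shell
# Copy the cluster CA, service-account keys, and front-proxy CA,
# then the etcd CA, to the new control-plane node.
for f in ca.crt ca.key sa.key sa.pub front-proxy-ca.crt front-proxy-ca.key; do
  scp /etc/kubernetes/pki/$f zwlmaster2:/etc/kubernetes/pki/
done
for f in ca.crt ca.key; do
  scp /etc/kubernetes/pki/etcd/$f zwlmaster2:/etc/kubernetes/pki/etcd/
done
```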
Next I add a worker node as an example:
[root@zwlnode1 ~]# kubeadm join 172.18.0.4:6443 --token isugv0.lpoa9woz29ys3vel --discovery-token-ca-cert-hash sha256:85cd1f24d43fb273dd4987c89d361767c0f5e21775d9fbc47f87c3937e1af0b8 --ignore-preflight-errors=SystemVerification
After it succeeds, check on the control-plane node that the node was added:
[root@zwlmaster1 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
zwlmaster1 NotReady control-plane 21m v1.26.0
zwlnode1 NotReady <none> 66s v1.26.0
The node is now added; it becomes usable once the network plugin is installed.
Install the Kubernetes network component: Calico
Download the calico.yaml manifest from the official site:
https://docs.tigera.io/archive/v3.18/manifests/calico.yaml
Upload it to the control-plane node and apply it to install Calico:
[root@zwlmaster1 ~]# kubectl apply -f calico.yaml
This manifest pulls several images, so right after applying it, kubectl get pods will show some Calico pods in a non-Running state; give them a little time to come up.
In a healthy state it looks like this:
[root@zwlmaster1 ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-5dd4b7dfd9-qgqjs 1/1 Running 0 3m3s
calico-node-2vws8 1/1 Running 0 3m3s
calico-node-mkj7k 1/1 Running 0 3m3s
coredns-567c556887-kwwjs 1/1 Running 0 24m
coredns-567c556887-p4gsw 1/1 Running 0 24m
etcd-zwlmaster1 1/1 Running 0 24m
kube-apiserver-zwlmaster1 1/1 Running 0 24m
kube-controller-manager-zwlmaster1 1/1 Running 0 24m
kube-proxy-4b9jg 1/1 Running 0 24m
kube-proxy-wt5f9 1/1 Running 0 4m40s
kube-scheduler-zwlmaster1 1/1 Running 0 24m
Checking the cluster again, the nodes are now Ready:
[root@zwlmaster1 mnt]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
zwlmaster1 Ready control-plane 48m v1.26.0
zwlnode1 Ready worker 28m v1.26.0