Deploying a Kubernetes Cluster with Docker
Kubernetes (k8s) is an open-source container orchestration platform for automatically deploying, scaling, and managing containerized applications. When Docker is used as the container runtime, k8s relies on Docker to create, run, stop, and delete containers: Docker provides the runtime environment, while k8s manages the container lifecycle, including scheduling, service discovery, load balancing, and self-healing. This combination lets developers deploy and manage containerized applications across a cluster efficiently.
1.1 Kubernetes Base Environment Setup
Kubernetes can be deployed in several ways; the mainstream options are kubeadm, minikube, and binary packages:

- minikube: a tool for quickly standing up a single-node Kubernetes instance
- kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
- Binary packages: download each component's binary package from the official site and install them one by one; this approach is the most instructive for understanding the individual Kubernetes components

K8s-all: a prompt of K8s-all below means the step is performed on all three machines.
| Hostname | IP address | OS | Specs |
|---|---|---|---|
| K8s-master | 192.168.110.21/24 | CentOS 7.9 | 4 CPUs, 8 GB RAM, 100 GB disk |
| K8s-node-01 | 192.168.110.22/24 | CentOS 7.9 | 4 CPUs, 8 GB RAM, 100 GB disk |
| K8s-node-02 | 192.168.110.23/24 | CentOS 7.9 | 4 CPUs, 8 GB RAM, 100 GB disk |
Note: disable the firewall and SELinux on all hosts.
1.1.1 Configure hosts resolution
```shell
[root@K8s-master ~]# cat >> /etc/hosts << EOF
> 192.168.110.21 K8s-master
> 192.168.110.22 K8s-node-01
> 192.168.110.23 K8s-node-02
> EOF
[root@K8s-master ~]# scp /etc/hosts K8s-node-01:/etc/
[root@K8s-master ~]# scp /etc/hosts K8s-node-02:/etc/
```
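The three entries can also be generated from a single list instead of being typed out. A minimal sketch (the ip:name pairs mirror the host table above):

```shell
# Generate the /etc/hosts entries for the three nodes from one list.
nodes="21:K8s-master 22:K8s-node-01 23:K8s-node-02"
for n in $nodes; do
    # ${n%%:*} is the last octet, ${n#*:} is the hostname
    echo "192.168.110.${n%%:*} ${n#*:}"
done
```

Redirect the loop's output into `/etc/hosts` (with `>>`) to get the same result as the heredoc above.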
1.1.2 Configure the NTP time service
```shell
[root@K8s-master ~]# sed -i '3,6 s/^/# /' /etc/chrony.conf
[root@K8s-master ~]# sed -i '6 a server ntp.aliyun.com iburst' /etc/chrony.conf
[root@K8s-master ~]# systemctl restart chronyd.service
[root@K8s-master ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2   6    17    18   +266us[+1386us] +/-   24ms
[root@K8s-node-01 ~]# sed -i '3,6 s/^/# /' /etc/chrony.conf
[root@K8s-node-01 ~]# sed -i '6 a server ntp.aliyun.com iburst' /etc/chrony.conf
[root@K8s-node-01 ~]# systemctl restart chronyd.service
[root@K8s-node-01 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^? 203.107.6.88                  0   6     0     -     +0ns[   +0ns] +/-    0ns
[root@K8s-node-02 ~]# sed -i '3,6 s/^/# /' /etc/chrony.conf
[root@K8s-node-02 ~]# sed -i '6 a server ntp.aliyun.com iburst' /etc/chrony.conf
[root@K8s-node-02 ~]# systemctl restart chronyd.service
[root@K8s-node-02 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                  2   6     7     1   -291us[-4455us] +/-   30ms
```
1.1.3 Disable the swap partition
Containers are designed to use resources as efficiently as possible, so Kubernetes normally requires swap to be disabled on every node, for reasons including:

- Performance: swap degrades system performance, which can affect the performance and stability of containers.
- Resource isolation: disabling swap keeps resource isolation between containers clean, preventing one container from consuming excessive swap space at the expense of others.
- Debugging and monitoring: disabling swap simplifies monitoring and debugging, because disk space no longer has to be accounted for as backing memory.
```shell
[root@K8s-master ~]# sed -i 's/.*swap.*/# &/' /etc/fstab
[root@K8s-node-01 ~]# sed -i 's/.*swap.*/# &/' /etc/fstab
[root@K8s-node-02 ~]# sed -i 's/.*swap.*/# &/' /etc/fstab
```
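The fstab edit above only takes effect after a reboot; to turn swap off immediately run `swapoff -a` (as root) as well. The sed expression comments out any fstab line mentioning swap, which can be demonstrated on a scratch copy:

```shell
# Demo of the sed expression on a stub fstab (real target: /etc/fstab).
printf '%s\n' \
    '/dev/mapper/centos-root / xfs defaults 0 0' \
    '/dev/mapper/centos-swap swap swap defaults 0 0' > /tmp/fstab.demo
# '&' in the replacement re-inserts the matched line after the '# '.
sed -i 's/.*swap.*/# &/' /tmp/fstab.demo
cat /tmp/fstab.demo
```

Only the swap line gains a `# ` prefix; the root filesystem line is untouched.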
1.1.4 Upgrade the OS kernel
Note: perform this on all three machines.
1.1.4.1 Import the elrepo GPG key
```shell
[root@K8s-all ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
```
1.1.4.2 Install the elrepo YUM repository
```shell
[root@K8s-all ~]# yum install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm -y
```
1.1.4.3 Install the kernel-ml version

- kernel-ml is the mainline (latest) kernel series; kernel-lt is the long-term support series

```shell
[root@K8s-all ~]# yum --enablerepo="elrepo-kernel" install kernel-ml.x86_64 -y
[root@K8s-all ~]# uname -r    # still the old kernel until the reboot below
3.10.0-1160.71.1.el7.x86_64
```
1.1.4.4 Set the grub2 default boot entry to 0

```shell
[root@K8s-all ~]# grub2-set-default 0
```
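`grub2-set-default 0` selects the first `menuentry` in grub.cfg, which after the kernel install is the newest kernel. The entry list can be inspected with awk; a demo on a stub grub.cfg (entry titles below are illustrative):

```shell
# Stub grub.cfg with two menu entries, newest kernel first.
cat > /tmp/grub.demo.cfg << 'EOF'
menuentry 'CentOS Linux (6.8.7-1.el7.elrepo.x86_64) 7 (Core)' { }
menuentry 'CentOS Linux (3.10.0-1160.el7.x86_64) 7 (Core)' { }
EOF
# Split on single quotes; field 2 is the entry title. On a real node
# the file to inspect is /boot/grub2/grub.cfg.
awk -F\' '/^menuentry/ {print $2}' /tmp/grub.demo.cfg
```

Index 0 corresponds to the first printed line, so the upgraded kernel boots by default.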
1.1.4.5 Regenerate the grub2 boot config

```shell
[root@K8s-all ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
[root@K8s-all ~]# reboot       # reboot so the upgraded kernel takes effect
[root@K8s-all ~]# uname -r     # after the reboot, verify the running kernel matches the upgrade
6.8.7-1.el7.elrepo.x86_64
```
1.1.5 Enable kernel IP forwarding
```shell
[root@K8s-all ~]# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
[root@K8s-master ~]# modprobe br_netfilter
```
1.1.6 Add bridge filtering and kernel forwarding config

```shell
[root@K8s-all ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
EOF
[root@K8s-all ~]# modprobe br_netfilter
[root@K8s-all ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0
```
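The `modprobe br_netfilter` above does not survive a reboot, and the `net.bridge.*` sysctl keys do not exist while the module is unloaded. One way to persist the module load is systemd's modules-load.d mechanism (standard on CentOS 7). The demo below writes to a scratch directory; on the real nodes the target is `/etc/modules-load.d/br_netfilter.conf`:

```shell
# Persist the br_netfilter module load across reboots via modules-load.d.
# CONF_DIR defaults to a scratch path here purely for demonstration;
# set conf_dir=/etc/modules-load.d on the actual nodes.
conf_dir="${CONF_DIR:-/tmp/modules-load.demo}"
mkdir -p "$conf_dir"
echo br_netfilter > "$conf_dir/br_netfilter.conf"
cat "$conf_dir/br_netfilter.conf"
```

systemd-modules-load reads every `*.conf` in that directory at boot and loads the listed modules before sysctl settings are applied.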
1.1.7 Enable IPVS

```shell
[root@K8s-all ~]# yum install ipset ipvsadm -y
[root@K8s-all ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in $ipvs_modules; do
    /sbin/modinfo -F filename $kernel_module >/dev/null 2>&1
    if [ $? -eq 0 ]; then
        /sbin/modprobe $kernel_module
    fi
done
[root@K8s-all ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules
[root@K8s-all ~]# bash /etc/sysconfig/modules/ipvs.modules
```
1.1.8 Configure a domestic (China) mirror repository
```shell
[root@K8s-all ~]# cat >> /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
1.1.9 Install the packages

```shell
[root@K8s-all ~]# yum install kubeadm-1.24.2 kubelet-1.24.2 kubectl-1.24.2 -y    # the repo's latest is 1.28.2
[root@K8s-all ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.2", GitCommit:"f66044f4361b9f1f96f0053dd46cb7dce5e990a8", GitTreeState:"clean", BuildDate:"2022-06-15T14:20:54Z", GoVersion:"go1.18.3", Compiler:"gc", Platform:"linux/amd64"}
# Make kubelet use the same cgroup driver (systemd) as Docker:
[root@K8s-all ~]# cat <<EOF > /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
KUBE_PROXY_MODE="ipvs"
EOF
[root@K8s-all ~]# systemctl enable kubelet.service
```
1.2.1 Docker Installation
1.2.1.1 Install the repository
```shell
[root@K8s-all ~]# wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.huaweicloud.com/docker-ce/linux/centos/docker-ce.repo
[root@K8s-all ~]# sed -i 's+download.docker.com+mirrors.huaweicloud.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
[root@K8s-all ~]# sed -i 's/$releasever/7Server/g' /etc/yum.repos.d/docker-ce.repo
```
1.2.1.2 Install Docker CE

```shell
[root@K8s-all ~]# yum install docker-ce -y
```
1.2.1.3 Configure registry mirrors

```shell
[root@K8s-all ~]# mkdir -p /etc/docker
[root@K8s-all ~]# tee /etc/docker/daemon.json <<-'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": [
        "https://dockerproxy.com",
        "https://hub-mirror.c.163.com",
        "https://mirror.baidubce.com",
        "https://ccr.ccs.tencentyun.com"
    ]
}
EOF
[root@K8s-all ~]# systemctl daemon-reload
[root@K8s-all ~]# systemctl enable --now docker.service
[root@K8s-all ~]# docker --version
Docker version 20.10.21, build baeda1f
```
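A malformed daemon.json prevents dockerd from starting at all, so it is worth validating the JSON before restarting the service. A sketch using Python's json.tool on a scratch copy (assumption: `python3` is available; on stock CentOS 7 the interpreter may be `python` instead):

```shell
# Write a stub daemon.json and validate it before it would be deployed.
cat > /tmp/daemon.demo.json << 'EOF'
{
    "exec-opts": ["native.cgroupdriver=systemd"],
    "registry-mirrors": ["https://mirror.baidubce.com"]
}
EOF
# json.tool exits non-zero on a parse error, so this gates the restart.
python3 -m json.tool /tmp/daemon.demo.json > /dev/null && echo "daemon.json OK"
```

Run the same check against `/etc/docker/daemon.json` before `systemctl restart docker` on the real nodes.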
1.2.2 Install the cri-dockerd plugin
Note: since v1.24 Kubernetes no longer ships the dockershim, so cri-dockerd is needed to keep using Docker as the container runtime.
Download page: https://github.com/Mirantis/cri-dockerd/releases/download/

```shell
[root@K8s-all ~]# wget -c https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.13/cri-dockerd-0.3.13-3.el7.x86_64.rpm
[root@K8s-all ~]# yum install cri-dockerd-0.3.13-3.el7.x86_64.rpm -y
[root@K8s-all ~]# sed -i 's#^ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#' /usr/lib/systemd/system/cri-docker.service
[root@K8s-all ~]# systemctl daemon-reload
[root@K8s-all ~]# systemctl restart docker
[root@K8s-all ~]# systemctl enable --now cri-docker.service
```
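The sed above rewrites cri-docker.service's `ExecStart` so that the pause (sandbox) image is pulled from the Aliyun mirror instead of k8s.gcr.io, which is unreachable from many Chinese networks. What it does can be demonstrated on a stub unit file (the stub's original `ExecStart` line is illustrative):

```shell
# Stub unit file standing in for /usr/lib/systemd/system/cri-docker.service.
cat > /tmp/cri-docker.demo.service << 'EOF'
[Service]
ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd://
EOF
# '#' as the sed delimiter avoids escaping the many '/' in the replacement.
sed -i 's#^ExecStart=.*#ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9#' /tmp/cri-docker.demo.service
grep '^ExecStart=' /tmp/cri-docker.demo.service
```

After the edit, `systemctl daemon-reload` is required so systemd re-reads the changed unit.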
1.2.3 Initialize the master node

```shell
[root@K8s-master ~]# kubeadm init --kubernetes-version=v1.24.2 --pod-network-cidr=10.224.0.0/16 --apiserver-advertise-address=192.168.110.21 --apiserver-bind-port=6443 --cri-socket unix:///var/run/cri-dockerd.sock --image-repository registry.aliyuncs.com/google_containers
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.110.21:6443 --token p6zmgi.oepmiwbmg61704br \
	--discovery-token-ca-cert-hash sha256:4688b4812501fe5b1e7d545ba2d7f4f077cf22ef9a139bf9e7229f2109354898
[root@K8s-master ~]# mkdir -p $HOME/.kube
[root@K8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@K8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@K8s-master ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
```
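The join token printed by `kubeadm init` expires after 24 hours. If it has expired by the time a worker joins, a fresh join command can be printed on the master with `kubeadm token create --print-join-command`; the `--cri-socket` flag still has to be appended to whatever it prints. A sketch (the `JOIN_CMD` value below is a placeholder standing in for that command's output):

```shell
# On the master: JOIN_CMD="$(kubeadm token create --print-join-command)"
# Placeholder output used here so the append step can be shown:
JOIN_CMD='kubeadm join 192.168.110.21:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>'
# cri-dockerd is not the default runtime, so the socket must be named explicitly:
echo "$JOIN_CMD --cri-socket unix:///var/run/cri-dockerd.sock"
```

Run the resulting line as root on each worker node.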
1.2.4 Join the worker nodes
Note: when joining the cluster, `--cri-socket unix:///var/run/cri-dockerd.sock` must be appended.

```shell
[root@K8s-node-01 ~]# kubeadm join 192.168.110.21:6443 --token p6zmgi.oepmiwbmg61704br \
	--discovery-token-ca-cert-hash sha256:4688b4812501fe5b1e7d545ba2d7f4f077cf22ef9a139bf9e7229f2109354898 \
	--cri-socket unix:///var/run/cri-dockerd.sock
[root@K8s-node-02 ~]# kubeadm join 192.168.110.21:6443 --token p6zmgi.oepmiwbmg61704br \
	--discovery-token-ca-cert-hash sha256:4688b4812501fe5b1e7d545ba2d7f4f077cf22ef9a139bf9e7229f2109354898 \
	--cri-socket unix:///var/run/cri-dockerd.sock
```
1.2.5 Check the cluster
```shell
[root@K8s-master ~]# kubectl get nodes    # NotReady is expected until a network plugin is deployed
NAME          STATUS     ROLES           AGE    VERSION
k8s-master    NotReady   control-plane   6m1s   v1.24.2
k8s-node-01   NotReady   <none>          91s    v1.24.2
k8s-node-02   NotReady   <none>          104s   v1.24.2
```
1.2.6 Install the network plugin

```shell
[root@K8s-master ~]# wget -c http://down.i4t.com/k8s1.24/kube-flannel.yml
[root@K8s-master ~]# sed -i 's/eth0/ens33/' /root/kube-flannel.yml    # replace with your own NIC; on multi-homed hosts use the internal NIC
[root@K8s-master ~]# kubectl apply -f /root/kube-flannel.yml
```
1.2.7 Verify

```shell
[root@K8s-master ~]# kubectl get nodes    # all nodes should now be Ready
NAME          STATUS   ROLES           AGE   VERSION
k8s-master    Ready    control-plane   12m   v1.24.2
k8s-node-01   Ready    <none>          11m   v1.24.2
k8s-node-02   Ready    <none>          11m   v1.24.2
[root@K8s-master ~]# kubectl get pods -n kube-system    # if not everything is Running yet, wait a moment
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-74586cf9b6-2jd4m             1/1     Running   0          3m15s
coredns-74586cf9b6-nh8x9             1/1     Running   0          3m15s
etcd-k8s-master                      1/1     Running   0          3m28s
kube-apiserver-k8s-master            1/1     Running   0          3m28s
kube-controller-manager-k8s-master   1/1     Running   0          3m28s
kube-flannel-ds-cvplt                1/1     Running   0          96s
kube-flannel-ds-m5w7m                1/1     Running   0          96s
kube-flannel-ds-t68sn                1/1     Running   0          96s
kube-proxy-4c24h                     1/1     Running   0          3m16s
kube-proxy-5mvcb                     1/1     Running   0          2m46s
kube-proxy-vkjqq                     1/1     Running   0          2m33s
kube-scheduler-k8s-master            1/1     Running   0          3m28s
```