Deploying a Kubernetes 1.23 Cluster with kubeadm

Deployment methods
  • minikube: a tool for quickly setting up a single-node Kubernetes environment
  • kubeadm: a tool for quickly bootstrapping a Kubernetes cluster
  • binary packages: download each component's binaries from the official site and install them one by one
We'll use kubeadm for this deployment.
Node information

master  192.168.170.10  2 CPU cores, 4 GB RAM, 40 GB disk
node1   192.168.170.11  2 CPU cores, 4 GB RAM, 40 GB disk
node2   192.168.170.12  2 CPU cores, 4 GB RAM, 40 GB disk
node3   192.168.170.13  2 CPU cores, 4 GB RAM, 40 GB disk

Set hostnames and hosts entries
Run the matching command on each node:
hostnamectl set-hostname master   # on master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2
hostnamectl set-hostname node3    # on node3

[root@master ~]# vi /etc/hosts
192.168.170.10    master
192.168.170.11    node1
192.168.170.12    node2
192.168.170.13    node3
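
Optionally, push the same hosts file to the other nodes instead of editing each one by hand (a quick sketch, assuming root SSH access between the nodes):
[root@master ~]# for h in node1 node2 node3; do scp /etc/hosts root@$h:/etc/hosts; done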
Install dependency packages (run the preparation steps in this section on every node)
[root@master ~]# rm -rf /etc/yum.repos.d/*
[root@master ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@master ~]# yum clean all && yum makecache
[root@master ~]# yum repolist

If you get Couldn't resolve host 'mirrors.cloud.aliyuncs.com', remove the Alibaba-Cloud-internal mirror entries:
[root@master ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

Dependencies
[root@master ~]# yum -y install vim tree net-tools git 

Install tab command completion
[root@master ~]# yum -y install epel-release yum-plugin-fastestmirror bash-completion 

Make it take effect in the current shell
[root@master ~]# source /etc/profile.d/bash_completion.sh
Disable the firewall
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
Disable swap
Turn swap off now, and comment out the swap entry in /etc/fstab so it stays off after reboot:
[root@master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Confirm swap is off; the Swap row should be all zeros:
[root@master ~]# free -m
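
Alternatively, swapon prints no swap devices when swap is fully disabled:
[root@master ~]# swapon -s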
Disable SELinux
[root@master ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
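
Verify with getenforce; it should report Permissive now, and Disabled after the next reboot:
[root@master ~]# getenforce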
Tune kernel parameters
Pass bridged IPv4 traffic to the iptables chains. (Note: kubeadm also requires net.ipv4.ip_forward = 1; see the troubleshooting section at the end.)
[root@master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
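
The bridge-nf-call settings only exist once the br_netfilter kernel module is loaded; a small sketch to load it now and persist it across reboots:
[root@master ~]# modprobe br_netfilter
[root@master ~]# echo br_netfilter > /etc/modules-load.d/br_netfilter.conf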

Apply the settings:
[root@master ~]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
kernel.kptr_restrict = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
vm.max_map_count = 262144
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
vm.max_map_count = 262144

Reboot so all the configuration is loaded cleanly:
[root@master ~]# reboot
Install kubeadm, Docker, and kubelet on all nodes
Configure the Docker yum repo
[root@master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

Install Docker
[root@master ~]# yum -y install docker-ce-18.06.1.ce-3.el7 
[root@master ~]# systemctl start docker && systemctl enable docker
[root@master ~]# docker --version
Docker version 18.06.1-ce, build e68fc7a

Adjust the Docker configuration
By default Docker uses the cgroupfs cgroup driver, while Kubernetes recommends systemd instead; switch the driver and configure a registry mirror:
[root@master ~]# cat <<EOF> /etc/docker/daemon.json
> {
> "exec-opts": ["native.cgroupdriver=systemd"],
> "registry-mirrors": ["https://kn0t2bca.mirror.aliyuncs.com"]
> }
> EOF
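
daemon.json is only read at startup, so restart Docker and confirm the driver changed:
[root@master ~]# systemctl daemon-reload && systemctl restart docker
[root@master ~]# docker info | grep -i cgroup   # should show: Cgroup Driver: systemd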

Configure the Kubernetes yum repo
[root@master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
> EOF

Rebuild the yum cache
[root@master ~]# yum makecache && yum repolist

Install kubeadm, kubelet, and kubectl
[root@master ~]# yum -y install kubeadm-1.23.0 kubelet-1.23.0 kubectl-1.23.0

Enable kubelet at boot (it will stay in a restart loop until kubeadm init or join gives it a configuration; that's expected)
[root@master ~]# systemctl enable kubelet 

[root@master ~]# kubelet --version
Kubernetes v1.23.0
Tab completion for kubectl and kubeadm
[root@master ~]# kubectl completion bash >/etc/bash_completion.d/kubectl
[root@master ~]# kubeadm completion bash >/etc/bash_completion.d/kubeadm

Source them to take effect in the current shell
[root@master ~]# source /etc/bash_completion.d/kubectl
[root@master ~]# source /etc/bash_completion.d/kubeadm
Initialize the master node

--apiserver-advertise-address: the IP address the Kubernetes API server advertises to the cluster.
--image-repository: the registry to pull control-plane images from; here, Alibaba Cloud's mirror of the official images.
--kubernetes-version: the Kubernetes version to install; v1.23.0 in this example.
--service-cidr: the IP range used by Kubernetes Services; 10.2.0.0/16 in this example.
--pod-network-cidr: the IP range used by the pod network; 10.243.0.0/16 in this example. The two CIDRs must not overlap with each other or with the node network.
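
Optionally, pre-pull the control-plane images so init doesn't stall on downloads (the preflight output below mentions this same command):
[root@master ~]# kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.23.0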

[root@master ~]# kubeadm init \
> --apiserver-advertise-address=192.168.170.10 \
> --image-repository registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.23.0 \
> --service-cidr=10.2.0.0/16 \
> --pod-network-cidr=10.243.0.0/16
[init] Using Kubernetes version: v1.23.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local master] and IPs [10.2.0.1 192.168.170.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost master] and IPs [192.168.170.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost master] and IPs [192.168.170.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.007258 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: vkzzf2.hz2ic2bvqgnjcpcz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.170.10:6443 --token vkzzf2.hz2ic2bvqgnjcpcz \
	--discovery-token-ca-cert-hash sha256:25690372b5265edbe2179b3ff768caf9ffc4df9dd2d99f64acc56dcb7c8246f7 
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
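
A quick sanity check that kubectl can reach the API server:
[root@master ~]# kubectl cluster-info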


Join the worker nodes to the cluster (run the join command on node1, node2, and node3)
[root@node1 ~]#  kubeadm join 192.168.170.10:6443 --token vkzzf2.hz2ic2bvqgnjcpcz \
	--discovery-token-ca-cert-hash sha256:25690372b5265edbe2179b3ff768caf9ffc4df9dd2d99f64acc56dcb7c8246f7 
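
If the bootstrap token has expired (tokens are valid for 24 hours by default), generate a fresh join command on the master:
[root@master ~]# kubeadm token create --print-join-command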

The nodes stay NotReady until a pod network add-on is installed:

[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE     VERSION
master   NotReady   control-plane,master   12m     v1.23.0
node1    NotReady   <none>                 7m24s   v1.23.0
node2    NotReady   <none>                 7m24s   v1.23.0
node3    NotReady   <none>                 7m24s   v1.23.0


Install the network plugin (flannel)
[root@master ~]# wget https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
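
Note: the stock kube-flannel.yml sets the flannel Network to 10.244.0.0/16 by default. Since we initialized with --pod-network-cidr=10.243.0.0/16, align the manifest before applying it (a quick sketch; check net-conf.json in the manifest's ConfigMap first):
[root@master ~]# sed -i 's#10.244.0.0/16#10.243.0.0/16#' kube-flannel.yml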
[root@master ~]# kubectl apply -f kube-flannel.yml 
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created


[root@master ~]# kubectl get pod -n kube-flannel
NAME                    READY   STATUS    RESTARTS       AGE
kube-flannel-ds-dsx6m   1/1     Running   14 (16h ago)   9d
kube-flannel-ds-pq6h2   1/1     Running   14 (16h ago)   9d
kube-flannel-ds-q5kxq   1/1     Running   20 (16h ago)   9d
kube-flannel-ds-s4689   1/1     Running   14 (16h ago)   9d


[root@master ~]# kubectl get pod -n kube-system
NAME                              READY   STATUS    RESTARTS       AGE
coredns-6d8c4cb4d-mg5g6           1/1     Running   14 (16h ago)   9d
coredns-6d8c4cb4d-mlw9z           1/1     Running   14 (16h ago)   9d
etcd-master                       1/1     Running   18 (16h ago)   9d
kube-apiserver-master             1/1     Running   17 (16h ago)   6d17h
kube-controller-manager-master    1/1     Running   19 (16h ago)   9d
kube-proxy-4gvm5                  1/1     Running   14 (16h ago)   9d
kube-proxy-5mzhs                  1/1     Running   14 (16h ago)   9d
kube-proxy-w2pxb                  1/1     Running   18 (16h ago)   9d
kube-proxy-wtp2s                  1/1     Running   14 (16h ago)   9d
kube-scheduler-master             1/1     Running   19 (16h ago)   9d
metrics-server-6ffd99ccd6-5wlwb   1/1     Running   20 (16h ago)   7d17h


[root@master ~]# kubectl get node
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   9d    v1.23.0
node1    Ready    computer               9d    v1.23.0
node2    Ready    computer               9d    v1.23.0
node3    Ready    computer               9d    v1.23.0
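
Note the custom computer role shown for the workers above; a freshly joined worker shows <none>. The ROLES column simply reflects node-role.kubernetes.io/<name> labels, so a label like the following (the computer name is this walkthrough's choice, not a Kubernetes default; applied per node) produces it:
[root@master ~]# kubectl label node node1 node-role.kubernetes.io/computer=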
Problems
Master init can fail in many ways, and the details vary by environment; I ran into quite a few myself.

Run kubeadm reset to clean up a failed init, then run kubeadm init again. (The individual errors weren't recorded here.)

Mismatched component versions can also make init fail (e.g. the kubelet health check fails); downgrade the packages to matching versions:
[root@master ~]# yum downgrade kubeadm-1.23.0-0.x86_64 kubectl-1.23.0-0.x86_64 kubelet-1.23.0-0.x86_64

Error when joining the cluster
[root@node01 ~]# kubeadm join 192.168.161.11:6443 --token x1dgfp.f58qfvz3w7htcf8g \
> --discovery-token-ca-cert-hash sha256:9f799adc1da18f3ac27c3ba2d813f74ef05dbf8ad75ecd651e7927477e5c8c85 
[preflight] Running pre-flight checks

Fix (the preflight check here typically fails because these files were left over from a previous attempt; remove them and re-run the join):
rm -rf /etc/kubernetes/manifests /etc/kubernetes/kubelet.conf /etc/kubernetes/pki/ca.crt

A new node failed to join because IP forwarding was not enabled:
[root@node4 ~]# kubeadm join 192.168.170.10:6443 --token 3e7ifl.2soi1hgt52za79m6 --discovery-token-ca-cert-hash sha256:cd621a691571fbf7b4700296768fde1b17bb010b0268bc2a0964e65def97e2bd 
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

Fix (adding net.ipv4.ip_forward = 1 to /etc/sysctl.d/k8s.conf up front, as noted earlier, avoids this):
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf 
sysctl -p

For installing the dashboard on 1.23, see: https://blog.csdn.net/Stephen_Daa/article/details/129737078
