k8s

PS: A cluster should really have at least 3 machines; due to limited resources, the cluster in this guide has only two.

1. Basic configuration

#Set the hostnames and add hosts mappings
[root@localhost ~]# hostnamectl set-hostname k8s-master    #on the master
[root@localhost ~]# hostnamectl set-hostname k8s-node01    #on the node
[root@k8s-master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.0.12.88 k8s-master
10.0.12.252 k8s-node01
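
#PS: the node needs the same hosts mapping; assuming root SSH access between the two machines, simply copy the file over
[root@k8s-master ~]# scp /etc/hosts root@10.0.12.252:/etc/hosts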

#Disable the firewall and SELinux
[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld

[root@k8s-master ~]# sed -i 's/enforcing/disabled/' /etc/selinux/config
[root@k8s-master ~]# setenforce 0
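#verify SELinux is no longer enforcing (the config change makes it permanent after a reboot)
[root@k8s-master ~]# getenforce
Permissive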

#Disable swap
[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
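#verify swap is fully off: all Swap columns should read 0B
[root@k8s-master ~]# free -h | grep -i swap
Swap:            0B          0B          0B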

#Pass bridged IPv4 traffic to the iptables chains

[root@k8s-master ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@k8s-master ~]# modprobe br_netfilter
[root@k8s-master ~]# sysctl --system    #applies every file under /etc/sysctl.d, including k8s.conf (sysctl -p alone only reads /etc/sysctl.conf)
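#verify the setting took effect and (optional) make br_netfilter load automatically on boot
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1
[root@k8s-master ~]# echo "br_netfilter" > /etc/modules-load.d/k8s.conf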

#Time synchronization
[root@k8s-master ~]# yum -y install ntpdate
[root@k8s-master ~]# ntpdate time.windows.com
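#optional: ntpdate is a one-shot sync; a cron entry (one possible approach) keeps the clock aligned
[root@k8s-master ~]# echo "*/30 * * * * /usr/sbin/ntpdate time.windows.com" >> /var/spool/cron/root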

2. Install Docker

#Install the Docker engine
[root@k8s-master ~]# wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
[root@k8s-master ~]# yum -y install docker-ce
#Configure the Alibaba Cloud registry mirror
[root@k8s-master ~]# cat > /etc/docker/daemon.json << EOF 
{
  "registry-mirrors": ["https://z1pa8k3e.mirror.aliyuncs.com"]
}
EOF
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl  start  docker
[root@k8s-master ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
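#check which cgroup driver Docker is using; the cgroupfs default will cause the kubeadm init failure described in section 3.1
[root@k8s-master ~]# docker info | grep -i "cgroup driver"
 Cgroup Driver: cgroupfs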

#Install kubeadm, kubelet, and kubectl
[root@k8s-master ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64 
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master ~]# yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
[root@k8s-master yum.repos.d]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
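
PS: Sections 1 and 2 must be performed on k8s-node01 as well, since the node also needs Docker, kubeadm, and the kubelet before it can join. A quick version sanity check:

[root@k8s-master ~]# kubeadm version -o short
v1.23.0
[root@k8s-master ~]# kubelet --version
Kubernetes v1.23.0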

3. Deploy the k8s master

3.1 Deploy the master

[root@k8s-master ~]# kubeadm init   \
--apiserver-advertise-address=10.0.12.88  \
--image-repository registry.aliyuncs.com/google_containers  \
--kubernetes-version v1.23.0  \
--service-cidr=10.96.0.0/12  \
--pod-network-cidr=10.244.0.0/16   \
--ignore-preflight-errors=all

Parameter notes:

  • --apiserver-advertise-address: the address the control plane advertises to the cluster
  • --image-repository: the default registry k8s.gcr.io is unreachable from mainland China, so the Alibaba Cloud mirror is specified instead
  • --kubernetes-version: the k8s version, matching the packages installed above
  • --service-cidr: the cluster-internal virtual network that gives Pods a unified access entry point (Services)
  • --pod-network-cidr: the Pod network; must match the CNI component's YAML deployed below
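
Optionally, the control-plane images can be pulled ahead of time so init itself runs faster (same repository and version as the parameters above):

[root@k8s-master ~]# kubeadm config images pull \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version v1.23.0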

kubeadm init fails:

#Problem
......
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher


#Fix: kubeadm v1.22+ configures the kubelet for the systemd cgroup driver, while Docker defaults
#to cgroupfs; the mismatch keeps the kubelet from starting, so switch Docker to systemd
[root@k8s-master ~]# cd /etc/docker/
#back up the mirror configuration created earlier
[root@k8s-master docker]# mv daemon.json daemon.json_bak
[root@k8s-master docker]# cat > /etc/docker/daemon.json << EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master docker]# systemctl  stop docker
[root@k8s-master docker]# systemctl  start docker
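
#PS: writing daemon.json this way drops the registry mirror from section 2; the two settings can
#coexist in one file (a sketch reusing the same mirror URL):
[root@k8s-master docker]# cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://z1pa8k3e.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
[root@k8s-master docker]# systemctl restart docker
#confirm the driver switched
[root@k8s-master docker]# docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd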

#Reset k8s
[root@k8s-master docker]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0216 17:38:33.646381   18271 reset.go:101] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://10.0.12.88:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0216 17:38:35.696509   18271 removeetcdmember.go:80] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/etcd /var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.

#Re-initialize
[root@k8s-master ~]# kubeadm init   \
--apiserver-advertise-address=10.0.12.88  \
--image-repository registry.aliyuncs.com/google_containers  \
--kubernetes-version v1.23.0  \
--service-cidr=10.96.0.0/12  \
--pod-network-cidr=10.244.0.0/16   \
--ignore-preflight-errors=all

PS: If initialization fails, always run kubeadm reset before executing kubeadm init again; skipping the reset triggers additional errors that only add to the headache.

3.2 k8s credentials

#From the init output
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf


......

#As the root user
[root@k8s-master ~]#  export KUBECONFIG=/etc/kubernetes/admin.conf
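#PS: export only affects the current shell; to persist it across logins (optional), append it to the shell profile
[root@k8s-master ~]# echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bash_profile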
#Check the nodes
[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   8m13s   v1.23.0

PS: The node shows NotReady because the network add-on (CNI) has not been deployed yet.

4. Add a worker node

4.1 Join node01 to the cluster

PS: If Docker on the node was also installed via yum, its cgroup driver likewise needs to be switched to systemd; the method is the same as in the previous step.

[root@k8s-node01 ~]# kubeadm join 10.0.12.88:6443 --token m1z4wf.9k83lb2ibm9fj9hg \
> --discovery-token-ca-cert-hash sha256:490b4a391cbc99838b2b92abced6122a775c6dd766eb8cd3c4ed0d617841ffd1
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 23.0.1. Latest validated version: 20.10
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

4.2 Verify that the node has joined the cluster

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   21h     v1.23.0
k8s-node01   NotReady   <none>                 8m32s   v1.23.0
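
PS: The bootstrap token printed by kubeadm init expires after 24 hours; to join another node later, generate a fresh join command on the master:

[root@k8s-master ~]# kubeadm token create --print-join-command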

5. Deploy the network add-on

Calico is a pure Layer-3 data center networking solution and currently the mainstream networking choice for Kubernetes.

Download the YAML and modify it:

PS: The Calico version you download must be compatible with the k8s version;

URL: https://docs.projectcalico.org

[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.23/manifests/calico.yaml
#Edit the Pod network CIDR
[root@k8s-master ~]# vim calico.yaml
......
             - name: CALICO_IPV4POOL_CIDR
               value: "10.244.0.0/16"
......

PS: The value must match the --pod-network-cidr parameter passed to kubeadm init;
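
In the stock manifest these two lines ship commented out; after uncommenting and editing, a quick grep confirms the change (output should resemble):

[root@k8s-master ~]# grep -A1 "CALICO_IPV4POOL_CIDR" calico.yaml
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"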

Deploy the network add-on:

[root@k8s-master ~]# kubectl  apply -f calico.yaml
[root@k8s-master ~]# kubectl  get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-54756b744f-gnk79   1/1     Running   0          154m
calico-node-8hhst                          1/1     Running   0          154m
calico-node-h2dzq                          1/1     Running   0          154m
coredns-6d8c4cb4d-fpzj6                    1/1     Running   0          24h
coredns-6d8c4cb4d-lnrsv                    1/1     Running   0          24h
etcd-k8s-master                            1/1     Running   1          24h
kube-apiserver-k8s-master                  1/1     Running   1          24h
kube-controller-manager-k8s-master         1/1     Running   2          24h
kube-proxy-msgmg                           1/1     Running   0          3h39m
kube-proxy-zk966                           1/1     Running   0          24h
kube-scheduler-k8s-master                  1/1     Running   2          24h
[root@k8s-master ~]# kubectl  get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   24h     v1.23.0
k8s-node01   Ready    <none>                 3h39m   v1.23.0
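
Optional smoke test: schedule a Pod and expose it to confirm the cluster works end to end (standard commands, not part of the original steps):

[root@k8s-master ~]# kubectl create deployment nginx --image=nginx
[root@k8s-master ~]# kubectl expose deployment nginx --port=80 --type=NodePort
[root@k8s-master ~]# kubectl get pods,svc -o wide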

6. Convenience settings

6.1 Enable kubectl command-line auto-completion

[root@k8s-master ~]# yum install -y bash-completion
[root@k8s-master ~]# source /usr/share/bash-completion/bash_completion
[root@k8s-master ~]# source <(kubectl completion bash)
[root@k8s-master ~]# echo "source <(kubectl completion bash)" >> ~/.bashrc
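#optional: alias kubectl to k and wire completion to the alias (per the standard kubectl completion docs)
[root@k8s-master ~]# echo 'alias k=kubectl' >> ~/.bashrc
[root@k8s-master ~]# echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc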

6.2 Set the tab width

#append to ~/.vimrc; vim reads this file automatically on startup (it cannot be "sourced" from the shell)
[root@k8s-master ~]# cat >> ~/.vimrc << EOF
set tabstop=2
EOF