I. Install the required software

1. Install and start Docker

yum install docker-ce -y
sudo systemctl start docker
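Before continuing, it is worth confirming that the daemon actually came up; a quick check (assuming the docker-ce package installed without errors):

systemctl is-active docker      # should print "active"
docker info                     # should report the running daemon without errors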

2. Configure the container registry

mkdir -p /etc/docker/
tee /etc/docker/daemon.json <<EOF
{
  "insecure-registries": ["harbor.loongnix.cn"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl enable docker
sudo systemctl restart docker
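As a quick sanity check, docker info should now list harbor.loongnix.cn under the insecure registries (the exact output layout varies between Docker versions):

docker info | grep -A 2 -i "insecure registries"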

3. Log in to the registry

docker login harbor.loongnix.cn 
user:XXX
passwd:XXX
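A successful login ends with a "Login Succeeded" message, and Docker keeps the credentials in ~/.docker/config.json, which can be checked with:

cat ~/.docker/config.json       # should contain an auths entry for harbor.loongnix.cn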

4. Configure the kubernetes.repo source

yum install loongnix-release-kubernetes
cat /etc/yum.repos.d/Loongnix-kubernetes.repo 
#
# Loongnix-kubernetes.repo
#

[loongnix-kubernetes]
name=Loongnix server $releasever - kubernetes
baseurl=http://pkg.loongnix.cn/loongnix-server/$releasever/cloud/$basearch/release/kubernetes/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-LOONGNIX
module_hotfixes=1
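Before installing anything, a quick check that the new repository resolves and actually provides the kube* packages (the versions shown depend on what the repo currently ships):

yum makecache
yum repolist | grep -i kubernetes
yum list available 'kube*'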

5. Install the Kubernetes packages

yum install kubelet kubeadm kubectl -y
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
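Note: until kubeadm init (or kubeadm join on a worker) generates /var/lib/kubelet/config.yaml, kubelet keeps exiting and being restarted by systemd; that is expected at this stage. Its logs can be followed with:

journalctl -u kubelet -f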

II. Deploy the master node

1. List the required images

kubeadm config images list
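For v1.20.0 the list should essentially match the images handled in the next step, i.e. roughly:

k8s.gcr.io/kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0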

2. Pull the required images

1. Log in to the registry: docker login harbor.loongnix.cn
2. Pull the images:
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-apiserver:v1.20.0
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-controller-manager:v1.20.0
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-scheduler:v1.20.0
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-proxy:v1.20.0
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/pause:v3.2
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/etcd-io/etcd:3.4.13-0
docker pull harbor.loongnix.cn/mirrorloongsoncontainers/coredns:1.7.0
3. Run docker images to check the downloaded images.
4. Retag the images to the names kubeadm expects:
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-apiserver:v1.20.0 k8s.gcr.io/kube-apiserver:v1.20.0
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-controller-manager:v1.20.0 k8s.gcr.io/kube-controller-manager:v1.20.0
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-scheduler:v1.20.0 k8s.gcr.io/kube-scheduler:v1.20.0
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-proxy:v1.20.0 k8s.gcr.io/kube-proxy:v1.20.0
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/pause:v3.2 k8s.gcr.io/pause:3.2
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/etcd-io/etcd:3.4.13-0 k8s.gcr.io/etcd:3.4.13-0
docker tag harbor.loongnix.cn/mirrorloongsoncontainers/coredns:1.7.0 k8s.gcr.io/coredns:1.7.0
5. Remove the original mirror-tagged images:
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-apiserver:v1.20.0
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-controller-manager:v1.20.0
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-scheduler:v1.20.0
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/kube-proxy:v1.20.0
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/k8s.gcr.io/pause:v3.2
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/etcd-io/etcd:3.4.13-0
docker rmi harbor.loongnix.cn/mirrorloongsoncontainers/coredns:1.7.0
6. Run docker images again to confirm only the k8s.gcr.io-tagged images remain. (A loop that does the whole pull/retag/cleanup in one pass is sketched below.)
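The pull/retag/cleanup sequence above can also be run as one small shell loop. This is only a sketch using the same mirror paths and target names as above; note that the pause tag changes from v3.2 to 3.2, and that etcd and coredns live under different mirror paths:

MIRROR=harbor.loongnix.cn/mirrorloongsoncontainers
while read src dst; do
  docker pull $MIRROR/$src
  docker tag  $MIRROR/$src k8s.gcr.io/$dst
  docker rmi  $MIRROR/$src
done <<EOF
k8s.gcr.io/kube-apiserver:v1.20.0 kube-apiserver:v1.20.0
k8s.gcr.io/kube-controller-manager:v1.20.0 kube-controller-manager:v1.20.0
k8s.gcr.io/kube-scheduler:v1.20.0 kube-scheduler:v1.20.0
k8s.gcr.io/kube-proxy:v1.20.0 kube-proxy:v1.20.0
k8s.gcr.io/pause:v3.2 pause:3.2
etcd-io/etcd:3.4.13-0 etcd:3.4.13-0
coredns:1.7.0 coredns:1.7.0
EOF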

3. Generate the configuration file

kubeadm config print init-defaults >init.default.yaml
Modify the following lines (see the snippet after this list for where they sit in the file):
advertiseAddress: 10.130.0.196            # set to the master node's IP address
imageRepository: k8s.gcr.io               # image repository to pull from
kubernetesVersion: v1.20.0                # Kubernetes version
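For orientation, in the file generated by kubeadm v1.20 these fields sit in two different documents (field names per the kubeadm v1beta2 config API; the remaining defaults are omitted here):

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.130.0.196
  bindPort: 6443
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
imageRepository: k8s.gcr.io
kubernetesVersion: v1.20.0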

4. Initialize the master

kubeadm init --config=init.default.yaml --v=5
Expected result: Your Kubernetes control-plane has initialized successfully!

Following the instructions in the init output, run:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Or, if running as root, simply run:
export KUBECONFIG=/etc/kubernetes/admin.conf
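The same init output also prints a kubeadm join command used to add worker nodes (that is how k8s-node-1 and k8s-node-2 in the verification section below were joined). If the command is lost, an equivalent one can be regenerated on the master with:

kubeadm token create --print-join-command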

5. Create kube-flannel

kubectl apply -f kube-flannel.yaml

For the kube-flannel.yaml file itself, refer to the blog post: 07 Kubernetes 安装flannel组件_雪绒~的博客-CSDN博客
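After the manifest is applied, the flannel DaemonSet pods should reach Running on every node (the namespace is kube-system or kube-flannel depending on the manifest version); a quick check:

kubectl get pods -A -o wide | grep flannel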

6. Configure the CNI plugin

mkdir -p /etc/cni/net.d/
cat <<EOF> /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate": {"isDefaultGateway": true}}
EOF
mkdir /usr/share/oci-umount/oci-umount.d -p
mkdir /run/flannel/
cat <<EOF> /run/flannel/subnet.env
FLANNEL_NETWORK=10.244.0.0/16
FLANNEL_SUBNET=10.244.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF
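To double-check the two files before restarting kubelet:

cat /etc/cni/net.d/10-flannel.conf
cat /run/flannel/subnet.env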

7. Restart the services

systemctl daemon-reload
systemctl restart kubelet

III. Verify the results

1. Check the node status on the master

[root@k8s-master ~]# kubectl get nodes
NAME         STATUS   ROLES                  AGE     VERSION
k8s-master   Ready    control-plane,master   8m17s   v1.20.0
k8s-node-1   Ready    <none>                 6m49s   v1.20.0
k8s-node-2   Ready    <none>                 6m44s   v1.20.0

2. Check the component status on the master

[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
scheduler            Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

Note: with both the node status and the component status showing no anomalies, the cluster has been created successfully.

[Problem] When checking the component status, the scheduler and controller-manager show a status of Unhealthy

[Solution]

The static pod manifests live in /etc/kubernetes/manifests

1. Edit kube-scheduler.yaml and comment out the line containing --port=0 (see the sketch after this list).

2. Edit kube-controller-manager.yaml and comment out the line containing --port=0 in the same way.

3. Check the local ports again with netstat -tulpn and confirm that ports 10251 and 10252 are now listening.

4. Check the component status again with kubectl get cs; both components should now report Healthy.
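For reference, the edit in both manifests looks roughly like this (surrounding flags abbreviated; kubelet re-creates the static pods automatically once the files are saved). Commenting out --port=0 re-enables the insecure status ports that kubectl get cs probes in v1.20, i.e. 10251 for the scheduler and 10252 for the controller-manager:

# /etc/kubernetes/manifests/kube-scheduler.yaml (same idea for kube-controller-manager.yaml)
spec:
  containers:
  - command:
    - kube-scheduler
    # ...other flags unchanged...
    # - --port=0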
