(My other articles cover detailed bug fixes as well as KubeEdge installation and deployment; feel free to take a look.)

Just two servers: a master server and a node1 worker node.

Steps

Pre-installation preparation (run on all servers)

Two CentOS 7 virtual machines, each with 2 GB of RAM and 2 CPU cores:
master: 192.168.40.4
node1: 192.168.40.3
If you need three or four servers, set up every extra node with exactly the same commands as node1!

yum update (run on all servers)

yum update
[root@localhost ~]# systemctl stop firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# yum -y install ntpdate
[root@localhost ~]# ntpdate pool.ntp.org
// Change the hostnames of the two machines to master and node1 respectively
[root@localhost ~]# hostname master
[root@localhost ~]# hostname node1
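// Note: hostname only lasts until the next reboot; on CentOS 7 you can make it permanent with hostnamectl:
[root@localhost ~]# hostnamectl set-hostname master    # run on the master machine
[root@localhost ~]# hostnamectl set-hostname node1     # run on the node1 machine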
[root@master ~]# vim /etc/hosts
[root@master ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
## Add the IP + hostname of both machines
192.168.40.4 master
192.168.40.3 node1
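To verify that name resolution works, you can ping each host by name from the other machine; the names should resolve to the IPs just added:

[root@master ~]# ping -c 1 node1
[root@node1 ~]# ping -c 1 master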

Install Docker using the Aliyun repository (run on all servers; skip if Docker is already installed)

The commands are as follows (example):

// Configure the yum repository for Docker (using the Aliyun mirror)
[root@master ~]# yum -y install yum-utils device-mapper-persistent-data lvm2
[root@master ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@master ~]# ls /etc/yum.repos.d/
CentOS-Base.repo         CentOS-Debuginfo.repo  CentOS-SIG-ansible-29.repo  docker-ce.repo
CentOS-Base.repo.backup  CentOS-Media.repo      CentOS-Vault.repo
// Install Docker
[root@master ~]# yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io
[root@master ~]# systemctl start docker
[root@master ~]# systemctl enable docker
[root@master ~]# docker --version
Docker version 18.09.7, build 2d0083d

// Change the Docker cgroup driver to systemd (kubelet and the container runtime must use the same cgroup driver, and systemd is the recommended one)
[root@master ~]# vim /etc/docker/daemon.json
[root@master ~]# cat /etc/docker/daemon.json 
{
	"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl restart docker
[root@master ~]# docker info|grep Cgroup
Cgroup Driver: systemd

If Docker installation reports the following error, GPG key retrieval failed:

GPG key retrieval failed: [Errno 14] curl#7 - "Failed to connect to 240e:95c:2002:8:3::3fd: Network 

Solution (my system is CentOS 7):

rpm --import http://mirrors.163.com/centos/RPM-GPG-KEY-CentOS-7 

Then reinstall, which now succeeds:

yum -y install docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

Set up the Kubernetes environment prerequisites (run on all servers)
Disable the firewall

sudo systemctl disable firewalld &&
sudo systemctl stop firewalld
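You can confirm the firewall is stopped with:

[root@master ~]# firewall-cmd --state
not running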

Disable SELinux (run on all servers)
Temporarily disable SELinux (run on all servers)

sudo setenforce 0

Permanently disable it by editing the /etc/sysconfig/selinux settings (run on all servers):

sudo sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sudo sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sudo sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

Disable the swap partition (run on all servers)

sudo swapoff -a

To disable it permanently, comment out the swap line in /etc/fstab:

sudo sed -i 's/.*swap.*/#&/' /etc/fstab
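To confirm swap is really off, the Swap line of free should show all zeros:

[root@master ~]# free -m | grep Swap
Swap:             0           0           0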

Modify kernel parameters (run on all servers)

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
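Note: these two keys only exist once the br_netfilter kernel module is loaded. If sysctl --system complains that they cannot be found, loading the module first usually fixes it:

[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl --system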

Install kubeadm (run on all servers) (the pitfalls start here)

// Configure the yum repository for Kubernetes (using the Aliyun mirror)
[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=1
> repo_gpgcheck=1
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
> EOF
[root@master ~]# yum makecache
// Install kubelet, kubectl, and kubeadm
[root@master ~]# yum -y install kubelet-1.22.0 kubeadm-1.22.0 kubectl-1.22.0
# The following problem may appear:
 The failing package is: kubelet-1.22.0-0.x86_64
 The GPG key is configured as: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
# Solution:
[root@master ~]# rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
[root@master ~]# rpm --import https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
# Reinstall:
[root@master ~]# yum -y install kubelet-1.22.0 kubeadm-1.22.0 kubectl-1.22.0
[root@master ~]# rpm -aq kubelet kubectl kubeadm
kubectl-1.22.0-0.x86_64
kubelet-1.22.0-0.x86_64
kubeadm-1.22.0-0.x86_64
// Enable kubelet at boot. Do not start it right after installation (the cluster has not been created yet).
[root@master ~]# systemctl enable kubelet

If the installation still fails here, just change gpgcheck and repo_gpgcheck to 0 in the repo file:

[root@master ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
> [kubernetes]
> name=Kubernetes
> baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
> enabled=1
> gpgcheck=0
> repo_gpgcheck=0
> gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg        
> EOF
[root@master ~]# yum makecache
# Reinstall:
[root@master ~]# yum -y install kubelet-1.22.0 kubeadm-1.22.0 kubectl-1.22.0
[root@master ~]# rpm -aq kubelet kubectl kubeadm
kubectl-1.22.0-0.x86_64
kubelet-1.22.0-0.x86_64
kubeadm-1.22.0-0.x86_64
Success!
// Enable kubelet at boot. Do not start it right after installation (the cluster has not been created yet).
[root@master ~]# systemctl enable kubelet

Initialize the master (run on the master server only). Pitfall 2: take note of the token and hash that initialization generates; node1 will need them later to join the master!

// Configure kubelet to ignore the swap error
[root@master ~]# vim /etc/sysconfig/kubelet 
[root@master ~]# cat /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
// Initialize (--pod-network-cidr=10.244.0.0/16 matches flannel's default network used below)
[root@master ~]# kubeadm init --kubernetes-version=v1.22.0 --image-repository registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap
......
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.40.4:6443 --token 0dscr0.79zkrsbjwea5ggl9 \
    --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
// Create the kubectl config file as instructed by the successful init output above
[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# chown $(id -u):$(id -g) $HOME/.kube/config
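// (Optional check, output illustrative) kubectl can now reach the cluster; the master typically shows NotReady until the network plugin below is installed
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   60s   v1.22.0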
// After initialization you can also see that the required images have been pulled
[root@master ~]# docker image ls
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-apiserver             v1.22.0             34a53be6c9a7        15 months ago       207MB
registry.aliyuncs.com/google_containers/kube-controller-manager    v1.22.0             9f5df470155d        15 months ago       159MB
registry.aliyuncs.com/google_containers/kube-scheduler             v1.22.0             88fa9cb27bd2        15 months ago       81.1MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.22.0             167bbf6c9338        15 months ago       82.4MB
registry.aliyuncs.com/google_containers/coredns                   1.3.1               eb516548c180        22 months ago       40.3MB
registry.aliyuncs.com/google_containers/etcd                      3.3.10              2c4adeb21b4f        23 months ago       258MB
registry.aliyuncs.com/google_containers/pause                     3.1                 da86e6ba6ca1        2 years ago         742kB
// Add the flannel network add-on
flannel project page: https://github.com/coreos/flannel
Method 1 (assumes kube-flannel.yml has already been downloaded from the project page):
[root@master ~]# kubectl apply -f kube-flannel.yml 
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
Method 2 (apply directly from GitHub):
[root@master ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
// Verify that the flannel network plugin deployed successfully (Running means success)
[root@master ~]# kubectl get pods -n kube-system |grep flannel
kube-flannel-ds-765qc            1/1     Running   0          2m34s

Join node1 to the master node (run on the node1 server only)

To add a new node to the cluster, run the kubeadm join command printed in the kubeadm init output, with the same ignore-swap setting configured on the node first.
// Configure kubelet to ignore the swap error
[root@node1 ~]# vi /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
// Repeat on any additional worker node, e.g. node2:
[root@node2 ~]# vi  /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
// Join the node1 node
[root@node1 ~]# swapoff -a
[root@node1 ~]# kubeadm join 192.168.40.4:6443 --token 0dscr0.79zkrsbjwea5ggl9     --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
# Error handling
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
# Fix (takes effect immediately, but is lost on reboot)
[root@node1 ~]# echo "1" > /proc/sys/net/bridge/bridge-nf-call-iptables
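# Note: writing to /proc is temporary; the /etc/sysctl.d/k8s.conf created earlier makes it permanent once reapplied:
[root@node1 ~]# sysctl --system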
# Join again
[root@node1 ~]# kubeadm join 192.168.40.4:6443 --token 0dscr0.79zkrsbjwea5ggl9     --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
......
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Note: in kubeadm join 192.168.40.4:6443 --token 0dscr0.79zkrsbjwea5ggl9 --discovery-token-ca-cert-hash sha256:24aaf28785ab342cd3d01675578fcffbc8546e9df06c232dd9bbde9867f22f3c
here:
token: the token generated by the master above
discovery-token-ca-cert-hash sha256: the hash value generated by the master initialization above
If you have lost them, regenerate the token and hash with:

kubeadm token create --print-join-command
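Running this on the master prints a complete, ready-to-copy join command; the values below are placeholders for the newly generated token and hash:

[root@master ~]# kubeadm token create --print-join-command
kubeadm join 192.168.40.4:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<new-hash>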

Check that the deployment succeeded (master server)

Check with kubectl get nodes on the master and confirm that both master and node1 eventually reach the Ready status.
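A successful two-node cluster looks roughly like this (ages and exact output will differ):

[root@master ~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   15m   v1.22.0
node1    Ready    <none>                 5m    v1.22.0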

Summary

Done. I hit a few pitfalls along the way, all documented above.
