Master node setup

First, create the yum repository configuration file:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg 
exclude=kube*
EOF

Adjust base system settings (switch SELinux to permissive):

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install the Kubernetes packages:

yum install -y kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3 --disableexcludes=kubernetes

Enable kubelet to start at boot:

systemctl enable --now kubelet

Set the required kernel parameters:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the settings and restart kubelet:

sysctl --system
systemctl daemon-reload
systemctl restart kubelet

 

Run kubeadm init to initialize the cluster:

kubeadm init \
--image-repository registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.18.3 \
--pod-network-cidr=10.244.0.0/16 \
--apiserver-advertise-address=192.168.0.127 \
--ignore-preflight-errors=Swap

Parameter notes:

--kubernetes-version specifies the Kubernetes version.

--pod-network-cidr specifies the virtual IP range assigned to pods. Use 10.244.0.0/16 with the flannel network plugin, or 192.168.0.0/16 with calico.

--apiserver-advertise-address specifies the IP address of the master host.
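The same settings can also be expressed as a kubeadm configuration file and passed via kubeadm init --config <file>. A minimal sketch using the kubeadm.k8s.io/v1beta2 API that ships with 1.18 (the file name kubeadm-config.yaml is an arbitrary choice):

```yaml
# kubeadm-config.yaml -- equivalent to the flags above (sketch)
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.0.127
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.3
imageRepository: registry.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
```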

If it succeeds, you will see output like the following:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [Podnetwork].yaml" with one of the addon options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

Following the output above, if you are a non-root user, run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

As root, run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Installing a network plugin

Installing flannel

Run the following command:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Uninstalling the flannel network

# Step 1: delete flannel on the master node
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Step 2: clean up files left behind by flannel on each worker node
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -f /etc/cni/net.d/*
Note: after completing the steps above, restart kubelet.

# Step 3: apply the calico yaml files

Installing calico

# calico install for k8s 1.18.3

kubectl apply -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
# alternative manifest versions to try

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
kubectl apply -f https://docs.projectcalico.org/v3.16/manifests/calico.yaml

# Uninstalling calico

kubectl delete -f https://docs.projectcalico.org/manifests/calico.yaml
# remove the old cni configuration
rm -rf  /apps/cni/etc/*
kubectl delete -f https://docs.projectcalico.org/v3.11/manifests/calico.yaml
# remove the old cni configuration
rm -rf  /apps/cni/etc/*
kubectl delete -f https://docs.projectcalico.org/v3.16/manifests/calico.yaml
# remove the old cni configuration
rm -rf  /apps/cni/etc/*
Run kubectl get pods --all-namespaces; when the flannel pod's status is Running, the plugin is installed successfully. Note that the network plugin only needs to be installed on the master node; after worker nodes join, it is created on them automatically.

Regenerating an expired token

Finally, build the kubeadm join command that worker nodes use to join the master. Its two parameters, token and discovery-token-ca-cert-hash, can be obtained as follows:

# Create a token:

kubeadm token create

# List existing tokens:

kubeadm token list

# Tokens have a limited lifetime; an expired token cannot be reused, so generate a new one.
# Tip: kubeadm token create --print-join-command prints the complete join command in one step.

# Get the discovery-token-ca-cert-hash:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
<master-ip>:<master-port> are the master node's IP address and listening port (default 6443).
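The hash pipeline can be tried without a cluster. The sketch below generates a throwaway CA with openssl and computes its discovery hash the same way (in a real cluster the input is /etc/kubernetes/pki/ca.crt; the temp paths here exist only for the demo):

```shell
# Create a self-signed throwaway CA certificate
tmpdir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmpdir/ca.key" -out "$tmpdir/ca.crt" \
  -days 1 -subj "/CN=kubernetes" 2>/dev/null

# Same pipeline as above: public key -> DER -> sha256 -> strip the "(stdin)= " prefix
hash=$(openssl x509 -pubkey -in "$tmpdir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')

echo "sha256:$hash"
rm -rf "$tmpdir"
```

The result is the 64-hex-character value that follows sha256: in the join command.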

# With these two values, run the join command on each worker node to join the master:

kubeadm join 192.168.0.127:6443 --token 424mp7.nkxx07p940mkl2nd --discovery-token-ca-cert-hash sha256:d88fb55cb1bd659023b11e61052b39bbfe99842b0636574a16c76df186fd5e0d

 

 

Worker node setup

First, create the yum repository configuration file (same as on the master):

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Adjust base system settings:

setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Install the Kubernetes packages:

yum install -y kubelet-1.18.3 kubeadm-1.18.3 kubectl-1.18.3 --disableexcludes=kubernetes

Enable kubelet to start at boot:

systemctl enable --now kubelet

Set the required kernel parameters:

cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Apply the settings and restart kubelet:

sysctl --system
systemctl daemon-reload
systemctl restart kubelet

Copy the master node's admin config file to the worker host (the commands below expect it at /etc/kubernetes/admin.conf):

scp -P 22222 root@10.0.0.9:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf

Configure kubectl access (non-root users copy the file into ~/.kube; root can export an environment variable instead):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf

Join the worker node to the master:

kubeadm join 192.168.0.127:6443 --token euveym.9cdr2qtzzwjjft8d --discovery-token-ca-cert-hash sha256:1c39725da9d5e79074e2b1361870aeb5a8cc672e48bf8f45cdd02b94cabe419e

If the server reports an error like:

[preflight] Running pre-flight checks
        [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

this means IP forwarding is disabled, which is the Linux default. The file /proc/sys/net/ipv4/ip_forward controls it: 0 disables forwarding, 1 enables it.

Enable it with:

echo "1" > /proc/sys/net/ipv4/ip_forward

Note that this setting does not survive a network-service restart or a reboot. To reapply it automatically, add the command echo "1" > /proc/sys/net/ipv4/ip_forward to the /etc/rc.d/rc.local script, or add FORWARD_IPV4="YES" to /etc/sysconfig/network.
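A more conventional way to persist this is a sysctl drop-in next to the k8s.conf file created earlier (the file name 99-ipforward.conf is an arbitrary choice). Save the following as /etc/sysctl.d/99-ipforward.conf and apply it with sysctl --system:

```
net.ipv4.ip_forward = 1
```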

 

Check the current Kubernetes version:

kubelet --version

Removing a worker node

# 1. On the worker node to be removed, run:

kubeadm reset -f

# 2. On the master node, run:

kubectl get nodes -o wide

# 3. Delete the worker node; on the master node, run:

kubectl delete node demo-worker-x-x

# Replace demo-worker-x-x with the name of the worker node to remove

The worker node's name can be obtained by running kubectl get nodes on the master. For a graceful removal, you can first evict the node's pods with kubectl drain <node-name> --ignore-daemonsets before deleting it.

Installing ingress-nginx

Deploy ingress-nginx using the DaemonSet + HostNetwork + nodeSelector approach.

Reference: https://segmentfault.com/a/1190000019908991

1. Download deploy.yaml

📎deploy.yaml

The attached file is an already-modified yaml. It changes the official configuration in three main ways; download it and compare:

# Change kind: Deployment to
kind: DaemonSet

# Remove replicas
# replicas: 1

# Add hostNetwork and a nodeSelector (matching the node label below) to the pod spec
      hostNetwork: true
      nodeSelector:
        custom/ingress-controller-ready: "true"
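For orientation, here is roughly where those fields sit in the manifest (a minimal sketch, not the complete file; the name and container details are placeholders for what is in deploy.yaml):

```yaml
apiVersion: apps/v1
kind: DaemonSet                 # changed from Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  # no replicas field: a DaemonSet runs one pod on every matching node
  template:
    spec:
      hostNetwork: true         # bind directly to the node's network
      nodeSelector:
        custom/ingress-controller-ready: "true"
      containers:
        - name: nginx-ingress-controller
          # image, args, ports as in the original manifest
```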

2. Label each worker node:

kubectl label nodes ecs-bd78-0001 custom/ingress-controller-ready=true
kubectl label nodes ecs-bd78-0002 custom/ingress-controller-ready=true
kubectl label nodes ecs-bd78-0003 custom/ingress-controller-ready=true
kubectl label nodes ecs-bd78-0004 custom/ingress-controller-ready=true
kubectl label nodes ecs-bd78-0005 custom/ingress-controller-ready=true
kubectl label nodes ecs-bd78-0006 custom/ingress-controller-ready=true
kubectl label nodes navinfo-cennavi-ft custom/ingress-controller-ready=true
kubectl label nodes toyota-service custom/ingress-controller-ready=true

3. Install ingress-nginx:

kubectl apply -f deploy.yaml

4. Check the deployment status:

kubectl get pods -n ingress-nginx -o wide --watch
NAMESPACE       NAME                                       READY     STATUS     RESTARTS   IP
ingress-nginx   default-http-backend-6f26b                 1/1       Running    0          192.168.168.154
ingress-nginx   nginx-ingress-controller-58b48898c-gdkgk   1/1       Running    0          194.168.1.15

Once the status becomes Running, the deployment succeeded. Two kinds of IP appear here:
192.168.168.154 is a pod-network IP, reachable from the host.
194.168.1.15 is the host's own IP: because the controller runs with hostNetwork: true, it uses the node's network namespace instead of getting a pod IP.

# Another example
 NAME                                        READY   STATUS      RESTARTS   AGE    IP               NODE                 NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-ctzlk        0/1     Completed   0          120m   192.168.218.34   navinfo-cennavi-ft   <none>           <none>
ingress-nginx-admission-patch-5z6m5         0/1     Completed   0          120m   192.168.218.35   navinfo-cennavi-ft   <none>           <none>
ingress-nginx-controller-5d798d46c6-dgrzm   1/1     Running     0          120m   192.168.218.36   navinfo-cennavi-ft   <none>           <none>

This confirms the installation succeeded.

5. Adding more nodes

With ingress-nginx installed in DaemonSet + HostNetwork + nodeSelector mode, bringing a new worker node into service only requires labeling it, as in step 2 above.

Uninstalling Kubernetes

 

kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
yum clean all
yum remove kube*
# A node that rejoins the cluster can hit a conflict because cni0 (created by flannel) already exists; delete it if needed
ip link delete cni0

 

Troubleshooting installation errors

1.The connection to the server localhost:8080 was refused - did you specify the right host or port?

This error appears on a worker node after installing with kubeadm.

Solution: run the following:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

2.[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

1) Edit the Docker config file /etc/docker/daemon.json (note: .json, not .conf) and add the exec-opts key:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

systemctl daemon-reload
systemctl restart docker

2) Or edit /usr/lib/systemd/system/docker.service, setting ExecStart to:

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd
systemctl daemon-reload
systemctl restart docker

3) Afterwards, docker info should show the Cgroup Driver as systemd:

docker info | grep Cgroup

3. coredns flaps between CrashLoopBackOff and Running, restarting repeatedly

The cause: coredns was running on the master node with a network segment that did not match the 10.244.0.0/16 range passed to kubeadm init, so it could not join the pod network.

Deleting the coredns pods on the master so they are rescheduled onto other nodes restored normal operation (e.g. kubectl -n kube-system delete pod -l k8s-app=kube-dns forces them to be recreated).

Installing kubernetes-dashboard and configuring metrics-server

 

For installing kubernetes-dashboard, see the following posts:

kubernetes-dashboard setup

Configuring metrics-server for the dashboard

 

The dashboard address for the TLI platform is:

TLI dashboard

Choose token login. A token can be generated by running /root/dashboard-certs/dashboard-token.sh on the master host; enter the generated token to reach the monitoring page.
