1. Prepare the base environment
Each node needs at least 2 CPUs and 2 GB of RAM.
192.168.3.115 k8s-master
# components: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, etcd, coredns, kube-flannel
192.168.3.113 k8s-node1
# components: kube-proxy, kube-flannel
192.168.3.114 k8s-node2
# components: kube-proxy, kube-flannel

**Unless otherwise noted, run the following steps on all nodes.**

Set the hostnames:
# on the master node:
hostnamectl set-hostname k8s-master
# on node1:
hostnamectl set-hostname k8s-node1
# on node2:
hostnamectl set-hostname k8s-node2

Basic configuration:
# add cluster entries to /etc/hosts
cat >> /etc/hosts << EOF
192.168.3.115 k8s-master
192.168.3.113 k8s-node1
192.168.3.114 k8s-node2
EOF
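
# optional sanity check (not part of the original steps): confirm the new entries resolve
ping -c 1 k8s-node1
ping -c 1 k8s-node2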

# disable the firewall and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config && setenforce 0

# disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
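
# verify: the Swap row should now show all zeros
free -m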

Configure time synchronization:
On the master node:
# install chrony:
yum install -y chrony
# comment out the default ntp servers
sed -i 's/^server/#&/' /etc/chrony.conf
# add upstream public NTP servers and allow the other nodes to sync from this host
cat >> /etc/chrony.conf << EOF
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
allow all
EOF
# restart chronyd and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd
# turn on network time synchronization
timedatectl set-ntp true

Configure all node machines (note: point them at the master's IP address):
# install chrony:
yum install -y chrony
# comment out the default servers
sed -i 's/^server/#&/' /etc/chrony.conf
# use the internal master node as the upstream NTP server
echo "server 192.168.3.115 iburst" >> /etc/chrony.conf
# restart the service and enable it at boot:
systemctl enable chronyd && systemctl restart chronyd

Run chronyc sources on every node; a line beginning with ^* means the node is synchronized with its time source.
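
# for example, the -v flag makes chronyc print a legend explaining the ^* marker
chronyc sources -v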

Adjust iptables-related kernel parameters
Create /etc/sysctl.d/k8s.conf with the following content:
cat > /etc/sysctl.d/k8s.conf << EOF
vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
modprobe br_netfilter # load the module so the bridge settings take effect
sysctl -p /etc/sysctl.d/k8s.conf
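
# note: br_netfilter is not reloaded automatically after a reboot; one optional way to
# persist it (standard systemd modules-load convention, added here as a precaution):
cat > /etc/modules-load.d/k8s.conf << EOF
br_netfilter
EOF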

Load the ipvs kernel modules

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# run the script
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

To make it easy to inspect ipvs proxy rules, install the ipvsadm management tool:

yum install ipset ipvsadm -y

Install Docker
# configure the docker yum repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# install a specific version; 18.06 is installed here
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl start docker && systemctl enable docker
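
# worth checking: kubeadm's preflight warns when Docker's cgroup driver differs from
# the kubelet's (cgroupfs is the Docker default)
docker info | grep -i 'cgroup driver'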

Install kubeadm, kubelet and kubectl
# configure the kubernetes.repo source; the official repo is unreachable from mainland China, so the Aliyun yum mirror is used here
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# install the specified versions of kubelet, kubeadm and kubectl on all nodes
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1

# enable and start the kubelet service
systemctl enable kubelet && systemctl start kubelet
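
# at this point kubelet will keep restarting in a crash loop; this is expected, since its
# configuration file is only generated later by kubeadm init (or kubeadm join):
systemctl status kubelet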

2. Deploy the master node

Run the initialization on the master node:

kubeadm init \
  --apiserver-advertise-address=192.168.3.115 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1 \
  --pod-network-cidr=10.244.0.0/16
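
# optionally, the control-plane images can be pulled ahead of time (the preflight log
# below also mentions this command; flags as used with kubeadm init above):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.13.1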

The initialization process is shown below.
Pinning --kubernetes-version to a fixed version (here the latest, v1.13.1) disables version probing: the flag defaults to stable-1, which would trigger a network request to https://dl.k8s.io/release/stable-1.txt to resolve the latest version number.
[root@k8s-master ~]# kubeadm init \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.13.1 \
  --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.115 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.3.115 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.3.115]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 21.009858 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 60syk6.vnplamkn3zhwu3s3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join 192.168.3.115:6443 --token 60syk6.vnplamkn3zhwu3s3 --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69

Configure kubectl

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# enable kubectl command auto-completion (takes effect after logging out and back in)
echo "source <(kubectl completion bash)" >> ~/.bashrc
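
# to enable completion in the current shell without re-logging in:
source <(kubectl completion bash)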

Check the cluster status and confirm that every component is Healthy:
[centos@k8s-master ~]$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}

Check the node status. At this point there is only one master node, and its status is NotReady:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 36m v1.13.1

From the kubectl describe output we can see that the node is NotReady because no network plugin has been deployed yet, so components such as kube-proxy are still in a starting state (the check is sketched below). You can also use kubectl to inspect the status of each system Pod on the node:
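# for example, describe the node object; its Conditions and Events sections report the missing network plugin
kubectl describe node k8s-master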
[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-78d4cf999f-7jdx7 0/1 Pending 0 29m
coredns-78d4cf999f-s6mhk 0/1 Pending 0 29m
etcd-k8s-master 1/1 Running 0 34m 192.168.3.115 k8s-master
kube-apiserver-k8s-master 1/1 Running 0 34m 192.168.3.115 k8s-master
kube-controller-manager-k8s-master 1/1 Running 0 34m 192.168.3.115 k8s-master
kube-proxy-przwf 1/1 Running 0 34m 192.168.3.115 k8s-master
kube-scheduler-k8s-master 1/1 Running 0 34m 192.168.3.115 k8s-master

As you can see, the CoreDNS Pods, which depend on the pod network, are stuck in Pending, i.e. scheduling failed. That is expected: the master node's network is not ready yet. If cluster initialization runs into problems, you can clean up with kubeadm reset and then re-run the initialization.
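# kubeadm reset itself reminds you that it does not clean up iptables/IPVS state; before
# re-initializing, the leftovers can be flushed manually (careful: this flushes ALL rules on the host):
kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear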

Deploy the network plugin
[centos@k8s-master ~]$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
[centos@k8s-master ~]$

Once the deployment finishes, we can re-check the Pod status with kubectl get:

[centos@k8s-master ~]$ kubectl get pod -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
coredns-78d4cf999f-7jdx7 1/1 Running 0 11h 10.244.0.3 k8s-master
coredns-78d4cf999f-s6mhk 1/1 Running 0 11h 10.244.0.2 k8s-master
etcd-k8s-master 1/1 Running 1 11h 192.168.3.115 k8s-master
kube-apiserver-k8s-master 1/1 Running 1 11h 192.168.3.115 k8s-master
kube-controller-manager-k8s-master 1/1 Running 1 11h 192.168.3.115 k8s-master
kube-flannel-ds-amd64-lkf2f 1/1 Running 0 10h 192.168.3.115 k8s-master
kube-proxy-przwf 1/1 Running 1 11h 192.168.3.115 k8s-master
kube-scheduler-k8s-master 1/1 Running 1 11h 192.168.3.115 k8s-master
[centos@k8s-master ~]$

Checking again, the master node is now in the Ready state:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 11h v1.13.1
[centos@k8s-master ~]$

3. Deploy the worker nodes
A Kubernetes worker node is almost identical to the master: both run a kubelet. The only difference is that during kubeadm init, once the kubelet is up, the master additionally runs the kube-apiserver, kube-scheduler and kube-controller-manager system Pods. Run the following on k8s-node1 and k8s-node2 to register them with the cluster:

# run the following command to join the node to the cluster
kubeadm join 192.168.3.115:6443 --token 67kq55.8hxoga556caxty7s --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69

# if you did not record the join command printed by kubeadm init, regenerate it with:
kubeadm token create --print-join-command
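
# if you have a token but lost the CA cert hash, the --discovery-token-ca-cert-hash value
# can be recomputed from the master's CA certificate (command from the kubeadm reference docs):
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'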

Run kubeadm join on k8s-node1:

[root@k8s-node1 ~]# kubeadm join 192.168.3.115:6443 --token 67kq55.8hxoga556caxty7s --discovery-token-ca-cert-hash sha256:7d50e704bbfe69661e37c5f3ad13b1b88032b6b2b703ebd4899e259477b5be69
[preflight] Running pre-flight checks
[discovery] Trying to connect to API Server "192.168.3.115:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.3.115:6443"
[discovery] Requesting info from "https://192.168.3.115:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "192.168.3.115:6443"
[discovery] Successfully established connection with API Server "192.168.3.115:6443"
[join] Reading configuration from the cluster...
[join] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.13" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node1" as an annotation

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Repeat the steps above to join k8s-node2 as well.

Check the node status with kubectl get nodes:
[centos@k8s-master ~]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 11h v1.13.1
k8s-node1 Ready <none> 24m v1.13.1
k8s-node2 Ready <none> 4m9s v1.13.1
[centos@k8s-master ~]$
All nodes are now Ready. Since every node has to start several components, if a node stays NotReady, check the Pod status across all nodes and make sure every Pod has pulled its image successfully and is Running:
[centos@k8s-master ~]$ kubectl get pod --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-78d4cf999f-7jdx7 1/1 Running 0 11h 10.244.0.3 k8s-master
kube-system coredns-78d4cf999f-s6mhk 1/1 Running 0 11h 10.244.0.2 k8s-master
kube-system etcd-k8s-master 1/1 Running 1 12h 192.168.3.115 k8s-master
kube-system kube-apiserver-k8s-master 1/1 Running 1 12h 192.168.3.115 k8s-master
kube-system kube-controller-manager-k8s-master 1/1 Running 1 12h 192.168.3.115 k8s-master
kube-system kube-flannel-ds-amd64-d2r8p 1/1 Running 0 6m43s 192.168.3.114 k8s-node2
kube-system kube-flannel-ds-amd64-d85c6 1/1 Running 0 27m 192.168.3.113 k8s-node1
kube-system kube-flannel-ds-amd64-lkf2f 1/1 Running 0 11h 192.168.3.115 k8s-master
kube-system kube-proxy-k8jx8 1/1 Running 0 6m43s 192.168.3.114 k8s-node2
kube-system kube-proxy-n95ck 1/1 Running 0 27m 192.168.3.113 k8s-node1
kube-system kube-proxy-przwf 1/1 Running 1 12h 192.168.3.115 k8s-master
kube-system kube-scheduler-k8s-master 1/1 Running 1 12h 192.168.3.115 k8s-master
[centos@k8s-master ~]$

If a pod reports Init:ImagePullBackOff, its image failed to pull on the corresponding node. Use kubectl describe pod to inspect the Pod and identify which image failed to pull:
[centos@k8s-master ~]$ kubectl describe pod kube-flannel-ds-amd64-d2r8p --namespace=kube-system

Events:
Type     Reason     Age                 From                Message
----     ------     ----                ----                -------
Normal   Scheduled  2m14s               default-scheduler   Successfully assigned kube-system/kube-flannel-ds-amd64-lzx5v to k8s-node2
Warning  Failed     109s                kubelet, k8s-node2  Failed to pull image "quay.io/coreos/flannel:v0.10.0-amd64": rpc error: code = Unknown desc = Error response from daemon: Get https://quay.io/v2/: net/http: TLS handshake timeout
Warning  Failed     109s                kubelet, k8s-node2  Error: ErrImagePull
Normal   BackOff    108s                kubelet, k8s-node2  Back-off pulling image "quay.io/coreos/flannel:v0.10.0-amd64"
Warning  Failed     108s                kubelet, k8s-node2  Error: ImagePullBackOff
Normal   Pulling    94s (x2 over 2m6s)  kubelet, k8s-node2  pulling image "quay.io/coreos/flannel:v0.10.0-amd64"

The Events at the end of the output show that the image download failed; with a poor network connection this is common. You can simply wait, since Kubernetes will retry, or run docker pull yourself to fetch the image:
[root@k8s-node2 ~]# docker pull quay.io/coreos/flannel:v0.10.0-amd64
v0.10.0-amd64: Pulling from coreos/flannel
ff3a5c916c92: Already exists
8a8433d1d437: Already exists
306dc0ee491a: Already exists
856cbd0b7b9c: Already exists
af6d1e4decc6: Already exists
Digest: sha256:88f2b4d96fae34bfff3d46293f7f18d1f9f3ca026b4a4d288f28347fcb6580ac
Status: Image is up to date for quay.io/coreos/flannel:v0.10.0-amd64
[root@k8s-node2 ~]#
If the quay.io/coreos/flannel:v0.10.0-amd64 image cannot be pulled, download it from an Aliyun or Docker Hub mirror repository instead and retag it back to the original name:
docker pull registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
docker tag registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0-amd64
docker rmi registry.cn-hangzhou.aliyuncs.com/kubernetes_containers/flannel:v0.10.0-amd64
Check which images the master node has downloaded:
[centos@k8s-master ~]$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 2 weeks ago 80.2MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.13.1 40a63db91ef8 2 weeks ago 181MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.13.1 ab81d7360408 2 weeks ago 79.6MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.13.1 26e6f1db2a52 2 weeks ago 146MB
registry.aliyuncs.com/google_containers/coredns 1.2.6 f59dcacceff4 8 weeks ago 40MB
registry.aliyuncs.com/google_containers/etcd 3.2.24 3cab8e1b9802 3 months ago 220MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 11 months ago 44.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[centos@k8s-master ~]$
Check which images a node has downloaded:
[root@k8s-node1 ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.13.1 fdb321fd30a0 2 weeks ago 80.2MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 11 months ago 44.6MB
registry.aliyuncs.com/google_containers/pause 3.1 da86e6ba6ca1 12 months ago 742kB
[root@k8s-node1 ~]#

Test the cluster components
First verify that kube-apiserver, kube-controller-manager, kube-scheduler and the pod network are working:
Deploy an Nginx Deployment with 2 Pods

[centos@k8s-master ~]$ kubectl create deployment nginx --image=nginx:alpine
deployment.apps/nginx created
[centos@k8s-master ~]$ kubectl scale deployment nginx --replicas=2
deployment.extensions/nginx scaled
[centos@k8s-master ~]$
Verify that the Nginx Pods are running correctly and have each been assigned a cluster IP in the 10.244. range:
[centos@k8s-master ~]$ kubectl get pods -l app=nginx -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-54458cd494-p2qgx 1/1 Running 0 111s 10.244.1.2 k8s-node1
nginx-54458cd494-sdlm7 1/1 Running 0 103s 10.244.2.2 k8s-node2
[centos@k8s-master ~]$
Next, verify that kube-proxy works by exposing the service externally as a NodePort:
[centos@k8s-master ~]$ kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[centos@k8s-master ~]$ kubectl get services nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.108.17.2 <none> 80:30670/TCP 12s
[centos@k8s-master ~]$

The service can be reached from outside the cluster through any NodeIP:Port:
[centos@k8s-master ~]$ curl 192.168.3.115:30670
[centos@k8s-master ~]$ curl 192.168.3.114:30670
[centos@k8s-master ~]$ curl 192.168.3.113:30670

Visiting the k8s-master IP, 192.168.3.115:30670:

(screenshot: nginx welcome page)
Finally, verify that DNS and the pod network work:
[centos@k8s-master ~]$ kubectl run -it curl --image=radial/busyboxplus:curl
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-66959f6557-s5qqs:/ ]$

Run nslookup nginx to check that the service name resolves to its in-cluster IP, which verifies that DNS works:
[ root@curl-66959f6557-s5qqs:/ ]$ nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name: nginx
Address 1: 10.108.17.2 nginx.default.svc.cluster.local
Access the service by name to verify that kube-proxy works:
[ root@curl-66959f6557-q472z:/ ]$ curl http://nginx/

Welcome to nginx! ......
[ root@curl-66959f6557-q472z:/ ]$

Then curl each of the two Pods' internal IPs to verify that cross-node pod networking works:
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.1.2
Welcome to nginx! ......
[ root@curl-66959f6557-s5qqs:/ ]$ curl 10.244.2.2
Welcome to nginx! ......
[ root@curl-66959f6557-s5qqs:/ ]$

Enable ipvs mode in kube-proxy
Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":
[centos@k8s-master ~]$ kubectl edit cm kube-proxy -n kube-system
configmap/kube-proxy edited
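# to confirm the edit took effect, check the mode field in the ConfigMap:
kubectl -n kube-system get cm kube-proxy -o yaml | grep mode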
Then restart the kube-proxy pods on each node:
[centos@k8s-master ~]$ kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-pro…
[centos@k8s-master ~]$ kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-6qlgv 1/1 Running 0 65s
kube-proxy-fdtjd 1/1 Running 0 47s
kube-proxy-m8zkx 1/1 Running 0 52s
[centos@k8s-master ~]$
Check the logs:
[centos@k8s-master ~]$ kubectl logs kube-proxy-6qlgv -n kube-system
I1213 09:50:15.414493 1 server_others.go:189] Using ipvs Proxier.
W1213 09:50:15.414908 1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1213 09:50:15.415021 1 server_others.go:216] Tearing down inactive rules.
I1213 09:50:15.461658 1 server.go:464] Version: v1.13.0
I1213 09:50:15.467827 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1213 09:50:15.467997 1 config.go:202] Starting service config controller
I1213 09:50:15.468010 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1213 09:50:15.468092 1 config.go:102] Starting endpoints config controller
I1213 09:50:15.468100 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1213 09:50:15.568766 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I1213 09:50:15.568950 1 controller_utils.go:1034] Caches are synced for service config controller
[centos@k8s-master ~]$
The log prints "Using ipvs Proxier", which confirms that ipvs mode is enabled.
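
# since ipvsadm was installed earlier, the ipvs rules kube-proxy has programmed can
# now be inspected on any node:
ipvsadm -Ln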

Removing nodes and tearing down the cluster
Removing a node from the kubernetes cluster
Taking k8s-node2 as an example, run on the master node:
kubectl drain k8s-node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-node2
After those two commands complete, run the cleanup command on k8s-node2 itself to reset its kubeadm state:
kubeadm reset
Deleting the node on the master does not clean up the containers still running on k8s-node2; the cleanup command must be run manually on the removed node.
If you want to reconfigure the cluster, just re-run kubeadm init or kubeadm join with the new parameters.

This completes the 3-node cluster. From here you can keep adding node machines, or deploy components such as the dashboard, the helm package manager, an EFK logging stack, the Prometheus Operator monitoring system, or rook+ceph storage.

Original article (Chinese): https://blog.csdn.net/networken/article/details/84991940
