Deploying a Kubernetes v1.16.4 High-Availability Cluster on CentOS 7.6 (Active/Standby Mode)
There are seven servers in total: three control plane nodes (master01, master02, master03), three worker nodes (work01, work02, work03), and one client.
Check each node's IP with $ ip addr:
192.168.1.157 master01
192.168.1.158 master02
192.168.1.159 master03
192.168.1.160 work01
192.168.1.161 work02
192.168.1.162 work03
192.168.1.163 vip
192.168.1.164 client
Run this section on all control plane nodes (master01, master02, master03) and all worker nodes (work01, work02, work03).
1. CentOS 7.6 OS installation and tuning
Note: production environments pin software and kernel versions precisely, so do not run yum update casually!
yum update: upgrades all installed packages, including the kernel.
yum upgrade: the same as yum update, but it additionally removes obsolete packages (on CentOS 7 the two behave almost identically, because obsoletes handling is enabled by default). If you do need to update, consider excluding the kernel as sketched below.
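A hedged example (not required by this setup) of keeping the kernel pinned while updating other packages; both options are standard yum features:
[root@centos7 /]# yum update --exclude=kernel*          # one-off exclusion on the command line
[root@centos7 /]# echo "exclude=kernel*" >> /etc/yum.conf   # or pin it permanently in yum's configuration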
1.1 Configure the Aliyun repository
[root@centos7 /]# yum -y install wget
# Back up the original repo file first, so you can roll back if the new one fails
[root@centos7 /]# cp /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.bak
[root@centos7 /]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
1.2 Clear the yum cache and rebuild it
[root@centos7 /]# yum clean all
[root@centos7 /]# yum makecache
1.3 Install net-tools and run ifconfig to check the machine's IP
[root@centos7 /]# yum -y install net-tools
[root@centos7 /]# ifconfig
1.4 Disable the firewall
firewall-cmd --state                  # check firewall status
systemctl stop firewalld.service      # stop firewalld
systemctl disable firewalld.service   # prevent firewalld from starting at boot
1.5 Disable SELinux
getenforce                            # check SELinux status
setenforce 0                          # disable SELinux temporarily
sed -i 's/^ *SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config   # disable permanently (requires a reboot)
1.6 Reboot the system for the changes to take effect
reboot
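After the reboot, it is worth a quick sanity check (not in the original write-up) that the firewall and SELinux changes actually took effect:
[root@centos7 /]# getenforce                      # should print Disabled
[root@centos7 /]# systemctl is-active firewalld   # should print inactive
[root@centos7 /]# systemctl is-enabled firewalld  # should print disabled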
This completes the CentOS 7.6 OS installation and tuning.
Run this section on all control plane nodes (master01, master02, master03) and all worker nodes (work01, work02, work03).
2. Pre-installation preparation
2.1 Set the hostname; log out and back in to see the new hostname master01
[root@centos7 ~]# hostnamectl set-hostname master01
[root@centos7 ~]# more /etc/hostname
master01
2.2 Edit the hosts file on every node
[root@master01 ~]# cat >> /etc/hosts << EOF
192.168.1.157 master01
192.168.1.158 master02
192.168.1.159 master03
192.168.1.160 work01
192.168.1.161 work02
192.168.1.162 work03
EOF
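Optionally (assuming the other hosts are already up on the network), you can confirm that every hostname in /etc/hosts resolves and responds:
[root@master01 ~]# for h in master01 master02 master03 work01 work02 work03; do ping -c 1 -W 1 $h > /dev/null && echo "$h reachable" || echo "$h UNREACHABLE"; done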
2.3 Verify the MAC address and product UUID; both must be unique on every node (they usually are)
[root@master01 ~]# cat /sys/class/net/ens160/address
This may fail with: cat: /sys/class/net/ens160/address: No such file or directory (the interface name varies; check it with ip addr)
Alternatively, read it from VirtualBox: select the master01 VM ---> Settings ---> Network ---> Advanced ---> MAC Address
[root@master01 ~]# cat /sys/class/dmi/id/product_uuid
2.4 Disable swap
# Disable temporarily
[root@master01 ~]# swapoff -a
# Disable permanently: for swap to stay off after a reboot, also comment out the swap entry in /etc/fstab
[root@master01 ~]# sed -i.bak '/swap/s/^/#/' /etc/fstab
2.5 Synchronize time and confirm synchronization succeeds (set this on every node)
timedatectl
timedatectl set-ntp true
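To confirm synchronization worked, check the NTP fields in the timedatectl output; if chrony is installed (the default NTP client on CentOS 7), chronyc can also show the sources it syncs from. This is only a verification sketch:
[root@master01 ~]# timedatectl | grep -E "NTP|synchronized"   # "NTP synchronized: yes" indicates success
[root@master01 ~]# chronyc sources -v                         # only works if chrony is installed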
2.6 Kernel parameter changes
This guide uses flannel as the Kubernetes network (one of the available network models); it requires the kernel parameter bridge-nf-call-iptables=1, and setting that parameter requires the br_netfilter module.
2.6.1 Check whether the br_netfilter module is loaded
[root@master01 ~]# lsmod | grep br_netfilter
If the module is not present, run the commands below to add it; otherwise skip this step.
2.6.2 Load br_netfilter temporarily (this is lost after a reboot):
[root@master01 ~]# modprobe br_netfilter
2.6.3 Load br_netfilter permanently:
[root@master01 ~]# cat > /etc/rc.sysinit << "EOF"
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do
[ -x $file ] && $file
done
EOF
[root@master01 ~]# cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF
[root@master01 ~]# chmod 755 /etc/sysconfig/modules/br_netfilter.modules
2.6.4 Change the kernel parameters temporarily
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables=1
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-ip6tables=1
2.6.5 Make the kernel parameter changes permanent
[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl -p /etc/sysctl.d/k8s.conf
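You can verify that both values took effect (a simple check, not in the original steps):
[root@master01 ~]# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1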
2.7 Configure the Kubernetes repository
2.7.1 Add the Kubernetes repository
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2.7.2 Rebuild the cache
[root@master01 ~]# yum clean all
[root@master01 ~]# yum -y makecache
3. Passwordless SSH: configure passwordless login from master01 to master02 and master03. Run this step on master01 only.
3.1 Generate an SSH key pair
[root@master01 ~]# ssh-keygen -t rsa
3.2 Copy the public key to master02 and master03
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.158
[root@master01 ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.1.159
3.3 Test passwordless login: master01 should now be able to log in to master02 and master03 without being prompted for a password.
[root@master01 ~]# ssh master02
[root@master01 ~]# ssh master03
4. Install Docker
4.1 Install dependencies
[root@master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
4.2 Configure the Docker repository
[root@master01 ~]# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
4.3 Install Docker CE
4.3.1 List the available Docker versions
[root@master01 ~]# yum list docker-ce --showduplicates | sort -r
4.3.2 Install Docker, pinning the version to 18.09.9
[root@master01 ~]# yum install docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io -y
4.4 Start Docker
[root@master01 ~]# systemctl start docker
[root@master01 ~]# systemctl enable docker
4.5 Command completion
4.5.1 Install bash-completion
[root@master01 ~]# yum -y install bash-completion
4.5.2 Load bash-completion
[root@master01 ~]# source /etc/profile.d/bash_completion.sh
4.6 Registry mirror
Docker Hub's servers are overseas, so pulling images can be slow; configure a registry mirror to speed this up. The main options are Docker's official China registry mirror, the Aliyun accelerator, and the DaoCloud accelerator; this guide uses the Aliyun accelerator as an example.
Log in at https://cr.console.aliyun.com ; if you do not have an Aliyun account yet, register one first.
4.6.1 Configure the registry mirror
Edit the daemon.json file
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://P62531cx.mirror.aliyuncs.com"]
}
EOF
Restart the service
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
4.6.2 Verify
[root@master01 ~]# docker --version
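docker --version only proves Docker is installed; to confirm the mirror is actually in effect, docker info lists the configured registry mirrors (a hedged check; the mirror URL is the example value from daemon.json above):
[root@master01 ~]# docker info 2>/dev/null | grep -A 1 -i "Registry Mirrors"   # should list https://P62531cx.mirror.aliyuncs.com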
4.7 Change the cgroup driver
Switching Docker's cgroup driver to systemd matches what kubeadm recommends on systemd-based hosts; a mismatch between Docker's and kubelet's cgroup drivers is a common source of instability.
4.7.1 Edit daemon.json
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://P62531cx.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Check the result
[root@master01 ~]# more /etc/docker/daemon.json
{
"registry-mirrors": ["https://P62531cx.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
4.7.2 Restart Docker
[root@master01 ~]# systemctl daemon-reload
[root@master01 ~]# systemctl restart docker
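After the restart, you can confirm the cgroup driver change took effect (verification sketch):
[root@master01 ~]# docker info 2>/dev/null | grep -i "cgroup driver"   # should show: Cgroup Driver: systemd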
Run this section on the three control plane nodes (master01, master02, master03) only.
4.8 Install keepalived
4.8.1 Install keepalived
[root@master01 ~]# yum -y install keepalived
4.8.2 Configure keepalived (edit the default file and remove the settings that are not needed)
keepalived configuration on master01 (192.168.1.163 is the VIP):
[root@master01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master01
}
vrrp_instance VI_1 {
    state MASTER              # MASTER on the primary node; set state BACKUP on the standby nodes
    interface enp0s3          # NIC that carries the node IP, identical on all three nodes; check with ip addr (here the interfaces are lo, enp0s3, docker0)
    virtual_router_id 51      # VRRP group ID; must be identical on all nodes so they belong to the same VRRP group and contend for the same VIP
    priority 100              # priority (1-254); standby nodes must use a lower value than the master
    advert_int 1              # advertisement interval; must be identical on all three nodes
    authentication {          # authentication settings; must be identical on all three nodes
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {       # the VIP; must be identical on all nodes, which all contend for this one address
        192.168.1.163
    }
}
keepalived configuration on master02 (192.168.1.163 is the VIP):
[root@master02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.163
    }
}
keepalived configuration on master03 (192.168.1.163 is the VIP):
[root@master03 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id master03
}
vrrp_instance VI_1 {
    state BACKUP
    interface enp0s3
    virtual_router_id 51
    priority 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.163
    }
}
4.8.3 Start keepalived
Start the keepalived service on all control plane nodes and enable it at boot
[root@master01 ~]# service keepalived start
[root@master01 ~]# systemctl enable keepalived
4.8.4 Check the VIP; it should be bound on master01
[root@master01 ~]# ip a
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 08:00:27:5f:5c:48 brd ff:ff:ff:ff:ff:ff
inet 192.168.1.157/24 brd 192.168.1.255 scope global noprefixroute dynamic enp0s3
valid_lft 35113sec preferred_lft 35113sec
inet 192.168.1.163/32 scope global enp0s3
valid_lft forever preferred_lft forever
inet6 fe80::8798:430b:a1c7:2e49/64 scope link noprefixroute
valid_lft forever preferred_lft forever
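Optionally, you can test failover before moving on (a hedged sketch, not part of the original steps): stop keepalived on master01, confirm the VIP moves to master02 (the next-highest priority), then start keepalived on master01 again so the VIP comes back.
[root@master01 ~]# systemctl stop keepalived
[root@master02 ~]# ip a | grep 192.168.1.163      # the VIP should now appear on master02's enp0s3
[root@master01 ~]# systemctl start keepalived
[root@master01 ~]# ip a | grep 192.168.1.163      # with the higher priority, master01 takes the VIP back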
Run this section on all control plane nodes (master01, master02, master03) and all worker nodes (work01, work02, work03).
5. Install the Kubernetes packages
Package overview
kubelet: runs on every node in the cluster and is responsible for starting Pods and containers
kubeadm: the tool that bootstraps and initializes the cluster
kubectl: the command-line client for the cluster; use it to deploy and manage applications, inspect resources, and create, delete, and update components
5.1.1 Version check. This guide installs kubelet 1.16.4, which supports Docker versions 1.13.1, 17.03, 17.06, 17.09, 18.06 and 18.09.
[root@master01 ~]# yum list kubelet --showduplicates | sort -r
5.1.2 Install kubelet, kubeadm and kubectl
[root@master01 ~]# yum install -y kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
5.1.3 Enable kubelet and set it to start at boot (it will restart in a loop until kubeadm init runs; that is expected)
[root@master01 ~]# systemctl enable kubelet && systemctl start kubelet
5.1.4 kubectl command completion
[root@master01 ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
5.1.5 Image download script
Almost all of the Kubernetes components and their Docker images are hosted on Google's own registries, which may not be reachable directly; the workaround here is to pull the images from an Aliyun mirror repository and retag them with the default k8s.gcr.io names. This guide pulls the images by running the image.sh script.
5.1.5.1 Create the script file image.sh
[root@master01 ~]# touch image.sh
[root@master01 ~]# vim image.sh
#!/bin/bash
url=registry.cn-hangzhou.aliyuncs.com/loong576
version=v1.16.4
images=(`kubeadm config images list --kubernetes-version=$version|awk -F '/' '{print $2}'`)
for imagename in ${images[@]} ; do
docker pull $url/$imagename
docker tag $url/$imagename k8s.gcr.io/$imagename
docker rmi -f $url/$imagename
done
5.1.5.2 Make image.sh executable
$ chmod +x image.sh
5.1.5.3 Run image.sh
$ ./image.sh
$ docker images
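After the script finishes, the images should carry the default k8s.gcr.io names that kubeadm expects; a quick way to check (verification only):
$ docker images | grep k8s.gcr.io   # should list kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd and coredns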
Run this section on master01 only
5.2 Initialize the master
5.2.1 kubeadm-config.yaml is the configuration file used for initialization
Create the file kubeadm-config.yaml
[root@master01 ~]# touch kubeadm-config.yaml
[root@master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:
  - master01
  - master02
  - master03
  - work01
  - work02
  - work03
  - 192.168.1.157
  - 192.168.1.158
  - 192.168.1.159
  - 192.168.1.160
  - 192.168.1.161
  - 192.168.1.162
  - 192.168.1.163
controlPlaneEndpoint: "192.168.1.163:6443"
networking:
  podSubnet: "10.244.0.0/16"
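Before initializing, you can ask kubeadm to parse the config and list the images it will use; if the file has a syntax or indentation problem this fails early (a hedged pre-check, not in the original steps):
[root@master01 ~]# kubeadm config images list --config=kubeadm-config.yaml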
5.2.2 Initialize the master
[root@master01 ~]# kubeadm init --config=kubeadm-config.yaml
If initialization fails, run kubeadm reset and then initialize again:
[root@master01 ~]# kubeadm reset
[root@master01 ~]# rm -rf $HOME/.kube/config
5.2.3 Record the kubeadm join output; these commands are needed later to join the worker nodes and the other control plane nodes to the cluster.
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6
5.2.4 Load environment variables
[root@master01 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master01 ~]# source .bash_profile
All operations in this guide run as root; for a non-root user, run the following instead:
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
5.2.5 Install the flannel network; create it on master01
Because of network issues the download may fail; in that case download the kube-flannel.yml file first (a copy is provided at the end of the original article) and apply the local file instead.
[root@master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
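Once the manifest is applied, the flannel DaemonSet pods should come up in kube-system and master01 should move from NotReady to Ready (verification sketch):
[root@master01 ~]# kubectl get pods -n kube-system | grep flannel   # the kube-flannel-ds pods should reach Running
[root@master01 ~]# kubectl get nodes                                # master01 should become Ready once the network is up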
Regenerating an expired cluster token (reference: https://www.cnblogs.com/linyouyi/p/10850904.html)
By default a bootstrap token expires after 24 hours. To add new nodes after the token has expired, generate a new token and obtain the SHA-256 hash of the CA certificate.
1. First check whether a token is still valid
$ kubeadm token list
1.1 If a valid token exists, get the SHA-256 hash of the CA certificate; if not, go to step 2
$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
6fd9b1bf2d593d2d4f550cd9f1f596865f117fef462db42860228311c2712b8b
1.2 Join the node
$ kubeadm join k8smaster.com:6443 --token ky6r26.ucd2s4jmtimxvj90 \
--discovery-token-ca-cert-hash sha256:6fd9b1bf2d593d2d4f550cd9f1f596865f117fef462db42860228311c2712b8b
2. Generate a new token
$ kubeadm token create --print-join-command    # valid for 24 hours by default; add --ttl for a longer lifetime, or --ttl 0 for a token that never expires
kubeadm join k8smaster.com:6443 --token pdas2m.fkgn8q7mz5u96jm6 --discovery-token-ca-cert-hash sha256:6fd9b1bf2d593d2d4f550cd9f1f596865f117fef462db42860228311c2712b8b
Check the token:
[root@hadoop01 ~]# kubeadm token list
TOKEN TTL EXPIRES USAGES DESCRIPTION EXTRA GROUPS
pdas2m.fkgn8q7mz5u96jm6 23h 2019-10-25T23:38:46+08:00 authentication,signing <none> system:bootstrappers:kubeadm:default-node-token
Join the node:
kubeadm join k8smaster.com:6443 --token pdas2m.fkgn8q7mz5u96jm6 \
--discovery-token-ca-cert-hash sha256:6fd9b1bf2d593d2d4f550cd9f1f596865f117fef462db42860228311c2712b8b
6. Join the control plane nodes to the cluster
6.1 Distribute certificates from master01
Run the cert-main-master.sh script on master01 to copy the certificates to master02 and master03
[root@master01 ~]# touch cert-main-master.sh
[root@master01 ~]# vim cert-main-master.sh
USER=root # customizable
# IP addresses of master02 and master03
CONTROL_PLANE_IPS="192.168.1.158 192.168.1.159"
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
# Comment out this line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
[root@master01 ~]# chmod +x cert-main-master.sh
[root@master01 ~]# ./cert-main-master.sh
6.2 Move the certificates into place on master02:
Run the cert-other-master.sh script on master02 to move the certificates to their target directories
[root@master02 ~]# touch cert-other-master.sh
[root@master02 ~]# vim cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Comment out this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master02 ~]# chmod +x cert-other-master.sh
[root@master02 ~]# ./cert-other-master.sh
6.3 Move the certificates into place on master03 by running the same cert-other-master.sh script there
[root@master03 ~]# touch cert-other-master.sh
[root@master03 ~]# vim cert-other-master.sh
USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
# Comment out this line if you are using external etcd
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key
[root@master03 ~]# chmod +x cert-other-master.sh
[root@master03 ~]# ./cert-other-master.sh
6.4 Join the master nodes to the cluster
6.4.1 Join master02 by running the control-plane join command produced during master initialization
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6 \
--control-plane
6.4.2 Join master03
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6 \
--control-plane
6.4.3 Load environment variables on master02 and master03 so that kubectl can also be used on those nodes
[root@master02 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master02 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master02 ~]# source .bash_profile
[root@master03 ~]# scp master01:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@master03 ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@master03 ~]# source .bash_profile
6.4.4 Check the cluster nodes: all control plane nodes should be Ready and all system components healthy
[root@master01 ~]# kubectl get nodes
[root@master01 ~]# kubectl get po -o wide -n kube-system
6.5 Join the worker nodes to the cluster
6.5.1 Join work01 by running the worker join command produced during master initialization
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6
6.5.2 Join work02 with the same worker join command
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6
6.5.3 Join work03 with the same worker join command
kubeadm join 192.168.1.163:6443 --token xm2gdo.7ohrbfxx4u3jlyd7 \
--discovery-token-ca-cert-hash sha256:cb3cefcdd8c0a1dda428cce5abc3f981bad54f8812bf6c8c1d12eef5e9a79dd6
6.5.4 Check the cluster nodes
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 44m v1.16.4
master02 Ready master 33m v1.16.4
master03 Ready master 23m v1.16.4
work01 Ready <none> 11m v1.16.4
work02 Ready <none> 7m50s v1.16.4
work03 Ready <none> 3m4s v1.16.4
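The worker nodes show ROLES as <none> because kubeadm only labels control plane nodes. If you want the ROLES column to read worker, you can add the role label yourself (purely cosmetic and optional; shown for work01, repeat for the others):
[root@master01 ~]# kubectl label node work01 node-role.kubernetes.io/worker=
node/work01 labeled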
6.6 Configure the client
6.6.1 Add the Kubernetes repository
[root@client ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
6.6.2 Rebuild the cache
[root@client ~]# yum clean all
[root@client ~]# yum -y makecache
6.6.3 Install kubectl, keeping the version in line with the cluster
[root@client ~]# yum install -y kubectl-1.16.4
6.6.4 Command completion
6.6.4.1 Install bash-completion
[root@client ~]# yum -y install bash-completion
6.6.4.2 Load bash-completion
[root@client ~]# source /etc/profile.d/bash_completion.sh
6.6.5 Copy admin.conf
[root@client ~]# mkdir -p /etc/kubernetes
[root@client ~]# scp 192.168.1.163:/etc/kubernetes/admin.conf /etc/kubernetes/
[root@client ~]# echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
[root@client ~]# source .bash_profile
6.6.6 Load environment variables (enable kubectl completion)
[root@client ~]# echo "source <(kubectl completion bash)" >> ~/.bash_profile
[root@client ~]# source .bash_profile
6.6.7 Test kubectl
[root@client ~]# kubectl get nodes
[root@client ~]# kubectl get cs
[root@client ~]# kubectl get po -o wide -n kube-system
Dashboard: at this point the Kubernetes cluster is complete; pick whichever dashboard or management template you prefer to manage your cluster.