kubeadm HA Deployment of Kubernetes 1.23+
Install kubeadm, kubelet, and kubectl on every node. The versions of kubeadm, kubectl, and kubelet must match the Kubernetes version. After enabling kubelet at boot, do not start it manually, or it will report errors; the kubelet service is started automatically once the cluster is initialized. Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
Installing Kubernetes with kubeadm
1. Cluster types
# Kubernetes clusters broadly fall into two types: single-master and multi-master
# 1. Single master, multiple nodes:
One Master node and multiple Node nodes. Simple to set up, but the Master is a single point of failure; suitable for test environments.
# 2. Multiple masters, multiple nodes:
Multiple Master nodes and multiple Node nodes. More work to set up, but more resilient; suitable for production environments.
2. Installation methods
Official docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/
# Option 1: kubeadm
kubeadm is a Kubernetes deployment tool that provides kubeadm init and kubeadm join for quickly standing up a Kubernetes cluster.
# Option 2: binary packages
Download the release binaries from GitHub and deploy every component by hand to assemble a Kubernetes cluster.
kubeadm lowers the barrier to entry, but it hides a lot of detail, which makes problems harder to debug. If you want more control, deploy from the binary packages: it is more work, but you learn how the components fit together, which also helps with later maintenance.
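For orientation, here is a minimal (non-HA) sketch of the two kubeadm commands; the version, address, token, and hash below are placeholders, and the full HA procedure with a config file follows later in this article.
[root@k8s-m-01 ~]# kubeadm init --kubernetes-version v1.23.17 --pod-network-cidr 10.244.0.0/16
[root@k8s-n-01 ~]# kubeadm join 192.168.11.20:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>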
I. Prepare the environment
1. Software and system requirements
Software | Version |
---|---|
CentOS | CentOS Linux release 7.5 or later |
Docker | 19.03.12 |
Kubernetes | v1.23.17 |
Flannel | latest (kube-flannel.yml from the master branch) |
kernel-lt | kernel-lt-5.4.236-1.el7.elrepo.x86_64.rpm |
kernel-lt-devel | kernel-lt-devel-5.4.236-1.el7.elrepo.x86_64.rpm |
2. Node plan
- IPs are suggested in the 192.168.x.x range to avoid conflicts with the Kubernetes internal subnets
Host | IP | Spec | Kernel version |
---|---|---|---|
k8s-master1 | 192.168.11.20 | 2 CPU / 2 GB | 4.4+ |
k8s-master2 | 192.168.11.21 | 2 CPU / 2 GB | 4.4+ |
k8s-master3 | 192.168.11.22 | 2 CPU / 2 GB | 4.4+ |
k8s-node1 | 192.168.11.23 | 2 CPU / 2 GB | 4.4+ |
k8s-node2 | 192.168.11.24 | 2 CPU / 2 GB | 4.4+ |
II. Install Kubernetes with kubeadm
Servers need at least 2 CPUs and 2 GB of RAM. If a machine has fewer CPUs, append --ignore-preflight-errors=NumCPU to the cluster initialization command.
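For example (a sketch; only needed on undersized machines), the flag is simply appended to the init command used later in this article:
[root@master01 ~]# kubeadm init --config kubeadm-config.yaml --ignore-preflight-errors=NumCPU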
1. Base/kernel optimization script (all machines)
[root@k8s-m-01 ~]# vim base.sh
#!/bin/bash
# 1. Set the hostname and the NIC IP (the sed below assumes the ifcfg files contain the literal 111 as the address placeholder)
hostnamectl set-hostname $1 &&\
sed -i "s#111#$2#g" /etc/sysconfig/network-scripts/ifcfg-eth[01] &&\
systemctl restart network &&\
# 2. Disable SELinux and the firewall, and speed up ssh logins
setenforce 0 &&\
sed -i 's#enforcing#disabled#g' /etc/selinux/config &&\
systemctl disable --now firewalld &&\
# If iptables is not installed, the next line is not needed
# systemctl disable --now iptables &&\
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config &&\
systemctl restart sshd &&\
# 3. Disable the swap partition
# Once swap is hit, system performance drops sharply, so Kubernetes normally requires swap to be off
# cat /etc/fstab
# Comment out the swap line at the end of /etc/fstab; skip this if there is no swap
swapoff -a &&\
# Tell kubelet to tolerate swap
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet &&\
# 4. Update the local hosts file
cat >>/etc/hosts <<EOF
192.168.11.20 master01 m1
192.168.11.21 master02 m2
192.168.11.22 master03 m3
192.168.11.23 node01 n1
192.168.11.24 node02 n2
192.168.11.26 vip
EOF
# 5. Configure yum mirrors (domestic mirrors)
# By default CentOS uses the official yum repos, which are very slow from inside China; replace them with a mature domestic mirror such as Aliyun, Tsinghua, or NetEase
rm -rf /etc/yum.repos.d/* &&\
curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo &&\
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo &&\
yum clean all &&\
yum makecache &&\
# 6. Update the system
# Check the kernel version first; if it is already above 4.0 the --exclude option can be dropped
yum update -y --exclude=kernel* &&\
# Docker needs fairly new kernel features such as ipvs, so a 4.x kernel is generally required (Docker recommends 4.18+); CentOS 8 does not need a kernel update
# 7. Install commonly used base tools for day-to-day work
yum install wget expect vim net-tools ntp bash-completion ipvsadm ipset jq iptables conntrack sysstat libseccomp ntpdate -y &&\
# 8. Download a newer kernel
# Not needed on CentOS 8
cd /opt/ &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-5.4.236-1.el7.elrepo.x86_64.rpm &&\
wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-lt-devel-5.4.236-1.el7.elrepo.x86_64.rpm &&\
# Kernels below 4.0 have bugs that can cause traffic jitter under heavy production load
# Package list: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/
# 9. Install the new kernel
yum localinstall /opt/kernel-lt* -y &&\
# 10. Make the new kernel the default boot entry
grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg &&\
# 11. Show the current default kernel
grubby --default-kernel &&\
reboot
# After the reboot the machine runs the 5.4 kernel
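Usage of the script (a sketch; it assumes, per the sed above, that the ifcfg files contain the literal 111 as the address placeholder, so adjust it to your own template):
[root@k8s-m-01 ~]# bash base.sh master01 192.168.11.20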
2. Passwordless SSH and time sync (all machines)
# 1. Generate a key and copy it to every node
[root@k8s-master-01 ~]# ssh-keygen -t rsa
[root@k8s-master-01 ~]# for i in master01 master02 master03 node01 node02;do ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i;done
# Time matters a great deal in a cluster: if one machine's clock drifts from the rest, the cluster can run into many problems, so synchronize the time on all machines before deploying
Option 1: time sync with ntpdate
# 2. Put the time sync into a cron job: crontab -e
# refresh every 5 minutes
*/5 * * * * /usr/sbin/ntpdate ntp.aliyun.com &> /dev/null
Option 2: time sync with chrony
[root@k8s-m-01 ~]# yum -y install chrony
[root@k8s-m-01 ~]# systemctl enable --now chronyd
[root@k8s-m-01 ~]# date    # check that all machines show the same time
Mon Aug 2 10:44:18 CST 2021
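To confirm chrony is actually syncing, a quick check (chronyc ships with the chrony package):
[root@k8s-m-01 ~]# chronyc sources -v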
3. Install IPVS and tune kernel parameters (all machines)
kube-proxy supports two proxy modes for Services: iptables and ipvs.
Compared with iptables, ipvs performs better, but its kernel modules have to be loaded manually.
# 1. Install IPVS and load the IPVS modules (all nodes)
[root@k8s-m-01 ~]# yum install ipset ipvsadm    # install these two if the commands are missing
ipvs is a kernel module with very high forwarding performance; it is normally the first choice
[root@k8s-n-01 ~]# vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
# 2. Make the script executable and load the modules (all nodes)
[root@k8s-n-01 ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
# 3. Kernel parameter tuning (all nodes)
# Load the IPVS modules and apply the configuration
# The goal of the tuning below is to make the system better suited to running Kubernetes
[root@k8s-n-01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
# bridge traffic must pass through iptables (requires the br_netfilter module)
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
# do not check whether enough physical memory is available before overcommitting
vm.overcommit_memory=1
# avoid swap; only fall back to it when the system would otherwise OOM
vm.swappiness=0
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
# apply immediately
sysctl --system
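The two net.bridge.* keys only exist once the br_netfilter module is loaded, so load it first and re-run sysctl; the check below should then print 1 (this comes up again in the troubleshooting section):
[root@k8s-n-01 ~]# modprobe br_netfilter
[root@k8s-n-01 ~]# sysctl --system
[root@k8s-n-01 ~]# sysctl net.bridge.bridge-nf-call-iptables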
4. Install Docker (all machines)
1. Docker cgroup drivers
# 1. What is a cgroup?
A cgroup is a Linux kernel feature that limits, accounts for, and isolates the resource usage (CPU, memory, disk I/O, network, and so on) of a group of processes.
cgroups (Control Groups) are a kernel mechanism that groups (or separates) a set of system tasks and their children into hierarchies by resource, providing a unified framework for resource management. In short, cgroups can limit and account for the physical resources used by a group of tasks. Essentially, cgroups are a set of hooks the kernel attaches to programs; the hooks fire as the program schedules resources, which is how tracking and limiting are achieved.
# 2. What is cgroupfs?
Docker's default Cgroup Driver is cgroupfs
[root@docker][14:59:03][OK] ~
#docker info |grep cgroup
Cgroup Driver: cgroupfs
Cgroups expose a native interface through cgroupfs (in other words, cgroupfs is a wrapper around the cgroup interface). Like procfs and sysfs it is a virtual file system, and it can be mounted; by default it is mounted at /sys/fs/cgroup.
# 3. What is systemd?
systemd is also a wrapper around the cgroup interface. It runs as PID 1 at boot and provides a suite of system management daemons, libraries, and utilities used to control and manage the operating system's resources.
# 4. Why systemd rather than cgroupfs?
Quoting the Kubernetes documentation:
When a Linux distribution uses systemd as its init system, the init process generates and consumes a root control group (cgroup) and acts as a cgroup manager. systemd integrates tightly with cgroups and allocates a cgroup per systemd unit. You can also configure the container runtime and the kubelet to use cgroupfs. Using cgroupfs alongside systemd means there are two different cgroup managers.
A single cgroup manager simplifies the view of what resources are allocated and by default gives a more consistent view of available and in-use resources. When two managers coexist on one system, you end up with two views of those resources. People have reported cases where nodes configured to use cgroupfs for the kubelet and Docker, while the rest of the processes use systemd, become unstable under resource pressure.
Ubuntu, Debian, and CentOS 7 all use systemd as the init system. Since systemd already acts as a cgroup manager, running the container runtime and kubelet on cgroupfs means two cgroup managers, and therefore two resource-allocation views, exist on the same machine; when CPU or memory runs short, processes on that node can become unstable.
Note: do not try to change the cgroup driver of a node that has already joined the cluster; if you really need to, remove the node and re-join it.
# 5. How to change Docker's default cgroup driver
Add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json and restart Docker.
2. Docker installation script
[root@k8s-n-01 ~]# vim docker.sh
# Step 1: install the required system tools
sudo yum install -y yum-utils device-mapper-persistent-data lvm2 &&\
# Step 2: add the Docker repository
sudo yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo &&\
# Step 3: point the repository at the Aliyun mirror
sudo sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo &&\
# Step 4: refresh the cache and install Docker CE
sudo yum makecache fast &&\
sudo yum -y install docker-ce &&\
# Step 5: enable and start the Docker service
systemctl enable --now docker.service &&\
# Step 6: configure the Docker daemon (systemd cgroup driver, registry mirror, logging, storage); JSON does not allow comments, so the options are explained here rather than inline
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"exec-opts": ["native.cgroupdriver=systemd"], #这个docker驱动模式改成systemd启动
"registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
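The script writes daemon.json after Docker has already started, so restart Docker for the settings to take effect and then verify the driver (a quick check):
sudo systemctl daemon-reload && sudo systemctl restart docker
docker info | grep -i cgroup    # should now report: Cgroup Driver: systemd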
3. Uninstalling Docker
# 1. Remove old versions
sudo yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-engine
# 2. Remove the packages
yum remove docker-ce docker-ce-cli containerd.io -y
# 3. Delete the data directory
rm -rf /var/lib/docker    # Docker's default working directory
# 4. Registry mirror accelerator (Docker optimization)
- Log in to Aliyun and open the Container Registry service
- Find your mirror accelerator address
- Configure and use it (a minimal sketch follows)
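A minimal sketch of configuring the accelerator on a fresh Docker install (the mirror URL is account-specific; the address below is the example already used in the install script above — if you already wrote the full daemon.json there, keep that file instead):
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://k7eoap03.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker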
5. Make the control plane (kube-apiserver) highly available
1. Install the HA software (all master nodes)
# Any load balancer will do, as long as it gives the api-server high availability
# A common recommendation: keepalived + haproxy
[root@k8s-m-01 ~]# yum install -y keepalived haproxy
2. Write the keepalived configuration (all master nodes)
# 1. The configuration differs slightly from node to node
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf_bak
cd /etc/keepalived
KUBE_APISERVER_IP=`hostname -i`
cat > /etc/keepalived/keepalived.conf <<EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
    # add the following two lines
    script_user root
    enable_script_security
}
vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"   # path of the health check script
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER          # change to BACKUP on master02 and master03
    interface eth0        # note: the 192 network is on eth0, a 172 network would be on eth1
    virtual_router_id 51
    priority 100          # priority: use 90 on master02 and 80 on master03
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.11.26     # virtual IP
    }
    track_script {
        check_haproxy     # the check script defined above
    }
}
EOF
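On master02 and master03 the same file is used with only the state and priority changed, per the comments above; a sketch:
[root@master02 ~]# sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf
[root@master03 ~]# sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 80/' /etc/keepalived/keepalived.conf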
3. Write the health check script on every master node
[root@master01 ~]# vim /etc/keepalived/check_haproxy.sh
#!/bin/sh
# HAPROXY down
A=`ps -C haproxy --no-header | wc -l`
if [ $A -eq 0 ]
then
systemctl start haproxy
if [ `ps -C haproxy --no-header | wc -l` -eq 0 ]
then
killall -9 haproxy
echo "HAPROXY down" | mail -s "haproxy"
sleep 10
fi
fi
4. Make the script executable on the master nodes
[root@master01 ~]# chmod +x /etc/keepalived/check_haproxy.sh
5. Write the haproxy configuration (all master nodes)
# 1. haproxy load-balances the api-servers; on a public cloud you would use an SLB instead
[root@k8s-m-01 keepalived]# vim /etc/haproxy/haproxy.cfg
global
    maxconn 2000
    ulimit-n 16384
    log 127.0.0.1 local0 err
    stats timeout 30s
defaults
    log global
    mode http
    option httplog
    timeout connect 5000
    timeout client 50000
    timeout server 50000
    timeout http-request 15s
    timeout http-keep-alive 15s
frontend monitor-in
    bind *:33305
    mode http
    option httplog
    monitor-uri /monitor
listen stats
    bind *:8006
    mode http
    stats enable
    stats hide-version
    stats uri /stats
    stats refresh 30s
    stats realm Haproxy\ Statistics
    stats auth admin:admin
frontend k8s-master
    bind 0.0.0.0:8443
    bind 127.0.0.1:8443
    mode tcp
    option tcplog
    tcp-request inspect-delay 5s
    default_backend k8s-master
backend k8s-master
    mode tcp
    option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server master01 192.168.11.20:6443 check inter 2000 fall 2 rise 2 weight 100
    server master02 192.168.11.21:6443 check inter 2000 fall 2 rise 2 weight 100
    server master03 192.168.11.22:6443 check inter 2000 fall 2 rise 2 weight 100
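A quick syntax check before starting the service (haproxy's built-in configuration check):
[root@k8s-m-01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg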
6. Start keepalived and haproxy on the master nodes and enable them at boot
[root@master01 ~]# systemctl start keepalived && systemctl enable keepalived
[root@master01 ~]# systemctl start haproxy && systemctl enable haproxy
7. Check the VIP address
[root@master01 ~]# ip -4 a |grep 192.168.11
inet 192.168.11.20/24 brd 192.168.11.255 scope global noprefixroute eth0
inet 192.168.11.26/32 scope global eth0
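An optional failover test (a sketch): stop keepalived on master01, confirm the VIP has moved to one of the BACKUP nodes, then start it again.
[root@master01 ~]# systemctl stop keepalived
[root@master02 ~]# ip -4 a | grep 192.168.11.26
[root@master01 ~]# systemctl start keepalived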
6. Install the Kubernetes packages (all machines)
Install kubeadm, kubelet, and kubectl on every node; their versions must match the Kubernetes version. After enabling kubelet at boot, do not start it manually or it will report errors; the kubelet service is started automatically once the cluster is initialized!
# 1. Aliyun Kubernetes repository
[root@k8s-n-02 yum.repos.d]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# 2. To get the latest version: yum install -y kubelet kubeadm kubectl
# Here the version is pinned to 1.23.17
yum -y install kubectl-1.23.17 kubeadm-1.23.17 kubelet-1.23.17
# 3. Only enable kubelet at boot for now; do not start it, because the cluster has not been initialized yet
systemctl enable kubelet.service
# 4. Check the version
[root@k8s-m-01 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.17", GitCommit:"953be8927218ec8067e1af2641e540238ffd7576", GitTreeState:"clean", BuildDate:"2023-02-22T13:34:27Z", GoVersion:"go1.19.6", Compiler:"gc", Platform:"linux/amd64"}
7. Initialization on the master01 node
1. Generate the default config file
[root@master01 ~]# kubeadm config print init-defaults > kubeadm-config.yaml
2. Edit the init configuration
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef   # your token will differ
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.11.20   # the IP of this host
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01                    # this host's name
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.11.26                   # the HA virtual IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
controlPlaneEndpoint: 192.168.11.26:8443   # the HA virtual IP plus the haproxy port
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # or your own image registry
kind: ClusterConfiguration
kubernetesVersion: 1.23.17   # the Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16   # pod network CIDR (must match the CNI plugin, e.g. Flannel)
  serviceSubnet: 10.96.0.0/12
scheduler: {}
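Before running the real init, the file can be sanity-checked with a dry run (a sketch; --dry-run makes no changes to the host):
[root@master01 ~]# kubeadm init --config kubeadm-config.yaml --dry-run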
3. List and pull the images Kubernetes needs
# 1. List the images
[root@master01 ~]# kubeadm config images list
registry.k8s.io/kube-apiserver:v1.23.17
registry.k8s.io/kube-controller-manager:v1.23.17
registry.k8s.io/kube-scheduler:v1.23.17
registry.k8s.io/kube-proxy:v1.23.17
registry.k8s.io/pause:3.6
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.8.6
# 2. Pull the images
[root@master01 ~]# kubeadm config images pull --config kubeadm-config.yaml
4. Initialize the cluster on master01
[root@master1 ~]# kubeadm config images pull --config kubeadm-config.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.17
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
[root@master1 ~]# kubeadm init --config kubeadm-config.yaml
[init] Using Kubernetes version: v1.23.17
[preflight] Running pre-flight checks
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 192.168.11.26:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:19038d0136cb3fd96beed7ca2149c2e1ae4817bd81b678c1755585ff22485376 \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.11.26:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:19038d0136cb3fd96beed7ca2149c2e1ae4817bd81b678c1755585ff22485376
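The join token above has a 24-hour TTL (per ttl in kubeadm-config.yaml). If it has expired, a fresh worker join command and a new certificate key for control-plane joins can be generated with standard kubeadm subcommands (a sketch; the certificate-key route is an alternative to the manual scp of certificates shown below):
[root@master01 ~]# kubeadm token create --print-join-command
[root@master01 ~]# kubeadm init phase upload-certs --upload-certs
# for a control-plane join, append: --control-plane --certificate-key <key printed above>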
5. Create the following directory on the other two master nodes
[root@master1 ~]# mkdir -p /etc/kubernetes/pki/etcd
6. Copy the certificates from master01 to the other master nodes
scp /etc/kubernetes/pki/ca.* root@192.168.11.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.11.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.11.21:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.11.21:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@192.168.11.21:/etc/kubernetes/
scp /etc/kubernetes/pki/ca.* root@192.168.11.22:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/sa.* root@192.168.11.22:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/front-proxy-ca.* root@192.168.11.22:/etc/kubernetes/pki/
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.11.22:/etc/kubernetes/pki/etcd/
scp /etc/kubernetes/admin.conf root@192.168.11.22:/etc/kubernetes/
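The ten scp commands above can also be written as a loop over the other master IPs (an equivalent sketch):
for ip in 192.168.11.21 192.168.11.22; do
  scp /etc/kubernetes/pki/ca.* /etc/kubernetes/pki/sa.* /etc/kubernetes/pki/front-proxy-ca.* root@$ip:/etc/kubernetes/pki/
  scp /etc/kubernetes/pki/etcd/ca.* root@$ip:/etc/kubernetes/pki/etcd/
  scp /etc/kubernetes/admin.conf root@$ip:/etc/kubernetes/
done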
7. Copy admin.conf from master01 to the node machines
scp /etc/kubernetes/admin.conf root@192.168.11.23:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@192.168.11.24:/etc/kubernetes/
8. Run the following on the other master nodes to join the cluster
kubeadm join 192.168.11.26:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:19038d0136cb3fd96beed7ca2149c2e1ae4817bd81b678c1755585ff22485376 \
--control-plane
9. Run the following on the node machines to join the cluster
kubeadm join 192.168.11.26:8443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:19038d0136cb3fd96beed7ca2149c2e1ae4817bd81b678c1755585ff22485376
10. Run the following on all master nodes (optional on the node machines)
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile.d/kubernetes.sh
source /etc/profile
Non-root users should run the following instead:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
11. Check the status of all nodes
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 NotReady control-plane,master 150m v1.23.17
master02 NotReady control-plane,master 144m v1.23.17
master03 NotReady control-plane,master 144m v1.23.17
node01 NotReady <none> 145m v1.23.17
node02 NotReady <none> 145m v1.23.17
12. Install a network plugin (pick one of the three)
# 1. Flannel
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# 2. Calico
[root@k8s-m-01 ~]# curl https://docs.projectcalico.org/manifests/calico.yaml -O
[root@k8s-m-01 ~]# kubectl apply -f calico.yaml
# 3. Cilium (the cilium CLI tarball below is hosted on a private server)
[root@k8s-m-01 ~]# wget http://120.46.132.244:8080/mm/cilium.tar.gz
[root@k8s-m-01 ~]# tar xf cilium.tar.gz
[root@k8s-m-01 ~]# cp cilium /usr/local/bin/
[root@k8s-m-01 ~]# chmod +x /usr/local/bin/cilium
[root@k8s-m-01 ~]# cilium install
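After the install, the same CLI can report the agent status (a quick check):
[root@k8s-m-01 ~]# cilium status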
13. Check the node status again
# Option 1: check nodes and pods
[root@master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
master01 Ready control-plane,master 150m v1.23.17
master02 Ready control-plane,master 144m v1.23.17
master03 Ready control-plane,master 144m v1.23.17
node01 Ready <none> 145m v1.23.17
node02 Ready <none> 145m v1.23.17
[root@master01 ~]# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-flannel kube-flannel-ds-5bxqx 1/1 Running 0 148m
kube-flannel kube-flannel-ds-ft9k2 1/1 Running 0 148m
kube-flannel kube-flannel-ds-gp9rp 1/1 Running 0 141m
kube-flannel kube-flannel-ds-mmhpb 1/1 Running 0 148m
kube-flannel kube-flannel-ds-q7x5b 1/1 Running 2 (54m ago) 148m
kube-system coredns-6d8c4cb4d-d6dtp 1/1 Running 0 155m
kube-system coredns-6d8c4cb4d-qswhc 1/1 Running 0 155m
kube-system etcd-master01 1/1 Running 0 156m
kube-system etcd-master02 1/1 Running 0 150m
kube-system etcd-master03 1/1 Running 0 150m
kube-system kube-apiserver-master01 1/1 Running 0 156m
kube-system kube-apiserver-master02 1/1 Running 0 150m
kube-system kube-apiserver-master03 1/1 Running 0 150m
kube-system kube-controller-manager-master01 1/1 Running 2 (123m ago) 156m
kube-system kube-controller-manager-master02 1/1 Running 0 150m
kube-system kube-controller-manager-master03 1/1 Running 1 (122m ago) 150m
kube-system kube-proxy-5bpkl 1/1 Running 0 150m
kube-system kube-proxy-kjrzj 1/1 Running 2 (54m ago) 151m
kube-system kube-proxy-ltl77 1/1 Running 0 155m
kube-system kube-proxy-qsngx 1/1 Running 0 151m
kube-system kube-proxy-v525l 1/1 Running 0 150m
kube-system kube-scheduler-master01 1/1 Running 2 (123m ago) 156m
kube-system kube-scheduler-master02 1/1 Running 0 150m
kube-system kube-scheduler-master03 1/1 Running 1 (122m ago) 150m
# Option 2: DNS test
[root@master01 ~]# kubectl run test -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes    # run this command; on success you get the output below
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/ #
# output like the above means DNS resolution works
8. Operations on the master nodes (all of them)
1. Download the etcdctl client command-line tool
wget https://github.com/etcd-io/etcd/releases/download/v3.4.14/etcd-v3.4.14-linux-amd64.tar.gz
2. Unpack it and put it on the PATH
tar -zxf etcd-v3.4.14-linux-amd64.tar.gz
mv etcd-v3.4.14-linux-amd64/etcdctl /usr/local/bin
chmod +x /usr/local/bin/etcdctl
3. Verify that etcdctl works; output like the following means it is ready
[root@master01 ~]# etcdctl
NAME:
etcdctl - A simple command line client for etcd3.
USAGE:
etcdctl [flags]
VERSION:
3.4.14
API VERSION:
3.4
COMMANDS:
alarm disarm Disarms all alarms
alarm list Lists all alarms
auth disable Disables authentication
auth enable Enables authentication
check datascale Check the memory usage of holding data for different workloads on a given server endpoint.
check perf Check the performance of the etcd cluster
compaction Compacts the event history in etcd
defrag Defragments the storage of the etcd members with given endpoints
del Removes the specified key or range of keys [key, range_end)
elect Observes and participates in leader election
endpoint hashkv Prints the KV history hash for each endpoint in --endpoints
endpoint health Checks the healthiness of endpoints specified in `--endpoints` flag
endpoint status Prints out the status of endpoints specified in `--endpoints` flag
get Gets the key or a range of keys
help Help about any command
lease grant Creates leases
lease keep-alive Keeps leases alive (renew)
lease list List all active leases
lease revoke Revokes leases
lease timetolive Get lease information
lock Acquires a named lock
make-mirror Makes a mirror at the destination etcd cluster
member add Adds a member into the cluster
member list Lists all members in the cluster
member promote Promotes a non-voting member in the cluster
member remove Removes a member from the cluster
member update Updates a member in the cluster
migrate Migrates keys in a v2 store to a mvcc store
move-leader Transfers leadership to another etcd cluster member.
put Puts the given key into the store
role add Adds a new role
role delete Deletes a role
role get Gets detailed information of a role
role grant-permission Grants a key to a role
role list Lists all roles
role revoke-permission Revokes a key from a role
snapshot restore Restores an etcd member snapshot to an etcd directory
snapshot save Stores an etcd node backend snapshot to a given file
snapshot status Gets backend snapshot status of a given file
txn Txn processes all the requests in one transaction
user add Adds a new user
user delete Deletes a user
user get Gets detailed information of a user
user grant-role Grants a role to a user
user list Lists all users
user passwd Changes password of user
user revoke-role Revokes a role from a user
version Prints the version of etcdctl
watch Watches events stream on keys or prefixes
OPTIONS:
--cacert="" verify certificates of TLS-enabled secure servers using this CA bundle
--cert="" identify secure client using this TLS certificate file
--command-timeout=5s timeout for short running command (excluding dial timeout)
--debug[=false] enable client-side debug logging
--dial-timeout=2s dial timeout for client connections
-d, --discovery-srv="" domain name to query for SRV records describing cluster endpoints
--discovery-srv-name="" service name to query when using DNS discovery
--endpoints=[127.0.0.1:2379] gRPC endpoints
-h, --help[=false] help for etcdctl
--hex[=false] print byte strings as hex encoded strings
--insecure-discovery[=true] accept insecure SRV records describing cluster endpoints
--insecure-skip-tls-verify[=false] skip server certificate verification (CAUTION: this option should be enabled only for testing purposes)
--insecure-transport[=true] disable transport security for client connections
--keepalive-time=2s keepalive time for client connections
--keepalive-timeout=6s keepalive timeout for client connections
--key="" identify secure client using this TLS key file
--password="" password for authentication (if this option is used, --user option shouldn't include password)
--user="" username[:password] for authentication (prompt if password is not supplied)
-w, --write-out="simple" set the output format (fields, json, protobuf, simple, table)
4. Check the health of the etcd HA cluster
[root@master01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.11.20:2379,192.168.11.21:2379,192.168.11.22:2379 endpoint health
+--------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+--------------------+--------+-------------+-------+
| 192.168.11.21:2379 | true | 56.095801ms | |
| 192.168.11.22:2379 | true | 51.466549ms | |
| 192.168.11.20:2379 | true | 60.962885ms | |
+--------------------+--------+-------------+-------+
5. List the members of the etcd HA cluster
[root@master01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.11.20:2379,192.168.11.21:2379,192.168.11.22:2379 member list
+------------------+---------+----------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+----------+----------------------------+----------------------------+------------+
| 4ebbb444774b731c | started | master01 | https://192.168.11.20:2380 | https://192.168.11.20:2379 | false |
| 6eee768fef3c0610 | started | master03 | https://192.168.11.22:2380 | https://192.168.11.22:2379 | false |
| 73cee9d525c91b49 | started | master02 | https://192.168.11.21:2380 | https://192.168.11.21:2379 | false |
+------------------+---------+----------+----------------------------+----------------------------+------------+
6. Check the leader of the etcd HA cluster
[root@master01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --write-out=table --endpoints=192.168.11.20:2379,192.168.11.21:2379,192.168.11.22:2379 endpoint status
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.11.20:2379 | 4ebbb444774b731c | 3.5.6 | 3.6 MB | false | false | 6 | 19256 | 19256 | |
| 192.168.11.21:2379 | 73cee9d525c91b49 | 3.5.6 | 3.4 MB | true | false | 6 | 19256 | 19256 | |
| 192.168.11.22:2379 | 6eee768fef3c0610 | 3.5.6 | 3.4 MB | false | false | 6 | 19256 | 19256 | |
+--------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
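With the same certificates, etcdctl can also take a backup of the cluster (a sketch using the snapshot save subcommand listed in the help above; the target path is arbitrary):
[root@master01 ~]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/peer.crt --key=/etc/kubernetes/pki/etcd/peer.key --endpoints=192.168.11.20:2379 snapshot save /opt/etcd-backup-$(date +%F).db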
9. Troubleshooting
# 1. Joining a node to the cluster may fail with the following error:
[ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
PS: make sure Docker is installed and running first, then retry the join!
# 1. Cause of the error:
The br_netfilter module is not loaded, so /proc/sys/net/bridge/bridge-nf-call-iptables is not set to 1 and the preflight check fails.
# 2. Fix:
1> Run the following three commands, then run the join command again:
modprobe br_netfilter
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
echo 1 > /proc/sys/net/ipv4/ip_forward
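To make this survive a reboot (a sketch; the sysctl keys themselves are already persisted in /etc/sysctl.d/k8s.conf from the earlier tuning step):
echo br_netfilter > /etc/modules-load.d/k8s.conf
sysctl --system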
# 2. kubectl get cs shows scheduler and controller-manager as Unhealthy
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
scheduler Unhealthy Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager Unhealthy Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0 Healthy {"health":"true"}
1. Fix (comment out the --port=0 line in both manifests, then restart kubelet)
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
#- --port=0
[root@k8s-m-01 ~]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
#- --port=0
[root@k8s-m-01 ~]# systemctl restart kubelet.service
2. Check the status again
[root@k8s-m-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health":"true"}
III. Install the cluster dashboard (Dashboard)
Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to a Kubernetes cluster, troubleshoot them, and manage the cluster itself and its resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployment, Job, DaemonSet, and so on).
1. Install the dashboard (all machines must be running, otherwise you cannot connect to the cluster)
You can scale Deployments, trigger rolling updates, restart Pods, or create new applications with the wizard.
2. Download the recommended.yaml file
[root@master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
3. Edit the recommended.yaml file
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # add
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000     # add
  selector:
    k8s-app: kubernetes-dashboard
---
4. Create a certificate
mkdir dashboard-certs
cd dashboard-certs/
# create the namespace
kubectl create namespace kubernetes-dashboard
# create the key file
openssl genrsa -out dashboard.key 2048
# certificate signing request
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# self-signed certificate
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
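The generated key and certificate are normally loaded into the cluster as a secret before applying recommended.yaml (a sketch; kubernetes-dashboard-certs is the secret name the dashboard deployment mounts):
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard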
5. Install the dashboard
(If you see: Error from server (AlreadyExists): error when creating "./recommended.yaml": namespaces "kubernetes-dashboard" already exists, ignore it; it does no harm.)
[root@master01 ~]# kubectl apply -f recommended.yaml
6. Check the result
[root@master01 ~]# kubectl get po -A |grep kubernetes
kubernetes-dashboard dashboard-metrics-scraper-577dc49767-bhvtd 1/1 Running 0 148m
kubernetes-dashboard kubernetes-dashboard-78f9d9744f-2lvq6 1/1 Running 0 127m
7. Create a dashboard admin ServiceAccount
vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
8. Apply the dashboard-admin.yaml file
[root@master01 ~]# kubectl apply -f dashboard-admin.yaml
9. Grant the user permissions
[root@master01 ~]# vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
[root@master01 ~]# kubectl apply -f dashboard-admin-bind-cluster-role.yaml
10. Get and copy the user token
[root@master01 ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
IV. One-command install of the cluster dashboard (Dashboard)
# 1. Download and apply the manifest
Option 1: download from GitHub
[root@k8s-m-01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.4/aio/deploy/recommended.yaml
Option 2: download from your own server and apply
[root@k8s-m-01 ~]# wget http://<your-server>:8080/mm/recommended.yaml
[root@k8s-m-01 ~]# kubectl apply -f recommended.yaml
Option 3: apply directly in one step
[root@k8s-m-01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml
# 2. Check the service port
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.109.68.74 <none> 8000/TCP 30s
kubernetes-dashboard ClusterIP 10.105.125.10 <none> 443/TCP 34s
# 3. Expose a port for access
[root@k8s-m-01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
type: ClusterIP => type: NodePort    # change it to NodePort
# 4. Check the port again
[root@k8s-m-01 ~]# kubectl get svc -n kubernetes-dashboard
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.96.44.119 <none> 8000/TCP 12m
kubernetes-dashboard NodePort 10.96.42.127 <none> 443:40927/TCP 12m
# 5. Create the token (ServiceAccount and binding) manifest
[root@k8s-m-01 ~]# vim token.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
# 6. Apply it to the cluster
[root@k8s-m-01 ~]# kubectl apply -f token.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
# 7. Get the token
[root@k8s-m-01 ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token: | awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1NeTJxSDZmaFc1a00zWVRXTHdQSlZlQnNjWUdQMW1zMjg5OTBZQ1JxNVEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ
# 8. Verify the cluster works
[root@k8s-m-01 kubernetes]# kubectl run test01 -it --rm --image=busybox:1.28.3
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Address 1: 10.96.0.2 kube-dns.kube-system.svc.cluster.local
Name: kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
/
# 9. Access the dashboard with the token
https://192.168.15.111:40927    # the NodePort found in step 4 above
1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWpxMm56Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyN2Q4MjIzYi1jYmY1LTQ5ZTUtYjAxMS1hZTAzMzM2MzVhYzQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.Q4gC_Kr_Ltl_zG0xkhSri7FQrXxdA5Zjb4ELd7-bVbc_9kAe292w0VM_fVJky5FtldsY0XOp6zbiDVCPkmJi9NXT-P09WvPc9g-ISbbQB_QRIWrEWF544TmRSTZJW5rvafhbfONtqZ_3vWtMkCiDsf7EAwDWLLqA5T46bAn-fncehiV0pf0x_X16t72Qqa-aizHBrVcMsXQU0wnYC7jt373pnhnFHYdcJXx_LgHaC1LgCzx5BfkuphiYOaj_dVB6tAlRkQo3QkFP9GIBW3LcVfhOQBmMQl8KeHvBW4QC67PQRv55IUaUDJ_lRC2QKbeJzaUto-ER4YxFwr4tncBwZQ