k8s 1.18 Cluster Installation
Downloads
1. The YAML files used below can be downloaded here:
calico-v3.5.3/dashboard/mertric/traefik.yaml (CSDN download)
2. The images needed to initialize the k8s cluster (used below) are on Baidu Netdisk:
Link: https://pan.baidu.com/s/1k1heJy8lLnDk2JEFyRyJdA
Extraction code: udkj
Or use this one:
Link: https://pan.baidu.com/s/1Enhkn-BxEzfrHoybydj0Zg?pwd=rqoo
Extraction code: rqoo
I. Prepare the lab environment
1. Prepare two CentOS 7 virtual machines for the k8s cluster. Their configurations are as follows:
master1 (192.168.253.138):
OS: CentOS 7.6 or later. Specs: 2 CPU cores, 3.5 GB RAM (too little and it gets sluggish), 20 GB disk
Network: I used NAT mode (the original author's bridged mode also works)
node1 (192.168.253.148):
OS: CentOS 7.6 or later. Specs: 2 CPU cores, 3 GB RAM (too little and it gets sluggish), 20 GB disk
Network: I used NAT mode (the original author's bridged mode also works)
II. Initialize the lab environment
1. Configure static IPs
Give each VM (or physical machine) a static IP address so the address does not change across reboots. (In my case the IP still kept reverting to the DHCP-assigned address after each boot and I had to run service network restart every time; the NetworkManager fix below solves this.)
1.1 Configure the network on the master1 node
Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so it reads as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
#BOOTPROTO=dhcp
BOOTPROTO=static # changed from dhcp to static so the IP is not reassigned by DHCP on every reboot
IPADDR=192.168.253.138 # static IP
GATEWAY=192.168.253.2 # default gateway
NETMASK=255.255.255.0 # subnet mask
PREFIX=24 # alternative to NETMASK; 24 means three octets of 255 (255.255.255.0)
ONBOOT=yes
UUID=233e2d0a-bab5-4de3-9bc7-a074bc8a-138 # change this after cloning the VM so it stays unique
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=233e2d0a-bab5-4de3-9bc7-a074bc8a3bf8
DEVICE=ens33
#ONBOOT=no
IPV6_PRIVACY=no
// Fix for the static IP not surviving a reboot:
service NetworkManager stop # stop the NetworkManager service
chkconfig NetworkManager off # keep NetworkManager from starting at boot
service network restart # restart the network service
After editing the config file, restart the network service for the changes to take effect:
service network restart
1.2 Configure the network on the node1 node
Edit /etc/sysconfig/network-scripts/ifcfg-ens33 so it reads as follows:
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
#BOOTPROTO=dhcp
BOOTPROTO=static
IPADDR=192.168.253.148 # static IP
GATEWAY=192.168.253.2 # default gateway
NETMASK=255.255.255.0 # subnet mask
ONBOOT=yes
UUID=233e2d0a-bab5-4de3-9bc7-a074bc8a-148 # change this after cloning the VM so it stays unique
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
#UUID=233e2d0a-bab5-4de3-9bc7-a074bc8a3bf8
DEVICE=ens33
#ONBOOT=no
IPV6_PRIVACY=no
After editing the config file, restart the network service for the changes to take effect:
service network restart
2. Switch the yum repos (run on every node)
(1) Back up the existing yum repo file
mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
(2) Download the Aliyun repo file
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
(3) Rebuild the yum cache
yum makecache fast
(4) Add the yum repo needed to install k8s
cat <<EOF > /etc/yum.repos.d/kubernetestest.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF
(5) Clean the yum cache
yum clean all
(6) Rebuild the yum cache
yum makecache fast
(7) Update packages
yum -y update # downloads roughly 558 MB
(8) Install prerequisite packages
yum -y install yum-utils device-mapper-persistent-data lvm2
(9) Add the Docker CE repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum clean all
yum makecache fast
3. Install the basic packages (about a 47 MB download; run on every node)
yum -y install wget net-tools nfs-utils lrzsz gcc gcc-c++ make cmake libxml2-devel openssl-devel curl curl-devel unzip sudo ntp libaio-devel vim ncurses-devel autoconf automake zlib-devel python-devel epel-release openssh-server socat ipvsadm conntrack ntpdate
4. Stop and disable firewalld (run on every node). CentOS 7 uses firewalld by default; stop the firewalld service and disable it:
systemctl stop firewalld && systemctl disable firewalld
5. Install iptables (run on every node). If you are not comfortable with firewalld, you can install iptables instead; this step is optional, depending on your needs.
5.1 Install iptables
yum install iptables-services -y
5.2 Stop and disable iptables (I did not disable it myself)
service iptables stop && systemctl disable iptables
6. Time synchronization (run on every node)
6.1 Sync the clock once
ntpdate cn.pool.ntp.org
6.2 Add a cron job that syncs the clock daily at 1 AM (I skipped this myself)
1) Run crontab -e to edit the cron table (press i to enter insert mode)
Sync every day at 1 AM:
0 1 * * * /usr/sbin/ntpdate -u pool.ntp.org
2) Restart the crond service:
service crond restart
7. Disable SELinux (run on every node)
Disable SELinux permanently so it stays off across reboots.
Edit /etc/sysconfig/selinux and /etc/selinux/config, changing
SELINUX=enforcing to SELINUX=disabled, or do it with sed:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
After changing the files, reboot the VM (a forced reboot is fine):
reboot -f
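If you want SELinux off for the current session without waiting for the reboot, setenforce also works (the file edits above still make the change permanent):
setenforce 0 # switch to permissive mode immediately
getenforce # prints Permissive now, Disabled after the reboot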
8. Turn off swap (run on every node)
swapoff -a
# To disable swap permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
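A quick check that swap is really off:
free -m # the Swap: row should show 0 total
grep swap /etc/fstab # the swap line should now start with #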
9. Adjust kernel parameters (run on every node)
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
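To confirm the two bridge parameters took effect, query them back. If sysctl complains that the keys do not exist, the br_netfilter module is not loaded yet; modprobe it first:
modprobe br_netfilter # only needed if the keys below are reported missing
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables # both should print = 1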
10. Set the hostnames
On 192.168.253.138:
hostnamectl set-hostname master1
On 192.168.253.148:
hostnamectl set-hostname node1
11. Configure the hosts file (run on every node)
Add the following lines to /etc/hosts:
192.168.253.138 master1
192.168.253.148 node1
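A quick sanity check that the names resolve on both nodes:
ping -c 1 master1 # should answer from 192.168.253.138
ping -c 1 node1 # should answer from 192.168.253.148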
12. Set up passwordless SSH from master1 to node1 (do this step after Docker is installed and the VM has been cloned)
On master1:
ssh-keygen -t rsa
# just press Enter at every prompt
cd /root && ssh-copy-id -i .ssh/id_rsa.pub root@node1
# type yes when prompted, then enter node1's root password
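To confirm passwordless login works:
ssh root@node1 hostname # should print node1 without asking for a password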
III. Install a single-master Kubernetes 1.18.2 cluster
1. Install Docker 19.03 (run on every node)
1.1 List the available Docker versions
yum list docker-ce --showduplicates | sort -r
1.2 Install version 19.03.7 (about a 91 MB download)
yum install -y docker-ce-19.03.7-3.el7
systemctl enable docker && systemctl start docker
# Check Docker's status; active (running) means Docker is working normally
systemctl status docker
1.3 Edit the Docker daemon config
cat > /etc/docker/daemon.json <<EOF
{"exec-opts":["native.cgroupdriver=systemd"],
"log-driver":"json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver":"overlay2",
"storage-opts": [
"overlay2.override_kernel_check=true"
]
}
EOF
1.4 Restart Docker so the new config takes effect
systemctl daemon-reload && systemctl restart docker
(If Docker fails to start, check the system log: tail -200f /var/log/messages)
1.5 Send bridged packets through iptables and set the related kernel parameters, persisted permanently
// write 1 into /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 >/proc/sys/net/bridge/bridge-nf-call-ip6tables
// append the settings to /etc/sysctl.conf
echo """
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
""" >> /etc/sysctl.conf
sysctl -p
1.6 Enable IPVS. Without IPVS, kube-proxy falls back to iptables, which is less efficient, so enabling the IPVS kernel modules is recommended (I did not get this step to work myself).
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in \${ipvs_modules}; do
  /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
  if [ \$? -eq 0 ]; then
    /sbin/modprobe \${kernel_module}
  fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
2. Install Kubernetes 1.18.2
2.1 Install kubeadm and kubelet on master1 and node1
yum install kubeadm-1.18.2 kubelet-1.18.2 -y # about a 64 MB download
systemctl enable kubelet
2.2 Upload the images to the master1 and node1 nodes, then load them manually with docker load -i as shown below. The images are on Baidu Netdisk (link at the top of the article); I pulled them from the official registry, so you can use them with confidence.
docker load -i 1-18-kube-apiserver.tar.gz
docker load -i 1-18-kube-scheduler.tar.gz
docker load -i 1-18-kube-controller-manager.tar.gz
docker load -i 1-18-pause.tar.gz
docker load -i 1-18-cordns.tar.gz
docker load -i 1-18-etcd.tar.gz
docker load -i 1-18-kube-proxy.tar.gz
Notes:
pause is version 3.2; the image is k8s.gcr.io/pause:3.2
etcd is version 3.4.3; the image is k8s.gcr.io/etcd:3.4.3-0
CoreDNS is version 1.6.7; the image is k8s.gcr.io/coredns:1.6.7
apiserver, scheduler, controller-manager, and kube-proxy are version 1.18.2; the images are
k8s.gcr.io/kube-apiserver:v1.18.2
k8s.gcr.io/kube-controller-manager:v1.18.2
k8s.gcr.io/kube-scheduler:v1.18.2
k8s.gcr.io/kube-proxy:v1.18.2
Why load the images manually?
1) Many readers work on internal networks or cannot reach the Docker Hub registry, so the images have to be copied to each machine and loaded by hand. What if you have many machines? Copying images to every one of them is indeed time-consuming; in that case, push the images to an internal private registry instead and let kubeadm pull them at init time via "--image-repository=<private registry address>" (see the sketch below), so nothing needs to be copied machine by machine.
2) Keeping the images on Baidu Netdisk means they remain available even if upstream stops maintaining them; if you have a private registry, push these images there as well.
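For illustration, an init that pulls from a registry instead of loading tarballs could look like the following sketch; registry.aliyuncs.com/google_containers is a commonly used public mirror standing in here for your private registry address:
kubeadm init --kubernetes-version=v1.18.2 \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.253.138 \
  --image-repository=registry.aliyuncs.com/google_containers # stand-in; point this at your own registry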
2.3 Initialize the k8s cluster on the master1 node (run on master1)
If you uploaded the images to each node and loaded them with docker load -i as in section 2.2, initialize the cluster with the command below. Please follow the same manual image-loading approach, so the rest of the walkthrough works as written.
Initialize the cluster:
kubeadm init --kubernetes-version=v1.18.2 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.253.138
When initialization succeeds, the command's output ends like this:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.253.138:6443 --token si1c9n.3c5os94xcuzq6wl3 \
--discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596
Note: remember the kubeadm join ... command; you will run it on node1 to join that node to the cluster. The token and hash are different on every run, so keep your own output handy for the join step below.
(Note: if you ever run kubeadm reset, remember to also run rm -rf $HOME/.kube)
Note: if you forget the master token, regenerate the full join command with:
kubeadm token create --print-join-command
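The regenerated command has the same shape as the original join output (token and hash values below are placeholders):
kubeadm join 192.168.253.138:6443 --token <new-token> --discovery-token-ca-cert-hash sha256:<ca-cert-hash>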
2.5 Run the following on the master1 node; without it you will not have permission to operate on k8s resources:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
On the master1 node run
kubectl get nodes
The output shows master1 as NotReady:
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   8m11s   v1.18.2
kubectl get pods -n kube-system
The CoreDNS pods are also stuck in Pending:
coredns-7ff77c879f-j48h6   0/1   Pending   0   3m16s
coredns-7ff77c879f-lrb77   0/1   Pending   0   3m16s
The node is NotReady and CoreDNS is Pending because no network plugin has been installed yet; we need calico or flannel. Next we install the calico network plugin on the master1 node:
calico needs the images quay.io/calico/cni:v3.5.3 and quay.io/calico/node:v3.5.3, available from the Baidu Netdisk link at the top of the article.
Upload the two image tarballs to every node and load them with docker load -i:
docker load -i cni.tar.gz
docker load -i calico-node.tar.gz
On the master1 node run:
kubectl apply -f calico.yaml
YAML download: calico-v3.5.3/dashboard/mertric/traefik.yaml (CSDN download)
(Note: to reinstall, first remove it with kubectl delete -f calico.yaml)
On the master1 node run
kubectl get nodes
The STATUS is now Ready:
NAME STATUS ROLES AGE VERSION
master1 Ready master 98m v1.18.2
kubectl get pods -n kube-system
CoreDNS is now Running too, which means calico is working on the master1 node:
NAME READY STATUS RESTARTS AGE
calico-node-6rvqm 1/1 Running 0 17m
coredns-7ff77c879f-j48h6 1/1 Running 0 97m
coredns-7ff77c879f-lrb77 1/1 Running 0 97m
etcd-master1 1/1 Running 0 97m
kube-apiserver-master1 1/1 Running 0 97m
kube-controller-manager-master1 1/1 Running 0 97m
kube-proxy-njft6 1/1 Running 0 97m
kube-scheduler-master1 1/1 Running 0 97m
Note: to reinstall calico:
Method 1:
kubectl delete -f calico.yaml
then run kubectl apply -f calico.yaml again
Method 2:
rm -rf /etc/cni/net.d/*
rm -rf /var/lib/cni/calico
then run kubectl apply -f calico.yaml again
Also: if the master node is rebooted without stopping the Kubernetes services first (simulating a sudden power loss) and kubelet ends up in an error state, run the following:
kubeadm reset
docker load -i 1-18-kube-apiserver.tar.gz
systemctl restart docker
systemctl restart kubelet
If you then hit the error: couldn't get current server API group list:
set an environment variable: export KUBECONFIG=/etc/kubernetes/admin.conf
Or, as a longer-term fix:
mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config
2.6 Join node1 to the k8s cluster (run on node1):
kubeadm join 192.168.253.138:6443 --token si1c9n.3c5os94xcuzq6wl3 \
--discovery-token-ca-cert-hash sha256:9d3a35eab0f6badba61ebb833d420902e4f9e0168ee1c1374121668ab382a596
Note: this kubeadm join command is the one generated when the cluster was initialized in section 2.3.
Also: if the node is rebooted without stopping the Kubernetes services first (simulating a sudden power loss) and kubelet ends up in an error state, run
kubeadm reset and then re-run the join command from this section.
2.8 Check the cluster's node status from the master1 node
kubectl get nodes
The output looks like this:
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   3m36s   v1.18.2
node1     Ready    <none>   3m36s   v1.18.2
node1 has joined the cluster; the single-master k8s cluster is now up.
Note: if the worker node node1 shows NotReady:
1) On the master node, run kubectl describe node <node name or IP> to see the error,
then delete the node from the master: kubectl delete node node1
2) On the worker node, run kubeadm reset
and then re-run the join on the worker node; a consolidated sequence follows below.
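Putting those recovery steps together (the join line is a placeholder; paste your own command from section 2.3):
# on master1: inspect the error, then remove the stuck node
kubectl describe node node1
kubectl delete node node1
# on node1: reset, then rejoin
kubeadm reset
kubeadm join 192.168.253.138:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>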
2.9 Install traefik
Docs: https://docs.traefik.io/
Upload the traefik image tarball to every node and load it with docker load -i; the image is in the Baidu Netdisk link at the top of the article.
docker load -i traefik_1_7_9.tar.gz
The traefik image used is k8s.gcr.io/traefik:1.7.9
1) Generate the traefik certificate, on master1:
mkdir ~/ikube/tls/ -p
echo """
[req]
distinguished_name = req_distinguished_name
prompt = yes
[ req_distinguished_name ]
countryName = Country Name (2 letter code)
countryName_value = CN
stateOrProvinceName = State orProvince Name (full name)
stateOrProvinceName_value = Beijing
localityName = Locality Name (eg, city)
localityName_value =Haidian
organizationName =Organization Name (eg, company)
organizationName_value = Channelsoft
organizationalUnitName = OrganizationalUnit Name (eg, p)
organizationalUnitName_value = R & D Department
commonName = Common Name (eg, your name or your server\'s hostname)
commonName_value =*.multi.io
emailAddress = Email Address
emailAddress_value =lentil1016@gmail.com
""" > ~/ikube/tls/openssl.cnf
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
2) Apply the YAML to create traefik
kubectl apply -f traefik.yaml
YAML download: calico-v3.5.3/dashboard/mertric/traefik.yaml (CSDN download)
3) Check that traefik deployed successfully:
kubectl get pods -n kube-system
traefik-ingress-controller-csbp8   1/1   Running   0   5s
traefik-ingress-controller-hqkwf   1/1   Running   0   5s
To reinstall traefik, run:
kubectl delete -f traefik.yaml
openssl req -newkey rsa:4096 -nodes -config ~/ikube/tls/openssl.cnf -days 3650 -x509 -out ~/ikube/tls/tls.crt -keyout ~/ikube/tls/tls.key
kubectl create -n kube-system secret tls ssl --cert ~/ikube/tls/tls.crt --key ~/ikube/tls/tls.key
kubectl apply -f traefik.yaml
3. Install kubernetes-dashboard v2 (the Kubernetes web UI)
Upload the kubernetes-dashboard image tarballs to every node and load them with docker load -i; the images are in the Baidu Netdisk link at the top of the article.
docker load -i dashboard_2_0_0.tar.gz
docker load -i metrics-scrapter-1-0-1.tar.gz
The loaded images are kubernetesui/dashboard:v2.0.0-beta8 and kubernetesui/metrics-scraper:v1.0.1
On the master1 node:
kubectl apply -f kubernetes-dashboard.yaml
Check whether the dashboard installed successfully:
kubectl get pods -n kubernetes-dashboard
Output like the following means the dashboard installed successfully:
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-694557449d-8xmtf 1/1 Running 0 60s
kubernetes-dashboard-5f98bdb684-ph9wg 1/1 Running 2 60s
Look at the dashboard's front-end Service:
kubectl get svc -n kubernetes-dashboard
Change the Service type to NodePort:
kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard
Change type: ClusterIP to type: NodePort, then save and quit.
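If you prefer a non-interactive change, a kubectl patch achieves the same result as the edit above:
kubectl patch svc kubernetes-dashboard -n kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' # same effect as editing the type by hand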
kubectl get svc -n kubernetes-dashboard
The output shows:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
dashboard-metrics-scraper ClusterIP 10.100.23.9 <none> 8000/TCP 3m59s
kubernetes-dashboard NodePort 10.105.253.155 <none> 443:31175/TCP 4m
The Service type is now NodePort, so the dashboard is reachable at the master1 node's IP on port 31175. In my environment the address is
https://192.168.253.138:31175/
and the dashboard page appears.
To reinstall the dashboard:
kubectl delete -f kubernetes-dashboard.yaml
then run kubectl apply -f kubernetes-dashboard.yaml again
3.1 Log in to the dashboard with the default token specified in the YAML
1) List the secrets in the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
Output:
NAME TYPE DATA AGE
default-token-vxd7t kubernetes.io/service-account-token 3 5m27s
kubernetes-dashboard-certs Opaque 0 5m27s
kubernetes-dashboard-csrf Opaque 1 5m27s
kubernetes-dashboard-key-holder Opaque 2 5m27s
kubernetes-dashboard-token-ngcmg kubernetes.io/service-account-token 3 5m27s
2) Find the token-bearing secret, kubernetes-dashboard-token-ngcmg
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
Output:
...
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Note the value after token: and paste it into the token field on the browser login page to log in:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
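Instead of copying the token out of the describe output by hand, you can extract it directly; the secret name below is the one from the listing above, and yours may differ:
kubectl get secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard -o jsonpath='{.data.token}' | base64 --decode # prints the token ready to paste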
Click Sign in. By default you can only see the contents of the default namespace.
3.2 Create an admin token with access to every namespace
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:kubernetes-dashboard
1) List the secrets in the kubernetes-dashboard namespace
kubectl get secret -n kubernetes-dashboard
Output:
NAME TYPE DATA AGE
default-token-vxd7t kubernetes.io/service-account-token 3 5m27s
kubernetes-dashboard-certs Opaque 0 5m27s
kubernetes-dashboard-csrf Opaque 1 5m27s
kubernetes-dashboard-key-holder Opaque 2 5m27s
kubernetes-dashboard-token-ngcmg kubernetes.io/service-account-token 3 5m27s
2) Find the token-bearing secret, kubernetes-dashboard-token-ngcmg
kubectl describe secret kubernetes-dashboard-token-ngcmg -n kubernetes-dashboard
Output:
...
...
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Note the value after token: and paste it into the token field on the browser login page to log in:
eyJhbGciOiJSUzI1NiIsImtpZCI6IjZUTVVGMDN4enFTREpqV0s3cDRWa254cTRPc2xPRTZ3bk8wcFJBSy1JSzgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi1uZ2NtZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYwMDFhNTM0LWE2ZWQtNGQ5MC1iMzdjLWMxMWU5Njk2MDE0MCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.WQFE0ygYdKkUjaQjFFU-BeWqys07J98N24R_azv6f-o9AB8Zy1bFWZcNrOlo6WYQuh-xoR8tc5ZDuLQlnZMBSwl2jo9E9FLZuEt7klTfXf4TkrQGLCxzDMD5c2nXbdDdLDtRbSwQMcQwePwp5WTAfuLyqJPFs22Xi2awpLRzbHn3ei_czNuamWUuoGHe6kP_rTnu6OUpVf1txi9C1Tg_3fM2ibNy-NWXLvrxilG3x3SbW1A3G6Y2Vbt1NxqVNtHRRQsYCvTnp3NZQqotV0-TxnvRJ3SLo_X6oxdUVnqt3DZgebyIbmg3wvgAzGmuSLlqMJ-mKQ7cNYMFR2Z8vnhhtA
Click Sign in; this time you can see and operate on resources in every namespace.
4. Install the metrics components
Upload metrics-server-amd64_0_3_1.tar.gz and addon.tar.gz to every node and load them with docker load -i; the images are in the Baidu Netdisk link at the top of the article.
docker load -i metrics-server-amd64_0_3_1.tar.gz
docker load -i addon.tar.gz
metrics-server is version 0.3.1; the image is k8s.gcr.io/metrics-server-amd64:v0.3.1
addon-resizer is version 1.8.4; the image is k8s.gcr.io/addon-resizer:1.8.4
On the k8s master1 node:
kubectl apply -f metrics.yaml
kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATE
calico-node-h66ll 1/1 Running 0 51m 192.168.0.56 node1 <none>
calico-node-r4k6w 1/1 Running 0 58m 192.168.0.6 master1 <none>
coredns-66bff467f8-2cj5k 1/1 Running 0 70m 10.244.0.3 master1 <none>
coredns-66bff467f8-nl9zt 1/1 Running 0 70m 10.244.0.2 master1 <none>
etcd-master1 1/1 Running 0 70m 192.168.0.6 master1 <none>
kube-apiserver-master1 1/1 Running 0 70m 192.168.0.6 master1 <none>
kube-controller-manager-master1 1/1 Running 0 70m 192.168.0.6 master1 <none>
kube-proxy-qts4n 1/1 Running 0 70m 192.168.0.6 master1 <none>
kube-proxy-x647c 1/1 Running 0 51m 192.168.0.56 node1 <none>
kube-scheduler-master1 1/1 Running 0 70m 192.168.0.6 master1 <none>
metrics-server-8459f8db8c-gqsks 2/2 Running 0 16s 10.244.1.6 node1 <none>
traefik-ingress-controller-xhcfb 1/1 Running 0 39m 192.168.0.6 master1 <none>
traefik-ingress-controller-zkdpt 1/1 Running 0 39m 192.168.0.56 node1 <none>
If metrics-server-8459f8db8c-gqsks shows Running as above, the metrics-server component deployed successfully, and on master1 you can now use kubectl top pods -n kube-system or kubectl top nodes (kubectl top shows CPU and memory usage).
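For example (metrics-server needs a minute or so to collect its first samples before these return data):
kubectl top nodes # per-node CPU and memory usage
kubectl top pods -n kube-system # per-pod usage in the kube-system namespace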