Contents

1. Environment Overview
1.1 FAQ
1.2 Lab Goals
1.3 Configuration Notes
1.3.1 Disable SELinux
1.3.2 Disable the firewall
1.3.3 Time synchronization
1.3.4 Install common tools
1.3.5 Install the EPEL and Remi repos (optional for this chapter)
1.4 Software and versions used
1.5 Kubernetes architecture overview
2. Installing kubeadm (master and nodes)
2.1 Pre-install preparation (all nodes)
2.1.1 Disable swap
2.1.2 Install docker-ce and add a registry mirror
2.1.3 Make sure the network modules load at boot
2.2 Installing kubeadm (all nodes)
3. Creating the master node (master)
3.1 Creating the master node with kubeadm init
3.1.1 Pre-init configuration
3.1.2 Running init
3.1.3 Overview of the kubeadm init workflow
3.2 Installing the container network plugin
3.2.1 Checking node status
3.3 Installing the Dashboard plugin
3.3.1 Installation
3.3.2 Configuration
4. Creating node(s) and joining them to the master (node)
4.1 How kubeadm join works
4.2 Installing kubeadm
4.3 Joining the cluster
5. Installing the Rook container storage plugin (optional, master side)
5.1 About Rook
5.2 Why I chose Rook
5.3 Installing Rook-Ceph
5.3.1 Installing common.yaml and operator.yaml
5.3.2 Checking status
5.3.3 Creating the cluster
5.3.4 Checking cluster status with ceph-toolbox


The previous five chapters covered Docker; now we move on to Kubernetes (k8s), one of the orchestration tools built around it. When learning k8s I recommend starting with an integrated installer rather than jumping straight into a binary installation, which makes things far too complicated. The tool I recommend here is kubeadm: its high-availability (HA) setup is already GA, and some companies run it in production.

kubeadm pre-configures a lot of things, so it is less flexible than a binary installation, but it saves a great deal of trouble and is more than enough for deployments of modest size.

I am not using kubeadm's HA mode here but a single master, since this is a lab aimed at beginners; for production, HA is recommended.

There is also a nice GitHub project called kubeasz that installs k8s with Ansible and supports quite a lot of options; have a look if you are interested.

Last updated: 2020-11-12

1. Environment Overview

Kubeadm is a tool that provides kubeadm init and kubeadm join as the best-practice "fast path" for creating a Kubernetes cluster.

kubeadm performs the actions necessary to get a minimal viable cluster up and running. By design it only cares about bootstrapping, not about provisioning the machines beforehand. Likewise, installing the various nice-to-have add-ons, such as the Kubernetes Dashboard, monitoring solutions, or cloud-provider-specific add-ons, is out of its scope.

Instead, we expect higher-level, more tailored tooling to be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments makes it easy to create clusters that behave as expected.
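In other words, the whole bootstrap boils down to two commands (a simplified sketch; the real invocations with their tokens and options are shown in chapters 3 and 4):

# on the machine that will become the master
kubeadm init
# on every machine that should join as a worker, using the values printed by init
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>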

 

1.1 FAQ

Q: What is the difference between minikube and kubeadm?

A: Minikube is for local experimentation only; the official docs position it as a learning environment.

minikube is essentially a lab tool: it runs on a single machine, bundles the core k8s components, cannot form a real cluster, and because much of it is hard-wired you cannot install the usual add-ons (network plugins, DNS plugins, ingress and so on). Its purpose is to let you get a feel for k8s. kubeadm, in contrast, builds a real k8s cluster that can be used in production (you set up HA yourself) and is practically indistinguishable from a cluster built from binaries.

Q: Is kubeadm still unsuitable for production?

A: Not anymore.

kubeadm's highly available Kubernetes setup has reached GA, which means the etcd and master components can all run as multi-node clusters.

Meanwhile, Lucas (the kubeadm maintainer) has been opening up kubeadm phases to users, so every deployment step can be customized. These efforts keep making the project more complete, and I am confident about where it is heading.

Of course, if you need to deploy a large-scale production environment, I recommend more elaborate tooling such as kops or SaltStack.

Q: How do I deploy Kubernetes from binaries? If kubeadm is not suited for production, a binary deployment is still rather involved.

A: I do not recommend deploying k8s directly from binaries. Spend the time on kubeadm's highly available deployment instead; your time is better spent there.

 

1.2 Lab Goals

  1. Install Docker and kubeadm on all nodes
  2. Deploy the Kubernetes master node
  3. Deploy a container network plugin (on the master; nodes are configured automatically when they join)
  4. Deploy Kubernetes worker nodes (join them to the master)
  5. Deploy the Dashboard visualization plugin (on the master)
  6. Deploy the container storage plugin Rook with Ceph (on the master)

The overall architecture diagram:

1.3 Configuration Notes

Adjusted to the requirements on the kubeadm website:

  1. One physical machine with a VT-capable CPU, at least 10 GB of RAM and more than 60 GB of free disk space, ideally on a disk separate from the system disk (I use a dedicated SSD)
  2. VMware Workstation installed (I use Workstation 10)
  3. Three VMs running CentOS-8.x-x86_64 (I use CentOS 8.2)

Hostname   IP (NAT)                IP (internal)         Description
vm82       eth0: 192.168.128.82    eth1: 192.168.3.82    Minimal install, 4 GB RAM, 50 GB system disk; acts as the master. 2 GB is the bare minimum, 3.5 GB or more is better.
vm821      eth0: 192.168.128.21    eth1: 192.168.3.21    Minimal install, 3 GB RAM, 50 GB system disk; worker node 1. 2 GB turned out to be sluggish, 2.8 GB or more is better.
vm822      eth0: 192.168.128.22    eth1: 192.168.3.22    Minimal install, 3 GB RAM, 50 GB system disk; worker node 2. 2.8 GB or more is better.

Note: the host has only 16 GB of RAM in total, so I keep vm82 running and start vm821 and vm822 one at a time.

1.3.1 Disable SELinux

#Temporarily:
setenforce 0
#Permanently:
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
sed -n '/SELINUX=/p' /etc/selinux/config
#Reboot for the change to take effect
shutdown -r now

1.3.2 Disable the firewall

#Stop firewalld and disable it at boot
systemctl stop firewalld.service
systemctl disable firewalld.service

1.3.3 Time synchronization

#Either yum or dnf works; dnf is preferred on CentOS 8
#yum install chrony
dnf install chrony

#Configure the time servers; I use servers in the China region

#Back up the configuration
cp /etc/chrony.conf  /etc/chrony.conf.orig
#Comment out the default pool line
sed -i '/^pool/s/^/#/' /etc/chrony.conf
grep '#pool' /etc/chrony.conf
sed -i '/#pool/a\server cn.pool.ntp.org iburst' /etc/chrony.conf
sed -i '/#pool/a\server ntp.ntsc.ac.cn iburst' /etc/chrony.conf
sed -i '/#pool/a\server ntp1.aliyun.com iburst' /etc/chrony.conf
grep -A 3 '#pool' /etc/chrony.conf

#Restart the service

systemctl restart chronyd.service

1.3.4 Install common tools

#CentOS 7 and earlier use yum; on CentOS 8 use dnf
dnf install -y vim  lrzsz wget curl man tree rsync gcc gcc-c++ cmake telnet

1.3.5 Install the EPEL and Remi repos (optional for this chapter)

yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm -y
wget http://rpms.remirepo.net/enterprise/remi-release-8.rpm
rpm --import http://rpms.famillecollet.com/RPM-GPG-KEY-remi
rpm -ih remi-release-8.rpm
rm -f remi-release-8.rpm

1.4 Software and versions used

Software    Version     Install method        Notes
Xshell      6.0         Windows exe           SSH client
Docker-ce   19.03.13    yum, latest stable    required on master and nodes
Kubeadm     1.19.3      yum, latest stable    required on master and nodes
weave       2.7.0       kubectl apply         master side, network plugin
DashBoard   2.0.3       kubectl apply         master side, web UI
Rook        1.4         kubectl apply         master side, Ceph storage plugin

1.5 Kubernetes architecture overview

Note:

There is still no good way to run the kubelet itself inside a container, and I do not recommend deploying Kubernetes that way either. Because of this, kubeadm settles for a compromise: the kubelet runs directly on the host, and the remaining Kubernetes components are deployed as containers.

  • The Kubernetes control plane (the master) consists of three processes that usually run together on one node, which is therefore called the master node: kube-apiserver, kube-controller-manager and kube-scheduler.
  • Worker nodes (often just called nodes) mainly run the kubelet, which communicates with the master.

See the Kubernetes Components documentation if you want to dig deeper; a quick way to observe this split is sketched below.
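A minimal check on the master once the cluster is up (chapter 3); these are just standard systemd and docker commands, nothing kubeadm-specific:

# kubelet runs as a plain systemd service on the host
systemctl status kubelet --no-pager
# the other control-plane components run as containers
docker ps --format '{{.Names}}' | grep kube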

 

2. Installing kubeadm (master and nodes)

According to the installation requirements in the kubeadm documentation:

  • One or more machines running one of:
    • Ubuntu 16.04+
    • Debian 9+
    • CentOS 7
    • Red Hat Enterprise Linux (RHEL) 7
    • Fedora 25+
    • HypriotOS v1.0.1+
    • Container Linux (tested with 1800.6.0)
  • 2 GB or more of RAM per machine (less leaves little room for your applications)
  • 2 or more CPU cores
  • Full network connectivity between all machines in the cluster (a public or a private network is fine)
  • Unique hostname, MAC address and product_uuid for every node (see the docs for details)
  • Certain open ports on the machines (see the docs for details)
  • Swap disabled; you MUST disable swap for the kubelet to work properly

2.1 Pre-install preparation (all nodes)

Note: the hostname command on the three machines returns vm82, vm821 and vm822 respectively, and the names are mapped in /etc/hosts:

echo '192.168.3.82 vm82' >>/etc/hosts
echo '192.168.3.21 vm821' >>/etc/hosts
echo '192.168.3.22 vm822' >>/etc/hosts
tail -3 /etc/hosts

2.1.1 Disable swap

#Swap left enabled will keep the kubelet from starting; turn it off with swapoff -a
swapoff -a && sysctl -w vm.swappiness=0
sed -i '/swap/s/^/#/' /etc/fstab 
grep 'swap' /etc/fstab

2.1.2 Install docker-ce and add a registry mirror

#Enable IP forwarding in the kernel, otherwise port mappings to running containers will not be reachable

grep net.ipv4.ip_forward /etc/sysctl.conf
#Set forwarding to 1
echo 'net.ipv4.ip_forward=1'>>/etc/sysctl.conf
#Apply the configuration
sysctl -p
#Check that it took effect
sysctl net.ipv4.ip_forward

Apart from the kubelet, all k8s components run in containers, so Docker has to be installed first.

Official link: https://docs.docker.com/engine/install/centos/#os-requirements

The guide only mentions CentOS 7, but Docker packages for CentOS 8 do exist.

#Remove old versions, if any

yum remove docker \
     docker-client \
     docker-client-latest \
     docker-common \
     docker-latest \
     docker-latest-logrotate \
     docker-logrotate \
     docker-engine

#The official Docker repo is slow from China, so I install from the Alibaba Cloud docker-ce mirror

dnf install -y dnf-utils device-mapper-persistent-data lvm2 fuse-overlayfs wget
yum-config-manager --add-repo \
https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/Packages/containerd.io-1.3.7-3.1.el8.x86_64.rpm
wget https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/Packages/docker-ce-19.03.13-3.el8.x86_64.rpm
wget https://mirrors.aliyun.com/docker-ce/linux/centos/8/x86_64/stable/Packages/docker-ce-cli-19.03.13-3.el8.x86_64.rpm
# Install
yum install -y containerd.io-1.3.7-3.1.el8.x86_64.rpm \
docker-ce-19.03.13-3.el8.x86_64.rpm \
docker-ce-cli-19.03.13-3.el8.x86_64.rpm

 

PS: the Docker repo added above only covers CentOS 7. Packages for CentOS 8 do exist under a different path, so you could edit the repo file accordingly; I don't bother here and install the rpm packages directly instead.

#Start the service

systemctl start docker
#Check its status
systemctl status docker
#Enable it at boot
systemctl enable docker


#Alibaba Cloud registry mirror

Access to Docker Hub from China is sometimes unreliable, so configure a registry mirror. Many domestic cloud providers offer one (see the docker mirror guides), for example:

  1. DaoCloud
  2. Alauda (灵雀云)
  3. Huawei Cloud (login required; I could not find it anymore)
  4. Alibaba Cloud (login required)

I use the free Alibaba Cloud mirror: open the Alibaba Cloud container registry console and log in with an Alibaba Cloud account (register one if you don't have it); you will find your personal mirror endpoint there.

 

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<- 'EOF'
{
    "registry-mirrors": ["https://xs363iyg.mirror.aliyuncs.com"]
}
EOF
#Reload the daemon configuration
sudo systemctl daemon-reload
#Restart docker
sudo systemctl restart docker

#Back up the file
cp /etc/docker/daemon.json /etc/docker/daemon.json.orig

 

2.1.3 Make sure the network modules load at boot

lsmod | grep overlay
lsmod | grep br_netfilter

Expected output:

[root@vm82 ~]# lsmod | grep overlay
overlay               126976  0
[root@vm82 ~]# lsmod | grep br_netfilter
br_netfilter           24576  0
bridge                192512  1 br_netfilter


#If the commands above print nothing or complain about a missing file, run:
cat > /etc/modules-load.d/docker.conf <<EOF
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter

Make bridged traffic visible to iptables

#Kernel parameters: pass bridged IPv4 traffic to the iptables chains
cat >/etc/sysctl.d/k8s.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
cat /etc/sysctl.d/k8s.conf 
sysctl --system

#Verify; both should return 1
sysctl -n net.bridge.bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-ip6tables

2.2 Installing kubeadm (all nodes)

According to the official kubeadm installation docs, the following packages are needed on every machine:

  • kubeadm: the command to bootstrap the cluster.
  • kubelet: runs on every node in the cluster and starts pods, containers and so on; it communicates with the master.
  • kubectl: the command-line tool used to talk to the cluster.

kubeadm will not install or manage kubelet or kubectl for you, so you have to make sure their versions match the Kubernetes control plane that kubeadm installs; otherwise you risk hard-to-diagnose version-skew problems. A one-minor-version difference between the control plane and the kubelet is tolerated, but the kubelet must never be newer than the API server: a 1.8.0 API server works with a 1.7.0 kubelet, not the other way around. A way to pin matching versions is sketched below.
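If you want to pin the tools to a specific, matching version instead of taking whatever is newest in the repo, something like this works once the yum repo below is configured (a sketch; 1.19.3 is the version used throughout this article):

# list the versions available in the repo
yum list --showduplicates kubeadm
# install a matching set of packages
yum install -y kubeadm-1.19.3 kubelet-1.19.3 kubectl-1.19.3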

Downloads following the official kubeadm install docs are slow from here, so I use a domestic mirror, in my case the Alibaba Cloud Kubernetes repo.

#Write the Kubernetes repo file
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#Disable SELinux; mine is already disabled from earlier
setenforce 0

ps: the mirror does not sync the official GPG metadata, so if the GPG check on the index fails, install with yum install -y --nogpgcheck kubelet kubeadm kubectl instead

#Install kubeadm (all nodes)

#This pulls in kubelet, kubeadm, kubectl and kubernetes-cni; no Docker images are downloaded at this point
yum install -y kubeadm
#The service stops again shortly after starting; that is normal, because /etc/kubernetes/manifests/ contains no config files yet
systemctl enable kubelet && systemctl start kubelet

  • kubeadm: the one-command deployment tool for a k8s cluster; it simplifies installation by deploying the core components and add-ons as Pods
  • kubelet: the node agent running on every node; it is what actually manages the containers on each node. Because it needs direct access to host resources it is installed as a system service rather than inside a Pod
  • kubectl: the Kubernetes command-line tool; it talks to the api-server to perform all operations on the cluster
  • kubernetes-cni: the k8s virtual network layer; it creates a cni0 bridge on the host for pod-to-pod traffic, similar in role to docker0

#Installation done; reboot

shutdown -r now

 

3. Creating the master node (master)

3.1 Creating the master node with kubeadm init

3.1.1 Pre-init configuration

The kernel parameter /proc/sys/net/bridge/bridge-nf-call-iptables must be 1, otherwise the kubeadm preflight checks fail; the network plugin apparently depends on it. We already set it above.
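A quick check on the master before running init (both should print 1):

cat /proc/sys/net/bridge/bridge-nf-call-iptables
sysctl -n net.bridge.bridge-nf-call-iptables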

3.1.2 Running init

Normally a plain kubeadm init is all it takes to bootstrap a cluster, but during initialization it pulls images from k8s.gcr.io, and with the network situation in China, well...

So we work around it: pull the required images manually first, then run kubeadm init; since the images are already available locally, nothing has to be fetched from k8s.gcr.io.

[root@vm82 ~]# kubeadm config images list
W1111 11:38:28.234309    7481 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.19.3
k8s.gcr.io/kube-controller-manager:v1.19.3
k8s.gcr.io/kube-scheduler:v1.19.3
k8s.gcr.io/kube-proxy:v1.19.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

#Pull the images with a script

cat>kubeadm_config_images_list.sh<<EOF
#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by hualinux

KUBE_VERSION=v1.19.3
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.7.0
ETCD_VERSION=3.4.13-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:\$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:\$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:\$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:\$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:\$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:\$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:\$KUBE_VERSION  k8s.gcr.io/kube-proxy:\$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:\$KUBE_VERSION k8s.gcr.io/kube-controller-manager:\$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:\$KUBE_VERSION k8s.gcr.io/kube-apiserver:\$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:\$KUBE_VERSION k8s.gcr.io/kube-scheduler:\$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION k8s.gcr.io/pause:\$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:\$CORE_DNS_VERSION k8s.gcr.io/coredns:\$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:\$ETCD_VERSION k8s.gcr.io/etcd:\$ETCD_VERSION

# untag origin tag, the images won't be delete.
docker rmi kubeimage/kube-proxy-amd64:\$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:\$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:\$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:\$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:\$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:\$ETCD_VERSION

EOF
#Make it executable and run it
chmod +x kubeadm_config_images_list.sh
sh kubeadm_config_images_list.sh

#Afterwards, check that all 7 images are present

#The following command initializes a Kubernetes master node

See kubeadm-init for the available options.

#I specify the pod network range; --pod-network-cidr can be omitted if you don't need it. "--dry-run" gives a trial run that changes nothing
kubeadm init --pod-network-cidr=172.168.6.0/16
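If you want to see what init would do before committing, these two commands are useful (a side note, not required for the install):

# print the default init configuration kubeadm would use
kubeadm config print init-defaults
# simulate the whole init without changing the system
kubeadm init --pod-network-cidr=172.168.6.0/16 --dry-run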

The installation finally succeeded. kubeadm does a huge amount of work for you: kubelet configuration, all the certificates, the kubeconfig files, add-on installation and more (doing this by hand takes forever; once you have used kubeadm you will probably never want to install manually again). The output looks like this:

[root@vm82 ~]# kubeadm init --pod-network-cidr=172.168.6.0/16
W1111 14:39:17.447725    8348 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.3
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vm82] and IPs [10.96.0.1 192.168.128.82]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vm82] and IPs [192.168.128.82 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vm82] and IPs [192.168.128.82 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 6.502043 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vm82 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node vm82 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: tox4h2.wq1nq8cbhuqchjdz
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.128.82:6443 --token tox4h2.wq1nq8cbhuqchjdz \
    --discovery-token-ca-cert-hash sha256:17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95

The last part of that output is a set of hints:

kubeadm tells you that other nodes can join the cluster simply by running the final command, which already contains the required token. It also reminds you that to finish the installation you still need to install a network plugin with kubectl apply -f [podnetwork].yaml, and even gives you the URL explaining how (very considerate). Finally, it asks you to run:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

These commands are needed because access to a Kubernetes cluster is encrypted and authenticated by default. They copy the security configuration generated during deployment into the current user's .kube directory, which is where kubectl looks for its credentials by default.

Without this, you would have to tell kubectl where the config file is every time via the KUBECONFIG environment variable.
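For example, as root you could skip the copy and just point kubectl at the admin config for the current shell:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get nodes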

Note: to run kubectl from other nodes, copy this config file into the corresponding directory on those nodes.

#Run the commands from the hint:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

#With that done, kubelet can be started again

systemctl start kubelet
systemctl status kubelet

#Run the following to confirm everything works

#Check the pods
[root@vm82 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-lwmkw        0/1     Pending   0          7m13s
coredns-f9fd979d6-xftks        0/1     Pending   0          7m13s
etcd-vm82                      1/1     Running   0          7m31s
kube-apiserver-vm82            1/1     Running   0          7m31s
kube-controller-manager-vm82   1/1     Running   0          7m31s
kube-proxy-bc6wj               1/1     Running   0          7m13s
kube-scheduler-vm82            1/1     Running   0          7m31s

3.1.3 Overview of the kubeadm init workflow

1. When you run kubeadm init it first runs a series of checks, the "Preflight Checks", which save you a lot of trouble later.

2. Once the preflight checks pass, kubeadm generates the certificates and directories Kubernetes needs to serve requests securely.

The certificates kubeadm generates for the Kubernetes components live under /etc/kubernetes/pki on the master node; the most important files there are ca.crt and the matching private key ca.key.

3. With the certificates in place, kubeadm generates the kubeconfig files the other components need to access kube-apiserver.

#They are located at /etc/kubernetes/xxx.conf:
[root@vm82 ~]# ls /etc/kubernetes/
admin.conf  controller-manager.conf  kubelet.conf  manifests  pki  scheduler.conf

4. kubeadm then generates the Pod manifests for the master components. Kubernetes has three master components, kube-apiserver, kube-controller-manager and kube-scheduler, and all of them are deployed as Pods.

With kubeadm, the YAML manifests for the master components are written to /etc/kubernetes/manifests.

5. Next, kubeadm generates a bootstrap token for the cluster. Any node with kubelet and kubeadm installed can later join the cluster via kubeadm join as long as it holds this token.

The value of the token and how to use it are printed when kubeadm init finishes.

6. After the token is generated, kubeadm stores ca.crt and other important master information in etcd as a ConfigMap named cluster-info, for use when worker nodes are deployed later; you can inspect both pieces as shown below.
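A quick look at both on the master (the ConfigMap lives in the kube-public namespace so that joining nodes can read it before they are authenticated):

# tokens created by kubeadm init, with their expiry times
kubeadm token list
# the cluster-info ConfigMap consumed by kubeadm join
kubectl -n kube-public get configmap cluster-info -o yaml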

7. The final step of kubeadm init is installing the default add-ons (kube-proxy and CoreDNS).

 

3.2 Installing the container network plugin

All plugins are installed from the master; when a node joins, the required components are installed on it automatically and connect back to the master as needed.

3.2.1 Checking node status

#kubectl get nodes lists the nodes:

[root@vm82 ~]# kubectl get nodes
NAME   STATUS     ROLES    AGE     VERSION
vm82   NotReady   master   8m41s   v1.19.3

In this output the master node's status is NotReady. Why?

When debugging a Kubernetes cluster, the most important tool is kubectl describe, which shows an object's details, status and events. Let's use it on the Node:

[root@vm82 ~]# kubectl describe node vm82
Name:               vm82
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=vm82
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Wed, 11 Nov 2020 14:39:26 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  vm82
  AcquireTime:     <unset>
  RenewTime:       Wed, 11 Nov 2020 14:49:19 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 11 Nov 2020 14:44:31 +0800   Wed, 11 Nov 2020 14:39:23 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 11 Nov 2020 14:44:31 +0800   Wed, 11 Nov 2020 14:39:23 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 11 Nov 2020 14:44:31 +0800   Wed, 11 Nov 2020 14:39:23 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 11 Nov 2020 14:44:31 +0800   Wed, 11 Nov 2020 14:39:23 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.3.82
  Hostname:    vm82
Capacity:
  cpu:                6
  ephemeral-storage:  49794300Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3855644Ki
  pods:               110
Allocatable:
  cpu:                6
  ephemeral-storage:  45890426805
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             3753244Ki
  pods:               110
System Info:
  Machine ID:                 c959e989d2124fc397bea1330baa9d8f
  System UUID:                564d1ed1-542b-1505-c802-6141726fac60
  Boot ID:                    ee67157f-9d43-4007-96d4-a3ad8270cf54
  Kernel Version:             4.18.0-193.el8.x86_64
  OS Image:                   CentOS Linux 8 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://19.3.13
  Kubelet Version:            v1.19.3
  Kube-Proxy Version:         v1.19.3
PodCIDR:                      172.168.0.0/24
PodCIDRs:                     172.168.0.0/24
Non-terminated Pods:          (5 in total)
  Namespace                   Name                            CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                            ------------  ----------  ---------------  -------------  ---
  kube-system                 etcd-vm82                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
  kube-system                 kube-apiserver-vm82             250m (4%)     0 (0%)      0 (0%)           0 (0%)         9m51s
  kube-system                 kube-controller-manager-vm82    200m (3%)     0 (0%)      0 (0%)           0 (0%)         9m51s
  kube-system                 kube-proxy-bc6wj                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
  kube-system                 kube-scheduler-vm82             100m (1%)     0 (0%)      0 (0%)           0 (0%)         9m51s
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                550m (9%)  0 (0%)
  memory             0 (0%)     0 (0%)
  ephemeral-storage  0 (0%)     0 (0%)
  hugepages-1Gi      0 (0%)     0 (0%)
  hugepages-2Mi      0 (0%)     0 (0%)
Events:
  Type    Reason                   Age    From        Message
  ----    ------                   ----   ----        -------
  Normal  Starting                 9m51s  kubelet     Starting kubelet.
  Normal  NodeAllocatableEnforced  9m51s  kubelet     Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  9m51s  kubelet     Node vm82 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    9m51s  kubelet     Node vm82 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     9m51s  kubelet     Node vm82 status is now: NodeHasSufficientPID
  Normal  Starting                 9m32s  kube-proxy  Starting kube-proxy.

One line stands out:

runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The kubectl describe output tells us the node is NotReady because no network plugin has been deployed yet.

We can also use kubectl to check the system Pods on this node. kube-system is the namespace Kubernetes reserves for its own system Pods (a Namespace here is just Kubernetes' unit for separating workspaces; it is not a Linux namespace):

#-n selects the namespace to query
[root@vm82 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-bnrdz       0/1     Pending   0          15m
coredns-66bff467f8-zf4nb       0/1     Pending   0          15m
etcd-vm82                      1/1     Running   0          15m
kube-apiserver-vm82            1/1     Running   0          15m
kube-controller-manager-vm82   1/1     Running   0          15m
kube-proxy-kvdhj               1/1     Running   0          15m
kube-scheduler-vm82            1/1     Running   0          15m

As you can see, the network-dependent Pods such as CoreDNS are stuck in Pending, meaning they cannot be scheduled yet. That is entirely expected: this master node's network is not ready.

#Install weave-net as described in the weave-kube documentation:

[root@vm82 ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created
[root@vm82 ~]# #check again in a moment; if it is still not up, run the apply once more
[root@vm82 ~]# kubectl get pods --all-namespaces -l name=weave-net
NAMESPACE     NAME              READY   STATUS              RESTARTS   AGE
kube-system   weave-net-ls6k5   0/2     ContainerCreating   0          9s

Note: if the weave-net Pod (weave-net-vh4sx in this example) stays in ContainerCreating, check its logs to see what is wrong; -c selects the container inside the Pod:

kubectl -n kube-system logs -f weave-net-vh4sx -c weave-npc

#With only 2 GB of RAM on the master this is slow, expect seven or eight minutes
[root@vm82 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-bnrdz       1/1     Running   0          17m
coredns-66bff467f8-zf4nb       0/1     Running   0          17m
etcd-vm82                      1/1     Running   0          17m
kube-apiserver-vm82            1/1     Running   0          17m
kube-controller-manager-vm82   1/1     Running   0          17m
kube-proxy-kvdhj               1/1     Running   0          17m
kube-scheduler-vm82            1/1     Running   0          17m
weave-net-ls6k5                2/2     Running   0          81s
[root@vm82 ~]# 
[root@vm82 ~]# 
[root@vm82 ~]# kubectl get pods -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
coredns-66bff467f8-bnrdz       1/1     Running   0          19m
coredns-66bff467f8-zf4nb       1/1     Running   0          19m
etcd-vm82                      1/1     Running   0          19m
kube-apiserver-vm82            1/1     Running   0          19m
kube-controller-manager-vm82   1/1     Running   0          19m
kube-proxy-kvdhj               1/1     Running   0          19m
kube-scheduler-vm82            1/1     Running   0          19m
weave-net-ls6k5                2/2     Running   0          2m51s

Everything is Running now, so the network plugin installed successfully.

 

3.3 Installing the Dashboard plugin

3.3.1 Installation

The k8s project ships an official Dashboard (see the Dashboard docs). The command line is still what you use most of the time, but a UI is nice to have, and installing it is a standard declarative k8s apply.

#The images are hosted abroad, so pulls can be slow, you know why

#Apply the manifest from the Dashboard GitHub install instructions

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

That URL is not reachable either, so bind raw.githubusercontent.com in /etc/hosts.

Look up its real IP address at https://www.ipaddress.com/:

[root@vm82 ~]# echo '199.232.68.133 raw.githubusercontent.com' >> /etc/hosts
[root@vm82 ~]# tail -1 /etc/hosts 
199.232.68.133 raw.githubusercontent.com
[root@vm82 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

#check after a moment whether the install succeeded
[root@vm82 ~]# kubectl get po --all-namespaces|grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-gb82c   0/1     ContainerCreating   0          32s
kubernetes-dashboard   kubernetes-dashboard-5dbf55bd9d-vb76r        0/1     ContainerCreating   0          32s
[root@vm82 ~]# #the same query a little later
[root@vm82 ~]# kubectl get po --all-namespaces|grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-7b59f7d4df-gb82c   1/1     Running   0          2m49s
kubernetes-dashboard   kubernetes-dashboard-5dbf55bd9d-vb76r        1/1     Running   0          2m49s

Note that the Dashboard is a web server, and people frequently expose its port on public clouds by accident, which is a security risk. Since version 1.7 the Dashboard is therefore only reachable through a proxy on the local machine by default; see the Dashboard project's official documentation for the details.
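The proxy route looks roughly like this (a sketch; the path below is the standard Dashboard v2 proxy URL, run kubectl from a machine with cluster access):

kubectl proxy
# then open in a browser on that machine:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/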

Stop pulling images from the network

The install first pulls the Docker images from the internet and that occasionally fails, so edit the YAML so that, once the images exist locally, they are never pulled again:

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml
sed -i '/imagePullPolicy/s/Always/Never/' recommended.yaml 
# note: 10 leading spaces
sed -i '/metrics-scraper:/a\          imagePullPolicy: Never' recommended.yaml
egrep 'imagePullPolicy|metrics-scraper:' recommended.yaml
kubectl apply -f recommended.yaml 
# resulting state
[root@vm82 ~]# kubectl get po --all-namespaces|grep dashboard
kubernetes-dashboard   dashboard-metrics-scraper-84b56c4b74-79kjg   1/1     Running   0          35s
kubernetes-dashboard   kubernetes-dashboard-59cc95c69c-rvqk8        1/1     Running   0          35s

PS: if you are not sure how a YAML field is used, kubectl explain shows its documentation, much like man on Linux or ansible-doc for Ansible:

[root@vm82 ~]# kubectl explain Deployment.spec.template.spec.containers.imagePullPolicy
KIND:     Deployment
VERSION:  apps/v1

FIELD:    imagePullPolicy <string>

DESCRIPTION:
     Image pull policy. One of Always, Never, IfNotPresent. Defaults to Always
     if :latest tag is specified, or IfNotPresent otherwise. Cannot be updated.
     More info:
     https://kubernetes.io/docs/concepts/containers/images#updating-images

3.3.2 Configuration

#1 Switch the service to NodePort

[root@vm82 ~]# kubectl patch svc -n kubernetes-dashboard kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'
service/kubernetes-dashboard patched

#查看服务 svc是service是缩写
[root@vm82 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.167.80    <none>        8000/TCP        8m10s
kubernetes-dashboard        NodePort    10.111.23.101   <none>        443:30366/TCP   8m10s

Inside the cluster the service listens on 443 (HTTPS); externally it is exposed on NodePort 30366, so open https://192.168.3.82:30366 in Firefox.

The login page asks for a token. I first tried the token generated during init (lm15t2.xxkc4z99tcfmwb7c in an earlier run; the join command from this run, with its own token, is shown below), but the proper login token is the dedicated dashboard admin token created in the next step:

kubeadm join 192.168.128.82:6443 --token tox4h2.wq1nq8cbhuqchjdz \
    --discovery-token-ca-cert-hash sha256:17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95

 

#2. Create a dashboard admin user and token

# Create the dashboard admin service account
kubectl create serviceaccount dashboard-admin -n kubernetes-dashboard

# Bind the user to the cluster-admin role
kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin \
--serviceaccount=kubernetes-dashboard:dashboard-admin

[root@vm82 ~]# # show the generated token
[root@vm82 ~]# kubectl describe secret -n kubernetes-dashboard dashboard-admin-token
Name:         dashboard-admin-token-dglwh
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 2ca65490-b107-47d2-9877-e3bd48532545

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImcyRkRQZEV3aDN5TjRHY2tVZWIwUHNuYlk5WGQtRjFyaEFVSHhBNzVXVzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tZGdsd2giLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMmNhNjU0OTAtYjEwNy00N2QyLTk4NzctZTNiZDQ4NTMyNTQ1Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.C7KrO5BPJHrwIu_jfsQZ101NwTR2SU5Q2q4kUvjMbgDeq3wdyaZV0ooJX7yuuNWN9S0gfFlbSkPthpWKYlczn_9faWTgXX0g-DGUGPduDEOfVe00OAXKTZ8Gq204pqwLDCxkLyadtvGu7zY7VS6tJE98wmR-7aCGvSO4abGvD6keMTcfGs4SPyJ3JiRcfiJU0SdrppI5y4orLGeN_M5UK1WgXFVFfaTQzqdNklRFX3fb2CPOFJ72CSVqLyywseIx4SKA2n2WeiMBSQ7jEEI9U9E2t7gXKqJZdDTko32-NUHkzI_9ucUKDR8JM4AqMyEt3raFhWVo3UVJ7l6E2GJvog
ca.crt:     1066 bytes
namespace:  20 bytes

#3. Log in to the web UI

Log out of the web UI and log back in with the token generated above.

 

4. Creating node(s) and joining them to the master (node)

The command to join a node to the current cluster is:

#Join a node to the current cluster:
kubeadm join --token <token> <master-ip>:<master-port> --discovery-token-ca-cert-hash sha256:<hash>

4.1 How kubeadm join works

The flow is very simple: once kubeadm init has produced the bootstrap token, you can run kubeadm join on any machine that has kubelet and kubeadm installed.

Any machine that wants to become a node of the Kubernetes cluster must register itself with the cluster's kube-apiserver. But to talk to the apiserver at all, the machine first needs the corresponding certificate files (the CA), and for a one-command install we cannot ask users to copy those files from the master by hand.

So kubeadm makes at least one "insecure" request to kube-apiserver to fetch the cluster-info stored in a ConfigMap (it holds the apiserver's authorization information). The bootstrap token is what secures this step.

With the apiserver address, port and certificate from cluster-info, the kubelet can then connect to the apiserver in "secure mode", and the new node is fully deployed.

You simply repeat this command on every other node you want to add.
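On the master you can watch the certificate signing requests that joining kubelets submit; every successful join shows up as an approved CSR:

kubectl get csr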

 

4.2 Installing kubeadm

kubeadm itself was already installed on the nodes in section 2.2.

A few base images still need to be pulled:

#Pull them with a script; a worker node only needs kube-proxy and pause

cat>kubeadm_config_images_list.sh<<EOF
#!/bin/bash
# Script For Quick Pull K8S Docker Images
# by hualinux

KUBE_VERSION=v1.19.3
PAUSE_VERSION=3.2


# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:\$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION


# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:\$KUBE_VERSION  k8s.gcr.io/kube-proxy:\$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION k8s.gcr.io/pause:\$PAUSE_VERSION

# untag origin tag, the images won't be delete.
docker rmi kubeimage/kube-proxy-amd64:\$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:\$PAUSE_VERSION


EOF
#Make it executable and run it
chmod +x kubeadm_config_images_list.sh
sh kubeadm_config_images_list.sh

#Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

#Afterwards, check that both images downloaded completely

[root@vm821 ~]# docker images|grep k8s.gcr.io
k8s.gcr.io/kube-proxy   v1.19.3             cdef7632a242        3 weeks ago         118MB
k8s.gcr.io/pause        3.2                 80d28bedfe5d        9 months ago        683kB

4.3 Joining the cluster

#Copy the kubeadm join command printed by kubeadm init on the master and run it on a node to join it to the master,

#run it on the node; my nodes are 192.168.3.21 and 192.168.3.22, I start with 3.21

kubeadm join 192.168.128.82:6443 --token tox4h2.wq1nq8cbhuqchjdz \
    --discovery-token-ca-cert-hash sha256:17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95

Note: if kubeadm join fails and you want to run it again, run kubeadm reset on the node first, and preferably also remove the node on the master:

kubectl delete node vm821

If it times out, first check whether the kubelet process has died; too little memory can kill it
systemctl status kubelet
#After a reset the kubelet has nothing to run and will not start; that is normal, it comes back once the node joins the master
kubeadm reset
#on the master, delete the corresponding node
kubectl delete node vm821

#If that still does not work, repeat a few times: kubeadm reset on the node, then kubectl delete node vm821 on the master
[root@vm821 ~]# kubeadm join 192.168.128.82:6443 --token tox4h2.wq1nq8cbhuqchjdz \
>     --discovery-token-ca-cert-hash sha256:17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

#On the master, check the node list

[root@vm82 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE     VERSION
vm82    Ready    master   84m     v1.19.3
vm821   Ready    <none>   3m28s   v1.19.3
vm822   Ready    <none>   3m25s   v1.19.3

Checking the web UI (https://192.168.3.82:30366/ in my case) also shows that both worker nodes have been added to the master.

[root@vm821 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.19.3             cdef7632a242        3 weeks ago         118MB
weaveworks/weave-npc    2.7.0               db66692318fc        3 months ago        41MB
weaveworks/weave-kube   2.7.0               a8ef3e215aac        3 months ago        113MB
k8s.gcr.io/pause        3.2                 80d28bedfe5d        9 months ago        683kB

kube-proxy and pause were pulled manually; the network plugin images weaveworks/weave-npc and weaveworks/weave-kube were installed automatically by the master.

 

5. Installing the Rook container storage plugin (optional, master side)

The Rook pods run on non-master nodes, so you need at least one worker node before installing it; I mainly want its Ceph support here.

The Ceph example deploys three monitors by default and two instances cannot run on the same node, so three worker nodes are needed. I therefore cloned node 2 and adjusted the clone:

hostname vm823, IP 192.168.3.23.

# On every node, master and workers included, add a hosts entry for node 3
echo '192.168.3.23 vm823' >>/etc/hosts

Join the cloned node 3 to the k8s master:

#On node 3, wipe the join state inherited from the clone source
kubeadm reset
#as suggested by the reset output
rm -rf /etc/cni/net.d


#A token created at cluster init expires after 24 hours; if a node joins after that, a new token must be generated, so run the steps below on the master
# Check whether a token still exists; an empty list means it has expired
kubeadm token list
# Create a token; mine comes out as 53g1sv.27zb3j3adcgi64qb
kubeadm token create
# Get the --discovery-token-ca-cert-hash value; it is the same as before: 17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
openssl dgst -sha256 -hex | sed 's/^.* //'
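# Side note (not in the original steps): recent kubeadm releases can also print the complete
# join command, token and CA hash included, in one go:
#kubeadm token create --print-join-command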

# Join the cluster
kubeadm join 192.168.128.82:6443 --token 53g1sv.27zb3j3adcgi64qb \
    --discovery-token-ca-cert-hash sha256:17b9e8748b2afb703dcf1871558a518ca8f4ea13e6a284962ae7010d27606c95

systemctl enable kubelet && systemctl start kubelet

#If the kubelet will not start, try removing kubeadm, rejoining the master and starting it again
#yum remove kubeadm -y
#rm -rf /etc/kubernetes
#yum install -y kubeadm

Check whether node 3 shows up in the node list:

[root@vm82 ~]# kubectl get nodes
NAME    STATUS     ROLES    AGE   VERSION
vm82    Ready      master   26h   v1.19.3
vm821   Ready      <none>   25h   v1.19.3
vm822   Ready      <none>   25h   v1.19.3
vm823   NotReady   <none>   28s   v1.19.3
[root@vm82 ~]# kubectl get nodes
NAME    STATUS   ROLES    AGE   VERSION
vm82    Ready    master   26h   v1.19.3
vm821   Ready    <none>   25h   v1.19.3
vm822   Ready    <none>   25h   v1.19.3
vm823   Ready    <none>   38s   v1.19.3

 

5.1 About Rook

We often use volumes to mount directories or files from the host into a container's mount namespace so that the container and the host can share them, and the application inside the container can create and write files in those volumes.

But a container started on one machine obviously cannot see the files that containers on other machines write into their volumes. This is one of the defining traits of containers: they are stateless.

Persistent storage for containers is the main way of preserving container state: a storage plugin mounts a remote volume inside the container, backed by a network or some other mechanism, so that files created in the container actually live on a remote storage server, or are spread across multiple nodes, with no tie to the current host. Whichever host you later start a new container on, it can request the same persistent volume and see the data stored in it. That is what "persistent" means.

Thanks to Kubernetes' loosely coupled design, most storage projects, such as Ceph, GlusterFS and NFS, can provide persistent storage for Kubernetes. For this deployment I will install an important Kubernetes storage plugin project: Rook.

Rook is a Ceph-based Kubernetes storage plugin (support for more backends is being added). Rather than being a thin wrapper around Ceph, Rook adds a lot of enterprise-grade functionality of its own, such as horizontal scaling, migration, disaster recovery and monitoring, which makes it a complete, production-ready container storage plugin.
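To make the idea concrete, here is a minimal sketch of how a workload would request Rook-backed storage once the Ceph cluster is running and a block StorageClass exists. The StorageClass name rook-ceph-block is an assumption taken from Rook's example manifests (csi/rbd/storageclass.yaml); adjust it to whatever you actually create:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block   # assumed name from Rook's example storageclass.yaml
EOF

A Pod can then mount this claim as a volume; the data lives in Ceph rather than on whichever node the Pod happens to run on.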

        

5.2 Why I chose Rook

Because the project has a promising future.

If you study Rook's implementation you will find that it builds cleverly on the orchestration capabilities Kubernetes provides and makes good use of extension features such as Operators and CRDs (all of which I will cover in later articles). That has made Rook the most complete and mature container storage plugin built on the Kubernetes API in the community today, and I believe this approach will soon be embraced broadly.

Note:

Much of the time, "cloud native" simply means "Kubernetes native", and projects like Rook and Istio are model examples of that mindset. Once you get to declarative APIs later on, you will appreciate the design philosophy behind these projects even more.

5.3 Installing Rook-Ceph

Before installing, check the requirements in the Rook Ceph documentation:

To make sure you have a Kubernetes cluster that is ready for Rook, you can follow these instructions.

In order to configure the Ceph storage cluster, at least one of these local storage options are required:

  • Raw devices (no partitions or formatted filesystems): do not partition or format the disk!
  • Raw partitions (no formatted filesystem)
  • PVs available from a storage class in block mode

I go with the first option: add a new 100 GB disk to each node (vm821 and vm822). A SCSI disk can be added without shutting the VM down; just add it and leave it unpartitioned and unformatted.
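A quick way to confirm the new disk really is raw (run on each node that received the extra 100 GB disk; the device name will vary, check yours in the output):

# an empty FSTYPE column and no child partitions means the device is raw
lsblk -f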

 

5.3.1 Installing common.yaml and operator.yaml

I use rook-ceph, currently the most popular choice, as the example. Open the rook-ceph documentation; I pick the latest release at the time of writing, v1.4.

Follow the instructions there. Before running them, github.com needs a hosts binding (it is hosted abroad, you know why); I used an online DNS lookup tool to see which IPs currently resolve.

 

# Run the following on the master
#Bind the hosts entry
echo '13.229.188.59 github.com' >> /etc/hosts

#Install git
yum install git -y

#Create a working directory
mkdir -pv /disk1/tools
cd /disk1/tools/
#Clone the repo
git clone --single-branch --branch v1.4.7 https://github.com/rook/rook.git
#Apply the yaml files; I use apply here, the cluster itself is created later
cd rook/cluster/examples/kubernetes/ceph/
kubectl apply -f common.yaml
kubectl apply -f operator.yaml

PS: if git is too slow, you can download the zip directly from the Rook GitHub page.

The downloaded file is rook-release-1.4.zip.

#Upload it to /etc/kubernetes/ and unpack it

yum install unzip -y
unzip rook-release-1.4.zip 
cd rook-release-1.4/cluster/examples/kubernetes/ceph/

#I use apply instead of create, which is more convenient (look up the difference if you are curious); the cluster itself is created in the next section
kubectl apply -f common.yaml
kubectl apply -f operator.yaml

ps2:

#Without any worker node, kubectl get pod -n rook-ceph stays Pending forever, as shown below:

[root@vm82 ceph]# kubectl get pod -n rook-ceph
NAME                                 READY   STATUS    RESTARTS   AGE
rook-ceph-operator-db86d47f5-v9n9j   0/1     Pending   0          12m

 

5.3.2 Checking status

Log in to the web UI (https://192.168.3.82:30366 in my case, the Dashboard NodePort from earlier) and look at the rook-ceph namespace.

Here is the command-line view:

kubectl get pod -n rook-ceph
kubectl get pod -n rook-ceph -o wide

#Detailed description
kubectl describe pods -n rook-ceph

[root@vm82 ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS              RESTARTS   AGE
rook-ceph-operator-577cb6c8d6-ghd7q   0/1     ContainerCreating   0          31s
[root@vm82 ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS              RESTARTS   AGE
rook-ceph-operator-577cb6c8d6-ghd7q   1/1     Running             0          69s
rook-discover-4r464                   0/1     ContainerCreating   0          0s
rook-discover-5r4lx                   0/1     ContainerCreating   0          0s
[root@vm82 ceph]# 
[root@vm82 ceph]# kubectl get pod -n rook-ceph
NAME                                  READY   STATUS    RESTARTS   AGE
rook-ceph-operator-577cb6c8d6-ghd7q   1/1     Running   0          3m30s
rook-discover-4r464                   1/1     Running   0          2m21s
rook-discover-5r4lx                   1/1     Running   0          2m21s
[root@vm82 ceph]# 
[root@vm82 ceph]# kubectl describe pods -n rook-ceph
Name:         rook-ceph-operator-577cb6c8d6-ghd7q
Namespace:    rook-ceph
Priority:     0
Node:         vm822/192.168.3.22
Start Time:   Wed, 11 Nov 2020 16:49:31 +0800
Labels:       app=rook-ceph-operator
              pod-template-hash=577cb6c8d6
Annotations:  <none>
Status:       Running
IP:           10.47.0.1
IPs:
  IP:           10.47.0.1
Controlled By:  ReplicaSet/rook-ceph-operator-577cb6c8d6
Containers:
  rook-ceph-operator:
    Container ID:  docker://cdaec00b641e308b6251b211d6f52f4e7c049c414e35d450cb2decd562581e2a
    Image:         rook/ceph:v1.4.7
    Image ID:      docker-pullable://rook/ceph@sha256:950ebc875987ccc375c06e0af97be8ff35194a87f672b1799784c0086415ea01
    Port:          <none>
    Host Port:     <none>
    Args:
      ceph
      operator
    State:          Running
      Started:      Wed, 11 Nov 2020 16:50:40 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      ROOK_CURRENT_NAMESPACE_ONLY:               false
      ROOK_ALLOW_MULTIPLE_FILESYSTEMS:           false
      ROOK_LOG_LEVEL:                            INFO
      ROOK_DISCOVER_DEVICES_INTERVAL:            60m
      ROOK_HOSTPATH_REQUIRES_PRIVILEGED:         false
      ROOK_ENABLE_SELINUX_RELABELING:            true
      ROOK_ENABLE_FSGROUP:                       true
      ROOK_DISABLE_DEVICE_HOTPLUG:               false
      DISCOVER_DAEMON_UDEV_BLACKLIST:            (?i)dm-[0-9]+,(?i)rbd[0-9]+,(?i)nbd[0-9]+
      ROOK_ENABLE_FLEX_DRIVER:                   false
      ROOK_ENABLE_DISCOVERY_DAEMON:              true
      ROOK_UNREACHABLE_NODE_TOLERATION_SECONDS:  5
      NODE_NAME:                                  (v1:spec.nodeName)
      POD_NAME:                                  rook-ceph-operator-577cb6c8d6-ghd7q (v1:metadata.name)
      POD_NAMESPACE:                             rook-ceph (v1:metadata.namespace)
    Mounts:
      /etc/ceph from default-config-dir (rw)
      /var/lib/rook from rook-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-system-token-hx6nt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  rook-config:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  default-config-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  rook-ceph-system-token-hx6nt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-system-token-hx6nt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  3m33s  default-scheduler  Successfully assigned rook-ceph/rook-ceph-operator-577cb6c8d6-ghd7q to vm822
  Normal  Pulling    3m32s  kubelet            Pulling image "rook/ceph:v1.4.7"
  Normal  Pulled     2m24s  kubelet            Successfully pulled image "rook/ceph:v1.4.7" in 1m7.112243902s
  Normal  Created    2m24s  kubelet            Created container rook-ceph-operator
  Normal  Started    2m24s  kubelet            Started container rook-ceph-operator


Name:         rook-discover-4r464
Namespace:    rook-ceph
Priority:     0
Node:         vm822/192.168.3.22
Start Time:   Wed, 11 Nov 2020 16:50:40 +0800
Labels:       app=rook-discover
              controller-revision-hash=699dc9cfc9
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.47.0.2
IPs:
  IP:           10.47.0.2
Controlled By:  DaemonSet/rook-discover
Containers:
  rook-discover:
    Container ID:  docker://fdc104eb938477c262c7accded6812e3421b6f62c1d4b76393d19116f638dde1
    Image:         rook/ceph:v1.4.7
    Image ID:      docker-pullable://rook/ceph@sha256:950ebc875987ccc375c06e0af97be8ff35194a87f672b1799784c0086415ea01
    Port:          <none>
    Host Port:     <none>
    Args:
      discover
      --discover-interval
      60m
      --use-ceph-volume
    State:          Running
      Started:      Wed, 11 Nov 2020 16:50:41 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  rook-ceph (v1:metadata.namespace)
      NODE_NAME:       (v1:spec.nodeName)
      POD_NAME:       rook-discover-4r464 (v1:metadata.name)
    Mounts:
      /dev from dev (rw)
      /run/udev from udev (ro)
      /sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-system-token-hx6nt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
  udev:
    Type:          HostPath (bare host directory volume)
    Path:          /run/udev
    HostPathType:  
  rook-ceph-system-token-hx6nt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-system-token-hx6nt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m24s  default-scheduler  Successfully assigned rook-ceph/rook-discover-4r464 to vm822
  Normal  Pulled     2m23s  kubelet            Container image "rook/ceph:v1.4.7" already present on machine
  Normal  Created    2m23s  kubelet            Created container rook-discover
  Normal  Started    2m23s  kubelet            Started container rook-discover


Name:         rook-discover-5r4lx
Namespace:    rook-ceph
Priority:     0
Node:         vm821/192.168.3.21
Start Time:   Wed, 11 Nov 2020 16:50:40 +0800
Labels:       app=rook-discover
              controller-revision-hash=699dc9cfc9
              pod-template-generation=1
Annotations:  <none>
Status:       Running
IP:           10.44.0.1
IPs:
  IP:           10.44.0.1
Controlled By:  DaemonSet/rook-discover
Containers:
  rook-discover:
    Container ID:  docker://81363c771fed9cc7b16def3b295c42d5a2f9e66beeae971be03540edc0743bb7
    Image:         rook/ceph:v1.4.7
    Image ID:      docker-pullable://rook/ceph@sha256:950ebc875987ccc375c06e0af97be8ff35194a87f672b1799784c0086415ea01
    Port:          <none>
    Host Port:     <none>
    Args:
      discover
      --discover-interval
      60m
      --use-ceph-volume
    State:          Running
      Started:      Wed, 11 Nov 2020 16:51:55 +0800
    Ready:          True
    Restart Count:  0
    Environment:
      POD_NAMESPACE:  rook-ceph (v1:metadata.namespace)
      NODE_NAME:       (v1:spec.nodeName)
      POD_NAME:       rook-discover-5r4lx (v1:metadata.name)
    Mounts:
      /dev from dev (rw)
      /run/udev from udev (ro)
      /sys from sys (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from rook-ceph-system-token-hx6nt (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  dev:
    Type:          HostPath (bare host directory volume)
    Path:          /dev
    HostPathType:  
  sys:
    Type:          HostPath (bare host directory volume)
    Path:          /sys
    HostPathType:  
  udev:
    Type:          HostPath (bare host directory volume)
    Path:          /run/udev
    HostPathType:  
  rook-ceph-system-token-hx6nt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  rook-ceph-system-token-hx6nt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                 node.kubernetes.io/not-ready:NoExecute op=Exists
                 node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                 node.kubernetes.io/unreachable:NoExecute op=Exists
                 node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  2m24s  default-scheduler  Successfully assigned rook-ceph/rook-discover-5r4lx to vm821
  Normal  Pulling    2m23s  kubelet            Pulling image "rook/ceph:v1.4.7"
  Normal  Pulled     69s    kubelet            Successfully pulled image "rook/ceph:v1.4.7" in 1m13.264816751s
  Normal  Created    69s    kubelet            Created container rook-discover
  Normal  Started    69s    kubelet            Started container rook-discover

The image is fairly large (about 883 MB), so the pull takes a while; checking again a little later, everything is up.

The web UI shows the same.

5.3.3 Creating the cluster

Still in the same directory, create the cluster:

[root@vm82 ceph]# kubectl apply -f cluster.yaml
cephcluster.ceph.rook.io/rook-ceph created
[root@vm82 ceph]# cd ~

This installs a whole series of components; the web UI shows the nodes busy installing again, so be patient.

Check with the command line:

[root@vm82 ~]# kubectl get pod -n rook-ceph -o wide
NAME                                  READY   STATUS            RESTARTS   AGE     IP          NODE    NOMINATED NODE   READINESS GATES
rook-ceph-csi-detect-version-vnn72    0/1     PodInitializing   0          57s     10.44.0.3   vm821   <none>           <none>
rook-ceph-detect-version-fvdql        0/1     PodInitializing   0          57s     10.44.0.2   vm821   <none>           <none>
rook-ceph-operator-577cb6c8d6-ghd7q   1/1     Running           0          10m     10.47.0.1   vm822   <none>           <none>
rook-discover-4r464                   1/1     Running           0          9m40s   10.47.0.2   vm822   <none>           <none>
rook-discover-5r4lx                   1/1     Running           0          9m40s   10.44.0.1   vm821   <none>           <none>

After a while, refresh the page (F5) and the web UI starts to show the changes.

It looks like one attempt timed out and k8s retried it automatically, after which it recovered; below is what got installed on one of the nodes.

Checking from the command line again: several Pods are still being created, so keep waiting.

[root@vm82 ~]# kubectl get pod -n rook-ceph -o wide
NAME                                            READY   STATUS              RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
csi-cephfsplugin-4q6p2                          3/3     Running             0          17m   192.168.3.21   vm821    <none>           <none>
csi-cephfsplugin-9f5j6                          3/3     Running             0          17m   192.168.3.22   vm822    <none>           <none>
csi-cephfsplugin-provisioner-5c65b94c8d-66xb5   0/6     ContainerCreating   0          17m   <none>         vm822    <none>           <none>
csi-cephfsplugin-provisioner-5c65b94c8d-pg8mx   6/6     Running             0          17m   10.44.0.4      vm821    <none>           <none>
csi-rbdplugin-provisioner-569c75558-cd22d       0/6     ContainerCreating   0          17m   <none>         vm822    <none>           <none>
csi-rbdplugin-provisioner-569c75558-v9dlh       6/6     Running             0          17m   10.44.0.3      vm821    <none>           <none>
csi-rbdplugin-qt6pq                             3/3     Running             0          17m   192.168.3.21   vm821    <none>           <none>
csi-rbdplugin-vgnp8                             3/3     Running             0          17m   192.168.3.22   vm822    <none>           <none>
rook-ceph-mon-a-canary-c95fd858b-6tx9z          1/1     Running             0          54s   10.44.0.2      vm821    <none>           <none>
rook-ceph-mon-b-canary-77ddc4dd59-k7h8x         1/1     Running             0          53s   10.47.0.3      vm822    <none>           <none>
rook-ceph-mon-c-canary-67b687786d-lp879         0/1     Pending             0          53s   <none>         <none>   <none>           <none>
rook-ceph-operator-577cb6c8d6-ghd7q             1/1     Running             0          29m   10.47.0.1      vm822    <none>           <none>
rook-discover-4r464                             1/1     Running             0          28m   10.47.0.2      vm822    <none>           <none>
rook-discover-5r4lx                             1/1     Running             0          28m   10.44.0.1      vm821    <none>           <none>

Two pods are stuck in ContainerCreating; most likely the images are hosted on foreign registries and docker cannot pull them quickly. Describe them to check:

[root@vm82 ~]# kubectl -n rook-ceph describe pod csi-cephfsplugin-provisioner-5c65b94c8d-66xb5
Name:           csi-cephfsplugin-provisioner-5c65b94c8d-66xb5
Namespace:      rook-ceph
Priority:       0
Node:           vm822/192.168.3.22
Start Time:     Wed, 11 Nov 2020 17:01:34 +0800
Labels:         app=csi-cephfsplugin-provisioner
                contains=csi-cephfsplugin-metrics
                pod-template-hash=5c65b94c8d
Annotations:    <none>
Status:         Pending
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  21m   default-scheduler  Successfully assigned rook-ceph/csi-cephfsplugin-provisioner-5c65b94c8d-66xb5 to vm822
  Normal  Pulling    21m   kubelet            Pulling image "quay.io/k8scsi/csi-attacher:v2.1.0"
  Normal  Pulled     14m   kubelet            Successfully pulled image "quay.io/k8scsi/csi-attacher:v2.1.0" in 6m29.527771271s
  Normal  Created    14m   kubelet            Created container csi-attacher
  Normal  Started    14m   kubelet            Started container csi-attacher
  Normal  Pulling    14m   kubelet            Pulling image "quay.io/k8scsi/csi-snapshotter:v2.1.1"

[root@vm82 ~]# kubectl -n rook-ceph describe pod csi-rbdplugin-provisioner-569c75558-cd22d
Name:           csi-rbdplugin-provisioner-569c75558-cd22d
Namespace:      rook-ceph
Priority:       0
Node:           vm822/192.168.3.22
Start Time:     Wed, 11 Nov 2020 17:01:33 +0800
Labels:         app=csi-rbdplugin-provisioner
                contains=csi-rbdplugin-metrics
                pod-template-hash=569c75558
Annotations:    <none>
Status:         Pending
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  24m   default-scheduler  Successfully assigned rook-ceph/csi-rbdplugin-provisioner-569c75558-cd22d to vm822
  Normal  Pulling    24m   kubelet            Pulling image "quay.io/k8scsi/csi-provisioner:v1.6.0"
  Normal  Pulled     19m   kubelet            Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.6.0" in 5m49.726902744s
  Normal  Created    19m   kubelet            Created container csi-provisioner
  Normal  Started    19m   kubelet            Started container csi-provisioner
  Normal  Pulling    19m   kubelet            Pulling image "quay.io/k8scsi/csi-resizer:v0.4.0"
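To see at a glance which images a stuck pod still needs, rather than reading through all the events, a small sketch using the pod name from the output above:

# List the images referenced by the pod's containers
kubectl -n rook-ceph get pod csi-cephfsplugin-provisioner-5c65b94c8d-66xb5 \
  -o jsonpath='{.spec.containers[*].image}{"\n"}'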

It looks like this is again caused by images hosted abroad, and the downloads are very slow... so just wait. After roughly an hour everything finished downloading; the final state is shown below.
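If you would rather not wait that long, one alternative (a sketch only, not what I did here) is to log on to the node that owns the stuck pod, vm822 in this case, and pre-pull the images named in the events yourself:

# Run on vm822; the image names and tags come from the describe events above
docker pull quay.io/k8scsi/csi-snapshotter:v2.1.1
docker pull quay.io/k8scsi/csi-resizer:v0.4.0
# kubelet will find the cached images on its next retry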

[root@vm82 ~]# kubectl get pods -n rook-ceph -o wide
NAME                                              READY   STATUS      RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
csi-cephfsplugin-22fbx                            3/3     Running     0          22m   192.168.3.23   vm823   <none>           <none>
csi-cephfsplugin-4q6p2                            3/3     Running     3          24m   192.168.3.21   vm821   <none>           <none>
csi-cephfsplugin-9f5j6                            3/3     Running     3          24m   192.168.3.22   vm822   <none>           <none>
csi-cephfsplugin-provisioner-5c65b94c8d-66xb5     6/6     Running     6          24m   10.47.0.3      vm822   <none>           <none>
csi-cephfsplugin-provisioner-5c65b94c8d-pg8mx     6/6     Running     14         24m   10.44.0.2      vm821   <none>           <none>
csi-rbdplugin-5hfxz                               3/3     Running     0          22m   192.168.3.23   vm823   <none>           <none>
csi-rbdplugin-provisioner-569c75558-cd22d         6/6     Running     6          24m   10.47.0.2      vm822   <none>           <none>
csi-rbdplugin-provisioner-569c75558-v9dlh         6/6     Running     13         24m   10.44.0.3      vm821   <none>           <none>
csi-rbdplugin-qt6pq                               3/3     Running     3          24m   192.168.3.21   vm821   <none>           <none>
csi-rbdplugin-vgnp8                               3/3     Running     3          24m   192.168.3.22   vm822   <none>           <none>
rook-ceph-crashcollector-vm821-579bcc6d5f-8b8xm   1/1     Running     0          22m   10.44.0.6      vm821   <none>           <none>
rook-ceph-crashcollector-vm822-774bcd5f9d-tbx8f   1/1     Running     0          22m   10.47.0.7      vm822   <none>           <none>
rook-ceph-crashcollector-vm823-749c66c8df-vx8nh   1/1     Running     0          18m   10.36.0.6      vm823   <none>           <none>
rook-ceph-mgr-a-78766dc497-bqqpf                  1/1     Running     0          19m   10.36.0.3      vm823   <none>           <none>
rook-ceph-mon-a-6845dccbc-pmd9q                   1/1     Running     0          22m   10.44.0.4      vm821   <none>           <none>
rook-ceph-mon-b-58dd956cdb-2pzzb                  1/1     Running     0          22m   10.47.0.5      vm822   <none>           <none>
rook-ceph-mon-c-794fcb444f-kkb2g                  1/1     Running     0          20m   10.36.0.2      vm823   <none>           <none>
rook-ceph-operator-577cb6c8d6-ghd7q               1/1     Running     1          24h   10.47.0.4      vm822   <none>           <none>
rook-ceph-osd-0-574ff47b99-kv66p                  1/1     Running     0          18m   10.44.0.5      vm821   <none>           <none>
rook-ceph-osd-1-58c994c6bb-wtlff                  1/1     Running     0          18m   10.47.0.6      vm822   <none>           <none>
rook-ceph-osd-2-786798fd7d-8nn6b                  1/1     Running     0          18m   10.36.0.5      vm823   <none>           <none>
rook-ceph-osd-prepare-vm821-9c5mt                 0/1     Completed   0          19m   10.44.0.5      vm821   <none>           <none>
rook-ceph-osd-prepare-vm822-l465x                 0/1     Completed   0          19m   10.47.0.6      vm822   <none>           <none>
rook-ceph-osd-prepare-vm823-d4567                 0/1     Completed   0          19m   10.36.0.5      vm823   <none>           <none>
rook-discover-4r464                               1/1     Running     1          24h   10.47.0.1      vm822   <none>           <none>
rook-discover-5r4lx                               1/1     Running     1          24h   10.44.0.1      vm821   <none>           <none>
rook-discover-hwsgp                               1/1     Running     0          22m   10.36.0.1      vm823   <none>           <none>

PS: there are a lot of components; if the master has too little memory, its k8s components will crash frequently.

PS1: using the USTC mirror

If the quay.io image we want to pull has the following form:

docker pull quay.io/xxx/yyy:zzz

then with the USTC mirror it should be pulled like this:

docker pull quay.mirrors.ustc.edu.cn/xxx/yyy:zzz

PS2: you can also try the following replacements

### Replacing quay.io addresses

  Replace quay.io with quay-mirror.qiniu.com

### Replacing gcr.io addresses

  Replace gcr.io with registry.aliyuncs.com
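Note that the pod specs still reference the original quay.io/gcr.io names, so pulling from a mirror only helps if you re-tag the image back to its original name so kubelet finds it in the local cache. A minimal sketch on a node, using csi-provisioner (one of the images from the events above) as the example and assuming the USTC mirror carries that repository:

# Pull through the mirror, then re-tag to the name the pod spec expects
docker pull quay.mirrors.ustc.edu.cn/k8scsi/csi-provisioner:v1.6.0
docker tag quay.mirrors.ustc.edu.cn/k8scsi/csi-provisioner:v1.6.0 quay.io/k8scsi/csi-provisioner:v1.6.0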

5.3.4 Use ceph-toolbox to check the cluster status

To verify that the cluster is healthy, connect to the Rook toolbox and run the ceph status command.

Note: the link above may break when a new version is released; just adjust the version number in the URL.

The Rook toolbox is based on CentOS, so you can easily install any extra tools you want with yum. The toolbox can run in two modes:

Interactive: start a toolbox pod, open a shell into it, and run Ceph commands from there
One-off job: run a script of Ceph commands and collect the results from the job log

The Rook toolbox documentation has the installation instructions; just follow them.

# Create a working directory for the config
mkdir -pv /disk1/myk8s
cd /disk1/myk8s/

# Create the yaml manifest
cat>rook-ceph-tools.yaml<<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rook-ceph-tools
  namespace: rook-ceph
  labels:
    app: rook-ceph-tools
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rook-ceph-tools
  template:
    metadata:
      labels:
        app: rook-ceph-tools
    spec:
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: rook-ceph-tools
        image: rook/ceph:v1.4.7
        command: ["/tini"]
        args: ["-g", "--", "/usr/local/bin/toolbox.sh"]
        imagePullPolicy: IfNotPresent
        env:
          - name: ROOK_CEPH_USERNAME
            valueFrom:
              secretKeyRef:
                name: rook-ceph-mon
                key: ceph-username
          - name: ROOK_CEPH_SECRET
            valueFrom:
              secretKeyRef:
                name: rook-ceph-mon
                key: ceph-secret
        volumeMounts:
          - mountPath: /etc/ceph
            name: ceph-config
          - name: mon-endpoint-volume
            mountPath: /etc/rook
      volumes:
        - name: mon-endpoint-volume
          configMap:
            name: rook-ceph-mon-endpoints
            items:
            - key: data
              path: mon-endpoints
        - name: ceph-config
          emptyDir: {}
      tolerations:
        - key: "node.kubernetes.io/unreachable"
          operator: "Exists"
          effect: "NoExecute"
          tolerationSeconds: 5
EOF
# Create the Deployment
kubectl apply -f rook-ceph-tools.yaml
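Before exec'ing into the toolbox, it is worth waiting until the Deployment has actually rolled out; a small sketch:

# Block until the toolbox Deployment is ready
kubectl -n rook-ceph rollout status deploy/rook-ceph-tools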

Check the status:

# Check the pod status
[root@vm82 myk8s]# kubectl -n rook-ceph get pod -l "app=rook-ceph-tools"
NAME                              READY   STATUS    RESTARTS   AGE
rook-ceph-tools-fff8ccb89-f8d9q   1/1     Running   0          119s
# Check the ceph status
[root@vm82 myk8s]# kubectl -n rook-ceph exec rook-ceph-tools-fff8ccb89-f8d9q -it -- bash
[root@rook-ceph-tools-fff8ccb89-f8d9q /]# ceph status
  cluster:
    id:     16c3d3d6-be8e-4d48-8993-00a9fa516acc
    health: HEALTH_WARN
            clock skew detected on mon.b, mon.c
 
  services:
    mon: 3 daemons, quorum a,b,c (age 28m)
    mgr: a(active, since 28m)
    osd: 3 osds: 3 up (since 28m), 3 in (since 28m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 297 GiB / 300 GiB avail
    pgs:     1 active+clean
 
[root@rook-ceph-tools-fff8ccb89-f8d9q /]# ceph osd status
ID  HOST    USED  AVAIL  WR OPS  WR DATA  RD OPS  RD DATA  STATE      
 0  vm821  1026M  98.9G      0        0       0        0   exists,up  
 1  vm822  1026M  98.9G      0        0       0        0   exists,up  
 2  vm823  1026M  98.9G      0        0       0        0   exists,up  
[root@rook-ceph-tools-fff8ccb89-f8d9q /]# exit
exit
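If you only need a quick health check, you do not have to open an interactive shell at all. A sketch that resolves the toolbox pod by its label and runs a single command:

# Run a one-off ceph command inside the toolbox pod
TOOLS_POD=$(kubectl -n rook-ceph get pod -l app=rook-ceph-tools -o jsonpath='{.items[0].metadata.name}')
kubectl -n rook-ceph exec "$TOOLS_POD" -- ceph status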

For details on how to use Ceph, see the relevant documentation; I will not go into it here. If you are interested, dig into Rook once you are comfortable with k8s.

PS: if you want to remove the toolbox, run the following command

kubectl -n rook-ceph delete deployment rook-ceph-tools

 
