Kubernetes Documentation
1: Kubernetes Overview
Kubernetes, commonly abbreviated K8s, is a container cluster management system open-sourced by Google in 2014. It is the open-source descendant of Borg, the large-scale container management technology Google has operated internally for years. Its main capabilities include:
- Container-based application deployment, maintenance, and rolling upgrades
- Load balancing and service discovery
- Cluster scheduling across machines and across regions
- Auto-scaling
- Stateless and stateful services
- Broad Volume support
- A plugin mechanism that guarantees extensibility
Kubernetes is used to deploy, scale, and manage containerized applications; its goal is to make deploying containerized applications simple and efficient.
Why use Kubernetes when we already have Docker? Docker manages containers on a single host, while Kubernetes schedules, heals, and scales containerized applications across an entire cluster of machines.
1.1: Kubernetes Cluster Architecture and Components
Master role
kube-apiserver
The Kubernetes API server is the unified entry point to the cluster and the coordinator of all other components. It exposes its interfaces as a RESTful API; every create, read, update, delete, and watch operation on resource objects is handled by the API server and then persisted to etcd.
The default port is 6443 and can be changed with the --secure-port startup flag.
By default it binds to a non-localhost address, which can be set with the --bind-address startup flag.
This port receives external HTTPS requests from clients, the dashboard, and other callers.
It performs authentication based on token files, client certificates, or HTTP Basic auth.
It performs policy-based authorization.
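A quick way to confirm the secure port and reach the API server from any machine with a working kubeconfig (standard kubectl usage, nothing cluster-specific assumed):
kubectl cluster-info                 # prints the https://<apiserver>:6443 endpoint
kubectl get --raw /healthz           # queries the apiserver health endpoint directly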
kube-controller-manager
Runs the routine background tasks of the cluster and maintains the cluster state, for example failure detection, auto-scaling, and rolling updates. Each resource type has a corresponding controller, and the controller-manager is the component that manages all of these controllers.
kube-scheduler
Selects a Node for each newly created Pod according to its scheduling algorithm. It can be deployed anywhere: on the same node as the other control-plane components or on a separate one.
etcd
A distributed key-value store that holds the cluster state, such as Pod and Service objects.
- CoreDNS
CoreDNS is an essential cluster add-on that provides name resolution for the cluster.
Worker Node role
kubelet
The kubelet is the Master's agent on each Node. It manages the lifecycle of the containers running on the local machine: creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
kube-proxy
The Kubernetes network proxy runs on every node. It reflects the Services defined in the Kubernetes API on that node and performs simple TCP, UDP, and SCTP stream forwarding, or round-robin TCP, UDP, and SCTP forwarding across a set of backends. The user must create a Service through the apiserver API to configure the proxy; in effect, kube-proxy implements access to Kubernetes Services by maintaining network rules on the host and forwarding connections.
kube-proxy runs on each node, watches the API server for changes to Service objects, and then programs iptables or IPVS rules to carry out the forwarding.
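On a node you can check which proxy mode is actually in effect; kube-proxy reports it on its local metrics port (10249 by default), and in IPVS mode the virtual servers can be listed with ipvsadm if it is installed (a quick sanity check, not a required step):
curl -s 127.0.0.1:10249/proxyMode    # prints iptables or ipvs
ipvsadm -Ln                          # lists IPVS virtual servers and their backends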
kubelet (continued)
Every compute node runs a kubelet, a small agent that communicates with the control plane. The kubelet makes sure the containers of a Pod are running; when the control plane needs an operation performed on a node, the kubelet carries it out.
It is the agent component running on every worker node and watches the Pods that have been assigned to its node. Its responsibilities are:
- reporting the node's status to the master;
- receiving instructions and creating the Docker containers of a Pod;
- preparing the volumes a Pod needs;
- reporting the Pod's running status;
- performing container health checks on the node;
(In short, the kubelet owns the Pod/container lifecycle: it creates and deletes Pods.)
docker or rkt
The container engine that actually runs the containers.
Kubernetes' design and functionality follow a layered architecture much like Linux, outlined below:
- Core layer: the most essential Kubernetes functionality; it exposes APIs outward for building higher-level applications and provides a pluggable application execution environment inward
- Application layer: deployment (stateless applications, stateful applications, batch jobs, clustered applications, etc.) and routing (service discovery, DNS resolution, etc.)
- Management layer: system metrics (infrastructure, container, and network metrics), automation (auto-scaling, dynamic provisioning, etc.), and policy management (RBAC, Quota, PSP, NetworkPolicy, etc.)
- Interface layer: the kubectl command-line tool, client SDKs, and cluster federation
- Ecosystem: the large ecosystem of container cluster management and scheduling above the interface layer, which falls into two categories
Outside Kubernetes: logging, monitoring, configuration management, CI, CD, workflow, FaaS, OTS applications, ChatOps, etc.
Inside Kubernetes: CRI, CNI, CVI, image registries, Cloud Provider, and the cluster's own configuration and management, etc.
The two most fundamental design principles of Kubernetes are fault tolerance and extensibility. Fault tolerance is the basis of the system's stability and security; extensibility keeps Kubernetes friendly to change, so new features can be added through rapid iteration.
1.2: Kubernetes Basic Concepts
1.2.1: Pod
Kubernetes manages containers through Pods; each Pod can contain one or more tightly coupled containers.
A Pod is a group of closely related containers that share the PID, IPC, Network, and UTS namespaces; it is the basic unit of scheduling in Kubernetes. The containers within a Pod share the network and filesystem, so they can be composed into a service simply and efficiently through inter-process communication and file sharing.
In Kubernetes, every object is defined by a manifest (YAML or JSON). For example, a simple nginx service can be defined as nginx.yaml, containing a single container that runs the nginx image:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
Pod startup flow
User -> kubectl issues the request -> the request is authenticated via the kubeconfig -> the apiserver authenticates and admits it -> the apiserver stores the information from the YAML in etcd -> the controller-manager determines whether this is a create or an update -> the scheduler decides which worker node the Pod goes to -> the kubelet, which reports its own status and watches the apiserver for Pod scheduling requests, pulls the images and starts the Pod
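This flow can be observed with the nginx.yaml manifest above (plain kubectl commands; the Pod name comes from that manifest):
kubectl apply -f nginx.yaml
kubectl get pod nginx -o wide --watch      # Pending -> ContainerCreating -> Running
kubectl describe pod nginx                 # the Events section shows Scheduled / Pulling / Created / Started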
1.2.2: Node
A Node is the host a Pod actually runs on; it can be a physical or a virtual machine. To manage Pods, every Node must run at least a container runtime (such as docker or rkt), kubelet, and kube-proxy.
1.2.3: Namespace
A Namespace is an abstract collection of resources and objects; for example, it can be used to divide the objects inside the cluster among different project or user groups. Common objects such as pods, services, replication controllers, and deployments belong to some namespace (default by default), while node, persistentVolumes, and similar objects do not belong to any namespace.
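A short illustration of working with namespaces (the namespace name is just an example):
kubectl create namespace dev                 # create a namespace
kubectl get pods -n dev                      # list Pods in that namespace
kubectl api-resources --namespaced=true      # show which resource types are namespaced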
1.2.4: Service
A Service is an abstraction over an application that provides load balancing and service discovery through labels. Service discovery finds the backend instances that serve the service a client wants to reach. The Pod IPs and ports matched by the labels form the endpoints, and kube-proxy load-balances the service IP across those endpoints.
Every Service is automatically assigned a cluster IP (a virtual address reachable only inside the cluster) and a DNS name; other containers can reach the service through that address or name without knowing anything about the backend containers.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
  - port: 8078        # the port that this service should serve on
    name: http
    # the container port on each pod to connect to, can be a name
    # (e.g. 'www') or a number (e.g. 80)
    targetPort: 80
    protocol: TCP
  selector:
    app: nginx
1.2.5: Label
A Label is a key/value tag attached to Kubernetes objects to identify them (the key's name segment may be at most 63 characters, optionally preceded by a DNS-subdomain prefix of up to 253 characters; the value may be empty or a string of at most 63 characters).
Labels are not unique; in fact, many objects (such as Pods) commonly carry the same label to mark a particular application.
Once labels are defined, other objects can use a Label Selector to select the set of objects carrying those labels (for example, a ReplicaSet or a Service selects a group of Pods by label).
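A small sketch of labels and selectors, reusing the nginx Pod from above (the env values are made up):
kubectl label pod nginx env=dev              # attach an extra label to the Pod
kubectl get pods -l app=nginx,env=dev        # equality-based selector
kubectl get pods -l 'env in (dev,staging)'   # set-based selector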
1.2.6: Replication Controller (RC) and ReplicaSet (RS)
The RC is the earliest API object in Kubernetes for keeping Pods highly available. It watches the running Pods and guarantees that the specified number of Pod replicas is running in the cluster; that number may be one or many. If there are fewer than the specified number, the RC starts new replicas; if there are more, it kills the extras. Even with a count of 1, running a Pod through an RC is wiser than running the Pod directly, because the RC still provides its high-availability guarantee that one Pod is always running.
The RS is the next-generation RC. It provides the same high-availability guarantee; the main difference is that the RS, arriving later, supports more kinds of label-matching patterns. ReplicaSet objects are rarely used on their own; they are normally used as the desired-state parameter of a Deployment.
1.2.7: Deployment
A Deployment represents one update operation the user performs against the cluster. It is a broader API object than the RS: it can create a new service, update an existing one, or perform a rolling upgrade of a service. A rolling upgrade is in fact a composite operation: create a new RS, gradually scale the new RS up to the desired replica count, and scale the old RS down to 0.
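A minimal Deployment sketch in the same style as the Pod manifest above (replica count, image tag, and rolling-update parameters are illustrative):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80
Changing the image and re-applying the manifest triggers exactly the RS-replacement process described above; kubectl rollout status deployment/nginx-deployment shows its progress.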
1.2.8: Secret
A Secret is an object for storing and passing sensitive information such as passwords, keys, and credentials. Its benefit is that sensitive data no longer has to be written in plain text in configuration files. Configuring and using services in a cluster inevitably requires sensitive information for login and authentication, for example the username and password for AWS storage. Instead of writing such information in clear text in every configuration file that needs it, you can store it in a Secret object and reference the Secret from those files. The benefits: the intent is explicit, duplication is avoided, and exposure is reduced.
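A minimal sketch of creating a Secret and inspecting it (the name and values are made up for illustration):
kubectl create secret generic db-auth --from-literal=username=admin --from-literal=password=123456
kubectl get secret db-auth -o yaml     # the data fields are stored base64-encoded
A Pod can then consume the Secret through env valueFrom.secretKeyRef entries or by mounting it as a volume, instead of hard-coding the values.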
1.2.9: User Account and Service Account
As the names suggest, a user account identifies a person, while a service account identifies a process or a Pod running in the cluster. One difference between the two is scope: a user account corresponds to a human identity, which is independent of any namespace, so user accounts are cluster-wide (cross-namespace); a service account corresponds to the identity of a running program and is tied to a specific namespace.
1.2.10: RBAC Authorization
Kubernetes shipped an alpha version of Role-Based Access Control (RBAC) in release 1.3. Compared with Attribute-Based Access Control (ABAC), RBAC introduces the abstractions of roles (Role) and role bindings (RoleBinding). In ABAC, access policies can only be associated directly with users; in RBAC, a policy is associated with a role, and users are then bound to one or more roles. Like every new feature, RBAC brings new API objects and therefore new abstractions, and these abstractions make cluster administration easier to extend and reuse.
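A minimal Role/RoleBinding sketch (the role, binding, and service account names are placeholders):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: ServiceAccount
  name: app-sa
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader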
2: Setting Up the k8s Cluster Environment
Hostname | IP address | Notes |
test-docker-151 | 172.18.1.151 | 部署节点 |
test-harbor-152 | 172.18.1.152 | 172.18.1.160 keepalived+haproxy |
test-harbor-153 | 172.18.1.153 | 172.18.1.160 keepalived+haproxy |
test-k8s-master-200 | 172.18.1.200 | 172.18.1.210 |
test-k8s-master-201 | 172.18.1.201 | 172.18.1.210 |
test-k8s-master-202 | 172.18.1.202 | 172.18.1.210 |
test-k8s-etcd-203 | 172.18.1.203 | |
test-k8s-etcd-204 | 172.18.1.204 | |
test-k8s-etcd-205 | 172.18.1.205 | |
test-k8s-node-206 | 172.18.1.206 | |
test-k8s-node-207 | 172.18.1.207 | |
test-k8s-node-208 | 172.18.1.208 |
2.1 Harbor with HTTPS
2.1.1 Install Harbor
[root@test-harbor-152 app]# mkdir certs
[root@test-harbor-152 certs]# cd certs/
# generate the private key
[root@test-harbor-152 certs]# openssl genrsa -out harbor-ca.key
# issue a self-signed certificate
[root@test-harbor-152 certs]# openssl req -x509 -new -nodes -key harbor-ca.key -subj "/CN=harbor.zixuan.net" -days 7120 -out harbor-ca.crt
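Optionally, the generated certificate can be sanity-checked before it is referenced in harbor.yml:
[root@test-harbor-152 certs]# openssl x509 -in harbor-ca.crt -noout -subject -dates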
[root@test-harbor-152 certs]# vim /usr/local/harbor/harbor.yml
# Configuration file of Harbor
# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: harbor.zixuan.net
# http related config
http:
# port for http, default is 80. If https enabled, this port will redirect to https port
port: 80
# https related config
https:
# https port for harbor, default is 443
port: 443
# The path of cert and key files for nginx
certificate: /data/ops/app/certs/harbor-ca.crt
private_key: /data/ops/app/certs/harbor-ca.key
....
# The default data volume
data_volume: /data/ops/app/harbor/
# install harbor (--with-trivy also deploys the Trivy image scanner)
[root@test-harbor-152 harbor]# ./install.sh --with-trivy
2.1.2 Sync the certificate to the client node and test logging in:
# create the directory on the client that holds the registry certificate
root@test-docker-151:~# mkdir /etc/docker/certs.d/harbor.zixuan.net -p
# copy the certificate to the client
[root@test-harbor-152 harbor]# scp /data/ops/app/certs/harbor-ca.crt 172.18.1.151:/etc/docker/certs.d/harbor.zixuan.net/
# add the host entry on the client
root@test-docker-151:~# vim /etc/hosts
172.18.1.160 harbor.zixuan.net
# restart docker
root@test-docker-151:~# systemctl restart docker
# verify the login
root@test-docker-151:~# docker login harbor.zixuan.net
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
# tag an image
root@test-docker-151:~# docker tag reg.local.com/library/centos-base:v1 harbor.zixuan.net/library/centos-base:v1
# push the image as a test
root@test-docker-151:~# docker push harbor.zixuan.net/library/centos-base:v1
2.2 Deploy the highly available load balancer:
2.2.1: keepalived
[root@test-harbor-152 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 1
priority 100
advert_int 3
unicast_src_ip 172.18.1.152
unicast_peer {
172.18.1.153
}
authentication {
auth_type PASS
auth_pass 123abc
}
virtual_ipaddress {
172.18.1.160 dev eth0 label eth0:1
}
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 2
priority 100
advert_int 3
unicast_src_ip 172.18.1.152
unicast_peer {
172.18.1.153
}
authentication {
auth_type PASS
auth_pass 123abc
}
virtual_ipaddress {
172.18.1.210 dev eth0 label eth0:2
}
}
[root@test-harbor-152 ~]# systemctl restart keepalived
[root@test-harbor-152 ~]# systemctl status keepalived
[root@test-harbor-153 ~]# vim /etc/keepalived/keepalived.conf
vrrp_instance VI_1 {
state BACKUP
interface eth0
virtual_router_id 1
priority 100
advert_int 3
unicast_src_ip 172.18.1.153
unicast_peer {
172.18.1.152
}
authentication {
auth_type PASS
auth_pass 123abc
}
virtual_ipaddress {
172.18.1.160 dev eth0 label eth0:1
}
}
vrrp_instance VI_2 {
state MASTER
interface eth0
virtual_router_id 2
priority 100
advert_int 3
unicast_src_ip 172.18.1.153
unicast_peer {
172.18.1.152
}
authentication {
auth_type PASS
auth_pass 123abc
}
virtual_ipaddress {
172.18.1.210 dev eth0 label eth0:2
}
}
[root@test-harbor-153 ~]# systemctl restart keepalived
[root@test-harbor-153 ~]# systemctl status keepalived
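After keepalived is restarted on both nodes, the VIPs should be visible on the current MASTER of each instance (a quick check using this environment's interface name):
[root@test-harbor-152 ~]# ip addr show eth0 | grep -E '172.18.1.160|172.18.1.210'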
2.2.2:haproxy
[root@test-harbor-152 ~]# vim /etc/haproxy/haproxy.cfg
listen harbor_443
bind 172.18.1.160:8443
mode tcp
# balance source
server 172.18.1.152 172.18.1.152:443 check inter 2000 fall 3 rise 5
server 172.18.1.153 172.18.1.153:443 check inter 2000 fall 3 rise 5
listen _k8s_api_nodes_6443
bind 172.18.1.210:6443
mode tcp
server 172.18.1.200 172.18.1.200:6443 check inter 2000 fall 3 rise 5
server 172.18.1.201 172.18.1.201:6443 check inter 2000 fall 3 rise 5
server 172.18.1.202 172.18.1.202:6443 check inter 2000 fall 3 rise 5
[root@test-harbor-153 ~]# vim /etc/haproxy/haproxy.cfg
listen harbor_443
bind 172.18.1.160:8443
mode tcp
# balance source
server 172.18.1.152 172.18.1.152:443 check inter 2000 fall 3 rise 5
server 172.18.1.153 172.18.1.153:443 check inter 2000 fall 3 rise 5
listen _k8s_api_nodes_6443
bind 172.18.1.210:6443
mode tcp
server 172.18.1.200 172.18.1.200:6443 check inter 2000 fall 3 rise 5
server 172.18.1.201 172.18.1.201:6443 check inter 2000 fall 3 rise 5
server 172.18.1.202 172.18.1.202:6443 check inter 2000 fall 3 rise 5
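On both nodes the configuration can be syntax-checked before haproxy is restarted (standard haproxy usage):
[root@test-harbor-152 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
[root@test-harbor-152 ~]# systemctl restart haproxy && systemctl status haproxy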
2.3 Install ansible on the deployment node
root@test-docker-151:~# apt install python3-pip git
root@test-docker-151:~# pip3 install ansible -i https://mirrors.aliyun.com/pypi/simple/
root@test-docker-151:~# ansible --version
[DEPRECATION WARNING]: Ansible will require Python 3.8 or newer on the controller starting with Ansible 2.12. Current version: 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]. This feature will be removed from ansible-core in version 2.12. Deprecation
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
ansible [core 2.11.5]
config file = None
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.6.9 (default, Jan 26 2021, 15:33:00) [GCC 8.4.0]
jinja version = 3.0.1
libyaml = True
2.4: Configure passwordless SSH and distribute the Harbor certificate for HTTPS logins
# create the directory for the registry certificate
root@test-docker-151:~# mkdir /etc/docker/certs.d/harbor.zixuan.net -p
# copy the certificate from the harbor node to the ansible deployment node
[root@test-harbor-152 ~]# scp /data/ops/app/certs/harbor-ca.crt 172.18.1.151:/etc/docker/certs.d/harbor.zixuan.net/
# test on the ansible deployment node
root@test-docker-151:~# vim /etc/hosts
root@test-docker-151:~# systemctl restart docker
root@test-docker-151:~# docker login harbor.zixuan.net
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
# generate an SSH key pair
root@test-docker-151:~# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)?
# install sshpass, used to push the public key to every k8s server
root@test-docker-151:~# apt-get install sshpass
# script that distributes the public key and Harbor certificate to the masters and nodes:
root@test-docker-151:~# cat scp-key.sh
#!/bin/bash
IP="
172.18.1.200
172.18.1.201
172.18.1.202
172.18.1.203
172.18.1.204
172.18.1.205
172.18.1.206
172.18.1.207
172.18.1.208
"
for node in ${IP};do
  sshpass -p 123456 ssh-copy-id ${node} -o StrictHostKeyChecking=no
  if [ $? -eq 0 ];then
    echo "${node} ssh key copied"
    ssh ${node} "mkdir /etc/docker/certs.d/harbor.zixuan.net -p"
    echo "Harbor cert directory created!"
    scp /etc/docker/certs.d/harbor.zixuan.net/harbor-ca.crt ${node}:/etc/docker/certs.d/harbor.zixuan.net/harbor-ca.crt
    echo "Harbor cert copied!"
    ssh ${node} 'echo "172.18.1.160 harbor.zixuan.net" >> /etc/hosts'
    echo "hosts entry added"
    scp -r /root/.docker ${node}:/root/
    echo "Harbor auth file copied!"
  else
    echo "${node} ssh key copy failed"
  fi
done
2.5: Deploy the Kubernetes cluster with ansible (kubeasz)
2.5.1: Download the easzlab install script and the installation files
root@test-docker-151:~# cd /usr/local/src/
root@test-docker-151:/usr/local/src# export release=3.1.0
root@test-docker-151:~# curl -C- -fLO --retry 3 https://github.com/easzlab/kubeasz/releases/download/${release}/ezdown
root@test-docker-151:~# cp ezdown bak.ezdown
root@test-docker-151:~# chmod a+x ezdown
root@test-docker-151:~# vim ezdown
DOCKER_VER=19.03.15 # docker version to install
K8S_BIN_VER=v1.21.0 # k8s version; the script pulls the matching easzlab/kubeasz-k8s-bin:v1.21.0 image, so check hub.docker.com for the version you want to use
BASE="/etc/kubeasz" # where the configuration files and images are downloaded
root@test-docker-151:~# bash ./ezdown -D
root@test-docker-151:~# cd /etc/kubeasz/
root@test-docker-151:/etc/kubeasz# ls
ansible.cfg bin docs down example ezctl ezdown manifests pics playbooks README.md roles tools
# create a cluster named k8s-01
root@test-docker-151:/etc/kubeasz# ./ezctl new k8s-01
2.5.2: Edit the kubeasz hosts file
root@test-docker-151:~# vim /etc/kubeasz/clusters/k8s-01/hosts
# change this: list the IPs of the etcd, master, and node machines
# 'etcd' cluster should have odd member(s) (1,3,5,...)
[etcd]
172.18.1.203
172.18.1.204
172.18.1.205
# change this: start with two masters; a third will be added later as an exercise
# master node(s)
[kube_master]
172.18.1.200
172.18.1.201
# change this: start with two nodes; more will be added later as an exercise
# work node(s)
[kube_node]
172.18.1.206
172.18.1.207
# [optional] harbor server, a private docker registry
# 'NEW_INSTALL': 'true' to install a harbor server; 'false' to integrate with existed one
[harbor]
#192.168.1.8 NEW_INSTALL=false
# change this: the LB addresses; the load balancer was already deployed above, so just fill in its addresses
# [optional] loadbalance for accessing k8s from outside
[ex_lb]
172.18.1.152 LB_ROLE=backup EX_APISERVER_VIP=172.18.1.210 EX_APISERVER_PORT=6443
172.18.1.153 LB_ROLE=master EX_APISERVER_VIP=172.18.1.210 EX_APISERVER_PORT=6443
# [optional] ntp server for the cluster
[chrony]
#192.168.1.1
[all:vars]
# --------- Main Variables ---------------
# Secure port for apiservers
SECURE_PORT="6443"
# Cluster container-runtime supported: docker, containerd
CONTAINER_RUNTIME="docker"
# change this: switch to calico
# Network plugins supported: calico, flannel, kube-router, cilium, kube-ovn
CLUSTER_NETWORK="calico"
# change this: switch to ipvs
# Service proxy mode of kube-proxy: 'iptables' or 'ipvs'
PROXY_MODE="ipvs"
# change this: the CIDR must not conflict with the existing environment
# K8S Service CIDR, not overlap with node(host) networking
SERVICE_CIDR="10.150.0.0/16"
# change this: the CIDR must not conflict with the existing environment
# Cluster CIDR (Pod CIDR), not overlap with node(host) networking
CLUSTER_CIDR="10.180.0.0/16"
# change this: in production, size the range according to business needs
# NodePort Range
NODE_PORT_RANGE="30000-42767"
# change this: the cluster DNS domain (kept in the same zixuan naming scheme as the harbor domain)
# Cluster DNS Domain
CLUSTER_DNS_DOMAIN="zixuan.local"
# -------- Additional Variables (don't change the default value right now) ---
# Binaries Directory
# change this: put the binaries in a directory that is not easily removed by mistake
bin_dir="/usr/local/bin"
# change this: the default working directory
# Deploy Directory (kubeasz workspace)
base_dir="/etc/kubeasz"
# Directory for a specific cluster
cluster_dir="{{ base_dir }}/clusters/k8s-01"
# CA and other components cert/key Directory
ca_dir="/etc/kubernetes/ssl"
2.5.3: Edit the kubeasz config.yml file
root@test-docker-151:/etc/kubeasz# vim /etc/kubeasz/clusters/k8s-01/config.yml
############################
# prepare
############################
# 可选离线安装系统软件包 (offline|online)
INSTALL_SOURCE: "online"
# 可选进行系统安全加固 github.com/dev-sec/ansible-collection-hardening
OS_HARDEN: false
# 设置时间源服务器【重要:集群内机器时间必须同步】
ntp_servers:
- "ntp1.aliyun.com"
- "time1.cloud.tencent.com"
- "0.cn.pool.ntp.org"
# 设置允许内部时间同步的网络段,比如"10.0.0.0/8",默认全部允许
local_network: "0.0.0.0/0"
############################
# role:deploy
############################
# default: ca will expire in 100 years
# default: certs issued by the ca will expire in 50 years
CA_EXPIRY: "876000h"
CERT_EXPIRY: "438000h"
# kubeconfig 配置参数
CLUSTER_NAME: "cluster1"
CONTEXT_NAME: "context-{{ CLUSTER_NAME }}"
############################
# role:etcd
############################
# 设置不同的wal目录,可以避免磁盘io竞争,提高性能
ETCD_DATA_DIR: "/var/lib/etcd"
ETCD_WAL_DIR: ""
############################
# role:runtime [containerd,docker]
############################
# ------------------------------------------- containerd
# [.]启用容器仓库镜像
ENABLE_MIRROR_REGISTRY: true
# change this: pause-amd64 is pulled from abroad by default; pull it with docker and push it to the local registry instead
# [containerd]基础容器镜像
SANDBOX_IMAGE: "harbor.zixuan.net/baseimages/pause-amd64:3.4.1"
# change this: move the directory under /data/
# [containerd]容器持久化存储目录
CONTAINERD_STORAGE_DIR: "/data/ops/app/containerd"
# ------------------------------------------- docker
# [docker]容器存储目录
# change this: move the directory under /data/
DOCKER_STORAGE_DIR: "/data/ops/app/docker"
# [docker]开启Restful API
ENABLE_REMOTE_API: false
# change this: trusted registry addresses; non-HTTPS registries must be listed here
# [docker]信任的HTTP仓库
INSECURE_REG: '["127.0.0.1/8","172.18.1.160"]'
############################
# role:kube-master
############################
# k8s 集群 master 节点证书配置,可以添加多个ip和域名(比如增加公网ip和域名)
MASTER_CERT_HOSTS:
- "10.1.1.1"
- "k8s.test.io"
#- "www.test.com"
# node 节点上 pod 网段掩码长度(决定每个节点最多能分配的pod ip地址)
# 如果flannel 使用 --kube-subnet-mgr 参数,那么它将读取该设置为每个节点分配pod网段
# https://github.com/coreos/flannel/issues/847
NODE_CIDR_LEN: 24
############################
# role:kube-node
############################
# Kubelet 根目录
KUBELET_ROOT_DIR: "/var/lib/kubelet"
# change this: set according to the production environment
# node节点最大pod 数
MAX_PODS: 300
# 配置为kube组件(kubelet,kube-proxy,dockerd等)预留的资源量
# 数值设置详见templates/kubelet-config.yaml.j2
KUBE_RESERVED_ENABLED: "yes"
# k8s 官方不建议草率开启 system-reserved, 除非你基于长期监控,了解系统的资源占用状况;
# 并且随着系统运行时间,需要适当增加资源预留,数值设置详见templates/kubelet-config.yaml.j2
# 系统预留设置基于 4c/8g 虚机,最小化安装系统服务,如果使用高性能物理机可以适当增加预留
# 另外,集群安装时候apiserver等资源占用会短时较大,建议至少预留1g内存
SYS_RESERVED_ENABLED: "no"
# haproxy balance mode
BALANCE_ALG: "roundrobin"
############################
# role:network [flannel,calico,cilium,kube-ovn,kube-router]
############################
# ------------------------------------------- flannel
# [flannel]设置flannel 后端"host-gw","vxlan"等
FLANNEL_BACKEND: "vxlan"
DIRECT_ROUTING: false
# [flannel] flanneld_image: "quay.io/coreos/flannel:v0.10.0-amd64"
flannelVer: "v0.13.0-amd64"
flanneld_image: "easzlab/flannel:{{ flannelVer }}"
# [flannel]离线镜像tar包
flannel_offline: "flannel_{{ flannelVer }}.tar"
# ------------------------------------------- calico
# [calico]设置 CALICO_IPV4POOL_IPIP=“off”,可以提高网络性能,条件限制详见 docs/setup/calico.md
CALICO_IPV4POOL_IPIP: "Always"
# [calico]设置 calico-node使用的host IP,bgp邻居通过该地址建立,可手工指定也可以自动发现
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico]设置calico 网络 backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"
# [calico]更新支持calico 版本: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"
# [calico]calico 主版本
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# [calico]离线镜像tar包
calico_offline: "calico_{{ calico_ver }}.tar"
# ------------------------------------------- cilium
# [cilium]CILIUM_ETCD_OPERATOR 创建的 etcd 集群节点数 1,3,5,7...
ETCD_CLUSTER_SIZE: 1
# [cilium]镜像版本
cilium_ver: "v1.4.1"
# [cilium]离线镜像tar包
cilium_offline: "cilium_{{ cilium_ver }}.tar"
# ------------------------------------------- kube-ovn
# [kube-ovn]选择 OVN DB and OVN Control Plane 节点,默认为第一个master节点
OVN_DB_NODE: "{{ groups['kube_master'][0] }}"
# [kube-ovn]离线镜像tar包
kube_ovn_ver: "v1.5.3"
kube_ovn_offline: "kube_ovn_{{ kube_ovn_ver }}.tar"
# ------------------------------------------- kube-router
# [kube-router]公有云上存在限制,一般需要始终开启 ipinip;自有环境可以设置为 "subnet"
OVERLAY_TYPE: "full"
# [kube-router]NetworkPolicy 支持开关
FIREWALL_ENABLE: "true"
# [kube-router]kube-router 镜像版本
kube_router_ver: "v0.3.1"
busybox_ver: "1.28.4"
# [kube-router]kube-router 离线镜像tar包
kuberouter_offline: "kube-router_{{ kube_router_ver }}.tar"
busybox_offline: "busybox_{{ busybox_ver }}.tar"
############################
# role:cluster-addon
############################
# change this: set to no; in production it can be enabled (left off here so it can be installed manually later)
# coredns 自动安装
dns_install: "no"
corednsVer: "1.8.0"
# change this: set to false; in production it can be enabled (left off here so it can be configured manually later)
ENABLE_LOCAL_DNS_CACHE: false
dnsNodeCacheVer: "1.17.0"
# 设置 local dns cache 地址
LOCAL_DNS_CACHE: "169.254.20.10"
# metric server 自动安装
metricsserver_install: "no"
metricsVer: "v0.3.6"
# change this: set to no; in production it can be enabled (left off here so it can be installed manually later)
# dashboard 自动安装
dashboard_install: "no"
dashboardVer: "v2.2.0"
dashboardMetricsScraperVer: "v1.0.6"
# change this: set to no; in production it can be enabled (left off here so it can be installed manually later)
# ingress 自动安装
ingress_install: "no"
ingress_backend: "traefik"
traefik_chart_ver: "9.12.3"
# change this: set to no; in production it can be enabled (left off here so it can be installed manually later)
# prometheus 自动安装
prom_install: "no"
prom_namespace: "monitor"
prom_chart_ver: "12.10.6"
# nfs-provisioner 自动安装
nfs_provisioner_install: "no"
nfs_provisioner_namespace: "kube-system"
nfs_provisioner_ver: "v4.0.1"
nfs_storage_class: "managed-nfs-storage"
nfs_server: "192.168.1.10"
nfs_path: "/data/nfs"
############################
# role:harbor
############################
# harbor version,完整版本号
HARBOR_VER: "v2.1.3"
HARBOR_DOMAIN: "harbor.zixuan.com"
HARBOR_TLS_PORT: 8443
# if set 'false', you need to put certs named harbor.pem and harbor-key.pem in directory 'down'
HARBOR_SELF_SIGNED_CERT: true
# install extra component
HARBOR_WITH_NOTARY: false
HARBOR_WITH_TRIVY: false
HARBOR_WITH_CLAIR: false
HARBOR_WITH_CHARTMUSEUM: true
2.5.4: Other template files used by default
root@test-docker-151:~# tree /etc/kubeasz/roles/prepare/
/etc/kubeasz/roles/prepare/
├── files
│ └── sctp.conf
├── tasks
│ ├── centos.yml
│ ├── common.yml
│ ├── main.yml
│ ├── offline.yml
│ └── ubuntu.yml
└── templates
├── 10-k8s-modules.conf.j2
├── 30-k8s-ulimits.conf.j2
├── 95-k8s-journald.conf.j2
└── 95-k8s-sysctl.conf.j2
3 directories, 10 files
# the kube-proxy scheduler algorithm can be set before the initial install, according to your needs; leaving it unchanged is also fine
root@test-k8s-master-200:/etc/kubeasz# vim /etc/kubeasz/roles/kube-node/templates/kube-proxy-config.yaml.j2
...
mode: "{{ PROXY_MODE }}"
ipvs:
scheduler: wrr
# review docker's daemon.json template (no changes needed)
root@test-k8s-master-200:/etc/kubeasz# cat /etc/kubeasz/roles/docker/templates/daemon.json.j2
{
"data-root": "{{ DOCKER_STORAGE_DIR }}",
"exec-opts": ["native.cgroupdriver={{ CGROUP_DRIVER }}"],
{% if ENABLE_MIRROR_REGISTRY %}
"registry-mirrors": [
"https://docker.mirrors.ustc.edu.cn",
"http://hub-mirror.c.163.com"
],
{% endif %}
{% if ENABLE_REMOTE_API %}
"hosts": ["tcp://0.0.0.0:2376", "unix:///var/run/docker.sock"],
{% endif %}
"insecure-registries": {{ INSECURE_REG }},
"max-concurrent-downloads": 10,
"live-restore": true,
"log-driver": "json-file",
"log-level": "warn",
"log-opts": {
"max-size": "50m",
"max-file": "1"
},
"storage-driver": "overlay2"
}
# location of the coredns template
vim /etc/kubeasz/roles/cluster-addon/templates/dns/coredns.yaml.j2
# location of the dashboard template
root@test-docker-151:/etc/kubeasz# vim /etc/kubeasz/roles/cluster-addon/templates/dashboard/kubernetes-dashboard.yaml
2.5.5: The setup help menu (we deploy step by step here for the lab environment; in production you can run 'all')
root@test-docker-151:/etc/kubeasz# ./ezctl help setup
Usage: ezctl setup <cluster> <step>
available steps:
01 prepare to prepare CA/certs & kubeconfig & other system settings
02 etcd to setup the etcd cluster
03 container-runtime to setup the container runtime(docker or containerd)
04 kube-master to setup the master nodes
05 kube-node to setup the worker nodes
06 network to setup the network plugin
07 cluster-addon to setup other useful plugins
90 all to run 01~07 all at once
10 ex-lb to install external loadbalance for accessing k8s from outside
11 harbor to install a new harbor server or to integrate with an existed one
examples: ./ezctl setup test-k8s 01 (or ./ezctl setup test-k8s prepare)
./ezctl setup test-k8s 02 (or ./ezctl setup test-k8s etcd)
./ezctl setup test-k8s all
./ezctl setup test-k8s 04 -t restart_master
2.5.6: Prepare the CA and base system settings (step 01)
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 01
...
PLAY RECAP *****************************************************************************************************************************************************************************************************************************************************
172.18.1.200 : ok=25 changed=8 unreachable=0 failed=0 skipped=113 rescued=0 ignored=0
172.18.1.201 : ok=25 changed=10 unreachable=0 failed=0 skipped=113 rescued=0 ignored=0
172.18.1.203 : ok=21 changed=5 unreachable=0 failed=0 skipped=117 rescued=0 ignored=0
172.18.1.204 : ok=21 changed=5 unreachable=0 failed=0 skipped=117 rescued=0 ignored=0
172.18.1.205 : ok=21 changed=6 unreachable=0 failed=0 skipped=117 rescued=0 ignored=0
172.18.1.206 : ok=24 changed=9 unreachable=0 failed=0 skipped=114 rescued=0 ignored=0
localhost : ok=34 changed=24 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0
2.5.7: Deploy the etcd cluster (step 02)
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 02
172.18.1.203 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.18.1.204 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
172.18.1.205 : ok=10 changed=7 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
# verify the health of every etcd member, from any etcd node:
root@test-k8s-etcd-205:~# export NODE_IPS="172.18.1.203 172.18.1.204 172.18.1.205"
root@test-k8s-etcd-205:~# for ip in ${NODE_IPS}; do ETCDCTL_API=3 etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem endpoint health; done
# output like the following means the etcd cluster is healthy; anything else indicates a problem
https://172.18.1.203:2379 is healthy: successfully committed proposal: took = 13.132751ms
https://172.18.1.204:2379 is healthy: successfully committed proposal: took = 14.141567ms
https://172.18.1.205:2379 is healthy: successfully committed proposal: took = 12.364205ms
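The member list is another quick health signal and can be queried from any etcd node with the same certificate paths:
root@test-k8s-etcd-205:~# ETCDCTL_API=3 etcdctl --endpoints=https://172.18.1.203:2379 --cacert=/etc/kubernetes/ssl/ca.pem --cert=/etc/kubernetes/ssl/etcd.pem --key=/etc/kubernetes/ssl/etcd-key.pem member list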
2.5.8: Deploy the container runtime on the masters and nodes (step 03)
root@test-docker-151:~# cd /etc/kubeasz/
root@test-docker-151:/etc/kubeasz# grep SANDBOX_IMAGE ./clusters/* -R
./clusters/k8s-01/config.yml:SANDBOX_IMAGE: "harbor.zixuan.net/baseimages/pause-amd64:3.4.1"
root@test-docker-151:/etc/kubeasz# docker pull easzlab/pause-amd64:3.4.1
3.4.1: Pulling from easzlab/pause-amd64
fac425775c9d: Pull complete
Digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d
Status: Downloaded newer image for easzlab/pause-amd64:3.4.1
docker.io/easzlab/pause-amd64:3.4.1
root@test-k8s-master-200:/etc/kubeasz# docker tag easzlab/pause-amd64:3.4.1 harbor.zixuan.net/baseimages/pause-amd64:3.4.1
root@test-docker-151:/etc/kubeasz# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
easzlab/pause-amd64 3.4.1 0f8457a4c2ec 8 months ago 683kB
harbor.zixuan.net/baseimages/pause-amd64 3.4.1 0f8457a4c2ec 8 months ago 683kB
# test pushing the base image
root@test-docker-151:/etc/kubeasz# docker push harbor.zixuan.net/baseimages/pause-amd64:3.4.1
The push refers to repository [harbor.zixuan.net/baseimages/pause-amd64]
915e8870f7d1: Pushed
3.4.1: digest: sha256:9ec1e780f5c0196af7b28f135ffc0533eddcb0a54a0ba8b32943303ce76fe70d size: 526
# serve pause-amd64 from the local registry (the default download is reasonably fast too, so this is optional)
root@test-k8s-master-200:/etc/kubeasz# vim ./clusters/k8s-01/config.yml
# [containerd]基础容器镜像
SANDBOX_IMAGE: "harbor.zixuan.net/baseimages/pause-amd64:3.4.1"
# the nodes must have docker installed; if it is not already there, the script detects this and installs it online by itself (which is not particularly slow)
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 03
# on the nodes you can see that docker was installed automatically
root@test-k8s-node-206:~# docker version
Client: Docker Engine - Community
Version: 19.03.15
API version: 1.40
Go version: go1.13.15
Git commit: 99e3ed8
Built: Sat Jan 30 03:11:43 2021
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 19.03.15
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 99e3ed8
Built: Sat Jan 30 03:18:13 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.3.9
GitCommit: ea765aba0d05254012b0b9e595e995c09186427f
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
2.5.9: Deploy the master nodes (step 04)
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 04
# the masters are now Ready
root@test-docker-151:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 38s v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 38s v1.21.0
2.5.10: Deploy the worker nodes (step 05)
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 05
172.18.1.206 : ok=35 changed=18 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
172.18.1.207 : ok=37 changed=19 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0
root@test-docker-151:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 3m32s v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 3m32s v1.21.0
172.18.1.206 Ready node 28s v1.21.0
172.18.1.207 Ready node 29s v1.21.0
2.5.11: Deploy the calico network plugin (step 06)
# (the default download is reasonably fast, so using the local registry is optional)
root@test-docker-151:/etc/kubeasz# vim ./clusters/k8s-01/config.yml
...
# ------------------------------------------- calico
# [calico]设置 CALICO_IPV4POOL_IPIP=“off”,可以提高网络性能,条件限制详见 docs/setup/calico.md
CALICO_IPV4POOL_IPIP: "Always"
# [calico]设置 calico-node使用的host IP,bgp邻居通过该地址建立,可手工指定也可以自动发现
IP_AUTODETECTION_METHOD: "can-reach={{ groups['kube_master'][0] }}"
# [calico]设置calico 网络 backend: brid, vxlan, none
CALICO_NETWORKING_BACKEND: "brid"
# [calico]更新支持calico 版本: [v3.3.x] [v3.4.x] [v3.8.x] [v3.15.x]
calico_ver: "v3.15.3"
# [calico]calico 主版本
calico_ver_main: "{{ calico_ver.split('.')[0] }}.{{ calico_ver.split('.')[1] }}"
# [calico]离线镜像tar包
calico_offline: "calico_{{ calico_ver }}.tar"
# push the calico images to the local registry
root@test-k8s-master-200:/etc/kubeasz# grep image roles/calico/templates/calico-v3.15.yaml.j2
image: calico/cni:v3.15.3
image: calico/pod2daemon-flexvol:v3.15.3
image: calico/node:v3.15.3
image: calico/kube-controllers:v3.15.3
root@test-docker-151:/etc/kubeasz# docker tag calico/cni:v3.15.3 harbor.zixuan.net/baseimages/calico-cni:v3.15.3
root@test-docker-151:/etc/kubeasz# docker push harbor.zixuan.net/baseimages/calico-cni:v3.15.3
root@test-docker-151:/etc/kubeasz# docker tag docker.io/calico/pod2daemon-flexvol:v3.15.3 harbor.zixuan.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
root@test-docker-151:/etc/kubeasz# docker push harbor.zixuan.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
root@test-docker-151:/etc/kubeasz# docker tag calico/node:v3.15.3 harbor.zixuan.net/baseimages/calico-node:v3.15.3
root@test-docker-151:/etc/kubeasz# docker push harbor.zixuan.net/baseimages/calico-node:v3.15.3
root@test-docker-151:/etc/kubeasz# docker tag calico/kube-controllers:v3.15.3 harbor.zixuan.net/baseimages/calico-kube-controllers:v3.15.3
root@test-docker-151:/etc/kubeasz# docker push harbor.zixuan.net/baseimages/calico-kube-controllers:v3.15.3
# replace the four images in the template with the local copies
root@test-docker-151:/etc/kubeasz# grep image roles/calico/templates/calico-v3.15.yaml.j2
image: harbor.zixuan.net/baseimages/calico-cni:v3.15.3
image: harbor.zixuan.net/baseimages/calico-pod2daemon-flexvol:v3.15.3
image: harbor.zixuan.net/baseimages/calico-node:v3.15.3
image: harbor.zixuan.net/baseimages/calico-kube-controllers:v3.15.3
# run the calico installation
root@test-docker-151:/etc/kubeasz# ./ezctl setup k8s-01 06
root@test-k8s-master-200:~# kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system calico-kube-controllers-647f956d86-pw459 1/1 Running 0 6m44s 172.18.1.207 172.18.1.207 <none> <none>
kube-system calico-node-8hnvc 1/1 Running 0 6m44s 172.18.1.206 172.18.1.206 <none> <none>
kube-system calico-node-mznfg 1/1 Running 1 6m44s 172.18.1.200 172.18.1.200 <none> <none>
kube-system calico-node-t8gtd 1/1 Running 0 6m44s 172.18.1.201 172.18.1.201 <none> <none>
kube-system calico-node-z7x9n 1/1 Running 0 6m44s 172.18.1.207 172.18.1.207 <none> <none>
2.5.12: Create test containers and verify network connectivity:
root@test-k8s-master-200:~# docker pull alpine
root@test-k8s-master-200:~# docker tag alpine harbor.zixuan.net/baseimages/alpine
root@test-k8s-master-200:~# docker push harbor.zixuan.net/baseimages/alpine
root@test-k8s-master-200:~# kubectl run net-test1 --image=harbor.zixuan.net/baseimages/alpine sleep 360000
pod/net-test1 created
root@test-k8s-master-200:~# kubectl run net-test2 --image=harbor.zixuan.net/baseimages/alpine sleep 360000
pod/net-test2 created
root@test-k8s-master-200:~# kubectl run net-test3 --image=harbor.zixuan.net/baseimages/alpine sleep 360000
pod/net-test3 created
root@test-k8s-master-200:~# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
net-test1 1/1 Running 0 7m7s 10.180.67.129 172.18.1.206 <none> <none>
net-test2 1/1 Running 0 4m25s 10.180.67.130 172.18.1.206 <none> <none>
net-test3 1/1 Running 0 4m17s 10.180.248.65 172.18.1.207 <none> <none>
root@test-k8s-master-200:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping 223.6.6.6
PING 223.6.6.6 (223.6.6.6): 56 data bytes
64 bytes from 223.6.6.6: seq=0 ttl=116 time=5.434 ms
64 bytes from 223.6.6.6: seq=1 ttl=116 time=5.435 ms
^C
--- 223.6.6.6 ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 5.434/5.434/5.435 ms
/ # ping 10.180.248.65
PING 10.180.248.65 (10.180.248.65): 56 data bytes
64 bytes from 10.180.248.65: seq=0 ttl=62 time=0.413 ms
64 bytes from 10.180.248.65: seq=1 ttl=62 time=0.579 ms
2.6: Deploy kube-dns
2.6.1 Deploy CoreDNS:
Main configuration parameters:
- error: log errors to stdout.
- health: CoreDNS health is reported at http://localhost:8080/health.
- cache: enable CoreDNS caching.
- reload: automatically reload the configuration; changes to the ConfigMap take effect about two minutes later.
- loadbalance: if a domain name has multiple records, they are rotated (round-robin) in answers.
- cache 30: cache TTL in seconds.
- kubernetes: CoreDNS resolves names within the specified service domain against Kubernetes Services.
- forward: queries for names outside the cluster domain are forwarded to the specified resolvers (/etc/resolv.conf).
- prometheus: CoreDNS metrics can be scraped by Prometheus at http://coredns-svc:9153/metrics.
- ready: once CoreDNS has finished starting, the /ready endpoint returns HTTP 200; otherwise it returns an error.
# create the coredns yaml
root@test-k8s-master-200:~# vim /data/ops/app/kubernetes/coredns.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: coredns
namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
rules:
- apiGroups:
- ""
resources:
- endpoints
- services
- pods
- namespaces
verbs:
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: system:coredns
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:coredns
subjects:
- kind: ServiceAccount
name: coredns
namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
name: coredns
namespace: kube-system
data:
Corefile: |
.:53 {
errors
health {
lameduck 5s
}
bind 0.0.0.0
ready
kubernetes zixuan.local in-addr.arpa ip6.arpa {
fallthrough in-addr.arpa ip6.arpa
}
prometheus :9153
forward . /etc/resolv.conf {
max_concurrent 1000
}
cache 30
loop
reload
loadbalance
}
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: coredns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/name: "CoreDNS"
spec:
# replicas: not specified here:
# 1. Default is 1.
# 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
type: RollingUpdate
rollingUpdate:
maxUnavailable: 1
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
spec:
priorityClassName: system-cluster-critical
serviceAccountName: coredns
tolerations:
- key: "CriticalAddonsOnly"
operator: "Exists"
nodeSelector:
kubernetes.io/os: linux
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: k8s-app
operator: In
values: ["kube-dns"]
topologyKey: kubernetes.io/hostname
containers:
- name: coredns
image: coredns/coredns:1.8.3
imagePullPolicy: IfNotPresent
resources:
limits:
memory: 170Mi
requests:
cpu: 100m
memory: 70Mi
args: [ "-conf", "/etc/coredns/Corefile" ]
volumeMounts:
- name: config-volume
mountPath: /etc/coredns
readOnly: true
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- containerPort: 9153
name: metrics
protocol: TCP
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_BIND_SERVICE
drop:
- all
readOnlyRootFilesystem: true
livenessProbe:
httpGet:
path: /health
port: 8080
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /ready
port: 8181
scheme: HTTP
dnsPolicy: Default
volumes:
- name: config-volume
configMap:
name: coredns
items:
- key: Corefile
path: Corefile
---
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
annotations:
prometheus.io/port: "9153"
prometheus.io/scrape: "true"
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "CoreDNS"
spec:
type: NodePort
selector:
k8s-app: kube-dns
clusterIP: 10.150.0.2 # usually the second IP of the service CIDR; you can check it from a test Pod with: cat /etc/resolv.conf
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP
- name: metrics
port: 9153
protocol: TCP
targetPort: 9153
nodePort: 30009
# if the image is not present locally it will be pulled from the internet; to save time you can load it from a tarball
root@test-k8s-master-200:~# docker load -i /data/ops/packages/coredns-image-v1.8.3.tar.gz
# then apply the manifest to create coredns
root@test-k8s-master-200:~# kubectl apply -f /data/ops/app/kubernetes/coredns.yaml
# coredns-f97dc456d-ld4k7 in Running state means everything is fine
root@test-k8s-master-200:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1 1/1 Running 0 42m
default net-test2 1/1 Running 0 39m
default net-test3 1/1 Running 0 39m
kube-system calico-kube-controllers-647f956d86-pw459 1/1 Running 0 54m
kube-system calico-node-8hnvc 1/1 Running 0 54m
kube-system calico-node-mznfg 1/1 Running 1 54m
kube-system calico-node-t8gtd 1/1 Running 0 54m
kube-system calico-node-z7x9n 1/1 Running 0 54m
kube-system coredns-f97dc456d-ld4k7 1/1 Running 0 9m36s
# exec into one of the test pods and run a ping test
root@test-k8s-master-200:~# kubectl exec -it net-test1 sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # ping baidu.com
PING baidu.com (220.181.38.148): 56 data bytes
64 bytes from 220.181.38.148: seq=0 ttl=47 time=3.653 ms
64 bytes from 220.181.38.148: seq=1 ttl=47 time=3.604 ms
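With CoreDNS running, in-cluster service discovery can also be spot-checked from the same test Pod; the name suffix must match the CLUSTER_DNS_DOMAIN configured earlier:
/ # cat /etc/resolv.conf        # the nameserver should be the kube-dns clusterIP 10.150.0.2
/ # nslookup kubernetes.default.svc.zixuan.local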
2.6.2 Verify the CoreDNS metrics:
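The kube-dns Service defined above exposes the metrics port 9153 as NodePort 30009, so the Prometheus endpoint can be spot-checked against any node IP in this environment:
root@test-k8s-master-200:~# curl -s http://172.18.1.206:30009/metrics | head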
2.7: Deploy the dashboard
2.7.1: Modify the dashboard yaml configuration
# push the image to the local harbor
root@test-k8s-master-200:~# docker pull kubernetesui/dashboard:v2.3.1
root@test-k8s-master-200:~# docker tag kubernetesui/dashboard:v2.3.1 harbor.zixuan.net/baseimages/dashboard:v2.3.1
root@test-k8s-master-200:~# docker push harbor.zixuan.net/baseimages/dashboard:v2.3.1
# push the image to the local harbor
root@test-k8s-master-200:~# docker pull kubernetesui/metrics-scraper:v1.0.6
root@test-k8s-master-200:~# docker tag kubernetesui/metrics-scraper:v1.0.6 harbor.zixuan.net/baseimages/metrics-scraper:v1.0.6
root@test-k8s-master-200:~# docker push harbor.zixuan.net/baseimages/metrics-scraper:v1.0.6
Edit the dashboard yaml file
root@test-k8s-master-200:/data/ops/app/kubernetes# vim dashboard-v2.3.1.yaml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
type: NodePort # must be changed to NodePort here
ports:
- port: 443
targetPort: 8443
nodePort: 30002 # the port exposed for external access
selector:
k8s-app: kubernetes-dashboard
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-certs
namespace: kubernetes-dashboard
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-csrf
namespace: kubernetes-dashboard
type: Opaque
data:
csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-key-holder
namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard-settings
namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
rules:
# Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
resources: ["secrets"]
resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
verbs: ["get", "update", "delete"]
# Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
resources: ["configmaps"]
resourceNames: ["kubernetes-dashboard-settings"]
verbs: ["get", "update"]
# Allow Dashboard to get metrics.
- apiGroups: [""]
resources: ["services"]
resourceNames: ["heapster", "dashboard-metrics-scraper"]
verbs: ["proxy"]
- apiGroups: [""]
resources: ["services/proxy"]
resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
rules:
# Allow Metrics Scraper to get metrics from the Metrics server
- apiGroups: ["metrics.k8s.io"]
resources: ["pods", "nodes"]
verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: kubernetes-dashboard
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: kubernetes-dashboard
subjects:
- kind: ServiceAccount
name: kubernetes-dashboard
namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: kubernetes-dashboard
name: kubernetes-dashboard
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: kubernetes-dashboard
template:
metadata:
labels:
k8s-app: kubernetes-dashboard
spec:
containers:
- name: kubernetes-dashboard
image: harbor.zixuan.net/baseimages/dashboard:v2.3.1 # image from the local harbor
imagePullPolicy: Always
ports:
- containerPort: 8443
protocol: TCP
args:
- --auto-generate-certificates
- --namespace=kubernetes-dashboard
- --token-ttl=43200
# Uncomment the following line to manually specify Kubernetes API server Host
# If not specified, Dashboard will attempt to auto discover the API server and connect
# to it. Uncomment only if the default does not work.
# - --apiserver-host=http://my-address:port
volumeMounts:
- name: kubernetes-dashboard-certs
mountPath: /certs
# Create on-disk volume to store exec logs
- mountPath: /tmp
name: tmp-volume
livenessProbe:
httpGet:
scheme: HTTPS
path: /
port: 8443
initialDelaySeconds: 30
timeoutSeconds: 30
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
volumes:
- name: kubernetes-dashboard-certs
secret:
secretName: kubernetes-dashboard-certs
- name: tmp-volume
emptyDir: {}
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
ports:
- port: 8000
targetPort: 8000
selector:
k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
labels:
k8s-app: dashboard-metrics-scraper
name: dashboard-metrics-scraper
namespace: kubernetes-dashboard
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
k8s-app: dashboard-metrics-scraper
template:
metadata:
labels:
k8s-app: dashboard-metrics-scraper
annotations:
seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
spec:
containers:
- name: dashboard-metrics-scraper
image: harbor.zixuan.net/baseimages/metrics-scraper:v1.0.6 # image from the local harbor
ports:
- containerPort: 8000
protocol: TCP
livenessProbe:
httpGet:
scheme: HTTP
path: /
port: 8000
initialDelaySeconds: 30
timeoutSeconds: 30
volumeMounts:
- mountPath: /tmp
name: tmp-volume
securityContext:
allowPrivilegeEscalation: false
readOnlyRootFilesystem: true
runAsUser: 1001
runAsGroup: 2001
serviceAccountName: kubernetes-dashboard
nodeSelector:
"kubernetes.io/os": linux
# Comment the following tolerations if Dashboard must not be deployed on master
tolerations:
- key: node-role.kubernetes.io/master
effect: NoSchedule
volumes:
- name: tmp-volume
emptyDir: {}
2.7.2: Apply the dashboard manifest and verify
root@test-k8s-master-200:/data/ops/app/kubernetes# kubectl apply -f dashboard-v2.3.1.yaml
root@test-k8s-master-200:/data/ops/app/kubernetes# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
default net-test1 1/1 Running 0 4d
default net-test2 1/1 Running 0 4d
default net-test3 1/1 Running 0 4d
kube-system calico-kube-controllers-647f956d86-pw459 1/1 Running 0 4d1h
kube-system calico-node-8hnvc 1/1 Running 0 4d1h
kube-system calico-node-mznfg 1/1 Running 1 4d1h
kube-system calico-node-t8gtd 1/1 Running 0 4d1h
kube-system calico-node-z7x9n 1/1 Running 0 4d1h
kube-system coredns-f97dc456d-ld4k7 1/1 Running 0 4d
kubernetes-dashboard dashboard-metrics-scraper-7f4656df8-68rh8 1/1 Running 0 21m
kubernetes-dashboard kubernetes-dashboard-5df9659db-ghchr 1/1 Running 0 21m
# create a user for accessing the dashboard
root@test-k8s-master-200:/data/ops/app/kubernetes# vim admin-user.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: admin-user
namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: admin-user
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- kind: ServiceAccount
name: admin-user
namespace: kubernetes-dashboard
root@test-k8s-master-200:/data/ops/app/kubernetes# kubectl apply -f admin-user.yml
root@test-k8s-master-200:/data/ops/app/kubernetes# kubectl get secret -A | grep admin
kubernetes-dashboard admin-user-token-9s2hg kubernetes.io/service-account-token 3 16m
root@test-k8s-master-200:/data/ops/app/kubernetes# kubectl -n kubernetes-dashboard describe secret admin-user-token-9s2hg
Name: admin-user-token-9s2hg
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: e81d6eca-7c54-4aac-8532-4308d8cbd672
Type: kubernetes.io/service-account-token
Data
====
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImVFX085NjExLWk0VDZ2VXV5MjdlVVlEUU9JMF9aNTlsLUxKRVRLQUpyd2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlzMmhnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlODFkNmVjYS03YzU0LTRhYWMtODUzMi00MzA4ZDhjYmQ2NzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.ltNz9QfZ8ti9KwF3rgG0aoO39hh2zYsLtd8Uj8lcYYiDR3QfqYJKhGM_GEOIYtoK_0jZ0PTCeqfIQBnuEcT2dkDguLEEnTBwGBREuLlS_e7_S2rYCbCAN8Gil6G1lziIzgNO8Haaasn5U1T6FRrcavSXTCRxZC4f43ClHHk80vHmekyA_PWY81hYhrulQwUTi96IdzxsrIefiCr4A7lhtvVvCPr6TZeEYhnvmHLSymGBpF23r5C_QeeEftvJjkQLxTB_nsKpkN3LvYnsLPruno0l16trqnUTpWPXosZJkzL7GMsDmiPuwPa1u1q01RprwzgnZPgQ6_5O7Sx78SVAzQ
ca.crt: 1350 bytes
2.7.3: Log in to the dashboard with a kubeconfig
root@test-k8s-master-200:~# cp /root/.kube/config /opt/
root@test-k8s-master-200:~# vim /opt/config
# append the token at the bottom of the user section of the config, as '    token: <value>' (four leading spaces and a space after the colon)
token:eyJhbGciOiJSUzI1NiIsImtpZCI6ImVFX085NjExLWk0VDZ2VXV5MjdlVVlEUU9JMF9aNTlsLUxKRVRLQUpyd2sifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLTlzMmhnIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlODFkNmVjYS03YzU0LTRhYWMtODUzMi00MzA4ZDhjYmQ2NzIiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.ltNz9QfZ8ti9KwF3rgG0aoO39hh2zYsLtd8Uj8lcYYiDR3QfqYJKhGM_GEOIYtoK_0jZ0PTCeqfIQBnuEcT2dkDguLEEnTBwGBREuLlS_e7_S2rYCbCAN8Gil6G1lziIzgNO8Haaasn5U1T6FRrcavSXTCRxZC4f43ClHHk80vHmekyA_PWY81hYhrulQwUTi96IdzxsrIefiCr4A7lhtvVvCPr6TZeEYhnvmHLSymGBpF23r5C_QeeEftvJjkQLxTB_nsKpkN3LvYnsLPruno0l16trqnUTpWPXosZJkzL7GMsDmiPuwPa1u1q01RprwzgnZPgQ6_5O7Sx78SVAzQ
Copy this config file to any machine that needs to log in (it works much like an SSH private key).
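For reference, the user entry in the copied config then ends up looking roughly like this (the user name is whatever kubeasz generated; the token is abbreviated):
users:
- name: admin
  user:
    client-certificate-data: ...
    client-key-data: ...
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6...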
2.8: Cluster management
2.8.1 Add a node
root@test-docker-151:/etc/kubeasz# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d18h v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 4d18h v1.21.0
172.18.1.206 Ready node 4d18h v1.21.0
172.18.1.207 Ready node 4d18h v1.21.0
# add the node (docker may be installed in advance; otherwise the playbook installs it)
root@test-docker-151:/etc/kubeasz# ./ezctl add-node k8s-01 172.18.1.208
2.8.2 Add a master
root@test-docker-151:/etc/kubeasz# ./ezctl add-master k8s-01 172.18.1.202
root@test-docker-151:/etc/kubeasz# kubectl get node -A
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d19h v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 4d19h v1.21.0
172.18.1.202 Ready,SchedulingDisabled master 33m v1.21.0
172.18.1.206 Ready node 4d19h v1.21.0
172.18.1.207 Ready node 4d19h v1.21.0
172.18.1.208 Ready node 20m v1.21.0
2.8.3 Verify the status of the calico network component
root@test-k8s-master-200:~# calicoctl node status
Calico process is running.
IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 172.18.1.201 | node-to-node mesh | up | 03:40:54 | Established |
| 172.18.1.202 | node-to-node mesh | up | 03:39:32 | Established |
| 172.18.1.206 | node-to-node mesh | up | 03:40:14 | Established |
| 172.18.1.207 | node-to-node mesh | up | 03:39:32 | Established |
| 172.18.1.208 | node-to-node mesh | up | 03:59:15 | Established |
+--------------+-------------------+-------+----------+-------------+
IPv6 BGP status
No IPv6 peers found.
2.8.4 Verify the routes on a node
root@test-k8s-node-208:~# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
0.0.0.0 172.18.1.1 0.0.0.0 UG 0 0 0 eth0
10.180.0.0 172.18.1.200 255.255.255.255 UGH 0 0 0 tunl0
10.180.67.128 172.18.1.206 255.255.255.192 UG 0 0 0 tunl0
10.180.106.0 172.18.1.201 255.255.255.192 UG 0 0 0 tunl0
10.180.120.0 172.18.1.200 255.255.255.192 UG 0 0 0 tunl0
10.180.148.64 172.18.1.202 255.255.255.192 UG 0 0 0 tunl0
10.180.231.192 0.0.0.0 255.255.255.192 U 0 0 0 *
10.180.248.64 172.18.1.207 255.255.255.192 UG 0 0 0 tunl0
172.17.0.0 0.0.0.0 255.255.0.0 U 0 0 0 docker0
172.18.1.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
2.8.5 Upgrade the master version
# upload the downloaded binary packages to the master and unpack them
root@test-k8s-master-200:/data/ops/packages# tar -xf kubernetes-client-linux-amd64.tar.gz
root@test-k8s-master-200:/data/ops/packages# tar -xf kubernetes-node-linux-amd64.tar.gz
root@test-k8s-master-200:/data/ops/packages# tar -xf kubernetes-server-linux-amd64.tar.gz
root@test-k8s-master-200:/data/ops/packages# tar -xf kubernetes.tar.gz
# check the version being upgraded to
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# ./kube-apiserver --version
Kubernetes v1.21.5
# all three masters are currently up
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d21h v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 4d21h v1.21.0
172.18.1.202 Ready,SchedulingDisabled master 143m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 130m v1.21.0
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# kube
kube-apiserver kube-controller-manager kubectl kubelet kube-proxy kube-scheduler
# stop all services
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# systemctl stop kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
# from the deployment node, 172.18.1.200 now shows NotReady
root@test-docker-151:~# kubectl get node -A
NAME STATUS ROLES AGE VERSION
172.18.1.200 NotReady,SchedulingDisabled master 4d21h v1.21.0
172.18.1.201 Ready,SchedulingDisabled master 4d21h v1.21.0
172.18.1.202 Ready,SchedulingDisabled master 150m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 138m v1.21.0
# confirm the directory kubectl is installed in
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# which kubectl
/usr/kube/bin/kubectl
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# \cp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl /usr/kube/bin/
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# systemctl start kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
# node 200 now reports the new version
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d21h v1.21.0
172.18.1.202 Ready,SchedulingDisabled master 157m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 144m v1.21.0
# stop the services on 201
root@test-k8s-master-201:~# systemctl stop kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
# 201 is now NotReady
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.201 NotReady,SchedulingDisabled master 4d21h v1.21.0
172.18.1.202 Ready,SchedulingDisabled master 160m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 148m v1.21.0
# copy the binaries to 201
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl 172.18.1.201:/usr/kube/bin/
# start the services on 201
root@test-k8s-master-201:~# systemctl start kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
root@test-k8s-master-201:~# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.202 Ready,SchedulingDisabled master 161m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 149m v1.21.0
root@test-k8s-master-202:~# systemctl stop kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
root@test-k8s-master-202:~# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d21h v1.21.5
172.18.1.202 NotReady,SchedulingDisabled master 165m v1.21.0
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 152m v1.21.0
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# scp kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet kubectl 172.18.1.202:/usr/kube/bin/
root@test-k8s-master-202:~# systemctl start kube-apiserver kube-controller-manager kube-proxy kube-scheduler kubelet
root@test-k8s-master-202:~# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.202 Ready,SchedulingDisabled master 166m v1.21.5
172.18.1.206 Ready node 4d21h v1.21.0
172.18.1.207 Ready node 4d21h v1.21.0
172.18.1.208 Ready node 154m v1.21.0
2.8.6 Upgrade the node version
root@test-k8s-node-206:~# systemctl stop kubelet kube-proxy
root@test-k8s-node-206:~# ls /usr/kube/bin/kube
kubectl kubelet kube-proxy
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# scp kubectl kube-proxy kubelet 172.18.1.206:/usr/kube/bin/
root@test-k8s-node-206:~# systemctl start kubelet kube-proxy
root@test-k8s-node-206:~# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.202 Ready,SchedulingDisabled master 172m v1.21.5
172.18.1.206 Ready node 4d22h v1.21.5
172.18.1.207 Ready node 4d22h v1.21.0
172.18.1.208 Ready node 160m v1.21.0
# nodes 207 and 208 follow the same steps
root@test-k8s-master-200:/data/ops/packages/kubernetes/server/bin# kubectl get node
NAME STATUS ROLES AGE VERSION
172.18.1.200 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.201 Ready,SchedulingDisabled master 4d22h v1.21.5
172.18.1.202 Ready,SchedulingDisabled master 174m v1.21.5
172.18.1.206 Ready node 4d22h v1.21.5
172.18.1.207 Ready node 4d22h v1.21.5
172.18.1.208 Ready node 162m v1.21.5