1. Introduction to Cloud Native

1.1 Cloud Native Overview


1.2 Definition of Cloud Native

Official definition: https://github.com/cncf/toc/blob/main/DEFINITION.md#%E4%B8%AD%E6%96%87%E7%89%88%E6%9C%AC

Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds.
Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach.

1.3 Cloud Native Technology Stack


1.4 The Cloud Native Landscape

Official site: https://landscape.cncf.io/


1.5 Cloud Native Project Categories


2. Introduction to K8s

2.1 Where K8s Came From


2.2 The Evolution of Containerized Deployment


2.3 K8s Logical Architecture


2.4 K8s Components

2.4.1 kube-apiserver

2.4.1.1 About kube-apiserver

kube-apiserver is one of the most important core components of a Kubernetes cluster. It is the unified entry point for all access to the cluster, exposing HTTP REST interfaces for creating, deleting, updating, querying, and watching every kind of k8s resource object. During these interactions it also provides authentication/authorization (checking whether a request has permission) and admission control (requests with permission go through; those without get an error).
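As a quick, hedged illustration of the REST nature of the API (assuming kubectl is already configured; port 8001 is an arbitrary local choice), you can proxy the apiserver locally and query it with curl:

~]# kubectl proxy --port=8001 &                                  # opens an authenticated local proxy to the apiserver
~]# curl http://127.0.0.1:8001/version                           # basic cluster version info over plain REST
~]# curl http://127.0.0.1:8001/api/v1/namespaces/default/pods    # list pods in the default namespace via the REST API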

2.4.1.2 The Pod Creation Flow

(1) The client sends a request to the api-server. The api-server first performs permission checks; only requests with the proper permissions are admitted, and the data (the yaml) is then written to etcd.
(2) Because the Scheduler is continuously watching the api-server, it receives this new event and performs its scheduling (filtering, then scoring). Once scheduling completes, it returns the result to the api-server, which writes the scheduling result to etcd.
(3) On the node, the kubelet follows the event information (the image to use, the container name, the exposed ports, and so on) to call the container runtime (docker or containerd) to create the container, and returns the creation result to the api-server, which writes it to etcd.
(4) During this process, kube-proxy also picks up the network event information from the api-server and calls into the host kernel to update the iptables or ipvs rules (with a nodePort or hostNetwork involved, the workload then becomes reachable from outside).

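As a minimal sketch of what triggers this flow (the file and pod names are illustrative), everything above starts with a PodSpec being submitted to the api-server:

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80

~]# kubectl apply -f demo-pod.yaml   # step (1): the request hits the api-server, and the rest of the flow follows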

2.4.1.3 API Server Authentication and Admission Flow

(1) A client request first reaches the api-server for identity verification (authentication: checking that the certificate and key in the kubeconfig file are valid). By default, both the api-server address and the client's credentials come from /root/.kube/config, where the server field holds the api-server address. In production the api-server almost always runs as a highly available cluster of 3 replicas, so the api-server addresses can be put behind a load balancer, with the load balancer address written into the kubeconfig file instead.


(2) At this point identity verification has passed, proving we have a legitimate identity that can interact with the apiserver. Next comes validating the request itself: for instance, when the client submits a yaml manifest, the apiserver checks that the data is well formed (no missing fields, correct indentation, and so on); anything abnormal is likewise rejected with an error.


(3) If the validation in step 2 passes, the data is written to etcd.


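A hedged way to see where these settings live (the paths are the defaults described above; the server value shown is this document's master address):

~]# kubectl config view                # prints the sanitized contents of the active kubeconfig
~]# grep server /root/.kube/config    # the api-server (or load balancer) address
    server: https://10.31.200.100:6443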

2.4.1.4 API Server Versions


2.4.1.5 K8s Architecture in Public Cloud Environments


2.4.2 kube-scheduler

2.4.2.1 About kube-scheduler

kube-scheduler is a control-plane (management) process responsible for assigning Pods to target nodes according to the scheduling policy.
kube-scheduler handles placing Pods onto nodes in the cluster: it watches the kube-apiserver for Pods that have not yet been assigned a Node, then picks nodes for them according to the scheduling policy (updating each Pod's NodeName field, i.e. binding the pod to a node). When scheduling is done, the result goes back to the api server, which writes it into etcd.
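A hedged way to observe this from the outside: Pods the scheduler has not yet bound have an empty spec.nodeName, so (with a working kubectl) they can be listed with a field selector:

~]# kubectl get pods -A --field-selector spec.nodeName=           # pods not yet bound to any node
~]# kubectl get pod <pod-name> -o jsonpath='{.spec.nodeName}'     # the node a scheduled pod was bound to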

2.4.2.2 Scheduling Policies

Official docs: https://v1-26.docs.kubernetes.io/zh-cn/docs/reference/scheduling/policies/

2.4.2.3 The Scheduling Process


2.4.3 kube-controller-manager

The controller manager.

2.4.3.1 About kube-controller-manager

Once a pod has been scheduled and created, it is the controller-manager that keeps it running reliably.

The controller-manager is the brain of Kubernetes: through the apiserver it monitors etcd to learn the state of the whole cluster, and it makes sure the cluster stays in the desired working state.


The Controller Manager also contains a set of sub-controllers (the replication controller, node controller, namespace controller, service account controller, and so on).
As the management control center inside the cluster, it is responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount), and resource quotas (ResourceQuota). When a node goes down unexpectedly, the Controller Manager detects it promptly and runs an automated repair flow, ensuring the pod replicas in the cluster always stay in the desired state.


The controller-manager checks node status every 5 seconds.
If the controller-manager stops receiving heartbeats from a node, it marks the node unreachable.
The controller-manager waits 40 seconds before applying that unreachable mark.
If the node has still not recovered 5 minutes after being marked unreachable, the controller-manager deletes all pods on that node and recreates them on other available nodes.
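These intervals map to kube-controller-manager startup flags; a hedged sketch of the relevant defaults (on newer versions the 5-minute figure is driven by taint-based evictions and the pods' default toleration of 300 seconds rather than the last flag):

--node-monitor-period=5s          # how often node status is checked
--node-monitor-grace-period=40s   # how long to wait before marking a node unreachable
--pod-eviction-timeout=5m0s       # how long an unreachable node's pods wait before eviction and rebuild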

2.4.3.2 kube-controller-manager High Availability

2.4.4 kube-proxy

2.4.4.1 About kube-proxy

kube-proxy's main job is to make containers reachable from one another across the nodes. It does this by maintaining the ipvs and iptables rules on each node, which forward traffic toward its destination.
These rules never need to be maintained by hand; kube-proxy keeps them up to date automatically.
For example, when an in-cluster domain name (xx.svc) is accessed, the Service proxies the request to the backend pods, and that too is implemented with ipvs or iptables rules.
Every time a pod is deleted or added, kube-proxy changes the rules accordingly. How does kube-proxy know? Because it, too, watches the api-server continuously.
When a request crosses hosts, the host's routing table (maintained by flannel or calico) is also consulted to deliver it.


kube-proxy currently supports only TCP and UDP; it does not do HTTP routing and has no health-check mechanism. Both gaps can be covered by deploying a custom Ingress Controller.
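A hedged way to check which mode kube-proxy is currently running in (the ConfigMap name matches what is edited later in section 4.9):

~]# kubectl -n kube-system get cm kube-proxy -o yaml | grep mode
    mode: ""    # an empty value means the iptables default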

2.4.4.2 iptables and ipvs

IPVS mode went GA in k8s v1.11 (note that the default proxy mode is still iptables; switching to ipvs is shown in section 4.9). The motivation is that iptables performance becomes a bottleneck: once a cluster reaches hundreds or thousands of nodes, iptables is no longer a good fit.



2.4.4.3 Configuring IPVS and the Scheduling Algorithm


2.4.4.4 Session Affinity

If you want requests from the same client address to keep landing on the same pod for a period of time, configure the Service as shown in the sketch below.

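A minimal sketch of that configuration (the service name and selector are illustrative; 10800 seconds is the Kubernetes default affinity window):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 80
  sessionAffinity: ClientIP         # requests from one client IP keep going to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800         # how long the affinity lasts (3 hours)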

2.4.5 kubelet

kubelet is the agent component that runs on every worker node. It works in terms of PodSpecs; each PodSpec is a YAML or JSON object that describes a Pod. Its main responsibilities:
(1) Receive instructions and create containers inside Pods.
(2) Prepare the data volumes a Pod needs.
(3) Run container health checks on the node.
(4) Periodically report node information and pod status to the api server, which stores them in etcd.

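Point (3) is driven by probes declared in the PodSpec; a hedged minimal example (names and timings are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:              # kubelet runs this check and restarts the container on failure
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10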

2.4.6 kubectl

kubectl is a command-line client tool for administering a Kubernetes cluster.


kubectl looks for a file named config in the $HOME/.kube directory. A different kubeconfig file can be specified by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.

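Both override mechanisms, as a hedged sketch (the file path is illustrative):

~]# export KUBECONFIG=/root/.kube/config-prod                  # per-shell override
~]# kubectl get nodes
~]# kubectl --kubeconfig=/root/.kube/config-prod get nodes     # per-command override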

2.4.7 etcd

etcd, developed by CoreOS, is currently the default key-value data store in Kubernetes and holds all of the cluster's data. etcd supports running as a distributed cluster, and in production a regular backup mechanism for the etcd data is a must.

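A hedged backup sketch using etcdctl (assuming etcdctl v3 is installed; the certificate paths are the kubeadm defaults and may differ in other deployments):

~]# ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-$(date +%F).db \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key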

2.4.8 CoreDNS

CoreDNS provides DNS service for the entire cluster so that services can reach one another by name.


(1) Resolving DNS names for Kubernetes Services and Pods. When a Pod talks to other Pods, Services, or external services, it communicates by DNS name.
(2) Supporting service discovery and load balancing. CoreDNS automatically resolves a Service name to the corresponding backend Pod IPs, and it offers extended DNS record types (such as SRV records) to support load balancing and the like.
(3) Supporting custom domain resolution. Applications in the cluster can communicate over custom domain names, which CoreDNS can resolve.
(4) Supporting plugin extensions, e.g. Prometheus monitoring and DNSSEC security.
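A hedged in-cluster check (the pod name is illustrative; busybox:1.28 is used because nslookup is broken in some newer busybox builds; kubernetes.default exists in every cluster):

~]# kubectl run dns-test -it --rm --image=busybox:1.28 -- nslookup kubernetes.default.svc.cluster.local
# should resolve to the kubernetes Service ClusterIP (10.200.0.1 with the service CIDR used later in this document)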


DNS implementations over time:
skydns   # used in early versions
kube-dns # no longer used as of 1.18
coredns  # the current mainstream choice

2.4.9 Dashboard

Dashboard is a web-based Kubernetes user interface. With it you can get an overview of the applications running in the cluster, create or modify Kubernetes resources (Deployment, Job, DaemonSet, and so on), scale a Deployment, start a rolling update, delete a Pod, or deploy a new application through a wizard.

3. Installing containerd

3.1 About containerd

Official docs: https://github.com/containerd/containerd

containerd is an industry-standard container runtime with an emphasis on simplicity, robustness, and portability. It is available as a daemon for Linux and Windows and can manage the complete container lifecycle of its host system: image transfer and storage, container execution and supervision, low-level storage, network attachments, and beyond.


containerd also has a concept of namespaces, but these are unrelated both to Linux namespaces and to k8s namespaces.
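A hedged illustration (the k8s.io namespace appears once kubelet starts using containerd, and moby appears when docker runs on top of containerd):

~]# ctr namespaces ls   # containerd-level namespaces, e.g. "default" (ctr), "k8s.io" (kubelet), "moby" (docker)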


Differences between containerd and dockerd

  • Different nature: containerd is a lightweight container runtime focused on low-level container lifecycle management, i.e. creating, starting, stopping, and destroying containers; dockerd is an open-source container platform offering a complete container solution covering building, packaging, distributing, and running containers.
  • Different call chain: containerd does not go through dockershim, so its call chain is shorter, with fewer components, more stability, and lower resource usage on the node; docker goes through dockershim, giving it a longer call chain.
  • Different extensibility: containerd's design is simpler and more stable, and it exposes an API that other container orchestration tools can integrate with, making it more flexible and extensible.

3.2 Common Container Runtimes

(1) runc: the current default runtime for docker and containerd, written in Go, OCI-compliant.
(2) crun: released by Red Hat, written in C, integrated into podman, OCI-compliant.
(3) gVisor: released by Google, written in Go, OCI-compliant.


Container runtimes are further divided into high-level and low-level:
(1) High-Level: a high-level runtime provides API-based remote management operations, letting clients manage the entire container lifecycle (create, delete, restart, stop) through it. A high-level runtime does not actually run containers itself; it calls a low-level runtime to do that. dockerd and containerd are both high-level runtimes.


(2) Low-Level: a low-level runtime accepts instructions from the high-level runtime and runs containers accordingly, so the low-level runtime is where containers actually run; runc is an example.

(figure: the k8s container-creation flow before v1.24)

(figure: the runtimes k8s supports)

3.3 Installing containerd from the Official Binaries

Install containerd, runc, and the CNI plugins from the official binaries. Kubernetes uses containerd as its default container runtime starting with v1.24.0, so containerd must be installed before installing Kubernetes v1.24 or later. (To keep using docker instead, docker and cri-dockerd have to be installed separately: https://github.com/Mirantis/cri-dockerd.)

3.3.1 Download and Unpack the Release

Release page: https://github.com/containerd/containerd/releases/tag/v1.6.20

[root@containerd ~]# wget https://github.com/containerd/containerd/releases/download/v1.6.20/containerd-1.6.20-linux-amd64.tar.gz

[root@containerd ~]# ll -h
total 43M
-rw-r--r--  1 root root  43M Apr 12 15:27 containerd-1.6.20-linux-amd64.tar.gz

[root@containerd ~]# tar xf containerd-1.6.20-linux-amd64.tar.gz
[root@containerd ~]# ll bin/
total 125756
-rwxr-xr-x 1 root root 52255608 Mar 31 04:51 containerd
-rwxr-xr-x 1 root root  7352320 Mar 31 04:51 containerd-shim
-rwxr-xr-x 1 root root  9469952 Mar 31 04:51 containerd-shim-runc-v1
-rwxr-xr-x 1 root root  9486336 Mar 31 04:51 containerd-shim-runc-v2
-rwxr-xr-x 1 root root 23079704 Mar 31 04:51 containerd-stress
-rwxr-xr-x 1 root root 27126424 Mar 31 04:51 ctr

3.3.2 Install containerd and Verify

[root@containerd ~]# cp bin/* /usr/local/bin/
[root@containerd ~]# containerd -v  # printing the version means the install succeeded
containerd github.com/containerd/containerd v1.6.20 2806fc1057397dbaeefbea0e4e17bddfbd388f38

3.3.3 Configure systemd to Manage containerd

Official unit file: https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

[root@containerd ~]# cat /lib/systemd/system/containerd.service
# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
#uncomment to enable the experimental sbservice (sandboxed) version of containerd/cri integration
#Environment="ENABLE_CRI_SANDBOXES=sandboxed"
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

3.3.4 Edit the Configuration File

[root@containerd ~]# mkdir /etc/containerd
[root@containerd ~]# containerd config default > /etc/containerd/config.toml

# Changes to make in the config file
sandbox_image = "registry.k8s.io/pause:3.6" # the sandbox (pause) image provides the pod's underlying network; the default points to a registry outside China, so change it to a domestic one, as follows:
sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.7" # the value after the change

[plugins."io.containerd.grpc.v1.cri".registry.mirrors] # the registry mirror settings should also point at a domestic mirror to speed up pulls. Change them as follows:
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
 [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"] # this line and the next are the addition
   endpoint = ["https://9916w1ow.mirror.aliyuncs.com"]

3.3.5 Start containerd

[root@containerd ~]# systemctl start containerd
[root@containerd ~]# systemctl enable containerd
[root@containerd ~]# systemctl status containerd
● containerd.service - containerd container runtime
   Loaded: loaded (/usr/lib/systemd/system/containerd.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-04-12 16:08:16 CST; 7s ago
   
[root@containerd ~]# ll /run/containerd/containerd.sock
srw-rw---- 1 root root 0 Apr 12 16:08 /run/containerd/containerd.sock

3.4 Installing runc

Release page: https://github.com/opencontainers/runc/releases

3.4.1 Download the Binary

[root@containerd ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.6/runc.amd64
[root@containerd ~]# ll -th
total 52M
-rw-r--r--  1 root root 9.0M Apr 12 16:25 runc.amd64

[root@containerd ~]# chmod a+x runc.amd64
[root@containerd ~]# mv runc.amd64 /usr/bin/runc

3.4.2 Pull a Test Image and Verify

[root@containerd ~]# ctr images pull docker.io/library/alpine:latest
docker.io/library/alpine:latest:                                                  resolved       |++++++++++++++++++++++++++++++++++++++| 
index-sha256:124c7d2707904eea7431fffe91522a01e5a861a624ee31d03372cc1d138a3126:    done           |++++++++++++++++++++++++++++++++++++++| manifest-sha256:b6ca290b6b4cdcca5b3db3ffa338ee0285c11744b4a6abaa9627746ee3291d8d: done           |++++++++++++++++++++++++++++++++++++++| config-sha256:9ed4aefc74f6792b5a804d1d146fe4b4a2299147b0f50eaf2b08435d7b38c27e:   done           |++++++++++++++++++++++++++++++++++++++| layer-sha256:f56be85fc22e46face30e2c3de3f7fe7c15f8fd7c4e5add29d7f64b87abdaa09:    done           |++++++++++++++++++++++++++++++++++++++| elapsed: 7.7 s                                                                    total:  3.2 Mi (428.2 KiB/s)                                     unpacking linux/amd64 sha256:124c7d2707904eea7431fffe91522a01e5a861a624ee31d03372cc1d138a3126...
done: 218.339004ms

[root@containerd ~]# ctr images ls
REF                             TYPE                                                      DIGEST                                                                  SIZE    PLATFORMS                                                                                LABELS 
docker.io/library/alpine:latest application/vnd.docker.distribution.manifest.list.v2+json sha256:124c7d2707904eea7431fffe91522a01e5a861a624ee31d03372cc1d138a3126 3.2 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -  

3.4.3 Create a Test Container with the ctr Client

[root@containerd ~]# ctr run -t --net-host docker.io/library/alpine:latest test-container sh
/ # ls
bin    dev    etc    home   lib    media  mnt    opt    proc   root   run    sbin   srv    sys    tmp    usr    var
/ # exit

At this point containerd is installed, but it cannot yet be used by k8s, because the CNI plugins have not been installed.

3.5 containerd Client Tools

This section introduces the crictl and nerdctl client tools.

3.5.1 Kinds of Client Tools

(1) ctr, bundled with containerd  # extremely unfriendly to use
(2) crictl, from kubernetes-sigs  # not much better
(3) nerdctl, recommended by the containerd project  # this client behaves almost exactly like the docker CLI

3.5.2 Download and Install nerdctl

Project page: https://github.com/containerd/nerdctl

[root@containerd ~]# wget https://github.com/containerd/nerdctl/releases/download/v1.3.0/nerdctl-1.3.0-linux-amd64.tar.gz

[root@containerd ~]# ll -th
total 51M
-rw-------  1 root root 8.9M Apr 12 16:47 nerdctl-1.3.0-linux-amd64.tar.gz
[root@containerd ~]# tar xf nerdctl-1.3.0-linux-amd64.tar.gz -C /usr/local/bin/

[root@containerd ~]# nerdctl ps -a
CONTAINER ID    IMAGE                              COMMAND    CREATED           STATUS     PORTS    NAMES
test-contain    docker.io/library/alpine:latest    "sh"       24 minutes ago    Created

3.5.3 Edit the Configuration File

[root@containerd ~]# mkdir /etc/nerdctl/
[root@containerd ~]# vim /etc/nerdctl/nerdctl.toml
namespace = "k8s.io" # the default namespace to operate in
debug = false
debug_full = false
insecure_registry = true  # allow insecure (non-TLS) registries

3.6 Installing the CNI Network Plugins

By default, containers created this way can only use --net-host networking. For docker-style bridge networking, the CNI plugins below must be installed.

3.6.1 Download and Install

[root@containerd ~]# wget https://github.com/containernetworking/plugins/releases/download/v1.2.0/cni-plugins-linux-amd64-v1.2.0.tgz
[root@containerd ~]# ll -th
total 90M
-rw-r--r--  1 root root  39M Apr 12 17:21 cni-plugins-linux-amd64-v1.2.0.tgz

[root@containerd ~]# mkdir /opt/cni/bin -p
[root@containerd ~]# tar xvf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin/
./
./loopback
./bandwidth
./ptp
./vlan
./host-device
./tuning
./vrf
./sbr
./dhcp
./static
./firewall
./macvlan
./dummy
./bridge
./ipvlan
./portmap
./host-local

3.6.2 Create an Nginx Test Container with a Published Port

[root@containerd ~]# nerdctl run -p 80:80 -d nginx
[root@containerd ~]# nerdctl ps
CONTAINER ID    IMAGE                             COMMAND                   CREATED          STATUS    PORTS                 NAMES
f57e42ef7f39    docker.io/library/nginx:latest    "/docker-entrypoint.…"    5 seconds ago    Up        0.0.0.0:80->80/tcp    nginx-f57e4

[root@containerd ~]# curl -I localhost
HTTP/1.1 200 OK
Server: nginx/1.23.4
Date: Wed, 12 Apr 2023 09:30:05 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 28 Mar 2023 15:01:54 GMT
Connection: keep-alive
ETag: "64230162-267"
Accept-Ranges: bytes


[root@containerd ~]# iptables -t nat -vnL # a difference from docker: the published port cannot be seen with ss; it is only visible in the iptables rules
…… output omitted
Chain CNI-DN-3bab9412690a86a5a3556 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 CNI-HOSTPORT-SETMARK  tcp  --  *      *       10.4.0.0/24          0.0.0.0/0            tcp dpt:80
    1    60 CNI-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:80
    1    60 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:80 to:10.4.0.3:80

3.6.3 Create a Tomcat Test Container with a Published Port

[root@containerd ~]# nerdctl run -d -p 8080:8080 --name=tomcat --restart=always tomcat
[root@containerd ~]# nerdctl ps -l
CONTAINER ID    IMAGE                              COMMAND              CREATED           STATUS    PORTS                     NAMES
0ddecf9acc42    docker.io/library/tomcat:latest    "catalina.sh run"    10 seconds ago    Up        0.0.0.0:8080->8080/tcp    tomcat

4. Installing K8s with kubeadm + containerd

4.1 Requirements

Prepare 3 machines, 1 master and 2 nodes. This is a learning environment, so no high availability is needed.
root@k8s-master1
root@k8s-node1
root@k8s-node2

4.2 Install containerd

The installation process is the same as above, only wrapped in a script.

[root@k8s-master1 ~]# tar xf runtime-docker20.10.19-containerd1.6.20-binary-install.tar.gz
[root@k8s-master1 ~]# sh runtime-install.sh containerd

[root@k8s-node1 ~]# tar xf runtime-docker20.10.19-containerd1.6.20-binary-install.tar.gz
[root@k8s-node1 ~]# sh runtime-install.sh containerd

[root@k8s-node2 ~]# tar xf runtime-docker20.10.19-containerd1.6.20-binary-install.tar.gz
[root@k8s-node2 ~]# sh runtime-install.sh containerd

~]# vim /etc/containerd/config.toml  # after the install, change containerd's cgroup driver to systemd on every machine; kubelet uses systemd, and a mismatch causes errors
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  …………
  SystemdCgroup = true

~]# systemctl restart containerd
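A hedged way to confirm the change took effect after the restart:

~]# containerd config dump | grep SystemdCgroup   # should print: SystemdCgroup = true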

4.3 Install kubeadm, kubectl, and kubelet (1.26.3)

4.3.1 Configure the hosts File and the Kubernetes Yum Repo

Same steps on all machines.

~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.31.200.100 k8s-master1
10.31.200.101 k8s-node1
10.31.200.102 k8s-node2

 ~]# vim /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearch
enabled=1

 ~]# yum makecache
 ~]# yum list kubeadm --showduplicates | sort -r # pick the desired version

4.3.2 Install kubeadm, kubectl, and kubelet (1.26.3)

Same steps on all machines.

 ~]# yum -y install --nogpgcheck  kubeadm-1.26.3-0 kubelet-1.26.3-0 kubectl-1.26.3-0

4.4 Pull the Kubernetes Images

On the master node.

[root@k8s-master1 ~]# kubeadm config images list --kubernetes-version v1.26.3 # list the images for the specified version
registry.k8s.io/kube-apiserver:v1.26.3
registry.k8s.io/kube-controller-manager:v1.26.3
registry.k8s.io/kube-scheduler:v1.26.3
registry.k8s.io/kube-proxy:v1.26.3
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.6-0
registry.k8s.io/coredns/coredns:v1.9.3

# The images are hosted abroad by default, so the registry needs switching to a domestic mirror
## Option 1
[root@k8s-master1 ~]# cat images-down.sh
#!/bin/bash
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
nerdctl pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

## Option 2
[root@k8s-master1 ~]# kubeadm config images pull --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" --kubernetes-version=v1.26.3



# Option 2 is used here
[root@k8s-master1 ~]# kubeadm config images pull --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" --kubernetes-version=v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.26.3
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.6-0
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:v1.9.3

4.5 Kernel Parameter Tuning

Same steps on all machines.

~]# cat /etc/sysctl.conf
net.ipv4.ip_forward=1
vm.max_map_count=262144
kernel.pid_max=4194303
fs.file-max=1000000
net.ipv4.tcp_max_tw_buckets=6000
net.netfilter.nf_conntrack_max=2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0

# load the required kernel modules at boot
~]# cat /etc/sysconfig/modules/ipvs.modules 
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_lblc ip_vs_lblcr ip_vs_rr ip_vs_wrr ip_vs_sh ip_vs_dh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp ip_vs_sh ip_tables ip_set ipt_set ipt_rpfilter ipt_REJECT ipip xt_set br_netfilter nf_conntrack overlay"
for kernel_module in ${ipvs_modules}; do
  /sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
  if [ $? -eq 0 ]; then
    /sbin/modprobe ${kernel_module}
  fi
done

~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
ip_vs_ftp              13079  0 
nf_nat                 26583  1 ip_vs_ftp
ip_vs_sed              12519  0 
ip_vs_nq               12516  0 
ip_vs_dh               12688  0 
ip_vs_sh               12688  0 
ip_vs_wrr              12697  0 
ip_vs_rr               12600  0 
ip_vs_lblcr            12922  0 
ip_vs_lblc             12819  0 
ip_vs_lc               12516  0 
ip_vs                 145497  20 ip_vs_dh,ip_vs_lc,ip_vs_nq,ip_vs_rr,ip_vs_sh,ip_vs_ftp,ip_vs_sed,ip_vs_wrr,ip_vs_lblcr,ip_vs_lblc
nf_conntrack          139224  2 ip_vs,nf_nat
libcrc32c              12644  4 xfs,ip_vs,nf_nat,nf_conntrack

~]# sysctl -p
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
kernel.pid_max = 4194303
fs.file-max = 1000000
net.ipv4.tcp_max_tw_buckets = 6000
net.netfilter.nf_conntrack_max = 2097152
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness = 0

4.6 Initialize the Kubernetes Cluster

4.6.1 Parameters

Official docs: https://v1-26.docs.kubernetes.io/zh-cn/docs/reference/setup-tools/kubeadm/kubeadm-init/

--apiserver-advertise-address=10.31.200.100  # the apiserver address; the host IP is used here because there is only one master, i.e. the apiserver itself is not highly available
--apiserver-bind-port=6443  # apiserver port
--kubernetes-version=v1.26.3 # k8s version
--pod-network-cidr=10.100.0.0/16 # the pod network; plan it generously so IPs don't run out as pods multiply, and make sure it overlaps no existing company network, or problems will follow
--service-cidr=10.200.0.0/16  # the service network; it must not conflict with existing networks either
--service-dns-domain=cluster.local # the DNS domain suffix for services
--image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers # the image registry
--ignore-preflight-errors=swap  # ignore the swap-partition error if cluster init reports one

4.6.2 Initialize the Cluster

[root@k8s-master1 ~]# kubeadm init --apiserver-advertise-address=10.31.200.100 --apiserver-bind-port=6443 --kubernetes-version=v1.26.3 --pod-network-cidr=10.100.0.0/16 --service-cidr=10.200.0.0/16 --service-dns-domain=cluster.local --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap
………… partial output omitted
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.31.200.100:6443 --token 1d0mrm.5wqa777m7xw4eyfr \
        --discovery-token-ca-cert-hash sha256:717703381134dfadc39a940574012847b6e73c6e6ef6d1288e0f1cc5b0815231

[root@k8s-master1 ~]#   mkdir -p $HOME/.kube
[root@k8s-master1 ~]#   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master1 ~]#   sudo chown $(id -u):$(id -g) $HOME/.kube/config
[root@k8s-master1 ~]# ll .kube/config 
-rw------- 1 root root 5637 Apr 14 10:32 .kube/config

[root@k8s-master1 ~]# kubectl get no
NAME          STATUS     ROLES           AGE   VERSION
k8s-master1   NotReady   control-plane   18m   v1.26.3
[root@k8s-master1 ~]# kubectl get po -A
NAMESPACE     NAME                                  READY   STATUS    RESTARTS   AGE
kube-system   coredns-567c556887-2xt64              0/1     Pending   0          18m  # Pending because the network plugin has not been installed yet
kube-system   coredns-567c556887-6lstn              0/1     Pending   0          18m
kube-system   etcd-k8s-master1                      1/1     Running   0          19m
kube-system   kube-apiserver-k8s-master1            1/1     Running   0          19m
kube-system   kube-controller-manager-k8s-master1   1/1     Running   4          19m
kube-system   kube-proxy-2jgqb                      1/1     Running   0          18m
kube-system   kube-scheduler-k8s-master1            1/1     Running   4          19m

4.7 Join the Worker Nodes

Same steps on all worker nodes.

~]# kubeadm join 10.31.200.100:6443 --token 1d0mrm.5wqa777m7xw4eyfr \
         --discovery-token-ca-cert-hash sha256:717703381134dfadc39a940574012847b6e73c6e6ef6d1288e0f1cc5b0815231
[preflight] Running pre-flight checks
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check the nodes on the master:

[root@k8s-master1 ~]# kubectl get no  # all NotReady because the network plugin is not installed yet
NAME          STATUS     ROLES           AGE     VERSION
k8s-master1   NotReady   control-plane   7m13s   v1.26.3
k8s-node1     NotReady   <none>          4m24s   v1.26.3
k8s-node2     NotReady   <none>          9s      v1.26.3

4.8 Install the Calico Network Plugin

K8s docs: https://v1-26.docs.kubernetes.io/zh-cn/docs/concepts/cluster-administration/addons/#networking-and-network-policy
Calico docs: https://docs.tigera.io/calico/latest/getting-started/kubernetes/self-managed-onprem/onpremises#install-calico


4.8.1 Prepare the Calico YAML Manifest

The YAML from the official site runs over four thousand lines, so it is not reproduced here; only the settings that need changing are described.

- name: FELIX_WIREGUARDMTU # search for this value; it appears only once
…… content omitted
      key: veth_mtu
# the following setting exists by default but is commented out; just uncomment it
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16" # must match the pod CIDR, otherwise routes cannot be established
# the following is newly added
# customize the per-node subnet block size
- name: CALICO_IPV4POOL_BLOCK_SIZE
  value: "24"  # by default each node is allocated a small /26 subnet; set it to /24 manually so it doesn't run short as machines are added. If the manually configured size still runs out, calico carves out another block on its own.

## search
- name: CLUSTER_TYPE  # search for this value; it appears only once
  value: "k8s,bgp"
# establish the BGP session on a specific NIC's IP (ens192 here). Defaults to the server's first NIC; https://projectcalico.docs.tigera.io/reference/node/configuration
- name: IP_AUTODETECTION_METHOD  # add these two lines
  value: "interface=ens192" # with a single NIC these two lines can be omitted; with multiple NICs they are required to pin a specific one. If NIC names differ per machine (ens33, ens110, ...), use a wildcard, e.g. value: "interface=ens.*"


# one more caveat: if the hosts cannot reach the internet, the images in this yaml must be pulled ahead of time

4.8.2 Deploy Calico
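
The apply step itself did not make it into the transcript below; assuming the manifest file name that appears later in this document (calico-3.25.1.yaml), it would be:

[root@k8s-master1 yaml]# kubectl apply -f calico-3.25.1.yaml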

[root@k8s-master1 yaml]# kubectl get po -A|grep -v Running
NAMESPACE     NAME                                       READY   STATUS     RESTARTS   AGE
kube-system   calico-kube-controllers-5857bf8d58-p8f6b   0/1     Pending    0          28s
kube-system   calico-node-b9wnj                          0/1     Init:0/3   0          28s
kube-system   calico-node-sv744                          0/1     Init:0/3   0          28s
kube-system   calico-node-t96xz                          0/1     Init:0/3   0          28s
kube-system   coredns-567c556887-2nmwr                   0/1     Pending    0          145m
kube-system   coredns-567c556887-xds46                   0/1     Pending    0          145m


[root@k8s-master1 yaml]# kubectl get po -A|grep -v Running
NAMESPACE     NAME    
[root@k8s-master1 yaml]# kubectl get no  # all nodes are Ready now
NAME          STATUS   ROLES           AGE    VERSION
k8s-master1   Ready    control-plane   162m   v1.26.3
k8s-node1     Ready    <none>          159m   v1.26.3
k8s-node2     Ready    <none>          155m   v1.26.3

4.9 Configure kube-proxy to Use IPVS

4.9.1 Edit the kube-proxy ConfigMap

[root@k8s-master1 ~]# kubectl get cm -A |grep proxy
kube-system       kube-proxy                           2      3h28m
[root@k8s-master1 ~]# kubectl edit cm -n kube-system kube-proxy
…… content omitted
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs" # empty by default; set it to ipvs, then save and quit
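
As a hedged aside, kube-proxy can also pick up the new mode without a reboot, by restarting its DaemonSet pods:

~]# kubectl -n kube-system rollout restart daemonset kube-proxy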

# Here, though, to verify that all of the earlier configuration also survives a restart, every node is rebooted
~]# reboot

4.9.2 Check the Cluster

~]# ipvsadm -Ln # output like the following means ipvs is in effect (the ipvsadm command must be installed separately)
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.200.0.1:443 rr
  -> 10.31.200.100:6443           Masq    1      4          0         
TCP  10.200.0.10:53 rr
  -> 10.100.113.4:53              Masq    1      0          0         
  -> 10.100.113.5:53              Masq    1      0          0         
TCP  10.200.0.10:9153 rr
  -> 10.100.113.4:9153            Masq    1      0          0         
  -> 10.100.113.5:9153            Masq    1      0          0         
UDP  10.200.0.10:53 rr
  -> 10.100.113.4:53              Masq    1      0          0         
  -> 10.100.113.5:53              Masq    1      0          0  

4.10 Deploy Web Services

[root@k8s-master1 yaml]# cat  nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myserver
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-nginx-deployment-label
  name: myserver-nginx-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-nginx-selector
  template:
    metadata:
      labels:
        app: myserver-nginx-selector
    spec:
      containers:
      - name: myserver-nginx-container
        image: nginx
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 80
          protocol: TCP
          name: http
        - containerPort: 443
          protocol: TCP
          name: https
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 2
#            memory: 2Gi
#          requests:
#            cpu: 500m
#            memory: 1Gi


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-nginx-service-label
  name: myserver-nginx-service
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30004
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 30443
  selector:
    app: myserver-nginx-selector

[root@k8s-master1 yaml]# kubectl apply -f nginx.yaml
namespace/myserver created
deployment.apps/myserver-nginx-deployment created
service/myserver-nginx-service created


[root@k8s-master1 yaml]# cat tomcat.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: myserver
---
kind: Deployment
#apiVersion: extensions/v1beta1
apiVersion: apps/v1
metadata:
  labels:
    app: myserver-tomcat-app1-deployment-label
  name: myserver-tomcat-app1-deployment
  namespace: myserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myserver-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: myserver-tomcat-app1-selector
    spec:
      containers:
      - name: myserver-tomcat-app1-container
        image: registry.cn-hangzhou.aliyuncs.com/zhangshijie/tomcat-app1:v1
        #command: ["/apps/tomcat/bin/run_tomcat.sh"]
        #imagePullPolicy: IfNotPresent
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        - name: "age"
          value: "18"
#        resources:
#          limits:
#            cpu: 2
#            memory: 2Gi
#          requests:
#            cpu: 500m
#            memory: 1Gi


---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: myserver-tomcat-app1-service-label
  name: myserver-tomcat-app1-service
  namespace: myserver
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30005
  selector:
    app: myserver-tomcat-app1-selector

[root@k8s-master1 yaml]# kubectl apply -f tomcat.yaml
namespace/myserver unchanged
deployment.apps/myserver-tomcat-app1-deployment created
service/myserver-tomcat-app1-service created



[root@k8s-master1 yaml]# kubectl get po,svc -n myserver
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/myserver-nginx-deployment-596d5d9799-dstzh         1/1     Running   0          4m18s
pod/myserver-tomcat-app1-deployment-6bb596979f-v5gn6   1/1     Running   0          3m5s

NAME                                   TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
service/myserver-nginx-service         NodePort   10.200.98.47    <none>        80:30004/TCP,443:30443/TCP   4m18s
service/myserver-tomcat-app1-service   NodePort   10.200.109.47   <none>        80:30005/TCP                 3m6s

Access test:
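A hedged check from the shell as well, using the master IP and the NodePorts defined above:

[root@k8s-master1 yaml]# curl -I http://10.31.200.100:30004   # nginx
[root@k8s-master1 yaml]# curl -I http://10.31.200.100:30005   # tomcat app1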

5. Deploying the Official Dashboard

A recommended alternative dashboard:
kuboard: https://www.kuboard.cn/

5.1 Prepare the YAML

[root@k8s-master1 ~]# cd yaml/
[root@k8s-master1 yaml]# ls
calico-3.25.1.yaml  nginx.yaml  tomcat.yaml
[root@k8s-master1 yaml]# mkdir dashboard-v2.7.0
[root@k8s-master1 yaml]# cd dashboard-v2.7.0
[root@k8s-master1 dashboard-v2.7.0]# ls
admin-secret.yaml  admin-user.yaml  dashboard-v2.7.0.yaml

[root@k8s-master1 dashboard-v2.7.0]# cat admin-secret.yaml 
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user
  namespace: kubernetes-dashboard 
  annotations:
    kubernetes.io/service-account.name: "admin-user"

[root@k8s-master1 dashboard-v2.7.0]# cat admin-user.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard


[root@k8s-master1 dashboard-v2.7.0]# cat dashboard-v2.7.0.yaml 
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

5.2 Deploy the Dashboard and Create the Account and Authorization

[root@k8s-master1 dashboard-v2.7.0]# kubectl create ns kubernetes-dashboard
[root@k8s-master1 dashboard-v2.7.0]# kubectl apply -f admin-user.yaml -f admin-secret.yaml -f dashboard-v2.7.0.yaml

[root@k8s-master1 dashboard-v2.7.0]# kubectl get po,svc -n kubernetes-dashboard
NAME                                            READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-7bc864c59-4fvxj   1/1     Running   0          6m7s
pod/kubernetes-dashboard-6c7ccbcf87-zxs8c       1/1     Running   0          6m7s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.200.234.72   <none>        8000/TCP        6m7s
service/kubernetes-dashboard        NodePort    10.200.125.63   <none>        443:30000/TCP   6m8s

5.3 Get the Login Token

[root@k8s-master1 dashboard-v2.7.0]# kubectl get secret -A |grep admin
kubernetes-dashboard   dashboard-admin-user              kubernetes.io/service-account-token   3      3m13s

[root@k8s-master1 dashboard-v2.7.0]# kubectl describe secret -n kubernetes-dashboard dashboard-admin-user
Name:         dashboard-admin-user
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: f81fa465-9691-43ed-ac9c-d3080a93f6c9

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:  # copy this
eyJhbGciOiJSUzI1NiIsImtpZCI6IkpRMHdMN1RKTW1MTVp3MmQ1dkxvcGpVal9XWXp6eUFyNWZiU2tldFR2aW8ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZjgxZmE0NjUtOTY5MS00M2VkLWFjOWMtZDMwODBhOTNmNmM5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.XUuCUy5_Zx3CRMzuNCaOFnYGsAzWIs07xo_Azn9ywTJk6kBWRsp-pEtZ-7r4FuPeXgfpEiCgBIJ9XkKVIEJ0hUoNL31v-l4vdGs8TKbFY0xE1t2uFGeab3pVS3iKlVTlgaJCerK5xZWkgCXkGZu3yYyq-giWekWy2zbASJPRZU5QlirUBvds6N4tdWYzuEf-GucsBLPd920FDRBjQb6SLvu8cKtWUygAnJZiTvBpM1GH-jMk22D_Ue5RxPlr3oJxuNtRhyQHjJPU8B8-AMoDVnXl_Mv34QnthnmvS3uxxjhJKemeKh_TDLCgRVQlGOfNWVcuSBi9Dw5bxqUFah_TuA

5.4 Log in to the Dashboard

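With the token copied, open https://<node-IP>:30000 in a browser (30000 is the NodePort defined in the Service above, e.g. https://10.31.200.100:30000 in this environment), choose the Token login option, and paste the token to sign in.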
