Table of Contents

Introduction

How k8s works and its workflow

        1. Pod
        2. Controllers
        3. Service
        4. Resource objects
        5. Scheduler
        6. Storage

Master components:

        API Server
        etcd
        Scheduler
        kube-controller-manager

Node components:

        kubelet
        kube-proxy
        Container Runtime

Controllers:

        Replication Controller (ReplicaSet)
        Deployment
        StatefulSet

Workflow

The role of K8s in DevOps

How to deploy k8s

I. Environment planning
        1. Host planning
        2. Software planning
        3. Network segments
II. Base environment configuration
        1. Set hostnames
        2. Disable SELinux and the firewall on all hosts
        3. Verify time synchronization on all hosts
        4. Configure passwordless SSH on all hosts
        5. Add hostname resolution on all hosts
        6. Disable swap on all hosts
        7. Raise system resource limits on all hosts
III. Configure package repositories
        1. Configure the Docker repository
        2. Configure the Kubernetes repository
        3. Copy the repositories to the other two hosts
        4. Update all systems to the latest packages
IV. Tune system parameters
        1. Install ipvsadm on all hosts
        2. Load the IPVS modules on all hosts
        3. Tune kernel parameters
V. Install software on all hosts
        1. Install Docker and set a registry mirror
        2. Install the kubeadm packages
VI. Initialize the Kubernetes cluster
        1. Prepare the init configuration on the master node
        2. Pre-pull the required images on the master node
        3. Initialize the cluster on the master node
        4. Set the KUBECONFIG environment variable
        5. Join the worker nodes to the cluster
        6. Check the cluster nodes
VII. Deploy the Calico network for pod-to-pod communication
        1. Download the Calico manifest
        2. Point calico-etcd.yaml at the etcd endpoint
        3. Set the certificates used to connect to etcd
        4. Set the pod CIDR
        5. Deploy Calico
        6. Re-check the cluster node status
VIII. Verify the cluster status
        1. Check the cluster node status
        2. Check the core component status
IX. Install the metrics plugin
        1. Copy front-proxy-ca.crt to all worker nodes
        2. Deploy the metrics plugin
        3. Verify that metrics are working
X. Deploy the dashboard plugin
        1. Deploy the dashboard
        2. Change the dashboard service type to NodePort
        3. Access the dashboard
                1) Find the host running the dashboard pod
        4. Get a token and log in to the dashboard

Introduction

        In today's fast-moving technology landscape, Kubernetes has become the default choice for deploying and managing cloud-native applications. It is more than a container orchestrator: it gives developers and operations teams powerful automation and elastic scaling capabilities. Whether you are considering a move to a cloud-native architecture or are already a seasoned K8s user, this article walks through Kubernetes' core concepts and shares hands-on experience to help you get the most out of container orchestration.

        First, we look at how Kubernetes redefines application deployment. Scaling seamlessly from a single node to a full cluster, K8s gives applications a solid foundation with high availability and fault tolerance. You will see how simple configuration files let you define and manage complex multi-tier applications, enabling rapid deployment and continuous delivery.

        Kubernetes offers far more than that, though. We also examine service discovery, load balancing, and autoscaling, and how these fit into a microservice architecture. By the end you will have not only the theory, but also the practical knowledge to apply Kubernetes to real workloads.

        Whether you are a beginner or an experienced practitioner, this article aims to give you a thorough, end-to-end tour of Kubernetes. Let's set off on this cloud-native journey together!

How k8s works and its workflow

        Before looking at each component in detail, let's first get an overview of what K8s is made of.

        1. Pod

                 In k8s, every container runs inside a Pod. You can picture a Pod as a small box holding a container; a Pod can also hold several containers, but to keep containers easy to manage, one Pod usually corresponds to one container. A minimal example follows.
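As a quick illustration, a minimal single-container Pod can be declared and created like this (the names and image are placeholder examples, not part of the cluster built later in this article):

# Create a minimal single-container Pod (names and image are examples).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
  - name: web
    image: nginx:1.21
    ports:
    - containerPort: 80
EOF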

        2. Controllers

                Controllers manage the state, count, version, and so on of Pods. A controller watches the state of resources in Kubernetes and takes action to drive the actual state toward the desired state.

        3. Service

                In K8s, Pods are created and destroyed dynamically, so their IP addresses are ephemeral. A Service solves this problem and provides a stable internal DNS name as the entry point to an application.
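As a sketch, a ClusterIP Service that selects the example Pod above by its app label would look like this (names are again illustrative):

# Give Pods labeled app=demo a stable virtual IP and DNS name (demo-svc).
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-svc
spec:
  selector:
    app: demo          # matches the Pod label from the previous example
  ports:
  - port: 80           # port exposed by the Service
    targetPort: 80     # port the container listens on
EOF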

        4. Resource objects

                 All resources in k8s (such as Pods, Services, and Volumes) are represented by one or more resource objects. Resource objects are defined in YAML or JSON files and are managed through the Kubernetes API, which serves as the single API entry point.

        5. Scheduler

                  The scheduler is another core component of k8s. It determines which node will run a particular Pod.

        6. Storage

                 K8s provides several storage options, such as local storage, network storage, and object storage. Users can pick whichever option best fits their requirements.

                Storage in K8s is managed in the form of volumes (Volume). A volume can be a directory, a file, a block device, and so on. Volumes can be persistent, meaning data mounted into a Pod is kept for as long as it is needed, even across nodes when the volume is network-backed. A small example follows.
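For example, a Pod can declare an emptyDir volume and mount it into a container (a minimal sketch with placeholder names; for durable data you would use a PersistentVolume instead):

# Pod with an emptyDir volume; its data lives as long as the Pod does.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: volume-demo
spec:
  containers:
  - name: writer
    image: busybox:1.35
    command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}
EOF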

Master components:

API Server:

        The Kubernetes API Server is the hub of the cluster and handles communication between its functional modules. Each module stores its state in etcd through the API Server; when a module needs to read or operate on that data, it asks the API Server, which performs the lookup in etcd on its behalf.

etcd:

        Role: a distributed key-value store (a non-relational database) that keeps the state and configuration of the entire cluster.

        How it works: it stores the cluster's overall state, configuration, and metadata, and is used by kube-apiserver to persist configuration, Pod status, Service data, and more.

Scheduler:

        Schedules Pods onto worker nodes: it places newly created Pods on a suitable worker node and reports the binding back to the API Server, which stores it in etcd.

kube-controller-manager:

        Role: bundles multiple controllers responsible for maintaining the state of the cluster's resources.

        How it works: watches for resource changes through the API Server (Replication Controller, Node Controller, and so on) and updates the cluster through the API Server to reach the desired state.

Node components:

kubelet:

        Role: runs on every Node and maintains the lifecycle of Pods.

        How it works: fetches the desired Pod state from the API Server, interacts with the container runtime (such as Docker) to create, start, and stop Pods, and reports the node's status back to the API Server.

kube-proxy:

        Role: provides network proxying and load balancing, making sure traffic between Pods and from outside the cluster is routed correctly.

        How it works: watches Service and Endpoint changes through the API Server and maintains the network rules on the node so that traffic reaches the right backend Pods.

Container Runtime:

        Role: runs containers, turning container images into running container instances.

        How it works: interacts with the kubelet according to the Pod spec, pulls images and runs containers, and handles the container lifecycle.

Controllers:

        Replication Controller (ReplicaSet):

                Role: ensures the specified number of Pod replicas is running in the cluster.

                How it works: watches the running Pods and, when the count does not match the desired number, adjusts it through the API Server.

        Deployment:

                Role: provides declarative syntax for deploying and updating applications.

                How it works: you define the desired application state, and the Deployment Controller creates, updates, and deletes Pods in the cluster to satisfy it (see the sketch after this list).

        StatefulSet:

                Role: manages stateful applications, ensuring each Pod has a unique identity and a stable network identity.

                How it works: similar to a Deployment, but with stronger uniqueness and stability guarantees for stateful applications.
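A minimal Deployment sketch that ties these controllers together (the replica count, names, and image are illustrative):

# Declarative Deployment: the controller keeps 3 identical Pods running.
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: web
        image: nginx:1.21
EOF
# Rolling update: change the image and the controller replaces Pods gradually.
kubectl set image deployment/demo-deploy web=nginx:1.22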

Workflow:

  • The user sends a Pod-creation request to the api-server via kubectl.
  • The apiserver authenticates the request with the corresponding kubeconfig and, once it passes, stores the Pod spec from the YAML in etcd.
  • The Controller-Manager notices the new Pod information through the apiserver's watch interface, assembles the resource topology the Pod depends on, and hands the result back to the apiserver, which writes it to etcd; the Pod can now be scheduled.
  • The Scheduler likewise learns through the apiserver's watch interface that the Pod is schedulable, assigns it to a suitable node with its scheduling algorithm, and sends the Pod-to-node binding to the apiserver, which writes it to etcd and hands the Pod to the kubelet.
  • The kubelet receives the Pod and drives the standard interfaces: it calls CNI to create the Pod's network, CRI to start the containers, and CSI to mount the storage volumes. Docker provides the container runtime environment and is responsible for starting, stopping, and managing container lifecycles; the kubelet talks to Docker through its API to create and manage the containers.
  • Once the network, containers, and storage are ready, the Pod is created; after the business process starts, the Pod is running (you can watch this sequence with the commands below).
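One way to observe this sequence end to end is to create a throwaway Pod and watch its events arrive (a quick sketch; the Pod name and image are examples):

# Create a Pod, then watch the Scheduled / Pulling / Started events appear.
kubectl run flow-demo --image=nginx:1.21
kubectl get events --watch --field-selector involvedObject.name=flow-demo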

       

The role of K8s in DevOps

        DevOps, a concept coined in 2009, is short for Development + Operations, and its goal is to unify development and operations: an automated software delivery pipeline that makes building, testing, and releasing software faster and more reliable. Every team interprets it a little differently and there is no single standard, but most practice follows the same broad pattern.

        The most important goal is to fix the severe disconnect between development and operations, so that the later stages of product delivery (such as packaging and deployment) run on every code change.

        Yet the concept only became popular in recent years. Why? Because containerization, and especially the rise of K8s, finally delivered the environment consistency and deployment simplicity that make it possible to automate and unify Dev and Ops!

        CI (Continuous Integration): the process that starts the pipeline. It automatically monitors one or more source repositories for changes; when a change is pushed, it detects it, checks out a copy, builds it, and runs the associated unit tests.

        CD (Continuous Delivery): just as a factory assembly line turns raw materials into consumer goods quickly, automatically, and repeatably, software delivery produces releases from source code the same way, including automated building, testing, and packaging into a directly deployable artifact, plus automatic publication of updates to designated internal environments, all without manual intervention.

        The C (Continuous) in both terms emphasizes being ready to run at any time: frequent releases, automated processes, repeatability, fast iteration, and failing fast. And the single most important enabler of "failing fast" is unit testing!

How to deploy k8s

I. Environment planning

        Since our lab environment is limited, we use only three machines here; in production, a Kubernetes cluster can manage thousands of worker nodes (the upstream project documents support for up to 5,000 nodes per cluster).

1. Host planning

        192.168.140.10 k8s-master.linux.com master node, 2 CPUs, 2 GB RAM

        192.168.140.11 k8s-node01.linux.com worker node

        192.168.140.12 k8s-node02.linux.com worker node

2. Software planning

kubernetes 1.20.7

docker 19.03

3. Network segments

Pod CIDR: 192.168.0.0/16

Service CIDR: 172.16.0.0/16

Note that the pod CIDR overlaps the hosts' own 192.168.x.x addressing; that is tolerable in this isolated lab, but in production choose a pod CIDR that does not overlap any host network.

II. Base environment configuration

1. Set hostnames (run the matching command on each host)

[root@k8s-master ~]# hostnamectl set-hostname k8s-master.linux.com

2. Disable SELinux and the firewall on all hosts

[root@k8s-master ~]# systemctl stop firewalld
[root@k8s-master ~]# systemctl disable firewalld
[root@k8s-master ~]# systemctl mask  firewalld

[root@k8s-master ~]# getenforce 
Disabled
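The transcript above only disables the firewall; if getenforce does not already report Disabled, SELinux can be turned off with the standard CentOS commands:

# Switch SELinux to permissive mode now and disable it permanently at next boot.
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config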

3. Verify time synchronization on all hosts

[root@k8s-master ~]# crontab -e
*/30 * * * *   /usr/sbin/ntpdate  120.25.115.20 &> /dev/null

4. Configure passwordless SSH on all hosts

[root@k8s-master ~]# ssh-keygen -t rsa
[root@k8s-master ~]# mv /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys

[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.11:/root/
[root@k8s-master ~]# scp -r /root/.ssh/ root@192.168.140.12:/root/
[root@k8s-master ~]# for i in 10 11 12
> do
> ssh root@192.168.140.$i "hostname; date"
> done

k8s-master.linux.com
Fri Nov 18 12:16:04 CST 2022
k8s-node01.linux.com
Fri Nov 18 12:16:04 CST 2022
k8s-node02.linux.com
Fri Nov 18 12:16:04 CST 2022

5. Add hostname resolution on all hosts

[root@k8s-master ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.140.10	k8s-master.linux.com
192.168.140.11	k8s-node01.linux.com
192.168.140.12	k8s-node02.linux.com

[root@k8s-master ~]# scp /etc/hosts root@192.168.140.11:/etc/hosts
hosts                                                                                                                      100%  267   154.5KB/s   00:00    
[root@k8s-master ~]# scp /etc/hosts root@192.168.140.12:/etc/hosts
hosts                                                                                                                      100%  267    61.9KB/s   00:00    

6. Disable swap on all hosts

[root@k8s-master ~]# swapoff -a
[root@k8s-master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           3932         113        3658          11         160        3599
Swap:             0           0           0
[root@k8s-master ~]# sysctl -w vm.swappiness=0
vm.swappiness = 0
[root@k8s-master ~]# sed -ri '/swap/d' /etc/fstab 

7. Raise system resource limits on all hosts

[root@k8s-master ~]#  ulimit -SHn 65535
[root@k8s-master ~]# vim /etc/security/limits.conf 
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
[root@k8s-master ~]# scp /etc/security/limits.conf root@192.168.140.11:/etc/security/limits.conf 
limits.conf                                                                                                                100% 2555     1.4MB/s   00:00    
[root@k8s-master ~]# scp /etc/security/limits.conf root@192.168.140.12:/etc/security/limits.conf 
limits.conf                                                                                                                100% 2555     2.1MB/s   00:00    

III. Configure package repositories

1. Configure the Docker repository

[root@k8s-master ~]# cat /etc/yum.repos.d/docker.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.aliyun.com/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/docker-ce/linux/centos/gpg

2. Configure the Kubernetes repository

[root@k8s-master ~]# vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

The sed below relaxes GPG verification for this repo (it removes the gpgkey and repo_gpgcheck lines and sets gpgcheck=0), which avoids signature failures against the mirror; run it on every host that ends up with this repo file.

[root@k8s-node02 ~]# sed -ri -e '$d' -e '/^repo_/d' -e '/^gpgcheck/s|1|0|' /etc/yum.repos.d/k8s.repo

3. Copy the repositories to the other two hosts

[root@k8s-master ~]# scp /etc/yum.repos.d/docker.repo root@192.168.140.11:/etc/yum.repos.d/
docker.repo                                                                                                                100% 2081     1.8MB/s   00:00    
[root@k8s-master ~]# scp /etc/yum.repos.d/docker.repo root@192.168.140.12:/etc/yum.repos.d/
docker.repo                                                                                                                100% 2081     2.8MB/s   00:00    
[root@k8s-master ~]# 
[root@k8s-master ~]# scp /etc/yum.repos.d/k8s.repo root@192.168.140.11:/etc/yum.repos.d/
k8s.repo                                                                                                                   100%  276   360.6KB/s   00:00    
[root@k8s-master ~]# scp /etc/yum.repos.d/k8s.repo root@192.168.140.12:/etc/yum.repos.d/
k8s.repo                    

4. Update all systems to the latest packages (then reboot)

# yum update -y
# init 6

IV. Tune system parameters

1. Install ipvsadm on all hosts

[root@k8s-master ~]# yum install ipvsadm 

2. Load the IPVS modules on all hosts

[root@k8s-master ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack_ipv4
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master ~]# scp /etc/modules-load.d/ipvs.conf root@192.168.183.11:/etc/modules-load.d/ipvs.conf 
ipvs.conf                                                                                                                  100%  211   123.4KB/s   00:00    
[root@k8s-master ~]# scp /etc/modules-load.d/ipvs.conf root@192.168.183.12:/etc/modules-load.d/ipvs.conf 
ipvs.conf
[root@k8s-master ~]# systemctl enable --now systemd-modules-load
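To confirm the modules actually loaded, run a quick check on each host (note that on kernels 4.19 and newer, nf_conntrack_ipv4 was renamed to nf_conntrack):

# Verify that the IPVS and conntrack modules are present.
lsmod | grep -e ip_vs -e nf_conntrack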

3. Tune kernel parameters (copy k8s.conf to every host and apply with sysctl --system)

[root@k8s-master ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
[root@k8s-master ~]# scp /etc/sysctl.d/k8s.conf root@192.168.183.11:/etc/sysctl.d/
k8s.conf                                                                                                                   100%  704   654.4KB/s   00:00    
[root@k8s-master ~]# scp /etc/sysctl.d/k8s.conf root@192.168.183.12:/etc/sysctl.d/
k8s.conf  
[root@k8s-node02 ~]# sysctl --system 

V. Install software on all hosts

1. Install Docker and set a registry mirror

[root@k8s-master ~]# yum install -y docker-ce-19.03* 

[root@k8s-master ~]# systemctl enable --now docker

[root@k8s-master ~]# vim /etc/docker/daemon.json
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"]
}


[root@k8s-master ~]# systemctl restart docker
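The kubeadm preflight check later warns that Docker's default cgroupfs driver is not the recommended one. Optionally, daemon.json can also pin the systemd cgroup driver (a sketch; if you do this, the kubelet must be configured to use the same driver):

# Optional: registry mirror plus the systemd cgroup driver, then restart Docker.
cat > /etc/docker/daemon.json <<EOF
{
   "registry-mirrors": ["http://hub-mirror.c.163.com"],
   "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker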

2. Install the kubeadm packages

[root@k8s-master ~]# yum install -y kubeadm-1.20.7 kubelet-1.20.7 kubectl-1.20.7 
[root@k8s-master ~]# vim /etc/sysconfig/kubelet 
KUBELET_EXTRA_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.2"

[root@k8s-master ~]# systemctl enable --now kubelet

VI. Initialize the Kubernetes cluster

1. Prepare the init configuration on the master node

[root@k8s-master ~]# kubeadm config print init-defaults > new.yaml 
[root@k8s-master ~]# cat new.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 96h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.183.10      # master node IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 192.168.183.10              # master node IP
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 192.168.183.10:6443            # master node IP
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.7
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16     # must match the Calico pod CIDR set later
  serviceSubnet: 172.16.0.0/16
scheduler: {}

2. Pre-pull the required images on the master node

[root@k8s-master ~]# kubeadm config images pull --config /root/new.yaml 

3. Initialize the cluster on the master node

[root@k8s-master ~]# kubeadm init --config /root/new.yaml --upload-certs 
[init] Using Kubernetes version: v1.20.7
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "k8s-master01" could not be reached
	[WARNING Hostname]: hostname "k8s-master01": lookup k8s-master01 on 114.114.114.114:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [172.16.0.1 192.168.183.10]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.183.10 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.183.10 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.026038 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
4c95b29d3f062b0b005d7f6ddd74dc1b227ad8f76dbe5b5c2a13da883d93d73f
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 7t2weq.bjbawausm0jaxury
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.183.10:6443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:26b04c860890abb6fabbca8dd194eb089aa6a3811be1fe2edff3b4a08c6bb38c \
    --control-plane --certificate-key 4c95b29d3f062b0b005d7f6ddd74dc1b227ad8f76dbe5b5c2a13da883d93d73f

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.183.10:6443 --token 7t2weq.bjbawausm0jaxury \
    --discovery-token-ca-cert-hash sha256:26b04c860890abb6fabbca8dd194eb089aa6a3811be1fe2edff3b4a08c6bb38c 

4. Set the KUBECONFIG environment variable

[root@k8s-master ~]# vim /etc/profile
export KUBECONFIG=/etc/kubernetes/admin.conf

[root@k8s-master ~]# source /etc/profile

5. Join the worker nodes to the cluster (run on each worker node)

[root@k8s-node01 ~]# kubeadm join 192.168.183.10:6443 --token 7t2weq.bjbawausm0jaxury \
>     --discovery-token-ca-cert-hash sha256:26b04c860890abb6fabbca8dd194eb089aa6a3811be1fe2edff3b4a08c6bb38c

6. Check the cluster nodes (they report NotReady because no CNI network plugin is installed yet; deploying Calico in the next section fixes this)

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS     ROLES                  AGE     VERSION
k8s-master01           NotReady   control-plane,master   7m55s   v1.20.7
k8s-node01.linux.com   NotReady   <none>                 97s     v1.20.7
k8s-node02.linux.com   NotReady   <none>                 42s     v1.20.7
[root@k8s-master ~]# kubectl get pods -A -o wide 
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE     IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   coredns-54d67798b7-d8ncn                       0/1     Pending   0          8m41s   <none>           <none>                 <none>           <none>
kube-system   coredns-54d67798b7-tv8lg                       0/1     Pending   0          8m41s   <none>           <none>                 <none>           <none>
kube-system   etcd-k8s-master.linux.com                      1/1     Running   0          8m54s   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-apiserver-k8s-master.linux.com            1/1     Running   0          8m54s   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-controller-manager-k8s-master.linux.com   1/1     Running   0          8m54s   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-proxy-27xht                               1/1     Running   0          8m42s   192.168.140.10   k8s-master.linux.com   <none>           <none>
kube-system   kube-proxy-gc82n                               1/1     Running   0          3m6s    192.168.140.11   k8s-node01.linux.com   <none>           <none>
kube-system   kube-proxy-xdw99                               1/1     Running   0          2m46s   192.168.140.12   k8s-node02.linux.com   <none>           <none>
kube-system   kube-scheduler-k8s-master.linux.com            1/1     Running   0          8m54s   192.168.140.10   k8s-master.linux.com   <none>           <none>

VII. Deploy the Calico network for pod-to-pod communication

  • Calico uses the BGP protocol for inter-node routing
  • These steps are performed on the master node only

1. Download the Calico manifest

[root@k8s-master ~]# cp calico-etcd.yaml calico-etcd.yaml.bak
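The transcript above only shows the backup copy; the manifest itself is usually fetched from the Project Calico site first (the exact URL depends on the Calico release you target; the path below is the pattern used at the time):

# Fetch the etcd-datastore Calico manifest (adjust for your Calico version).
curl -O https://docs.projectcalico.org/manifests/calico-etcd.yaml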

2. Point calico-etcd.yaml at the etcd endpoint

[root@k8s-master ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.183.10:2379"#g' calico-etcd.yaml

3. Set the certificates used to connect to etcd

[root@k8s-master ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d '\n'`
[root@k8s-master ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d '\n'`
[root@k8s-master ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d '\n'`
[root@k8s-master ~]# 
[root@k8s-master ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml
[root@k8s-master ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico-etcd.yaml

4. Set the pod CIDR (this must match podSubnet in new.yaml)

[root@k8s-master ~]# vim calico-etcd.yaml

            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"

5. Deploy Calico

[root@k8s-master ~]# kubectl create -f calico-etcd.yaml
[root@k8s-master ~]# kubectl get pods -A -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE    IP               NODE                   NOMINATED NODE   READINESS GATES
kube-system   calico-kube-controllers-5f6d4b864b-sv9kc   1/1     Running   0          10m    192.168.183.10   k8s-master01           <none>           <none>
kube-system   calico-node-bcsxg                          1/1     Running   0          10m    192.168.183.11   k8s-node01.linux.com   <none>           <none>
kube-system   calico-node-g6k4d                          1/1     Running   0          10m    192.168.183.12   k8s-node02.linux.com   <none>           <none>
kube-system   calico-node-wp8hj                          1/1     Running   0          10m    192.168.183.10   k8s-master01           <none>           <none>

6. Re-check the cluster node status

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS   ROLES                  AGE    VERSION
k8s-master01           Ready    control-plane,master   130m   v1.20.7
k8s-node01.linux.com   Ready    <none>                 124m   v1.20.7
k8s-node02.linux.com   Ready    <none>                 123m   v1.20.7

VIII. Verify the cluster status

  • At this point, the k8s cluster is fully set up

1. Check the cluster node status

[root@k8s-master ~]# kubectl get nodes
NAME                   STATUS   ROLES                  AGE    VERSION
k8s-master01           Ready    control-plane,master   130m   v1.20.7
k8s-node01.linux.com   Ready    <none>                 124m   v1.20.7
k8s-node02.linux.com   Ready    <none>                 123m   v1.20.7

2. Check the core component status

[root@k8s-master ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-5f6d4b864b-sv9kc   1/1     Running   0          13m
kube-system   calico-node-bcsxg                          1/1     Running   0          13m
kube-system   calico-node-g6k4d                          1/1     Running   0          13m
kube-system   calico-node-wp8hj                          1/1     Running   0          13m
kube-system   coredns-54d67798b7-8fz4t                   1/1     Running   0          132m
kube-system   coredns-54d67798b7-s2s9l                   1/1     Running   0          132m
kube-system   etcd-k8s-master01                          1/1     Running   0          132m
kube-system   kube-apiserver-k8s-master01                1/1     Running   0          132m
kube-system   kube-controller-manager-k8s-master01       1/1     Running   0          132m
kube-system   kube-proxy-hpst5                           1/1     Running   0          126m
kube-system   kube-proxy-llpzp                           1/1     Running   0          125m
kube-system   kube-proxy-w9ff4                           1/1     Running   0          132m
kube-system   kube-scheduler-k8s-master01                1/1     Running   0          132m

IX. Install the metrics plugin

  • Collects and monitors node CPU and memory usage

1. Copy front-proxy-ca.crt to all worker nodes

[root@k8s-master ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt root@192.168.183.11:/etc/kubernetes/pki/front-proxy-ca.crt 
front-proxy-ca.crt                                                                                                                       100% 1078   754.4KB/s   00:00    
[root@k8s-master ~]# scp /etc/kubernetes/pki/front-proxy-ca.crt root@192.168.183.12:/etc/kubernetes/pki/front-proxy-ca.crt 
front-proxy-ca.crt                   

2. Deploy the metrics plugin

[root@k8s-master ~]# kubectl create -f comp.yaml
[root@k8s-master ~]# kubectl get pods -A 
kube-system   metrics-server-545b8b99c6-5jqv4            1/1     Running   0          25s
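The comp.yaml used above is the metrics-server components manifest. If you need to obtain it, it is normally downloaded from the metrics-server GitHub releases; pick a release compatible with k8s 1.20, such as the 0.4.x line (the exact URL below is indicative):

# Download the metrics-server manifest (v0.4.4 shown; use a 1.20-compatible release).
curl -LO https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.4.4/components.yaml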

3. Verify that metrics are working

[root@k8s-master ~]# kubectl top nodes 
NAME                   CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%     
k8s-node02.linux.com   154m         3%     834Mi           22%         

X. Deploy the dashboard plugin

  • Provides the web UI

1. Deploy the dashboard

[root@k8s-master ~]# cd dashboard/
[root@k8s-master dashboard]# kubectl create -f ./ 

Check the dashboard Pods

[root@k8s-master dashboard]# kubectl get pods -n kubernetes-dashboard 
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7645f69d8c-hxz8n   1/1     Running   0          86s
kubernetes-dashboard-78cb679857-8hdnc        1/1     Running   0          86s

Check the dashboard Service

[root@k8s-master dashboard]# kubectl get service -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   172.16.188.225   <none>        8000/TCP   4m13s
kubernetes-dashboard        ClusterIP   172.16.1.18      <none>        443/TCP    4m14s

2. Change the dashboard service type to NodePort

[root@k8s-master dashboard]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

  type: NodePort
[root@k8s-master dashboard]# kubectl get service -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.16.188.225   <none>        8000/TCP        8m44s
kubernetes-dashboard        NodePort    172.16.1.18      <none>        443:32205/TCP   8m45s
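Instead of editing the Service interactively, the same change can be applied non-interactively (an equivalent one-liner):

# Non-interactive equivalent of the kubectl edit above.
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'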

3. Access the dashboard

1) Find the host running the dashboard pod
[root@k8s-master dashboard]# kubectl get pods -n kubernetes-dashboard -o wide 
NAME                                         READY   STATUS    RESTARTS   AGE   IP                NODE                   NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7645f69d8c-hxz8n   1/1     Running   0          11m   192.168.242.130   k8s-node02.linux.com   <none>           <none>
kubernetes-dashboard-78cb679857-8hdnc  

With the NodePort mapping above, the dashboard is then reachable from a browser at https://<node-IP>:32205 on any cluster node.

4. Get a token and log in to the dashboard
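The describe command below assumes an admin-user ServiceAccount already exists in kube-system. If it does not, the commonly used recipe is to create one and bind it to cluster-admin (a sketch; tighten the permissions to taste):

# Create the admin-user ServiceAccount and bind it to the cluster-admin role.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF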

[root@k8s-master dashboard]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-crx2f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 289b1d24-93ec-4af0-abdc-99d51dafa133

Type:  kubernetes.io/service-account-token

Data
====
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlBUNDdadjNDQ3hPbWZ1enlNMGZCU09SNlpZOW9GdkIxckI1LWdWclIwUTgifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNyeDJmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiIyODliMWQyNC05M2VjLTRhZjAtYWJkYy05OWQ1MWRhZmExMzMiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.FuU7y1LTqlEMSLFMkilOS2-9Q6uaoSYSvn7hr2aS4vGN5CvIeXbWRr-SJKmMTsDGr3ZfQcPc06ixFN1IafSAXZ0Ao6V6dPpuY37DmJ4Uv8Kvinrg1gRZeVSjpeFsTa9cXWod6tDDI7zxEF7byimkGTwXPjgGui2eRwFObu7UzdhAyNZMbALhKM3ot36Acbt8kQoZgkPeZLrbsuy--Qd1tdUH3rirvNEI9v_YDUYx_o5NxMM5OHvrtWConVtenRBYmIllsV4gx_-KQHFwWvx8IAtV4fQFMp5E-hcjOcxhIXkjnUPzr1BlhV68H7yZF2YYkama4y7EKPI6E1hlBYPcHA
ca.crt:     1066 bytes
