1. Kubernetes Architecture

Official website: Kubernetes

k8s features

  • Automatic bin packing, self-healing, horizontal scaling, service discovery and load balancing, automated rollouts and rollbacks
  • Secret and configuration management, storage orchestration, batch execution

Core advantages of K8S

  • Containers are created and deleted automatically from yaml files
  • Faster elastic horizontal scaling of workloads
  • Newly scaled-out containers are discovered dynamically and automatically exposed to users
  • Simpler and faster application code upgrades and rollbacks (see the sketch after this list)
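As a hedged illustration of the last two points (scaling plus upgrade/rollback), the commands below act on a Deployment; the name nginx-deployment and the container name nginx are only examples and match the test Deployment created later in this article.

##Scale the Deployment out to 5 replicas, then back to 3
# kubectl scale deployment nginx-deployment --replicas=5
# kubectl scale deployment nginx-deployment --replicas=3

##Upgrade the image, watch the rollout, then roll back if needed
# kubectl set image deployment/nginx-deployment nginx=nginx:1.20.0
# kubectl rollout status deployment/nginx-deployment
# kubectl rollout undo deployment/nginx-deployment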

2. k8s Components

master/node: the physical architecture of a k8s cluster follows the master/node model

master: 3 core components: API Server, Scheduler, Controller-Manager

node: 3 components: docker, kube-proxy, kubelet (the cluster agent)

kube-apiserver

The Kubernetes API server validates and configures data for the API objects, which include pods, services, replicationcontrollers, and others. The API server services REST operations and provides the frontend to the cluster's shared state through which all other components interact.

kube-apiserver | Kubernetes
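As a quick sketch of the API server acting as the REST front end, it can be queried through kubectl (standard subcommands, shown here without output):

# kubectl get --raw /healthz        ##health endpoint served by the API server
# kubectl api-resources | head      ##resource types the API server exposes over REST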

kube-scheduler

kube-scheduler is the Kubernetes pod scheduler; it is responsible for assigning Pods to nodes. For each Pod in the scheduling queue, kube-scheduler determines the nodes on which that Pod can legitimately be placed, based on constraints and available resources. kube-scheduler is a policy-rich, topology-aware, workload-specific component; it has to account for individual and collective resource requirements, quality-of-service requirements, hardware/software/policy constraints, affinity and anti-affinity specifications, and so on. Multiple different schedulers may be used within the same cluster.

kube-scheduler | Kubernetes
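For illustration only, the pod below gives the scheduler explicit constraints to satisfy: a nodeSelector and resource requests. The label disktype=ssd is a made-up example and must exist on some node for the pod to be scheduled.

# kubectl apply -f - << EOF
apiVersion: v1
kind: Pod
metadata:
  name: scheduling-demo
spec:
  nodeSelector:
    disktype: ssd
  containers:
  - name: nginx
    image: nginx:1.18.0
    resources:
      requests:
        cpu: "250m"
        memory: "128Mi"
EOF

# kubectl get pod scheduling-demo -o wide   ##the NODE column shows where kube-scheduler placed the pod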

kube-controller-manager

kube-controller-manager: as the management and control center inside the cluster, the controller-manager is responsible for managing Nodes, Pod replicas, service endpoints (Endpoint), namespaces (Namespace), service accounts (ServiceAccount) and resource quotas (ResourceQuota). When a Node goes down unexpectedly, the controller-manager detects it promptly and runs the automated repair flow, ensuring that the pod replicas in the cluster always stay in their desired state.

kube-controller-manager | Kubernetes
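A simple way to watch this reconciliation in action (a sketch; the pod name below is just an example of a Deployment-managed replica) is to delete one replica and observe the controller create a replacement:

# kubectl delete pod nginx-deployment-7486689d59-mhrj6   ##delete any Deployment-managed replica
# kubectl get pod -w                                      ##a new replica is created to restore the desired count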

kube-proxy

kube-proxy: the Kubernetes network proxy runs on each node. It reflects the services defined in the Kubernetes API on each node and can perform simple TCP, UDP and SCTP stream forwarding, or round-robin TCP, UDP and SCTP forwarding across a set of backends. The user must create a service through the apiserver API to configure the proxy; in practice, kube-proxy implements access to Kubernetes services by maintaining network rules on the host and performing connection forwarding.

kube-proxy | Kubernetes
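To inspect what kube-proxy maintains on a node (a sketch assuming the default iptables proxy mode of a kubeadm cluster):

# kubectl -n kube-system get configmap kube-proxy -o yaml | grep "mode:"   ##an empty value means the default iptables mode
# iptables -t nat -L KUBE-SERVICES -n | head                               ##per-service NAT rules written by kube-proxy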

kubelet

kubelet: the agent component that runs on every worker node. It watches the pods that have been assigned to its node; its responsibilities include:

  • Report the node's status information to the master
  • Receive instructions and create docker containers in pods
  • Prepare the data volumes needed by pods
  • Return the running status of pods
  • Run container health checks on the node

kubelet | Kubernetes
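Unlike the control-plane components, the kubelet runs as a systemd service on the host rather than as a pod, so it is inspected with the usual systemd tools:

# systemctl status kubelet      ##should be active (running) once the node has joined the cluster
# journalctl -u kubelet -f      ##follow the kubelet logs when troubleshooting a node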

etcd

etcd was developed by CoreOS and is currently the default key-value data store used by Kubernetes. It holds all cluster data and supports clustered, distributed deployment. When used in a production environment, a regular backup mechanism must be provided for the etcd data.

Operating etcd clusters for Kubernetes | Kubernetes
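A hedged example of the periodic backup mentioned above, assuming a kubeadm-deployed etcd, etcdctl available on the node, and the default certificate paths under /etc/kubernetes/pki/etcd (the backup directory is an example):

# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
    --cacert=/etc/kubernetes/pki/etcd/ca.crt \
    --cert=/etc/kubernetes/pki/etcd/server.crt \
    --key=/etc/kubernetes/pki/etcd/server.key \
    snapshot save /data/backup/etcd-snapshot-$(date +%F).db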

3. Deploying a Highly Available Cluster

The components can be installed with batch deployment tools (ansible/saltstack), manually from binaries, with kubeadm, or via apt-get/yum, and are started on the host as daemons, much like starting nginx with a service script.

The example below uses kubeadm, the deployment tool provided by the Kubernetes project, for automated installation: docker and the other components are installed on the master and node machines first, the cluster is then initialized, and the control-plane services as well as the services on the nodes all run as pods.

Notes:

Disable swap   # swapoff -a

Disable selinux  # CentOS systems

Disable iptables

Tune kernel parameters and resource-limit parameters (a combined sketch follows below)
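A minimal sketch of these preparation steps (the sed patterns are examples; adjust them to your system):

# swapoff -a && sed -ri '/\sswap\s/s/^/#/' /etc/fstab                                   ##disable swap now and after reboot
# setenforce 0 && sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   ##CentOS only
# systemctl disable --now firewalld 2>/dev/null || ufw disable                          ##firewalld on CentOS, ufw on Ubuntu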

Environment preparation

Server environment: a minimal installation of the base system. If CentOS is used, disable the firewall, swap and selinux, update the package sources and synchronize the time; CentOS 7.5 or later is recommended. For Ubuntu, 18.04 or a later stable release is recommended; the example below uses Ubuntu 18.04.

master1:172.20.22.10

master2:172.20.22.11

master3:172.20.22.12

ha1:172.20.22.13

ha2:172.20.22.14

vip:172.20.22.29

Prerequisite: a highly available reverse proxy based on keepalived and haproxy is already in place to front the k8s apiserver; the detailed steps are omitted here.
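The proxy setup itself is out of scope, but for reference a minimal sketch on ha1/ha2 could look like the following; the interface name eth0 and the VRRP settings are assumptions, and the standby node may additionally need net.ipv4.ip_nonlocal_bind=1 to bind the VIP address.

# cat >> /etc/haproxy/haproxy.cfg << EOF
listen k8s-apiserver-6443
    bind 172.20.22.29:6443
    mode tcp
    server master1 172.20.22.10:6443 check inter 3s fall 3 rise 3
    server master2 172.20.22.11:6443 check inter 3s fall 3 rise 3
    server master3 172.20.22.12:6443 check inter 3s fall 3 rise 3
EOF

# cat > /etc/keepalived/keepalived.conf << EOF
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    virtual_ipaddress {
        172.20.22.29 dev eth0
    }
}
EOF

# systemctl restart keepalived haproxy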

Installing kubeadm and related components

Install kubeadm, kubelet, kubectl, docker and the other components on every node.

###Switch to the Aliyun mirror
# sudo vim /etc/apt/sources.list 
deb http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-security main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-updates main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-proposed main restricted universe multiverse

deb http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ bionic-backports main restricted universe multiverse


# sudo cat /etc/issue
Ubuntu 18.04.5 LTS \n \l


###Install script; run on all nodes
# sudo vim install.sh
#!/bin/bash
/sbin/modprobe br_netfilter
/sbin/modprobe ip_conntrack

cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1


EOF

/bin/sysctl -p

cat >> /etc/security/limits.conf << EOF
*   soft    core      unlimited
*   hard    core      unlimited
*   soft    nofile    500000
*   hard    nofile    500000
EOF


/usr/bin/apt update
/usr/bin/apt -y install apt-transport-https ca-certificates curl software-properties-common
/usr/bin/curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -
/usr/bin/curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add - 

cat  > /etc/apt/sources.list.d/kubernetes.list << EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

/usr/bin/apt update
/usr/bin/apt install -y docker-ce=5:19.03.15~3-0~ubuntu-bionic docker-ce-cli=5:19.03.15~3-0~ubuntu-bionic
/usr/bin/apt install -y kubelet=1.20.5-00 kubeadm=1.20.5-00 kubectl=1.20.5-00


# sudo bash install.sh

# sudo kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

# sudo docker version
Client: Docker Engine - Community
 Version:           19.03.15
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        99e3ed8919
 Built:             Sat Jan 30 03:16:51 2021
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.15
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       99e3ed8919
  Built:            Sat Jan 30 03:15:20 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.12
  GitCommit:        7b11cfaabd73bb80907dd23182b9347b4245eb5d
 runc:
  Version:          1.0.2
  GitCommit:        v1.0.2-0-g52b36a2
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683


# kubectl version
Client Version: version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:10:43Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

Run the kubeadm init command on a master node

Run the cluster initialization on any one of the three masters; the cluster only needs to be initialized once.

###List the default images
# sudo kubeadm config images list --kubernetes-version v1.20.5
k8s.gcr.io/kube-apiserver:v1.20.5
k8s.gcr.io/kube-controller-manager:v1.20.5
k8s.gcr.io/kube-scheduler:v1.20.5
k8s.gcr.io/kube-proxy:v1.20.5
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0

###Pre-pulling the images saves installation time
# sudo vim image.sh 
#!/bin/bash

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.20.5
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.4.13-0
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.7.0

# sudo bash image.sh

# sudo docker image ls
REPOSITORY                                                                    TAG                 IMAGE ID            CREATED             SIZE
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy                v1.20.5             5384b1650507        11 months ago       118MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver            v1.20.5             d7e24aeb3b10        11 months ago       122MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler            v1.20.5             8d13f1db8bfb        11 months ago       47.3MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager   v1.20.5             6f0c3da8c99e        11 months ago       116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/etcd                      3.4.13-0            0369cf4303ff        18 months ago       253MB
registry.cn-hangzhou.aliyuncs.com/google_containers/coredns                   1.7.0               bfe3a36ebd25        20 months ago       45.2MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        2 years ago         683kB


###Initialize the master node
# kubeadm init --apiserver-advertise-address=172.20.22.10 --control-plane-endpoint=172.20.22.29 --apiserver-bind-port=6443  --kubernetes-version=v1.20.5  --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers --ignore-preflight-errors=swap


##Generate the kubeconfig file
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config

# sudo kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
master1.k8s.com   Ready    control-plane,master   17m   v1.20.5


##Deploy the network plugin (flannel)
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml


# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS              RESTARTS   AGE
kube-system   coredns-54d67798b7-dt5sw                  1/1     Running             0          107s
kube-system   coredns-54d67798b7-ncdkc                  1/1     Running             0          107s
kube-system   etcd-master1.k8s.com                      1/1     Running             0          114s
kube-system   kube-apiserver-master1.k8s.com            1/1     Running             0          114s
kube-system   kube-controller-manager-master1.k8s.com   1/1     Running             0          114s
kube-system   kube-flannel-ds-pl48t                     1/1     Running             0          29s
kube-system   kube-proxy-pvzqq                          1/1     Running             0          107s
kube-system   kube-scheduler-master1.k8s.com            1/1     Running             0          114s

If all the pods above are in the Running state, the initialization has succeeded.

Next, generate a certificate key on the current master so that new control-plane nodes can be added.
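A typical way to (re)generate these values on the initialized master (a sketch; the token and certificate key printed will differ from the ones shown in the join command below):

###Re-upload the control-plane certificates and print the certificate key
# kubeadm init phase upload-certs --upload-certs

###Print a fresh join command (append --control-plane --certificate-key <key> when joining masters)
# kubeadm token create --print-join-command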

Adding master nodes

Run the following command on each of the remaining master nodes.

# sudo kubeadm join 172.20.22.29:6443 --token z6ttz0.0snqlm1a3w0rpx26     --discovery-token-ca-cert-hash sha256:d88c21f35f0bc314e5cbf013ed1c5e86cf8a63c79243b89e0927e43cd27727a0     --control-plane --certificate-key 5c5245f346fed7e042e0b6103a19503b2f95d92143a29f47bc3592c01b567945




###After the nodes have joined, check the cluster information
# sudo kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
master1.k8s.com   Ready    control-plane,master   20m   v1.20.5
master2.k8s.com   Ready    control-plane,master   61s   v1.20.5
master3.k8s.com   Ready    control-plane,master   15m   v1.20.5


# sudo kubectl get pod -A -o wide
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE   IP             NODE              NOMINATED NODE   READINESS GATES
kube-system   coredns-54d67798b7-dt5sw                  1/1     Running   0          66m   10.244.0.3     master1.k8s.com   <none>           <none>
kube-system   coredns-54d67798b7-ncdkc                  1/1     Running   0          66m   10.244.0.2     master1.k8s.com   <none>           <none>
kube-system   etcd-master1.k8s.com                      1/1     Running   0          66m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   etcd-master2.k8s.com                      1/1     Running   0          46m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   etcd-master3.k8s.com                      1/1     Running   0          61m   172.20.22.12   master3.k8s.com   <none>           <none>
kube-system   kube-apiserver-master1.k8s.com            1/1     Running   0          66m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   kube-apiserver-master2.k8s.com            1/1     Running   0          46m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   kube-apiserver-master3.k8s.com            1/1     Running   0          61m   172.20.22.12   master3.k8s.com   <none>           <none>
kube-system   kube-controller-manager-master1.k8s.com   1/1     Running   1          66m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   kube-controller-manager-master2.k8s.com   1/1     Running   0          46m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   kube-controller-manager-master3.k8s.com   1/1     Running   0          61m   172.20.22.12   master3.k8s.com   <none>           <none>
kube-system   kube-flannel-ds-mnc6w                     1/1     Running   0          61m   172.20.22.12   master3.k8s.com   <none>           <none>
kube-system   kube-flannel-ds-pl48t                     1/1     Running   0          64m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   kube-flannel-ds-zz7vn                     1/1     Running   0          47m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   kube-proxy-2f4l2                          1/1     Running   0          47m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   kube-proxy-bjvrx                          1/1     Running   0          61m   172.20.22.12   master3.k8s.com   <none>           <none>
kube-system   kube-proxy-pvzqq                          1/1     Running   0          66m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   kube-scheduler-master1.k8s.com            1/1     Running   1          66m   172.20.22.10   master1.k8s.com   <none>           <none>
kube-system   kube-scheduler-master2.k8s.com            1/1     Running   0          46m   172.20.22.11   master2.k8s.com   <none>           <none>
kube-system   kube-scheduler-master3.k8s.com            1/1     Running   1          61m   172.20.22.12   master3.k8s.com   <none>           <none>

If the output looks like the above, the master nodes have been added successfully.

4. Creating a Test Pod

Deploy pods through a Deployment controller.

# vim nginx.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: default
spec:
  selector:
    app: nginx
  type: NodePort
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.18.0
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80


# kubectl apply -f nginx.yaml
# kubectl get pod -o wide
NAME                                READY   STATUS    RESTARTS   AGE   IP           NODE              NOMINATED NODE   READINESS GATES
nginx-deployment-7486689d59-mhrj6   1/1     Running   0          10m   10.244.0.5   master1.k8s.com   <none>           <none>
nginx-deployment-7486689d59-tfjdt   1/1     Running   0          10m   10.244.0.6   master1.k8s.com   <none>           <none>
nginx-deployment-7486689d59-tmjdl   1/1     Running   0          10m   10.244.0.4   master1.k8s.com   <none>           <none>

# kubectl  get svc
NAME         TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1     <none>        443/TCP        162m
nginx        NodePort    10.97.2.126   <none>        80:30080/TCP   25m

##Modify each pod's index.html for the access test
# kubectl exec -it nginx-deployment-7486689d59-mhrj6 -- bash
/# echo 10.244.0.5 > /usr/share/nginx/html/index.html  ##set the other two pods to their own IPs as well


###Access test
# while true; do curl http://172.20.22.10:30080/;sleep 1;done
10.244.0.5
10.244.0.6
10.244.0.6
10.244.0.6
10.244.0.5
10.244.0.6
10.244.0.5
10.244.0.4
10.244.0.4
10.244.0.5




5. Cluster Upgrade

To upgrade a k8s cluster, kubeadm itself must first be upgraded to the target k8s version. The managed components are then upgraded on all master nodes: kube-controller-manager, kube-apiserver, kube-scheduler and kube-proxy.

####Check the current cluster version on the master node
root@master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"20", GitVersion:"v1.20.5", GitCommit:"6b1d87acf3c8253c123756b9e61dac642678305f", GitTreeState:"clean", BuildDate:"2021-03-18T01:08:27Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}


####Check the node versions
root@master1:~# kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
master1.k8s.com   Ready    control-plane,master   18h   v1.20.5
master2.k8s.com   Ready    control-plane,master   18h   v1.20.5
master3.k8s.com   Ready    control-plane,master   18h   v1.20.5
node01            Ready    <none>                 15h   v1.20.5



###Install the specified new kubeadm version on all master nodes
root@master1:~# apt-cache madison kubeadm
root@master1:~# apt install kubeadm=1.21.10-00
root@master2:~# apt install kubeadm=1.21.10-00
root@master3:~# apt install kubeadm=1.21.10-00

root@master1:~# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.10", GitCommit:"a7a32748b5c60445c4c7ee904caf01b91f2dbb71", GitTreeState:"clean", BuildDate:"2022-02-16T11:22:49Z", GoVersion:"go1.16.14", Compiler:"gc", Platform:"linux/amd64"}


Master version upgrade

It is best to pull the required images before upgrading, to save the time that would otherwise be spent downloading them during the upgrade.

###Check the upgrade plan
root@master1:~# kubeadm upgrade plan

Master version upgrade; run the following on every master node.

###Version upgrade; normally the node being upgraded should first be removed from the front-end load balancer (see the drain/uncordon sketch below), then upgraded
root@master1:~# kubeadm upgrade apply v1.21.10
root@master2:~# kubeadm upgrade apply v1.21.10
root@master3:~# kubeadm upgrade apply v1.21.10
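Besides removing the node from the haproxy backend, it is safer to drain it before running kubeadm upgrade apply and to uncordon it afterwards (a sketch; master2 is just an example):

# kubectl drain master2.k8s.com --ignore-daemonsets
# kubeadm upgrade apply v1.21.10
# kubectl uncordon master2.k8s.com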

Confirm the upgrade.

Upgrade complete.

Node upgrade

Upgrade the kubelet and kubectl packages on all nodes.

##Upgrade the packages
root@master1:~# apt install  kubelet=1.21.10-00 kubeadm=1.21.10-00 kubectl=1.21.10-00 -y
root@master2:~# apt install  kubelet=1.21.10-00 kubeadm=1.21.10-00 kubectl=1.21.10-00 -y
root@master3:~# apt install  kubelet=1.21.10-00 kubeadm=1.21.10-00 kubectl=1.21.10-00 -y
root@node01:~# apt install  kubelet=1.21.10-00 kubeadm=1.21.10-00 kubectl=1.21.10-00 -y
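After the packages are upgraded, restart kubelet on each node so the new version takes effect; on worker nodes, kubeadm upgrade node can be run first to refresh the local kubelet configuration (a sketch):

root@node01:~# kubeadm upgrade node
root@node01:~# systemctl daemon-reload && systemctl restart kubelet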

Verify the upgraded versions

root@master1:~# kubectl get node
NAME              STATUS   ROLES                  AGE   VERSION
master1.k8s.com   Ready    control-plane,master   19h   v1.21.10
master2.k8s.com   Ready    control-plane,master   41m   v1.21.10
master3.k8s.com   Ready    control-plane,master   19h   v1.21.10
node01            Ready    <none>                 16h   v1.21.10
