Introduction

So far we have been creating Pods directly, but in production this is rarely done: a Pod created this way is simply gone once it is deleted and will never be recreated. What we want is for a failed Pod to be restarted or replaced automatically, for scaling up and down to be convenient, and for updates and upgrades to be handled in a controlled way. This is where Pod controllers come in.

Kubernetes provides many kinds of Pod controllers, each suited to different scenarios. The scenario described above is exactly what this article's controller, the ReplicaSet (RS for short), addresses; the other controllers will be covered one by one in later articles.

ReplicationController

Before using ReplicaSet, let's first look at the ReplicationController (RC for short). It is the older, more primitive controller and has been superseded by RS in newer versions of Kubernetes.

An RC continuously watches the list of running Pods and makes sure the number of Pods matching its label selector equals the desired count. If a Pod fails, it is restarted or recreated; if there are too few Pods, new replicas are created from the template; if there are too many, the surplus Pods are deleted.

An RC consists of three main parts (a minimal manifest sketch follows the list):

Label Selector: determines which Pods the RC manages

Replica count: the number of Pods that should be running

Pod template: the template used to create new Pods
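
To make these three parts concrete, here is a minimal RC manifest sketch; the name and labels are invented for illustration:

apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc            # illustrative name
spec:
  replicas: 3               # replica count: desired number of Pods
  selector:                 # label selector: equality-based only for RC
    app: nginx
  template:                 # Pod template used to create new replicas
    metadata:
      labels:
        app: nginx          # must match the selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1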

ReplicaSet

Everything just described for RC is also what RS does. The practical difference today lies in the label selector: RC supports only equality-based selection, while RS additionally supports set-based selection, using operators to match against sets of label values. Even so, the official recommendation is not to use RS directly either; a later article will cover the higher-level Deployment controller.

Manifest file
apiVersion: apps/v1  # API version
kind: ReplicaSet  # resource kind: RS
metadata:    # metadata
  name:    # name of the RS
  namespace:  # like a Pod, an RS belongs to a namespace
  labels:   # labels
spec:
  replicas:  # desired number of replicas
  selector:  # label selector, defines which Pods the RS manages
    matchLabels:
    matchExpressions:
    - key:
      operator: # operator: In, NotIn, Exists, DoesNotExist
      values:
  template:  # Pod template, defined the same way as a standalone Pod
    metadata:
    spec:

As with RC, the most important fields are replicas, selector, and template.
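
As a concrete illustration of set-based selection (the env label here is invented for this example), the selector below keeps Pods labeled app=nginx while excluding any labeled env=dev; requirements in matchLabels and matchExpressions are ANDed together, which a single-map RC selector cannot express:

selector:
  matchLabels:
    app: nginx          # equivalent to: key app, operator In, values [nginx]
  matchExpressions:
  - key: env
    operator: NotIn     # exclude Pods whose env label is dev
    values: ["dev"]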

Creating a ReplicaSet

Let's create an RS that manages three Pods.

Write replicaset.yaml:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: replicaset
spec:
  replicas: 3
  selector:
    matchExpressions:
    - {key: app, operator: In, values: [nginx]}
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19.1

Create the controller, then check the controller details and the Pod details:

# Create the RS controller
[root@master rs]# kubectl create -f replicaset.yaml
replicaset.apps/replicaset created

# Check the controller: three replicas are running
[root@master rs]# kubectl get rs -o wide
NAME         DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
replicaset   3         3         3       8s    nginx        nginx:1.19.1   app in (nginx)

# Check the Pods: there are indeed three healthy Pods
[root@master rs]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
replicaset-6622b   1/1     Running   0          18s
replicaset-pn66p   1/1     Running   0          18s
replicaset-wf6pz   1/1     Running   0          18s

# Check the RS events: three Pods were created
[root@master rs]# kubectl describe rs replicaset | grep -A 100 Events
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  29s   replicaset-controller  Created pod: replicaset-6622b
  Normal  SuccessfulCreate  29s   replicaset-controller  Created pod: replicaset-pn66p
  Normal  SuccessfulCreate  29s   replicaset-controller  Created pod: replicaset-wf6pz

Note the NAME column above: the RS is named replicaset, and each Pod's name is the name of its managing RS plus a random suffix, so the name alone tells you which RS a Pod belongs to.
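
The name prefix is only a convention; the authoritative link is recorded in the Pod's metadata.ownerReferences field. A quick way to confirm it, using one of the Pod names from the output above (it should print something like ReplicaSet/replicaset):

kubectl get pod replicaset-6622b -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'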

Since the RS detects Pod failures and restarts or recreates Pods accordingly, let's delete a Pod by hand and see whether the replica count stays at three.

# Delete one Pod
[root@master rs]# kubectl delete pod replicaset-wf6pz
pod "replicaset-wf6pz" deleted

# There are still three Pods, and the AGE shows one was just started; compared with the earlier output, wf6pz was deleted and jqmf5 took its place
[root@master rs]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
replicaset-6622b   1/1     Running   0          61s
replicaset-jqmf5   1/1     Running   0          13s
replicaset-pn66p   1/1     Running   0          61s

# Check the events: one more creation, replicaset-jqmf5
[root@master rs]# kubectl describe rs replicaset | grep -A 100 Events
Events:
  Type    Reason            Age   From                   Message
  ----    ------            ----  ----                   -------
  Normal  SuccessfulCreate  113s  replicaset-controller  Created pod: replicaset-6622b
  Normal  SuccessfulCreate  113s  replicaset-controller  Created pod: replicaset-pn66p
  Normal  SuccessfulCreate  113s  replicaset-controller  Created pod: replicaset-wf6pz
  Normal  SuccessfulCreate  65s   replicaset-controller  Created pod: replicaset-jqmf5

Scaling

An RS makes scaling up and down easy. There are two ways to do it: edit the resource manifest with kubectl edit, or use a command directly.

We will scale up by editing the manifest and scale down with a command.

Scaling up

# Edit the yaml and change the following part
# spec:
#   replicas: 5
[root@master rs]# kubectl edit rs replicaset
replicaset.apps/replicaset edited

# Two new Pods appeared: dtmkl and rvw6s
[root@master rs]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
replicaset-6622b   1/1     Running   0          3m33s
replicaset-dtmkl   1/1     Running   0          10s
replicaset-jqmf5   1/1     Running   0          2m45s
replicaset-pn66p   1/1     Running   0          3m33s
replicaset-rvw6s   1/1     Running   0          10s

# Events: two more creations
[root@master rs]# kubectl describe rs replicaset | grep -A 100 Events
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  3m50s  replicaset-controller  Created pod: replicaset-6622b
  Normal  SuccessfulCreate  3m50s  replicaset-controller  Created pod: replicaset-pn66p
  Normal  SuccessfulCreate  3m50s  replicaset-controller  Created pod: replicaset-wf6pz
  Normal  SuccessfulCreate  3m2s   replicaset-controller  Created pod: replicaset-jqmf5
  Normal  SuccessfulCreate  27s    replicaset-controller  Created pod: replicaset-rvw6s
  Normal  SuccessfulCreate  27s    replicaset-controller  Created pod: replicaset-dtmkl

Scaling down

kubectl scale rs <name> --replicas=<count>

Reduce the replica count to 2:

# Scale down
[root@master rs]# kubectl scale rs replicaset --replicas=2
replicaset.apps/replicaset scaled

# The number of Pods is now 2
[root@master rs]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
replicaset-6622b   1/1     Running   0          5m10s
replicaset-pn66p   1/1     Running   0          5m10s

# Check the events: three more delete events
[root@master rs]# kubectl describe rs replicaset | grep -A 100 Events
Events:
  Type    Reason            Age    From                   Message
  ----    ------            ----   ----                   -------
  Normal  SuccessfulCreate  5m24s  replicaset-controller  Created pod: replicaset-6622b
  Normal  SuccessfulCreate  5m24s  replicaset-controller  Created pod: replicaset-pn66p
  Normal  SuccessfulCreate  5m24s  replicaset-controller  Created pod: replicaset-wf6pz
  Normal  SuccessfulCreate  4m36s  replicaset-controller  Created pod: replicaset-jqmf5
  Normal  SuccessfulCreate  2m1s   replicaset-controller  Created pod: replicaset-rvw6s
  Normal  SuccessfulCreate  2m1s   replicaset-controller  Created pod: replicaset-dtmkl
  Normal  SuccessfulDelete  21s    replicaset-controller  Deleted pod: replicaset-jqmf5
  Normal  SuccessfulDelete  21s    replicaset-controller  Deleted pod: replicaset-rvw6s
  Normal  SuccessfulDelete  21s    replicaset-controller  Deleted pod: replicaset-dtmkl

Image upgrade

Check which nginx version the RS currently uses:

# The RS currently uses nginx:1.19.1
[root@master rs]# kubectl get rs -o wide
NAME         DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES         SELECTOR
replicaset   2         2         2       7m7s   nginx        nginx:1.19.1   app in (nginx)

There are also two ways to upgrade the image: edit the manifest, or use a command.

Change the version to 1.17.1 by editing the manifest:

# Edit the file and change the content as follows
#    spec:
#      containers:
#      - image: nginx:1.17.1
[root@master rs]# kubectl edit rs replicaset
replicaset.apps/replicaset edited

# The image in the RS template is now 1.17.1
[root@master rs]# kubectl get rs -o wide
NAME         DESIRED   CURRENT   READY   AGE     CONTAINERS   IMAGES         SELECTOR
replicaset   2         2         2       8m17s   nginx        nginx:1.17.1   app in (nginx)

The output above shows the version has changed to 1.17.1, but has the image inside the Pods really changed? Let's verify whether the image was actually replaced.

# kubectl describe rs replicaset shows no new events on the RS

# The Pod's startup events show it was not replaced with 1.17.1; it is still running the old 1.19.1
[root@master rs]# kubectl describe pod replicaset-6622b | grep -A 100 Events             
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/replicaset-6622b to node01
  Normal  Pulled     11m   kubelet            Container image "nginx:1.19.1" already present on machine
  Normal  Created    11m   kubelet            Created container nginx
  Normal  Started    11m   kubelet            Started container nginx
  
# Exec into the Pod and check the version: it has not changed
[root@master rs]# kubectl exec replicaset-6622b -- nginx -V
nginx version: nginx/1.19.1

The checks above show that changing the image on an RS does not trigger an automatic update; running Pods are unaffected. So let's delete a Pod and see whether the automatically recreated Pod pulls the 1.17.1 image.

# Delete one Pod
[root@master rs]# kubectl delete pod replicaset-6622b
pod "replicaset-6622b" deleted

# Check the running Pods: replicaset-xcm6l was newly created
[root@master rs]# kubectl get pods
NAME               READY   STATUS    RESTARTS   AGE
replicaset-pn66p   1/1     Running   0          20m
replicaset-xcm6l   1/1     Running   0          2s

# Check the versions: the two Pods run different versions, and they coexist
[root@master rs]# kubectl exec replicaset-xcm6l -- nginx -V
nginx version: nginx/1.17.1
[root@master rs]# kubectl exec replicaset-pn66p -- nginx -V
nginx version: nginx/1.19.1

Command syntax:

kubectl set image rs <name> <container>=<image>:<tag>

Use the command to change nginx back to 1.19.1:

# Change the image version
[root@master rs]# kubectl set image rs replicaset nginx=nginx:1.19.1
replicaset.apps/replicaset image updated

# Check the RS: the image is now 1.19.1
[root@master rs]# kubectl get rs -o wide
NAME         DESIRED   CURRENT   READY   AGE   CONTAINERS   IMAGES         SELECTOR
replicaset   2         2         2       23m   nginx        nginx:1.19.1   app in (nginx)

The command approach behaves the same as editing the manifest: there is no automatic rolling update, running Pods are unaffected, and only newly created Pods use the updated image.

Given this behavior, consider: if, after swapping the image, Pods were deleted automatically in batches, wouldn't that amount to a rolling update? And if we deleted some Pods, paused to observe for a while, and then deleted the rest, wouldn't that amount to a canary release? The upcoming article on the Deployment controller covers both of these release strategies in detail.
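
As a rough sketch of that rolling-update idea (not how Deployment actually implements it), assuming the Pods carry the app=nginx label from our manifest, one could delete the old Pods one at a time and wait for each replacement to become Ready:

# Minimal sketch: manual "rolling" replacement after changing the RS image
for pod in $(kubectl get pods -l app=nginx -o name); do
  kubectl delete "$pod"
  sleep 5   # give the controller a moment to create the replacement
  kubectl wait --for=condition=Ready pod -l app=nginx --timeout=120s
done

Deployment automates exactly this kind of batched replacement and adds the ability to pause and resume a rollout, which is what makes canary-style releases practical.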

That wraps up the RS controller; the Deployment controller will be covered next.

