1. Version Upgrade and Rollback

In a given Deployment, kubectl set image points a container at a new image tag, which rolls out the updated code.
For example:
Update the two containers in the nginx Deployment: the busybox container to v2.1 and the nginx container to 1.9.1

kubectl set image deployment/nginx busybox=busybox:v2.1 nginx=nginx:1.9.1

There are two update strategies:

  • rolling update
    Create a batch of new Pods first, then delete a batch of old Pods; once a batch has been upgraded, move on to the next (a batch defaults to 25% of the Pods). See the sketch after this list for setting these parameters explicitly.
    Pros: the service is never interrupted.
    Cons: two different versions coexist for a period of time.
  • recreate
    Delete all old Pods first, then create the new Pods.
    Pros: multiple versions never coexist.
    Cons: the service is unreachable from the moment the old version is deleted until the new version is up.
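
Both strategies map to spec.strategy in the Deployment spec: the type field plus, for RollingUpdate, the rollingUpdate.maxSurge and rollingUpdate.maxUnavailable fields. A minimal sketch of setting them imperatively, using the nginx Deployment from the example above:

# Pin the RollingUpdate batch sizes explicitly (these values are also the defaults):
kubectl patch deployment nginx \
  -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxSurge":"25%","maxUnavailable":"25%"}}}}'

# Switch to Recreate; rollingUpdate must be cleared when changing the type:
kubectl patch deployment nginx \
  -p '{"spec":{"strategy":{"type":"Recreate","rollingUpdate":null}}}'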

1.1 Environment Preparation

Create a Deployment with 10 Pods in the wework namespace:

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    app: wework-tomcat-app1-deployment-label
  name: wework-tomcat-app1-deployment
  namespace: wework
spec:
  replicas: 10
  selector:
    matchLabels:
      app: wework-tomcat-app1-selector
  template:
    metadata:
      labels:
        app: wework-tomcat-app1-selector
    spec:
      containers:
      - name: wework-tomcat-app1-container
        image: harbor.intra.com/wework/tomcat-app1:v3
        ports:
        - containerPort: 8080
          protocol: TCP
          name: http
        env:
        - name: "password"
          value: "123456"
        volumeMounts:
        - name: wework-images
          mountPath: /usr/local/nginx/html/webapp/images
          readOnly: false
        - name: wework-static
          mountPath: /usr/local/nginx/html/webapp/static
          readOnly: false
      volumes:
      - name: wework-images
        nfs:
          server: 192.168.31.109
          path: /data/k8s/wework/images
      - name: wework-static
        nfs:
          server: 192.168.31.104
          path: /data/k8s/wework/static
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: wework-tomcat-app1-service-label
  name: wework-tomcat-app1-service
  namespace: wework
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 8080
    nodePort: 30092
  selector:
    app: wework-tomcat-app1-selector

Apply the manifest; wework-tomcat-app1-deployment now reports 10/10 replicas:

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl apply -f tomcat-app1.yaml 
deployment.apps/wework-tomcat-app1-deployment configured
service/wework-tomcat-app1-service unchanged
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl get deploy -n wework
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
wework-consumer-deployment      1/1     1            1           12d
wework-dubboadmin-deployment    1/1     1            1           11d
wework-jenkins-deployment       1/1     1            1           12d
wework-provider-deployment      1/1     1            1           12d
wework-tomcat-app1-deployment   10/10   10           10          6d2h

1.2 Updating the Version

The first batch (where the mechanics are most visible):
The total Pod count is Running (8) + ContainerCreating (3, allowed by maxSurge) + ContainerCreating (2, allowed by maxUnavailable).
**maxSurge:** the total number of Pods in the Deployment may exceed the desired replica count by maxSurge (default 25%), rounded up: ceil(10 × 25%) = 3, i.e. the 3 ContainerCreating Pods.
**maxUnavailable:** the number of unavailable Pods is capped at maxUnavailable (default 25%), rounded down: floor(10 × 25%) = 2, i.e. 2 more ContainerCreating Pods. The desired count of 10 is thus ContainerCreating (2) + Running (8), which guarantees that 8 Pods in the Deployment remain available.
**Note!** The maxUnavailable value is not the number of Terminating Pods; it corresponds to those 2 ContainerCreating Pods.

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl set image deployment/wework-tomcat-app1-deployment  wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v1 -n wework --record=true
Flag --record has been deprecated, --record will be removed in the future
deployment.apps/wework-tomcat-app1-deployment image updated
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl get pods -n wework|grep wework-tomcat-app1-deployment
wework-tomcat-app1-deployment-69d4f7f54b-2njx5   1/1     Running             0               46s
wework-tomcat-app1-deployment-69d4f7f54b-2v2kf   1/1     Running             0               43s
wework-tomcat-app1-deployment-69d4f7f54b-2w4zw   1/1     Running             0               46s
wework-tomcat-app1-deployment-69d4f7f54b-5l7pd   1/1     Running             0               46s
wework-tomcat-app1-deployment-69d4f7f54b-7zlcc   1/1     Running             0               43s
wework-tomcat-app1-deployment-69d4f7f54b-8blxz   1/1     Running             0               43s
wework-tomcat-app1-deployment-69d4f7f54b-kcpzj   1/1     Running             0               43s
wework-tomcat-app1-deployment-69d4f7f54b-p76k5   1/1     Running             0               43s
wework-tomcat-app1-deployment-69d4f7f54b-twq7h   1/1     Terminating         0               46s
wework-tomcat-app1-deployment-69d4f7f54b-w2dpv   1/1     Terminating         0               46s
wework-tomcat-app1-deployment-879b5f5d5-2l5wm    0/1     ContainerCreating   0               1s
wework-tomcat-app1-deployment-879b5f5d5-62qzj    0/1     ContainerCreating   0               2s
wework-tomcat-app1-deployment-879b5f5d5-7299v    0/1     ContainerCreating   0               2s
wework-tomcat-app1-deployment-879b5f5d5-t8vvr    0/1     ContainerCreating   0               2s
wework-tomcat-app1-deployment-879b5f5d5-wkvc9    0/1     ContainerCreating   0               1s
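
The strategy values that drive this batching can be confirmed on the Deployment itself; a quick sketch:

kubectl describe deployment wework-tomcat-app1-deployment -n wework | grep RollingUpdateStrategy
# typically prints: RollingUpdateStrategy:  25% max unavailable, 25% max surge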

I was too slow with the terminal to capture the second and third batches, so they are less visible here. In principle each batch proceeds just like the first, batch by batch, until every Pod runs the new version and the old Pods are deleted.

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl get pods -n wework|grep wework-tomcat-app1-deployment
wework-tomcat-app1-deployment-69d4f7f54b-2njx5   1/1     Terminating   0               62s
wework-tomcat-app1-deployment-69d4f7f54b-2v2kf   1/1     Terminating   0               59s
wework-tomcat-app1-deployment-69d4f7f54b-2w4zw   1/1     Terminating   0               62s
wework-tomcat-app1-deployment-69d4f7f54b-5l7pd   1/1     Terminating   0               62s
wework-tomcat-app1-deployment-69d4f7f54b-7zlcc   1/1     Terminating   0               59s
wework-tomcat-app1-deployment-69d4f7f54b-8blxz   1/1     Terminating   0               59s
wework-tomcat-app1-deployment-69d4f7f54b-kcpzj   1/1     Terminating   0               59s
wework-tomcat-app1-deployment-69d4f7f54b-p76k5   1/1     Terminating   0               59s
wework-tomcat-app1-deployment-69d4f7f54b-twq7h   1/1     Terminating   0               62s
wework-tomcat-app1-deployment-69d4f7f54b-w2dpv   1/1     Terminating   0               62s
wework-tomcat-app1-deployment-879b5f5d5-2l5wm    1/1     Running       0               17s
wework-tomcat-app1-deployment-879b5f5d5-62qzj    1/1     Running       0               18s
wework-tomcat-app1-deployment-879b5f5d5-7299v    1/1     Running       0               18s
wework-tomcat-app1-deployment-879b5f5d5-bfwj6    1/1     Running       0               15s
wework-tomcat-app1-deployment-879b5f5d5-f6nlb    1/1     Running       0               15s
wework-tomcat-app1-deployment-879b5f5d5-gzh6g    1/1     Running       0               15s
wework-tomcat-app1-deployment-879b5f5d5-mll9g    1/1     Running       0               15s
wework-tomcat-app1-deployment-879b5f5d5-rq2wk    1/1     Running       0               15s
wework-tomcat-app1-deployment-879b5f5d5-t8vvr    1/1     Running       0               18s
wework-tomcat-app1-deployment-879b5f5d5-wkvc9    1/1     Running       0               17s
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl get pods -n wework|grep wework-tomcat-app1-deployment
wework-tomcat-app1-deployment-879b5f5d5-2l5wm   1/1     Running   0               80s
wework-tomcat-app1-deployment-879b5f5d5-62qzj   1/1     Running   0               81s
wework-tomcat-app1-deployment-879b5f5d5-7299v   1/1     Running   0               81s
wework-tomcat-app1-deployment-879b5f5d5-bfwj6   1/1     Running   0               78s
wework-tomcat-app1-deployment-879b5f5d5-f6nlb   1/1     Running   0               78s
wework-tomcat-app1-deployment-879b5f5d5-gzh6g   1/1     Running   0               78s
wework-tomcat-app1-deployment-879b5f5d5-mll9g   1/1     Running   0               78s
wework-tomcat-app1-deployment-879b5f5d5-rq2wk   1/1     Running   0               78s
wework-tomcat-app1-deployment-879b5f5d5-t8vvr   1/1     Running   0               81s
wework-tomcat-app1-deployment-879b5f5d5-wkvc9   1/1     Running   0               80s
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl get deployments.apps -n wework|grep tomcat
wework-tomcat-app1-deployment   10/10   10           10          6d3h
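
Rather than racing kubectl get pods by hand, the rollout can be watched directly; a minimal sketch:

# Block until the rollout finishes, printing each batch's progress:
kubectl rollout status deployment/wework-tomcat-app1-deployment -n wework
# Or stream Pod state transitions as they happen:
kubectl get pods -n wework -w | grep wework-tomcat-app1-deployment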

1.3 Rolling Back

Use kubectl rollout history to view the revision history. Besides the initial deployment, we have performed three upgrades; the current version is v3.

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout history deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment 
REVISION  CHANGE-CAUSE
2         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v1 --namespace=wework --record=true
3         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v2 --record=true --namespace=wework
4         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v3 --record=true --namespace=wework
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl describe pod wework-tomcat-app1-deployment-58c7bdcc47-66c2m -n wework|grep "Image:"
    Image:          harbor.intra.com/wework/tomcat-app1:v3
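
As the kubectl warning in section 1.2 noted, --record is deprecated. The CHANGE-CAUSE column is read from the kubernetes.io/change-cause annotation, so the same effect can be had by annotating the Deployment after each change; a sketch:

kubectl annotate deployment/wework-tomcat-app1-deployment -n wework \
  kubernetes.io/change-cause="upgrade tomcat-app1 to v3" --overwrite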

1.3.1 Roll back to the previous version

Suppose the current v3 release is broken and we need to roll back to v2:

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout undo deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment rolled back
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl describe pod wework-tomcat-app1-deployment-69d4f7f54b-4bgbs -n wework|grep "Image:"
    Image:          harbor.intra.com/wework/tomcat-app1:v2
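
Describing one Pod only samples a single replica; to check the image of every replica at once, a jsonpath query works as a sketch:

kubectl get pods -n wework -l app=wework-tomcat-app1-selector \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'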

1.3.2 Which version does a second undo restore?

The Deployment is now back in the v2 state.
So here is the question: if we undo once more, do we get v1? v3? An error, or still v2?
As the history below shows, undo rolls back to the revision immediately preceding the current one: the first undo made the v2 template the newest revision (5), whose predecessor is the v3 template, so a second undo lands on v3 (recorded as revision 6).

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout undo deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment rolled back
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl describe pod wework-tomcat-app1-deployment-58c7bdcc47-fxrzj -n wework|grep "Image:"
    Image:          harbor.intra.com/wework/tomcat-app1:v3
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout history deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment 
REVISION  CHANGE-CAUSE
2         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v1 --namespace=wework --record=true
5         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v2 --record=true --namespace=wework
6         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v3 --record=true --namespace=wework

1.3.3 Roll back to a specific revision

Is there a way to roll straight back to the v1 version?
Besides pinning the desired tag with set image, can rollout do the job?

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout history deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment 
REVISION  CHANGE-CAUSE
2         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v1 --namespace=wework --record=true
5         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v2 --record=true --namespace=wework
6         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v3 --record=true --namespace=wework
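
To confirm which revision actually holds v1 before rolling back, a single revision's Pod template can be inspected; a sketch (revision 2, per the history above):

kubectl rollout history deploy wework-tomcat-app1-deployment -n wework --revision=2
# the output includes the template's Image: harbor.intra.com/wework/tomcat-app1:v1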

The answer is yes: the --to-revision flag rolls the Deployment back to a specific revision.

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout undo --to-revision=2  deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment rolled back
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl rollout history deploy wework-tomcat-app1-deployment -n wework
deployment.apps/wework-tomcat-app1-deployment 
REVISION  CHANGE-CAUSE
5         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v2 --record=true --namespace=wework
6         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v3 --record=true --namespace=wework
7         kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:v1 --namespace=wework --record=true
root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# kubectl describe pod wework-tomcat-app1-deployment-879b5f5d5-bb7r9 -n wework|grep "Image:"
    Image:          harbor.intra.com/wework/tomcat-app1:v1

2. Jenkins Code Upgrade and Rollback

2.1 Preparation

Set up passwordless SSH between the servers.

2.1.1 Jenkins to GitLab

The goal is to pull code without a password.
Export the Jenkins server's public key and add it to the GitLab server:

root@jenkins:/data/scripts/app1# cat /root/.ssh/id_rsa.pub 
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC93i4GhpmetkSrRjJ5PJloJ12V4blTz6n0DNtgOcQm07cOjiM6Z8/txMbbismBx400Oz7M0f3+5mnD2J+DrKhprniJPoMweIWk9E0V+75J/ExN9pHoKOe5bd4YeM4RE1rqYw8J2k3PP7nystU8rn1bSK4cluNzCssRHtVEtRxqqkfu80L40fk82nb2E7SeTZoj9PVwWLB77L79FaceVj8vukd2RTguZioDbdfNjynmbNPkXaLlTX4CJ+KKV/1jNUHUJ0nvNCvvx+jHG2xzduOebgOB9071Qs62VRpbP5z10Sh3L5zKaCSC6mWQO+RT3KmIoz6HmcuEJb7ZsLVH06ot root@jenkins
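
If the server does not yet have a key pair to show here, generate one first; a minimal sketch:

# Create an RSA key pair with no passphrase at the default location:
ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa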

(Screenshot: the public key is added in GitLab under the user's SSH Keys settings.)

Jenkins can now clone the repository without a password:

root@jenkins:/data/scripts/app1# git clone git@192.168.31.199:wework/app1.git
Cloning into 'app1'...
remote: 
remote: INFO: Your SSH key is expiring soon. Please generate a new key.
remote: 
remote: Enumerating objects: 9, done.
remote: Counting objects: 100% (9/9), done.
remote: Compressing objects: 100% (6/6), done.
remote: Total 9 (delta 0), reused 0 (delta 0), pack-reused 0
Receiving objects: 100% (9/9), done.

If this fails, the likely causes are:

  • Is GitLab healthy?
  • Is Jenkins healthy?
  • Are the GitLab permissions correct?
  • Is the Jenkins ssh-key correct?

2.1.2 K8s-master to GitLab

The method is the same as in 2.1.1, so it is not repeated here, but the step is still required. Its purpose is to push the YAML files to git.

2.1.3 Jenkins to K8s-master

The goal is to copy the code package to k8s and run remote commands on k8s-master without a password:

ssh-copy-id 192.168.31.101
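
Both the passwordless login and remote kubectl access can be verified in one step; a sketch:

ssh root@192.168.31.101 "kubectl get nodes"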

2.2 The CI/CD Script

2.2.1 What the script does

  1. Clone and package the code
  2. Copy the package to the k8s-master node
  3. Build and push the new image
  4. Update the YAML and push it to git
  5. Update the image in the Deployment

2.2.2 Script variables and parameters

Command format: wework_app1_deploy.sh [update|rollback_last_version] [main|test|develop|uat]

Variable meanings:

$1               update or roll back the image version
$2               git branch to use; this really belongs in a case statement, since each branch would map to its own deployment and namespace values (see the sketch below)
GIT_Server       git server address
Jenkins_Server   Jenkins server address
K8S_CONTROLLER1  K8s-master server address
PATH_Dockerfile  path of the Dockerfile on the k8s-master server
PATH_Yaml        path of the YAML file on the k8s-master server
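
A sketch of the branch case suggested above; the per-branch namespace and deployment names here are hypothetical:

case ${Branch} in
  main)    NameSpace="wework";      Deployment="wework-tomcat-app1-deployment" ;;
  test)    NameSpace="wework-test"; Deployment="wework-tomcat-app1-test" ;;
  develop) NameSpace="wework-dev";  Deployment="wework-tomcat-app1-dev" ;;
  uat)     NameSpace="wework-uat";  Deployment="wework-tomcat-app1-uat" ;;
  *) echo "unknown branch: ${Branch}"; exit 1 ;;
esac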

2.2.3 Script contents

#!/bin/bash
#Author: qiuqin
#Date: 2022-08-12
#Version: v3

## Usage: wework_app1_deploy.sh [update|rollback_last_version] [main|test|develop|uat]
echo "Usage: wework_app1_deploy.sh [update|rollback_last_version] [main|test|develop|uat]"

# Record the script start time
starttime=`date +'%Y-%m-%d %H:%M:%S'`

# Variables
SHELL_DIR="/data/scripts"
SHELL_NAME=$(basename "$0")   # script name without the leading path
GIT_Server="192.168.31.199"
Jenkins_Server="192.168.31.99"
K8S_CONTROLLER1="192.168.31.101"
K8S_CONTROLLER2="192.168.31.102"
PATH_Dockerfile="/opt/k8s-data/dockerfile/web/wework/tomcat-app1"
PATH_Yaml="/opt/k8s-data/yaml/wework/tomcat-app1"
DATE=`date +%Y-%m-%d_%H_%M_%S`
METHOD=$1
Branch=$2

## Default to the main branch
if test -z "$Branch";then
  Branch=main
fi

# Clone the code and package it
function Code_Clone(){
  Git_URL="git@${GIT_Server}:wework/app1.git"
  DIR_NAME=`echo $Git_URL|awk -F "[/|.| ]+" '{print $5}'`
  DATA_DIR="/data/gitdata/wework"
  Git_Dir="${DATA_DIR}/${DIR_NAME}"
  [ -d ${Git_Dir} ] || mkdir -p ${Git_Dir}
  cd ${DATA_DIR} && echo "Removing the previous checkout and fetching the latest code for the branch" && sleep 1 && rm -rf ${DIR_NAME}
  echo "Cloning the code from branch ${Branch}" && sleep 1
  git clone -b ${Branch} ${Git_URL}
  echo "Branch ${Branch} cloned; packaging the code!" && sleep 1
  cd ${Git_Dir}
  tar czf ${DIR_NAME}.tar.gz  ./*
}

# Copy the packaged tarball to the k8s control node
function Copy_File(){
  echo "Package built; copying it to the k8s control node ${K8S_CONTROLLER1}" && sleep 1
  ssh root@${K8S_CONTROLLER1} "mkdir -p ${PATH_Dockerfile}"
  scp ${Git_Dir}/${DIR_NAME}.tar.gz root@${K8S_CONTROLLER1}:${PATH_Dockerfile}
  echo "Package copied; server ${K8S_CONTROLLER1} will now build the Docker image!" && sleep 1
}

# Build the image on the control node and push it to Harbor
function Make_Image(){
  echo "Building the Docker image and pushing it to the Harbor server" && sleep 1
  ssh root@${K8S_CONTROLLER1} "cd ${PATH_Dockerfile} && bash build-command.sh ${DATE}"
  echo "Docker image built and pushed to the Harbor server" && sleep 1
}

# Update the image tag in the k8s YAML on the control node, so the tag in the YAML stays consistent with the one running in k8s
function Update_k8s_yaml(){
  echo "Updating the image tag in the k8s YAML file" && sleep 1
  ssh root@${K8S_CONTROLLER1} "cd ${PATH_Yaml} && sed -Ei 's#(harbor.intra.com\/wework\/tomcat-app1:).*#\1${DATE}#g' tomcat-app1.yaml"
  echo "YAML image tag updated; now updating the image in the running containers" && sleep 1
  ssh root@${K8S_CONTROLLER1} "cd ${PATH_Yaml}&&\cp tomcat-app1.yaml wework-yaml/ && cd wework-yaml && git add .&& git commit -m \"${DATE}\"&& git push"
}

# Update the container image in k8s. Two options: set the image version directly, or apply the modified YAML
function Update_k8s_container(){
  # Option 1 (recommended): set the image directly
  ssh root@${K8S_CONTROLLER1} "kubectl set image deployment/wework-tomcat-app1-deployment wework-tomcat-app1-container=harbor.intra.com/wework/tomcat-app1:${DATE} -n wework" 
  # Option 2: apply the modified YAML
  #ssh root@${K8S_CONTROLLER1} "cd  /opt/k8s-data/yaml/wework/tomcat-app1  && kubectl  apply -f tomcat-app1.yaml --record" 
  echo "k8s image update complete" && sleep 1
  echo "Current image version: harbor.intra.com/wework/tomcat-app1:${DATE}"
  # Report the total run time; remove the next four lines if they are not needed
  endtime=`date +'%Y-%m-%d %H:%M:%S'`
  start_seconds=$(date --date="$starttime" +%s);
  end_seconds=$(date --date="$endtime" +%s);
  echo "Total time for this image update: "$((end_seconds-start_seconds))"s"
}

# Roll back to the previous version using k8s built-in revision management
function rollback_last_version(){
  echo "Rolling back to the previous version"
  ssh root@${K8S_CONTROLLER1}  "kubectl rollout undo deployment/wework-tomcat-app1-deployment -n wework"
  sleep 1
  echo "Rollback to the previous version has been issued"
}

# Usage help
usage(){
  echo "To deploy: ${SHELL_DIR}/${SHELL_NAME} update"
  echo "To roll back to the previous version: ${SHELL_DIR}/${SHELL_NAME} rollback_last_version"
}

# Main function
main(){
  case ${METHOD}  in
  update)
    Code_Clone;
    Copy_File;
    Make_Image; 
    Update_k8s_yaml;
    Update_k8s_container;
  ;;
  rollback_last_version)
    rollback_last_version;
  ;;
  *)
    usage;
  esac;
}

main $1 $2

2.3 Jenkins Invokes the Script with Parameters

2.3.1 Update the version

First test access to the Deployment's Service:

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# curl 192.168.31.113:30092/myapp/
<h1>v1.3</h1>

The current version is v1.3.
Bump the version to v1.4 via git:

root@jenkins:/git/app1# cat index.html
<h1>v1.4</h1>
root@jenkins:/git/app1# git add .
root@jenkins:/git/app1# git commit -m "1.4"
[main 190f1cb] 1.4
 1 file changed, 1 insertion(+), 1 deletion(-)
root@jenkins:/git/app1# git push
Counting objects: 3, done.
Delta compression using up to 2 threads.
Compressing objects: 100% (2/2), done.
Writing objects: 100% (3/3), 269 bytes | 269.00 KiB/s, done.
Total 3 (delta 0), reused 0 (delta 0)
To 192.168.31.199:wework/app1.git
   662796b..190f1cb  main -> main

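The Jenkins job is then triggered as a parameterized build. Its shell step presumably boils down to invoking the deploy script with the two chosen parameters; a sketch, with the path taken from the Jenkins prompt earlier and the parameter names hypothetical:

# Jenkins "Execute shell" build step (a sketch):
bash /data/scripts/app1/wework_app1_deploy.sh ${METHOD} ${Branch}   # here: update main
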
After the build completes, access the URL again:

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# curl 192.168.31.113:30092/myapp/
<h1>v1.4</h1>

2.3.2 Roll back the version

Now suppose we find a problem with v1.4 and need to roll back to v1.3.
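The rollback goes through the same Jenkins job, this time with the rollback action; a sketch under the same assumptions as above:

bash /data/scripts/app1/wework_app1_deploy.sh rollback_last_version
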
Access the URL once more:

root@k8s-master-01:/opt/k8s-data/yaml/wework/tomcat-app1# curl 192.168.31.113:30092/myapp/
<h1>v1.3</h1>

The rollback succeeded.

更多推荐