I. Introduction

This post wires GitLab and Jenkins together so that, from the Jenkins site, you can pick any version of the test scripts, choose a set of test parameters, have those parameters injected into the test image, and then have the test-script image deployed to the k8s cluster automatically. For test results, the Allure plugin can be added on top (done in the previous post, not here).
The test CI/CD continuous-delivery framework is now complete. It mainly uses:
Development languages: Python, shell
Development environment: PyCharm
Code management: GitLab
Image building: Dockerfile
Image registry: Harbor
Build tool: Jenkins
Test framework: Selenium / Selenium Grid
Test case management: pytest
Parallel test execution: pytest-xdist
Cluster management: k8s
Test-project deployment: Docker containers run as k8s Jobs
Test-project configuration: ConfigMap
Jenkins slave nodes: Docker containers deployed on k8s

II. Problem scenarios

So, which problem scenarios does this post address?
Scenario 1: Selenium Grid executes test cases in a distributed fashion, which speeds up automated test runs.
Scenario 2: compatibility testing. You can dynamically pick different browsers (images for specific browser versions can be added later) and different versions of the test code.
Scenario 3: a job matrix. Once development deploys the application, this job can run as a post-build step, so test results come back quickly and automatically.
Further benefits:
1. Jenkins visualizes the whole test flow.
2. Jenkins injects key parameters (e.g. target URL, browser type) into the test scripts. The current parameters are far from complete and should evolve with your actual project; this is only a demo meant to spark ideas.
3. Large-scale testing inevitably needs many machines in a cluster. Deploying the test scripts across that cluster, building images dynamically, rolling script images back, switching browsers, and merging the Allure reports from many pods are all intermediate steps that get automated away. Test engineers only need to focus on test code instead of running things by hand, and they don't have to learn k8s or Jenkins.
4. It is simply easier: a test engineer writes or modifies test scripts (or rolls a version back), pushes to GitLab, logs in to the Jenkins site, picks the parameters they need, clicks build, and waits for the results. (Email/WeChat/DingTalk notifications are not implemented yet; a later chapter will cover them.)

PS:
These are only my own rough opinions; if you disagree, go easy on me.

III. Overall test CI/CD flow

1. (Manual) Debug the code in PyCharm; once it works, push it to GitLab with git.
2. (Manual) Log in to the Jenkins site, create a test job for the target site, and select parameters that get injected into the test scripts, as shown below:
(screenshot)
On the Selenium Grid site, check whether the browsers execute concurrently:
(screenshot)
Watch the browsers at work; two browsers were selected:
Browser 1
(screenshot)
Browser 2
(screenshot)

IV. Environment preparation

1. Deploying Selenium Grid on k8s

There are many ways to deploy Selenium Grid. The official docs mostly cover plain Docker, docker-compose, and Docker Swarm. I did not manage to get the Helm deployment working on k8s; it seems to be a network issue that prevents the chart from downloading.
Articles on deploying Selenium Grid on k8s are surprisingly scarce. One of them, "Deploying Selenium Grid on Docker with K8S (latest)", is quite good, but it pins everything to fixed IPs, which I would rather avoid unless forced to: fixed IPs make later maintenance painful. So I took the docker-compose docs and translated them into k8s manifests instead. The rest of this section covers two k8s deployment modes for Selenium Grid.

a. Method 1: hub-node mode

For hub-node mode I followed this reference: deploying Selenium Grid with docker-compose.
Ingress for the hub

root@k8s-master1:~/k8s/ingress# cat ingress-default.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080

  - host: sonarqube.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sonarqube
            port:
              number: 9000
   
  #added this time so external users can reach the grid; map selenium-grid.xusanduo.com to the right IP on each client machine
  - host: selenium-grid.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: selenium-hub
            port:
              number: 4444
root@k8s-master1:~/k8s/ingress# 
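The comment in the ingress above asks each client machine to resolve selenium-grid.xusanduo.com itself. A minimal, idempotent hosts-file sketch, assuming 192.168.100.201 is a node running the ingress controller (adjust to your environment):

```shell
#!/bin/sh
# Append "<ip> <hostname>" to a hosts file only if the hostname is not mapped yet
add_hosts_entry() {
    ip="$1"; host="$2"; file="${3:-/etc/hosts}"
    grep -q "[[:space:]]${host}" "$file" 2>/dev/null || echo "$ip $host" >> "$file"
}

# Demo against a scratch file; point it at /etc/hosts (as root) on a real client
add_hosts_entry 192.168.100.201 selenium-grid.xusanduo.com ./hosts.demo
add_hosts_entry 192.168.100.201 selenium-grid.xusanduo.com ./hosts.demo   # second call adds no duplicate
cat ./hosts.demo
```

The same helper works for jenkins.xusanduo.com and sonarqube.xusanduo.com.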

Service for the hub

root@k8s-master1:~/k8s/selenium-grid/hub-node# cat selenium-hub-svc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-hub
  labels:
    name: selenium-hub
spec:
  ports:
  - port: 4442
    targetPort: 4442
    name: publishport
  - port: 4443
    targetPort: 4443
    name: subscribeport
  - port: 4444
    targetPort: 4444
    name: hubport
  selector:
    name: selenium-hub
  type: NodePort
  sessionAffinity: None

Deployment for the hub

root@k8s-master1:~/k8s/selenium-grid/hub-node# cat selenium-hub-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-hub
  labels:
    name: selenium-hub
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-hub
  template:
    metadata:
      labels:
        name: selenium-hub
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-hub
        image: selenium/hub:4.7.0-20221202
        ports:
          - containerPort: 4442
          - containerPort: 4443
          - containerPort: 4444
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        livenessProbe:
          httpGet:
            # /grid/console was removed in Selenium 4; /status works on 4.x
            path: /status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5

Deployments for the three browser node types

root@k8s-master1:~/k8s/selenium-grid/hub-node# cat selenium-nodes-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    name: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-chrome
  template:
    metadata:
      labels:
        name: selenium-node-chrome
    spec:
      #nodeName: k8s-node1
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-hub.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-firefox
  labels:
    name: selenium-node-firefox
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-firefox
  template:
    metadata:
      labels:
        name: selenium-node-firefox
    spec:
      #nodeName: k8s-node1
      containers:
      - name: selenium-node-firefox
        image: selenium/node-firefox:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-hub.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-edge
  labels:
    name: selenium-node-edge
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-edge
  template:
    metadata:
      labels:
        name: selenium-node-edge
    spec:
      #nodeName: k8s-node1
      containers:
      - name: selenium-node-edge
        image: selenium/node-edge:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-hub.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

Apply the ingress

root@k8s-master1:~/k8s/ingress# kubectl apply -f ingress-default.yaml 
ingress.networking.k8s.io/ingress-nginx created
root@k8s-master1:~/k8s/ingress# 

Deploy the hub and the nodes

root@k8s-master1:~/k8s/selenium-grid/hub-node# kubectl apply -f .
deployment.apps/selenium-hub created
service/selenium-hub created
deployment.apps/selenium-node-chrome created
deployment.apps/selenium-node-firefox created
deployment.apps/selenium-node-edge created
root@k8s-master1:~/k8s/selenium-grid/hub-node# ls
selenium-hub-deployment.yaml  selenium-hub-svc.yaml  selenium-nodes-deployment.yaml
root@k8s-master1:~/k8s/selenium-grid/hub-node# 

Verification:
Check 1: confirm that the Selenium Grid ingress, service, pods, etc. were created successfully

root@k8s-master1:~/k8s/selenium-grid/hub-node# kubectl get ingress
NAME            CLASS   HOSTS                                                                    ADDRESS   PORTS   AGE
ingress-nginx   nginx   jenkins.xusanduo.com,sonarqube.xusanduo.com,selenium-grid.xusanduo.com             80      57d
root@k8s-master1:~/k8s/selenium-grid/hub-node# kubectl get service
NAME           TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                                        AGE
jenkins        NodePort    10.101.26.140   <none>        8080:32001/TCP,50000:32002/TCP                 49d
kubernetes     ClusterIP   10.96.0.1       <none>        443/TCP                                        61d
selenium-hub   NodePort    10.99.223.202   <none>        4442:31041/TCP,4443:30731/TCP,4444:31339/TCP   3m21s
root@k8s-master1:~/k8s/selenium-grid/hub-node# kubectl get pods -o wide 
NAME                                               READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
jenkins-6b65c8bcf6-gndt4                           1/1     Running   0          4h33m   10.244.1.95    k8s-node1   <none>           <none>
nfs-subdir-external-provisioner-6c4d758b88-22ljd   1/1     Running   0          2d3h    10.244.1.101   k8s-node1   <none>           <none>
selenium-hub-5dfdc9f89-j5zkc                       1/1     Running   0          3m27s   10.244.1.131   k8s-node1   <none>           <none>
selenium-node-chrome-7f95f46477-ptvrh              1/1     Running   0          3m27s   10.244.2.137   k8s-node2   <none>           <none>
selenium-node-chrome-7f95f46477-rbb7s              1/1     Running   0          3m27s   10.244.1.132   k8s-node1   <none>           <none>
selenium-node-edge-7945f945dd-5wpnt                1/1     Running   0          3m27s   10.244.2.136   k8s-node2   <none>           <none>
selenium-node-edge-7945f945dd-dpzj6                1/1     Running   0          3m27s   10.244.1.134   k8s-node1   <none>           <none>
selenium-node-firefox-677f597f6d-4cmhh             1/1     Running   0          3m27s   10.244.1.133   k8s-node1   <none>           <none>
selenium-node-firefox-677f597f6d-z6bf5             1/1     Running   0          3m27s   10.244.2.138   k8s-node2   <none>           <none>
root@k8s-master1:~/k8s/selenium-grid/hub-node# 

Check 2: confirm that the Selenium Grid web UI is reachable
(screenshot)
Check 3: confirm that you can connect remotely into the pod containers
chrome
(screenshot)
edge
(screenshot)
firefox
(screenshot)

b. Method 2: distributed mode

For the distributed mode of Selenium Grid on k8s, I followed this reference: deploying Selenium Grid's distributed mode with docker-compose.

Ingress for the grid (the backend is now the router's selenium-grid service)

root@k8s-master1:~/k8s/ingress# cat ingress-default.yaml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: jenkins.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: jenkins
            port:
              number: 8080

  - host: sonarqube.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: sonarqube
            port:
              number: 9000
   
  #added this time so external users can reach the grid; map selenium-grid.xusanduo.com to the right IP on each client machine
  - host: selenium-grid.xusanduo.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: selenium-grid
            port:
              number: 4444
root@k8s-master1:~/k8s/ingress# 

Service and Deployment for the event bus

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-event-bus.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-event-bus
  labels:
    name: selenium-event-bus
spec:
  ports:
  - port: 4442
    targetPort: 4442
    name: eventpublish
  - port: 4443
    targetPort: 4443
    name: eventsubscribe
  - port: 5557
    targetPort: 5557
    name: name
  selector:
    name: selenium-event-bus
  type: NodePort
  sessionAffinity: None

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-event-bus
  labels:
    name: selenium-event-bus
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-event-bus
  template:
    metadata:
      labels:
        name: selenium-event-bus
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-event-bus
        image: selenium/event-bus:4.7.0-20221202
        ports:
          - containerPort: 4442
          - containerPort: 4443
          - containerPort: 5557

Service and Deployment for the session map

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-sessions.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-sessions
  labels:
    name: selenium-sessions
spec:
  ports:
  - port: 5556
    targetPort: 5556
    name: selenium-sessions
  selector:
    name: selenium-sessions
  type: NodePort
  sessionAffinity: None

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-sessions
  labels:
    name: selenium-sessions
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-sessions
  template:
    metadata:
      labels:
        name: selenium-sessions
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-sessions
        image: selenium/sessions:4.7.0-20221202
        ports:
          - containerPort: 5556
        env:
          - name: SE_EVENT_BUS_HOST
            value: "selenium-event-bus.default.svc.cluster.local"
          - name: SE_EVENT_BUS_PUBLISH_PORT
            value: "4442"
          - name: SE_EVENT_BUS_SUBSCRIBE_PORT
            value: "4443"

Service and Deployment for the session queue

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-session-queue.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-session-queue
  labels:
    name: selenium-session-queue
spec:
  ports:
  - port: 5559
    targetPort: 5559
    name: selenium-session-queue
  selector:
    name: selenium-session-queue
  type: NodePort
  sessionAffinity: None

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-session-queue
  labels:
    name: selenium-session-queue
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-session-queue
  template:
    metadata:
      labels:
        name: selenium-session-queue
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-session-queue
        image: selenium/session-queue:4.7.0-20221202
        ports:
          - containerPort: 5559

Service and Deployment for the distributor

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-distributor.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-distributor
  labels:
    name: selenium-distributor
spec:
  ports:
  - port: 5553
    targetPort: 5553
    name: selenium-distributor
  selector:
    name: selenium-distributor
  type: NodePort
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-distributor
  labels:
    name: selenium-distributor
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-distributor
  template:
    metadata:
      labels:
        name: selenium-distributor
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-distributor
        image: selenium/distributor:4.7.0-20221202
        ports:
          - containerPort: 5553
        env:
          - name: SE_EVENT_BUS_HOST
            value: "selenium-event-bus.default.svc.cluster.local"
          - name: SE_EVENT_BUS_PUBLISH_PORT
            value: "4442"
          - name: SE_EVENT_BUS_SUBSCRIBE_PORT
            value: "4443"
          - name: SE_SESSIONS_MAP_HOST
            value: "selenium-sessions.default.svc.cluster.local"
          - name: SE_SESSIONS_MAP_PORT
            value: "5556"
          - name: SE_SESSION_QUEUE_HOST
            value: "selenium-session-queue.default.svc.cluster.local"
          - name: SE_SESSION_QUEUE_PORT
            value: "5559"

Service and Deployment for the router

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-router.yaml 
apiVersion: v1
kind: Service
metadata:
  name: selenium-grid
  labels:
    name: selenium-router
spec:
  ports:
  - port: 4444
    targetPort: 4444
    name: selenium-router
  selector:
    name: selenium-router
  type: NodePort
  sessionAffinity: None
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-router
  labels:
    name: selenium-router
spec:
  replicas: 1
  selector:
    matchLabels:
      name: selenium-router
  template:
    metadata:
      labels:
        name: selenium-router
    spec:
      #nodeName: k8s-node1
      containers:
      - name: selenium-router
        image: selenium/router:4.7.0-20221202
        ports:
          - containerPort: 4444
        env:
          - name: SE_DISTRIBUTOR_HOST
            value: "selenium-distributor.default.svc.cluster.local"
          - name: SE_DISTRIBUTOR_PORT
            value: "5553"
          - name: SE_SESSIONS_MAP_HOST
            value: "selenium-sessions.default.svc.cluster.local"
          - name: SE_SESSIONS_MAP_PORT
            value: "5556"
          - name: SE_SESSION_QUEUE_HOST
            value: "selenium-session-queue.default.svc.cluster.local"
          - name: SE_SESSION_QUEUE_PORT
            value: "5559"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        livenessProbe:
          httpGet:
            # /grid/console was removed in Selenium 4; /status works on 4.x
            path: /status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /status
            port: 4444
          initialDelaySeconds: 30
          timeoutSeconds: 5

Deployments for the different browser nodes

root@k8s-master1:~/k8s/selenium-grid/distribute# cat selenium-nodes-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    name: selenium-node-chrome
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-chrome
  template:
    metadata:
      labels:
        name: selenium-node-chrome
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-event-bus.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-firefox
  labels:
    name: selenium-node-firefox
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-firefox
  template:
    metadata:
      labels:
        name: selenium-node-firefox
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-node-firefox
        image: selenium/node-firefox:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-event-bus.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-edge
  labels:
    name: selenium-node-edge
spec:
  replicas: 2
  selector:
    matchLabels:
       name: selenium-node-edge
  template:
    metadata:
      labels:
        name: selenium-node-edge
    spec:
#      nodeName: k8s-node1
      containers:
      - name: selenium-node-edge
        image: selenium/node-edge:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-event-bus.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

Apply the Selenium ingress

root@k8s-master1:~/k8s/ingress# kubectl apply -f ingress-default.yaml 
ingress.networking.k8s.io/ingress-nginx configured
root@k8s-master1:~/k8s/ingress# 

Deploy the Selenium Grid components (in this mode the router takes the hub's place)

root@k8s-master1:~/k8s/selenium-grid/distribute# kubectl apply -f .
service/selenium-distributor created
deployment.apps/selenium-distributor created
service/selenium-event-bus created
deployment.apps/selenium-event-bus created
deployment.apps/selenium-node-chrome created
deployment.apps/selenium-node-firefox created
deployment.apps/selenium-node-edge created
service/selenium-grid created
deployment.apps/selenium-router created
service/selenium-session-queue created
deployment.apps/selenium-session-queue created
service/selenium-sessions created
deployment.apps/selenium-sessions created
root@k8s-master1:~/k8s/selenium-grid/distribute# 

Verification:
Check 1: confirm that the Selenium Grid ingress, service, pods, etc. were created successfully

root@k8s-master1:~/k8s/selenium-grid/distribute# kubectl get ingress
NAME            CLASS   HOSTS                                                                    ADDRESS   PORTS   AGE
ingress-nginx   nginx   jenkins.xusanduo.com,sonarqube.xusanduo.com,selenium-grid.xusanduo.com             80      57d
root@k8s-master1:~/k8s/selenium-grid/distribute# kubectl get svc
NAME                     TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                        AGE
jenkins                  NodePort    10.101.26.140    <none>        8080:32001/TCP,50000:32002/TCP                 49d
kubernetes               ClusterIP   10.96.0.1        <none>        443/TCP                                        61d
selenium-distributor     NodePort    10.97.249.9      <none>        5553:31411/TCP                                 7m51s
selenium-event-bus       NodePort    10.105.143.21    <none>        4442:30501/TCP,4443:30724/TCP,5557:30863/TCP   7m51s
selenium-grid            NodePort    10.96.112.90     <none>        4444:31847/TCP                                 7m51s
selenium-session-queue   NodePort    10.101.171.200   <none>        5559:30986/TCP                                 7m51s
selenium-sessions        NodePort    10.101.48.151    <none>        5556:32060/TCP                                 7m51s
root@k8s-master1:~/k8s/selenium-grid/distribute# kubectl get pods -o wide 
NAME                                               READY   STATUS    RESTARTS   AGE     IP             NODE        NOMINATED NODE   READINESS GATES
jenkins-6b65c8bcf6-gndt4                           1/1     Running   0          5h13m   10.244.1.95    k8s-node1   <none>           <none>
nfs-subdir-external-provisioner-6c4d758b88-22ljd   1/1     Running   0          2d3h    10.244.1.101   k8s-node1   <none>           <none>
selenium-distributor-68c85787b8-hl4dr              1/1     Running   0          7m56s   10.244.1.135   k8s-node1   <none>           <none>
selenium-event-bus-6667d5784-wqtbz                 1/1     Running   0          7m56s   10.244.1.136   k8s-node1   <none>           <none>
selenium-node-chrome-69fc7dd77-4k6jp               1/1     Running   0          7m56s   10.244.1.139   k8s-node1   <none>           <none>
selenium-node-chrome-69fc7dd77-vtfd7               1/1     Running   0          7m56s   10.244.2.143   k8s-node2   <none>           <none>
selenium-node-edge-95b4f597b-bs6kc                 1/1     Running   0          7m56s   10.244.2.142   k8s-node2   <none>           <none>
selenium-node-edge-95b4f597b-w9jp9                 1/1     Running   0          7m56s   10.244.1.138   k8s-node1   <none>           <none>
selenium-node-firefox-6c67d7bdb4-7sks8             1/1     Running   0          7m56s   10.244.1.137   k8s-node1   <none>           <none>
selenium-node-firefox-6c67d7bdb4-rn6t9             1/1     Running   0          7m56s   10.244.2.140   k8s-node2   <none>           <none>
selenium-router-855566f59b-cb4pk                   1/1     Running   0          7m56s   10.244.1.140   k8s-node1   <none>           <none>
selenium-session-queue-7795c897c8-n675w            1/1     Running   0          7m56s   10.244.2.141   k8s-node2   <none>           <none>
selenium-sessions-576d46fd7c-2jc7t                 1/1     Running   0          7m56s   10.244.2.139   k8s-node2   <none>           <none>
root@k8s-master1:~/k8s/selenium-grid/distribute# 

Check 2: confirm that the Selenium Grid web UI is reachable
(screenshot)

Check 3: confirm that you can connect remotely into the pod containers
Only one Chrome node is verified here; the other two browsers are analogous, so no screenshots for them
(screenshot)

2. Building the base Docker image for the test scripts

root@k8s-node1:~/cicd-prepar/selenium/grid/dockerfile# cat dockerfile 
FROM ubuntu
LABEL maintainer="xusanduo 15618829160@163.com"
#working directory
WORKDIR /root/workspace
#update first, otherwise the installs below fail
RUN  apt-get update \
  && apt-get -y install sudo dialog apt-utils \
  && sudo echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections \
  && sudo apt upgrade -y
#install some basic tools for later debugging
RUN  sudo apt install nano tcpdump curl wget net-tools inetutils-ping git openssh-server unzip pip -y
#install Chinese fonts, otherwise the browsers render garbled text
RUN apt-get update && apt-get -y install ttf-wqy-microhei ttf-wqy-zenhei && apt-get clean
#install Java, required by allure
RUN  sudo wget https://download.oracle.com/java/19/latest/jdk-19_linux-x64_bin.tar.gz \
  && sudo tar -zxvf jdk-19_linux-x64_bin.tar.gz \
  && sudo rm -rf jdk-19_linux-x64_bin.tar.gz
#set the Java environment variables
ENV JAVA_HOME=/root/workspace/jdk-19.0.1
ENV JRE_HOME=$JAVA_HOME/jre
ENV CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
ENV PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH
#install allure; 2.19 is the latest release at the time of writing
RUN sudo wget https://repo.maven.apache.org/maven2/io/qameta/allure/allure-commandline/2.19.0/allure-commandline-2.19.0.tgz \
  && sudo tar -zxvf allure-commandline-2.19.0.tgz \
  && sudo rm -r allure-commandline-2.19.0.tgz \
  && sudo ln -s /root/workspace/allure-2.19.0/bin/allure /usr/bin/allure
#install selenium, pyyaml, pytest, allure-pytest, pytest-xdist
RUN sudo pip install selenium
RUN sudo pip install pyyaml pytest allure-pytest pytest-xdist
#allow root login over ssh
RUN sudo echo "PermitRootLogin yes" >> /etc/ssh/sshd_config \
  && sudo echo "service ssh restart" >> ~/.bashrc
#set the container's root account password
RUN  sudo echo root:123456 | chpasswd

Build the image

docker build . -t harbor.xusanduo.com/library/test-script-base-image:v1

Push the image to the Harbor registry

docker push harbor.xusanduo.com/library/test-script-base-image:v1
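Later in this post, the deploy script checks whether a given tag already exists in Harbor before rebuilding. That check boils down to one Harbor v2 API call; a small sketch of the URL it queries (the host, project, and credentials are this lab's values):

```shell
#!/bin/sh
# Build the Harbor v2 API URL that lists the tags of one image artifact
harbor_tags_url() {
    harbor="$1"; project="$2"; image="$3"; version="$4"
    echo "${harbor}/api/v2.0/projects/${project}/repositories/${image}/artifacts/${version}/tags"
}

url=$(harbor_tags_url "http://harbor.xusanduo.com" "library" "test-script-base-image" "v1")
echo "$url"
# curl -s -u admin:123456 "$url"   # a NOT_FOUND in the reply means the tag does not exist
```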

V. Experiment

1. GitLab operations

a. Create the project on GitLab
(screenshot)
b. Push the code to GitLab
(screenshot)
The code:

root@k8s-node1:~/PycharmProjects/selenium-grid# cat test_main.py 
import time
import random
from selenium.webdriver.common.by import By
from concurrent.futures import ThreadPoolExecutor

from selenium import webdriver

# chrome, MicrosoftEdge, firefox
# note: desired_capabilities still works with the Selenium 4.7-era client used here,
# but it was removed in Selenium 4.10+; newer clients must use browser Options instead
ds = {'platform': 'ANY',
      'browserName': "temp_browser",
      'version': '',
      'javascriptEnabled': True
      }

# hub_url = 'http://selenium-grid.xusanduo.com/wd/hub'
# test_web_url = 'https://www.baidu.com'

hub_url = 'temp_hub_url'
test_web_url = 'temp_test_web_url'

search_content_list = random.sample(range(100, 10000), 2)
print(search_content_list)


class Test_demo1:

    def test_demo1(self):
        driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=ds)
        driver.get(test_web_url)
        driver.find_element(By.ID, 'kw').send_keys('111111111111111')
        driver.find_element(By.XPATH, '//input[@type="submit"]').click()
        time.sleep(5)
        driver.quit()

    def test_demo2(self):
        driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=ds)
        driver.get(test_web_url)
        driver.find_element(By.ID, 'kw').send_keys('222222222222222222')
        driver.find_element(By.XPATH, '//input[@type="submit"]').click()
        time.sleep(5)
        driver.quit()

    def test_demo3(self):
        driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=ds)
        driver.get(test_web_url)
        driver.find_element(By.ID, 'kw').send_keys('333333333333333333333')
        driver.find_element(By.XPATH, '//input[@type="submit"]').click()
        time.sleep(5)
        driver.quit()

    def test_demo4(self):
        driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=ds)
        driver.get(test_web_url)
        driver.find_element(By.ID, 'kw').send_keys('444444444444444444444')
        driver.find_element(By.XPATH, '//input[@type="submit"]').click()
        time.sleep(5)
        driver.quit()

    def test_demo5(self):
        driver = webdriver.Remote(command_executor=hub_url, desired_capabilities=ds)
        driver.get(test_web_url)
        driver.find_element(By.ID, 'kw').send_keys('555555555555555555555')
        driver.find_element(By.XPATH, '//input[@type="submit"]').click()
        time.sleep(5)
        driver.quit()
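The placeholders `temp_browser`, `temp_hub_url`, and `temp_test_web_url` in the script above are what the pipeline replaces before the image is built. A minimal sketch of that substitution with sed, run against a trimmed stand-in file (the parameter values are examples; the real ones come from the Jenkins job):

```shell
#!/bin/sh
# Stand-in for the placeholder lines in test_main.py (trimmed for the demo)
cat > test_main_snippet.py <<'EOF'
ds = {'browserName': "temp_browser"}
hub_url = 'temp_hub_url'
test_web_url = 'temp_test_web_url'
EOF

# Values the Jenkins job would pass in (example values, not the real parameters)
BROWSER_TYPE="chrome"
HUB_URL="http://selenium-grid.xusanduo.com/wd/hub"
TEST_WEB_URL="https://www.baidu.com"

# '|' as the sed delimiter because the URLs contain '/'
sed -i "s|temp_browser|${BROWSER_TYPE}|g; s|temp_hub_url|${HUB_URL}|g; s|temp_test_web_url|${TEST_WEB_URL}|g" test_main_snippet.py
cat test_main_snippet.py
```

After this step the file holds the concrete browser and URLs, and the image built from it needs no further configuration.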

2. Creating the browsers

Only Chrome is demonstrated here; the other two browsers work the same way.

a. Create five Chrome browser nodes

root@k8s-master1:~/k8s/selenium-grid/hub-node# cat selenium-nodes-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: selenium-node-chrome
  labels:
    name: selenium-node-chrome
spec:
  replicas: 5
  selector:
    matchLabels:
       name: selenium-node-chrome
  template:
    metadata:
      labels:
        name: selenium-node-chrome
    spec:
      #nodeName: k8s-node1
      containers:
      - name: selenium-node-chrome
        image: selenium/node-chrome:4.7.0-20221202
        ports:
          - containerPort: 7900
        env:
           - name: SE_EVENT_BUS_HOST
             value: "selenium-hub.default.svc.cluster.local"
           - name: SE_EVENT_BUS_PUBLISH_PORT
             value: "4442"
           - name: SE_EVENT_BUS_SUBSCRIBE_PORT
             value: "4443"
        resources:
          limits:
            memory: "1000Mi"
            cpu: "1"
        volumeMounts:
          - mountPath: "/dev/shm"
            name: "dshm"
      volumes:
        - name: "dshm"
          emptyDir:
            medium: "Memory"

Create them with:

kubectl apply -f selenium-nodes-deployment.yaml 

b. Inspect and verify the created browsers

Check on the k8s master machine
(screenshot)
On the hub's web UI, confirm the browsers registered with the hub
(screenshot)
Inspect a browser's UI through VNC (only the first browser, IP 10.244.1.218; the others are analogous)
(screenshot)

3. Operations on the k8s master node

a. k8s Job template for the test scripts

root@k8s-master1:/deploy# cat job_selenium_grid_template.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  namespace: devops
  name: temp_JobName
spec:
  template:
    spec:
      nodeName: temp_k8s-node
      hostAliases:
      - ip: 192.168.100.201
        hostnames:
        - "selenium-grid.xusanduo.com"
      - ip: 192.168.100.202
        hostnames:
        - "selenium-grid.xusanduo.com"
      containers:
      - name: temp_PodLableName
        #image: harbor.xusanduo.com/library/selenium-grid-test-script:v1
        image: temp_ImageName
        command: ['/bin/sh', '-c','pytest -n 5 /root/workspace/selenium-grid/test_main.py']
        securityContext:
          runAsUser: 0
          privileged: true
        resources:
          limits:
            memory: "1Gi"
          requests:
            memory: "1Gi"
        volumeMounts:
          - name: dshm
            mountPath: /dev/shm
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
      volumes:
         - name: dshm
           emptyDir:
             medium: Memory
      restartPolicy: Never
  parallelism: 1
  completions: 1
  backoffLimit: 2
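The temp_* placeholders in this template (temp_JobName, temp_k8s-node, temp_PodLableName, temp_ImageName) are filled in by the deploy script shown next. A self-contained sketch of that rendering step, using a trimmed stand-in template (the naming scheme and the sed approach are illustrative):

```shell
#!/bin/sh
# Trimmed stand-in for job_selenium_grid_template.yaml
cat > job_template.yaml <<'EOF'
metadata:
  name: temp_JobName
spec:
  template:
    spec:
      nodeName: temp_k8s-node
      containers:
      - name: temp_PodLableName
        image: temp_ImageName
EOF

# Values the deploy script would compute (example naming scheme)
jobName="selenium-grid-$(date +%Y-%m-%d-%H-%M-%S)"
nodeName="k8s-node1"
imageName="harbor.xusanduo.com/library/selenium-grid-test-script:v1"

sed "s|temp_JobName|${jobName}|; s|temp_k8s-node|${nodeName}|; s|temp_PodLableName|selenium-grid|; s|temp_ImageName|${imageName}|" \
    job_template.yaml > deploy.yaml
cat deploy.yaml
# kubectl apply -f deploy.yaml   # submit the rendered Job to the cluster
```

Rendering into a fresh deploy.yaml keeps the template itself (mounted read-only from a ConfigMap) untouched between builds.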

b. k8s deployment script for the test scripts

root@k8s-master1:~/k8s/jenkinsnode# cat deploy_selenium_grid.sh 
#!/bin/bash

#Parameters passed in from jenkins
#Version of the test code
#codeversion="v1"
codeversion=$1

#User-selected action: deploy, rollback or stop
#deploy_action="deploy"
deploy_action=$2

#URL under test
#test_target_url="https://www.baidu.com"
test_target_url=$3

#Browser type
#browser_type="chrome"
browser_type=$4

#hub url
hub_url=$5
#hub_url="http://selenium-grid.xusanduo.com/wd/hub"

#Project (job) name
#jobName="selenium-grid"
jobName=$6

#Workspace directory
#workspace="/home/workspace/selenium-grid"
workspace=$7


#Harbor-related variables
#Harbor account and password
Username="admin"
Password="123456"

#Harbor url and host name
HaborUrl="http://harbor.xusanduo.com"
HaborWeb="harbor.xusanduo.com"

#Default harbor project
DefaultProjectName="library"

#Base image name for the test scripts
DefaultImageName="test-script-base-image"

#Name of the dynamically generated test-script image
image_name_in_harbor="selenium-grid-test-script"

#Image names for the version to deploy or roll back to
project_image_base_url=${HaborWeb}/${DefaultProjectName}/${DefaultImageName}
imageName=${HaborWeb}/${DefaultProjectName}/${image_name_in_harbor}:${codeversion}


#Variables on the jenkins node
#selenium-grid k8s deployment template file, mounted into the pod from the configMap
jobFileFromConfigmap="/deploy/job_selenium_grid_template.yaml"

#Path of the docker config holding the harbor credentials
dockerConfigPath="/root/.docker"

#Harbor credentials file, provided through the configMap
dockerConfigMapPath="/deploy/config.json"


#Test-script project variables
#Project directory holding the deployment file, dockerfile, and so on
projectPath="/root/${jobName}"

#k8s deployment file generated from the template
#(depends on projectPath, so it must be defined after it, or it expands to "/deploy.yaml")
deployFile="$projectPath/deploy.yaml"

#Path of the test script
test_script_path=$workspace"/"${jobName}"/test_main.py"

#Name of the test-script tarball
testScriptsTarName=${jobName}".tar"

#Values needed when generating the deployment file from the template
#Current timestamp
currentTime=$(date -d today +"%Y-%m-%d-%H-%M-%S")
#Default node used for deployment
default_node="k8s-node1"


#Check whether this image version already exists in the harbor registry
imageIsOrNotExistHarbor(){
   projectName="$1"
   imageName="$2"
   imageVersion="$3"
   urlResult=$(curl -s -u ${Username}:${Password}  -X GET -H "Content-Type: application/json" ${HaborUrl}/api/v2.0/projects/${projectName}/repositories/${imageName}/artifacts/${imageVersion}/tags)
   notFoundStr="NOT_FOUND"
   result=$(echo $urlResult | grep "${notFoundStr}")
   if [[ "$result" != "" ]]
   then
#      echo "imageIsOrNotExistHarbor------Image does not exist in harbor, need to build a new image"
      echo 1
   else
#      echo "imageIsOrNotExistHarbor------Image exists in harbor, no need to build a new image"
      echo 0
   fi
}


#Tar up the test-script project
tarTestScriptsProject(){
   cd $workspace
   cd ..
   # debug output
   pwd
   ls
   echo $testScriptsTarName $jobName
   tar -cvPf $testScriptsTarName $jobName
   mv $testScriptsTarName $projectPath
}


#Build the image and push it to harbor
buildAndPushImage(){
   #Prepare a clean project directory and tar up the test scripts
   sudo mkdir -p $projectPath
   sudo rm -rf $projectPath/*
   tarTestScriptsProject
   #Pack the test scripts into the docker image
   echo "buildAndPushImage-------It is building image"
   echo "FROM ${project_image_base_url}:v1"  >> $projectPath/dockerfile
   echo "COPY . ."  >> $projectPath/dockerfile
   echo "RUN sudo tar -xvf ${testScriptsTarName} && rm ${testScriptsTarName} dockerfile" >> $projectPath/dockerfile
   cd $projectPath
   sudo docker build . -t $imageName
   sudo docker push $imageName
   echo "buildAndPushImage-------Finished building and pushing image"
}


#Deploy the requested version of the test scripts to the k8s cluster
deploy(){
   #Decide whether the image needs to be built first
   echo "deploy-------start deploy"
   result=$(imageIsOrNotExistHarbor $DefaultProjectName $image_name_in_harbor $codeversion)
   if test $result -eq 1;then
      insertJenkinsParamsToTestScripts
      echo "deploy-------The image does not exist in harbor, need to build"
      buildAndPushImage
      generateDeployYamlFile
      cd $projectPath
      kubectl apply -f $deployFile
      waitJobComplete
      echo "deploy-------deploy finished"
   elif test $result -eq 0;then
      insertJenkinsParamsToTestScripts
      echo "deploy-------The image exists in harbor, no need to build"
      generateDeployYamlFile
      cd $projectPath
      kubectl apply -f $deployFile
      waitJobComplete
      echo "deploy-------deploy finished"
   else
      echo "deploy-------imageIsOrNotExistHarbor function returned an unexpected value, please check"
   fi
}


#Roll back to the specified version and deploy it to the k8s cluster
rollback(){
   insertJenkinsParamsToTestScripts
   #Generate the deployment file
   echo "rollback-----start rollback"
   generateDeployYamlFile
   cd $projectPath
   kubectl apply -f $deployFile
   waitJobComplete
}


#Not implemented yet, please ignore
stop(){
   #Generate the deployment file
   echo "stop-----stop app deploy"
   generateDeployYamlFile
   cd $projectPath
   kubectl delete -f $deployFile
}


#Check whether the job has completed: prints 0 if complete, 1 if not yet
checkJobComplete(){
   jobname="$1"
   result=$(kubectl get jobs.batch -n devops |awk -v jobname="$jobname" '{split($2,arr,"/");if ($1==jobname && arr[1]==arr[2]) print "0"; if ($1==jobname && arr[1]<arr[2]) print "1"}')
   echo $result
}


#Wait for the job to complete
waitJobComplete(){
   JobName=${jobName}"-"${currentTime}
   while true;do
     result=$(checkJobComplete $JobName)
     if [[ $result == '1' ]];then
       echo "waitJobComplete----------job not complete, please wait"
       sleep 10
     elif [[ $result == '0' ]];then
       echo "waitJobComplete----------job complete!!!"
       break
     else
       echo "waitJobComplete----------job does not exist!!! please check"
       break
     fi
   done
}


#Fetch the deployment template file (overwriting any previous copy)
getDeployTemplateFile(){
   cat $jobFileFromConfigmap > $deployFile
}


#Inject the jenkins parameters into the test scripts
insertJenkinsParamsToTestScripts(){
   #Replace the temp_* placeholders in test_main.py with the jenkins build parameters
   sed -i s#temp_browser#$browser_type#g $test_script_path
   sed -i s#temp_test_web_url#$test_target_url#g $test_script_path
   sed -i s#temp_hub_url#$hub_url#g $test_script_path
}


#Generate the yaml file deployed to the k8s cluster
generateDeployYamlFile(){
   echo "generateDeployYamlFile-------Start generating deployfile"
   getDeployTemplateFile
   JobName=${jobName}"-"${currentTime}
   PodLableName=${browser_type}"-"${codeversion}"-"${currentTime}
   k8s_node=$default_node
   ImageName=$imageName

   sed -i s#temp_PodLableName#${PodLableName}#g $deployFile
   sed -i s#temp_JobName#${JobName}#g $deployFile
   sed -i s#temp_k8s-node#${k8s_node}#g $deployFile
   sed -i s#temp_ImageName#${ImageName}#g $deployFile
   echo "generateDeployYamlFile-------Finished generating deployfile"
}


#This could live in the container startup script, but rather than rebuild the
#image, the setup is done here instead
setDockerConfigForHarbor(){
   echo "Get harbor website account and password"
   mkdir -p $dockerConfigPath
   cat $dockerConfigMapPath > $dockerConfigPath/config.json
}


#Dispatch on the user-selected action: deploy the latest version, roll back, or stop
#(named main so the function does not share a name with the $deploy_action variable)
main(){
   setDockerConfigForHarbor
   if [[ "$deploy_action" == "deploy" ]]
   then
      deploy
   elif [[ "$deploy_action" == "rollback" ]]
   then
      rollback
   elif [[ "$deploy_action" == "stop" ]]
   then
      stop
   else
      echo "main function, action param is invalid:$deploy_action"
   fi
}
main
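The harbor existence check in imageIsOrNotExistHarbor boils down to grepping the API response for NOT_FOUND. That decision can be exercised locally with canned response bodies (the JSON samples below are illustrative, not real harbor output):

```shell
# Decide build-needed (1) vs image-exists (0) from a harbor API response body,
# mirroring the grep in imageIsOrNotExistHarbor above.
check_image_response() {
  urlResult="$1"   # on a live system this comes from curl against the harbor API
  if echo "$urlResult" | grep -q "NOT_FOUND"; then
    echo 1         # tag missing in harbor: a new image must be built
  else
    echo 0         # tag already present: skip the build
  fi
}

check_image_response '{"errors":[{"code":"NOT_FOUND","message":"artifact not found"}]}'   # prints 1
check_image_response '[{"id":1,"name":"v1","immutable":false}]'                           # prints 0
```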

c. Create the template and script files as a configMap in k8s

This configMap builds on the one from the previous post, so as a first step we delete the existing configMap on the k8s master node (you could also patch it, but I find delete-then-recreate more convenient):

kubectl delete cm -n devops deploy-app

As the second step, we recreate the configmap. The earlier files carry over from the previous posts; the last two, deploy_selenium_grid.sh and job_selenium_grid_template.yaml, are the ones added this time.

kubectl create configmap deploy-app -n devops --from-file=deploy_app.sh=deploy_app.sh  --from-file=deploy_springboot_app_template.yaml=deploy_springboot_app_template.yaml --from-file=config.json=config.json --from-file=deploy_uitestdemo.sh=deploy_uitestdemo.sh --from-file=job_uitestdemo_browser_template.yaml --from-file=deploy_selenium_grid.sh=deploy_selenium_grid.sh --from-file=job_selenium_grid_template.yaml=job_selenium_grid_template.yaml
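After recreating the configMap, it is worth confirming that all seven data keys made it in. The sketch below stubs out the kubectl call so it runs anywhere; on the real cluster the stub would be replaced by something like kubectl get cm deploy-app -n devops -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}' (the stub and its output are illustrative):

```shell
# Stub standing in for listing the configMap's data keys with kubectl;
# on a live cluster this output would come from the real kubectl command.
configmap_keys() {
  printf '%s\n' deploy_app.sh deploy_springboot_app_template.yaml config.json \
    deploy_uitestdemo.sh job_uitestdemo_browser_template.yaml \
    deploy_selenium_grid.sh job_selenium_grid_template.yaml
}

count=$(configmap_keys | wc -l)
echo "deploy-app configMap has $count data keys"
```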

4. Operations on the jenkins website

a. Create the jenkins project

1. Version parameter
(screenshot)
2. Deploy-action parameter and test target url
(screenshot)
3. Browser type and hub url parameters
(screenshot)
4. Choose which jenkins node to build on, and the code repository address
(screenshot)
5.1 Build action (the script to execute)
(screenshot)

5.2 The script executed at build time:

#1. Define a fresh deployment root directory
deployWorkspace="/root/"$JOB_BASE_NAME
rm -rf $deployWorkspace
mkdir -p $deployWorkspace
#2. Fetch the deployment script: the jenkins node mounts the deploy-app configMap at /deploy, which is where the script comes from
cd $deployWorkspace
cat /deploy/deploy_selenium_grid.sh > $deployWorkspace/deploy_selenium_grid.sh
chmod +x deploy_selenium_grid.sh
#3. Run the deploy / rollback / stop
./deploy_selenium_grid.sh ${testCodeVersion} ${deploy_action} ${target_test_url} ${browser_type} $hub_url $JOB_BASE_NAME $WORKSPACE
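The hand-off from Jenkins to the deploy script is purely positional: the order of the seven arguments in the invocation above must match $1..$7 at the top of deploy_selenium_grid.sh, and swapping two of them misassigns variables silently. A runnable sketch of that mapping with sample values (all values are illustrative; in a real build Jenkins injects them as parameters and built-in environment variables):

```shell
# Sample values standing in for the Jenkins build parameters and built-ins
testCodeVersion="v1"
deploy_action="deploy"
target_test_url="https://www.baidu.com"
browser_type="chrome"
hub_url="http://selenium-grid.xusanduo.com/wd/hub"
JOB_BASE_NAME="selenium-grid"
WORKSPACE="/home/jenkins/workspace/selenium-grid"

# Same argument order as the ./deploy_selenium_grid.sh call: these become
# $1..$7 inside the script (codeversion, action, url, browser, hub, job, workspace)
set -- "$testCodeVersion" "$deploy_action" "$target_test_url" "$browser_type" \
       "$hub_url" "$JOB_BASE_NAME" "$WORKSPACE"
echo "codeversion=$1 action=$2 url=$3 browser=$4"
```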

b. After the project is created, verify that the parameters can be selected at build time

When building the project, check that the parameters are correct
(screenshot)

六. Verification

1. Step one: commit the code to the gitlab repository with git
(screenshot)

2. Step two: start the build to run the tests against the Baidu site
Click build
(screenshot)
3. On the k8s cluster, check the related job and pod information
(screenshot)

4. On the selenium-grid site, check whether the browsers execute concurrently
(screenshot)
Watch how the browsers execute; pick two of them
Browser 1
(screenshot)
Browser 2
(screenshot)

七. Remaining issues

1. Selenium grid's built-in video-recording feature was not wired into this k8s deployment; I'll revisit it later if there is time and need.
For deploying selenium grid with video recording via docker-compose, see docker-compose-v3-video

八. References

The official documentation, quite good:
https://www.selenium.dev/documentation/grid/getting_started/

Python thread pools: principles and usage (very detailed):
http://c.biancheng.net/view/2627.html
