After more than half a year in service, I decided to upgrade our existing Jenkins setup to make it more useful. Since the current Jenkins structure was already in production, I teamed up with colleagues from the ops department to build a new one, which I designed from scratch. I plan to write up the overall design separately; here I only share a sample pipeline from the new system. The new system introduces GitLab and Kubernetes (k8s), and the Jenkins agents are now elastic, scaling on demand.

Here is the script:

#!/usr/bin/env groovy

pipeline {
    // agent: request a build pod from Kubernetes
    agent {
        kubernetes {
            label "${BUILD_TAG}-pod"
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: jnlp-slave
    image: 172.17.1.XXX/library/jenkins-docker:2
    imagePullPolicy: Always
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
    volumeMounts:
    - name: sock
      mountPath: /var/run/docker.sock
    - name: mvn
      mountPath: /home/jenkins/.m2/repository      
    command:
    - cat
    tty: true
  volumes:
  - name: sock
    hostPath:
      path: /var/run/docker.sock
  - name: mvn
    glusterfs:
      endpoints: glusterfs-cluster
      path: /gfs/jenkinsmvn/.m2/repository
      readOnly: false
"""
        }
    }
    // constants; set once, rarely changed afterwards
    environment {
        // relative path to the services' pom.xml
        pomPath = 'pom.xml'
        // GitLab account
        GIT_USERNAME = 'jenkins'
        // password
        GIT_PASSWORD = '*********'
        // k8s node address
        TESTIP = '172.17.1.xxx'
        // files to archive: the JMeter test report
        responseData='jmeter/ResponseData.xml,'+'jmeter/ResultReport/*.*,'+'jmeter/ResultReport/sbadmin2-1.0.7/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/less/*.*,'+'jmeter/ResultReport/sbadmin2-1.0.7/dist/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/dist/css/*.*,'+'jmeter/ResultReport/sbadmin2-1.0.7/dist/js/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/*.*,'+'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/bootstrap/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/bootstrap/dist/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/bootstrap/dist/css/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/bootstrap/dist/fonts/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/bootstrap/dist/js/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/metisMenu/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/metisMenu/dist/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/flot.tooltip/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/flot.tooltip/js/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/flot-axislabels/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/flot/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/font-awesome/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/font-awesome/css/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/font-awesome/less/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/font-awesome/fonts/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/font-awesome/scss/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/jquery/*.*,'+
                    'jmeter/ResultReport/sbadmin2-1.0.7/bower_components/jquery/dist/*.*,'+'jmeter/ResultReport/content/*.*,'+
                    'jmeter/ResultReport/content/css/*.*,'+'jmeter/ResultReport/content/pages/*.*,'+
                    'jmeter/ResultReport/content/js/*.*'
        // JMeter test script name
        JMETERNAME = 'cms'
    }
    options {
        // maximum number of builds to keep
        buildDiscarder(logRotator(numToKeepStr: '10'))
        // timeout for the whole pipeline run
        timeout(time: 1, unit: 'HOURS')
    }
    // notify the person who triggered the pipeline of the result
    post {
        // always run
        /*always {
            echo "Branch ${BRANCH_NAME}: removing service......"
            container('jnlp-slave') {
                sh """
		chmod 755 docker_shell/*.sh
		docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
		"""
            }
        }*/
        // send mail on failure
        failure {
            script {
                emailext body: '${JELLY_SCRIPT,template="static-analysis"}',
                        recipientProviders: [[$class: 'RequesterRecipientProvider'], [$class: 'DevelopersRecipientProvider']],
                        subject: '${JOB_NAME}- Build # ${BUILD_NUMBER} - Failure!'
            }
        }
    }

    stages {
        // compile and build the code
        stage('Build') {
            steps {
                // check out the project code
                git branch: "${BRANCH_NAME}", credentialsId: '******-***-***-****-**********', url: 'git@172.17.1.xxx:X/xxx/zc-ht/datamgt.git'

                container('jnlp-slave') {
                    script {
                        if ("$BRANCH_NAME" != 'master') {
                            sh "mvn org.jacoco:jacoco-maven-plugin:prepare-agent -Dmaven.test.failure.ignore=true -f ${pomPath} clean install -Dautoconfig.skip=true -Pcoverage-per-test"
                        } else {
                            sh "mvn -f ${pomPath} clean install -Dmaven.test.skip=true"
                            //sh "mvn -f ${pomPath} clean install -Dtest -DfailIfNoTests=false"
                        }
                    }
                }
            }

        }
        // unit tests
        stage('Unit test') {
            steps {
                echo "starting unitTest......"
                // inject the JaCoCo plugin config and run the unit tests with clean test. All tests should pass.
                //junit '**/target/surefire-reports/*.xml'
                // coverage gate: the pipeline fails if line coverage is below 20%.
                //jacoco changeBuildStatus: true, maximumLineCoverage: "20%"
            }
        }
        // static analysis
        stage('SonarQube') {
            steps {
                echo "starting codeAnalyze with SonarQube......"
                // sonar:sonar. The quality gate should pass.
                container('jnlp-slave') {
                    withSonarQubeEnv('Sonar-6.7') {
                        // always analyze against the pom.xml in the project root (${basedir})
                        //sh "mvn -f pom.xml clean compile sonar:sonar"
                        sh "mvn sonar:sonar " +
                                "-Dsonar.sourceEncoding=UTF-8 "
                    }
                    script {
                        // abort if the code fails the check
                        timeout(10) {
                            // a Sonar webhook reports the analysis result back to the pipeline; if the quality gate fails, so does the pipeline
                            def qg = waitForQualityGate()
                            if (qg.status != 'OK') {
                                error "SonarQube quality gate failed, please fix the issues! failure: ${qg.status}"
                            }
                        }
                    }
                }
            }
        }
        // build the branch image
        stage('Make Branch Image') {
            when { not { branch 'master' } }
            steps {
                echo "starting Make Branch Image ......"
                // build the project image and start the project's k8s service
                container('jnlp-slave') {
                    script {
                        // look up the port assigned to this branch
                        testport = sh(returnStdout: true, script: "grep \"$BRANCH_NAME\" docker_shell/port.list | awk '{print \$2}'").trim()
                        // no port found
                        if (testport == '') {
                            echo "No port assigned to this branch"
                            error "Your branch has no port assigned, please contact the project manager! failure: port not assigned"
                        }
                    }
                    sh """
                        cp -r target/distribution docker_shell
                        sed -i "s#https://IP#http://${TESTIP}:${testport}#g" docker_shell/distribution/serverIp.properties
                        chmod 755 docker_shell/*.sh
                        mkdir -p /home/jenkins/.kube/
                        cp docker_shell/config /home/jenkins/.kube/
                        docker_shell/mkdocker.sh ${testport} ${BUILD_NUMBER} ${JOB_NAME}
                        """
                }
            }
            post {
                // run on failure
                failure {
                    echo "Branch ${BRANCH_NAME}: removing service......"
                    container('jnlp-slave') {
                        sh """
                        chmod 755 docker_shell/*.sh
                        docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
                        """
                    }
                }
            }
        }
        // build the master image
        stage('Make Master Image') {
            when { branch 'master' }
            steps {
                echo "starting Make Master Image ......"
                // build the project image and start the project's k8s service
                container('jnlp-slave') {
                    script {
                        // look up the port assigned to this branch
                        testport = sh(returnStdout: true, script: "grep \"$BRANCH_NAME\" docker_shell/port.list | awk '{print \$2}'").trim()
                        // no port found
                        if (testport == '') {
                            echo "No port assigned to this branch"
                            error "Your branch has no port assigned, please contact the project manager! failure: port not assigned"
                        }
                    }
                    sh """
                        cp -r target/distribution docker_shell
                        sed -i "s#dev#test#g" docker_shell/distribution/application.yml
                        sed -i "s#cbdnews36#znz_2019#g" docker_shell/distribution/elas.properties
                        chmod 755 docker_shell/*.sh
                        mkdir -p /home/jenkins/.kube/
                        cp docker_shell/config /home/jenkins/.kube/
                        docker_shell/mmkdocker.sh ${testport} ${BUILD_NUMBER} ${JOB_NAME}
                        """
                }
            }
            post {
                // run on failure
                failure {
                    echo "Master ${BRANCH_NAME}: removing service......"
                    container('jnlp-slave') {
                        sh """
                        chmod 755 docker_shell/*.sh
                        docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
                        """
                    }
                }
            }
        }
        // automated API tests
        stage('Branch API test') {
            when { not { branch 'master' } }
            steps {
                echo "starting Branch API test ......"
                container('jnlp-slave') {
                    // run the test script matching the given service name
                    script {
                        def input_result = input message: 'Please enter the service name', ok: 'OK', parameters: [string(defaultValue: '', description: 'Service name to test:', name: 'ServiceName')]
                        if (input_result == '') {
                            echo "Please enter a valid service name!"
                            return false
                        }
                        // service name
                        ServiceName = input_result
                    }
                    sh """
                        sudo cp docker_shell/mysql-connector-java-5.1.47.jar /usr/local/apache-jmeter-5.0/lib
                        sleep 10s
                        jmeter -Jprotocol=http -Jip=${TESTIP} -Jport=${testport} -n -t jmeter/${ServiceName}.jmx -l jmeter/${ServiceName}.jtl -e -o jmeter/ResultReport
                        """
                    perfReport sourceDataFiles: "jmeter/${ServiceName}.jtl", errorFailedThreshold: 0
                }
            }
            post {
                // run on failure
                failure {
                    echo "Branch ${BRANCH_NAME}: removing service......"
                    container('jnlp-slave') {
                        sh """
                        docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
                        """
                    }
                }
            }
        }
        // automated API tests
        stage('Master API test') {
            when { branch 'master' }
            steps {
                echo "starting Master API test ......"
                container('jnlp-slave') {
                    sh """
                        sudo cp docker_shell/mysql-connector-java-5.1.47.jar /usr/local/apache-jmeter-5.0/lib
                        sed -i "s#/znz#/znz_2019#g" jmeter/${JMETERNAME}.jmx
                        sleep 10s
                        jmeter -Jprotocol=http -Jip=${TESTIP} -Jport=${testport} -n -t jmeter/${JMETERNAME}.jmx -l jmeter/${JMETERNAME}.jtl -e -o jmeter/ResultReport
                        """
                    perfReport sourceDataFiles: "jmeter/${JMETERNAME}.jtl", errorFailedThreshold: 0
                }
            }
        }
        // archive
        stage('Archive Artifacts') {
            steps {
                // archive the report files
                archiveArtifacts "${responseData}"
            }
        }
        // manual verification of development branches
        stage('UT test') {
            when { not { branch 'master' } }
            steps {
                echo "I am branch ......"
                script {
                    emailext(
                            subject: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) development branch, manual test notification",
                            body: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) is ready for manual testing.\nPlease finish this job once testing is done!!!",
                            recipientProviders: [[$class: 'RequesterRecipientProvider'], [$class: 'DevelopersRecipientProvider']]
                    )
                    input message: "Please test at the following address:\nhttp://${TESTIP}:${testport}\nConfirm once the tests pass", ok: 'Confirm'
                }
            }
            post {
                // always run
                always {
                    echo "Branch ${BRANCH_NAME}: removing service......"
                    container('jnlp-slave') {
                        sh """
                        docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
                        """
                    }
                }
            }
        }
        // push the image
        stage('Push Image') {
            when { branch 'master' }
            steps {
                echo "starting Push Image ......"
                // split param.server to get the parameters: IP, username, password
                container('jnlp-slave') {
                    timeout(5) {
                        script {
                            emailext(
                                    subject: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) release tag input notification",
                                    body: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) needs a release tag.\nPlease go to ${env.BUILD_URL} to enter it and complete acceptance testing",
                                    recipientProviders: [[$class: 'RequesterRecipientProvider'], [$class: 'DevelopersRecipientProvider']]
                            )
                            def input_result = input message: 'Please enter the tag', ok: 'OK', parameters: [string(defaultValue: '', description: 'Tag:', name: 'TAG')]
                            //echo "you input is " + input_result + ", to do sth"
                            if (input_result == '') {
                                echo "No tag entered !!"
                                return false
                            }
                            // the tag
                            gittag = input_result
                        }
                    }
                    withCredentials([usernamePassword(credentialsId: '******-***-***-****-**********', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
                        sh """
                        echo ${gittag}
                        git config --global user.email "jenkins@xxxxxx.com"
                        git config --global user.name "jenkins"
                        git tag -a ${gittag} -m 'Release version ${gittag}'
                        git push http://${GIT_USERNAME}:${GIT_PASSWORD}@172.17.1.xxx/X/znz/zc-ht/datamgt.git --tags
                        echo 'git tag added successfully'
                        docker_shell/pushimage.sh ${gittag} ${BUILD_NUMBER} ${JOB_NAME}
                        """
                    }
                }
            }
        }
    }
}

I'll skip the basic pipeline syntax and focus on a few key sections.

    agent {
        kubernetes {
            label "${BUILD_TAG}-pod"
            defaultContainer 'jnlp'
            yaml """
apiVersion: v1
kind: Pod
metadata:
  labels:
    some-label: some-label-value
spec:
  containers:
  - name: jnlp-slave
    image: 172.17.1.XXX/library/jenkins-docker:2
    imagePullPolicy: Always
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
    volumeMounts:
    - name: sock
      mountPath: /var/run/docker.sock
    - name: mvn
      mountPath: /home/jenkins/.m2/repository      
    command:
    - cat
    tty: true
  volumes:
  - name: sock
    hostPath:
      path: /var/run/docker.sock
  - name: mvn
    glusterfs:
      endpoints: glusterfs-cluster
      path: /gfs/jenkinsmvn/.m2/repository
      readOnly: false
"""
        }
    }

This section asks the k8s cluster for resources to start a Jenkins agent. The agent is named after BUILD_TAG: several development branches may request agents at the same time, and distinct names make troubleshooting and log tracking possible. The pod spec also does three things:

1. It injects the k8s node's metadata into the Jenkins agent as environment variables (via the Downward API); they are used later.

2. It mounts the node's docker.sock into the agent, since the pipeline builds, pulls, and pushes images.

3. It persists the agent's ".m2" directory, where Maven stores downloaded dependencies, so builds don't waste time re-downloading them; this speeds up compilation considerably.
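As a small illustration of point 1 (the node name here is hypothetical): the MY_NODE_NAME variable injected via the Downward API is consumed later by k8sport.sh, which pins the new deployment to the agent's node by substituting a placeholder:

```shell
# MY_NODE_NAME is normally injected into the agent pod via
# fieldRef: spec.nodeName; simulate it here with a hypothetical value
MY_NODE_NAME="node-1"
# k8sport.sh replaces the "node-x" placeholder in the manifest template
# with the node the agent is running on
echo "    name: node-x" | sed "s#node-x#$MY_NODE_NAME#g"
```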

    options {
        // maximum number of builds to keep
        buildDiscarder(logRotator(numToKeepStr: '10'))
        // timeout for the whole pipeline run
        timeout(time: 1, unit: 'HOURS')
    }

This sets the maximum number of retained builds and a timeout. The timeout wasn't there originally. This script serves multiple branches, and the final stage pauses for a human to confirm that the branch program deployed on the test server works; after confirmation the deployed service container is destroyed to free resources. In practice, developers often forgot to confirm their running pipelines, so services stayed up, resources stayed occupied, and the k8s cluster was dragged down. Hence the timeout to abort stale runs.

Next the pipeline builds the branch image and asks k8s for resources to start that service:

script {
	// look up the port assigned to this branch
	testport = sh(returnStdout: true, script: "grep \"$BRANCH_NAME\" docker_shell/port.list | awk '{print \$2}'").trim()
	// no port found
	if (testport == '') {
		echo "No port assigned to this branch"
		error "Your branch has no port assigned, please contact the project manager! failure: port not assigned"
	}
}
sh """
	cp -r target/distribution docker_shell
	sed -i "s#https://IP#http://${TESTIP}:${testport}#g" docker_shell/distribution/serverIp.properties
	chmod 755 docker_shell/*.sh
	mkdir -p /home/jenkins/.kube/
	cp docker_shell/config /home/jenkins/.kube/
	docker_shell/mkdocker.sh ${testport} ${BUILD_NUMBER} ${JOB_NAME}
	"""

Before building the image we need to know which branch the code is on: images are named after their branch, and so are the services started later. A file records the port assigned to each branch; this is port.list:

(screenshot: contents of port.list)

Once the port number has been read from this file, the image build begins. First the compiled jar and configuration are copied into the Docker working directory; any configuration that needs adjusting can be tweaked here. The shell scripts are then made executable. The stage needs k8s, but for reasons we never pinned down the kubectl command kept failing inside the agent image, so an ops colleague suggested simply copying the k8s config into the expected location. With preparation done, the image is built and the service started, passing in three values: the branch port, the build number, and the build name (which includes the branch name). All three are key parameters of the image-build scripts.
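The port.list contents in the screenshot aren't recoverable here, but the file is just whitespace-separated branch-to-port pairs. A hypothetical example, with the same grep/awk lookup the pipeline performs:

```shell
# hypothetical port.list: one "branch port" pair per line
cat > port.list <<'EOF'
master 30080
develop 30081
feature-login 30082
EOF

BRANCH_NAME="develop"
# same lookup the pipeline runs to find the branch's port
testport=$(grep "$BRANCH_NAME" port.list | awk '{print $2}')
echo "$testport"    # -> 30081
```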

Here is the script:

#!/bin/bash -l
this_dir=`pwd`
echo "$this_dir ,this is pwd"
echo "$0 ,this is $0"
dirname $0|grep "^/" >/dev/null
if [ $? -eq 0 ];then
    this_dir=`dirname $0`
else
    dirname $0|grep "^." >/dev/null
    retval=$?
    if [ $retval -eq 0 ];then
        this_dir=`dirname $0|sed "s#^.#$this_dir#"`
    else
        this_dir=`dirname $0|sed "s#^#$this_dir/#"`
    fi
fi

#testport
TESTPORT=$1
#version
SVNVERSION=$2
# image name (job name, with "/" replaced by "-")
IMAGE_NM=${3//\//-}

HARBOR_IP='172.17.1.XXX'
HARBOR_USER='jenkins'
HARBOR_USER_PASSWD='******'


# remove the old image
IMAGE_ID=`docker images | grep "$IMAGE_NM" | awk '{print $3}'`
if [ -n "$IMAGE_ID" ];then
    docker rmi $IMAGE_ID
    echo "$IMAGE_ID delete"
fi

# log in to the image registry
docker login -u ${HARBOR_USER} -p ${HARBOR_USER_PASSWD} ${HARBOR_IP}

# build the new image
cd $this_dir
docker build -t "$IMAGE_NM":${SVNVERSION} .

# check whether the build succeeded
IMAGE_ID=`docker images | grep "$IMAGE_NM" | awk '{print $3}'`
if [ -n "$IMAGE_ID" ];then
    echo "$IMAGE_ID $IMAGE_NM image built successfully"
    # log out of the registry
    docker logout ${HARBOR_IP}
else
    # the image build failed, exit
    echo "$IMAGE_ID $IMAGE_NM image build failed"
    # log out of the registry
    docker logout ${HARBOR_IP}
    exit 1
fi

# configure the k8s manifest
echo "configure the k8s manifest"
./k8sport.sh $TESTPORT $SVNVERSION $IMAGE_NM

This script is much the same as the one I shared in the previous post, except that the k8s manifest configuration has been split out. It used to be inline, but it grew too long, and splitting by function is clearer. Here is the k8s script:

#!/bin/bash -l

# externally exposed k8s port (one per branch)
TESTPORT=$1
#version
SVNVERSION=$2
# image name
IMAGE_NM=$3
# deployment name
DEPLOYMENTNM="msa-$IMAGE_NM"
# k8s service port / container port (test)
SERVICEPORT='8890'

# copy the k8s manifest template
cp datamgt.yaml $IMAGE_NM.yaml

# substitute the parameters
echo 'set the node'
sed -i "s#node-x#$MY_NODE_NAME#g" $IMAGE_NM.yaml
echo 'set the service name'
sed -i "s#k8sservicenm#$IMAGE_NM#g" $IMAGE_NM.yaml
echo 'set the deployment name'
sed -i "s#deploymentnm#$DEPLOYMENTNM#g" $IMAGE_NM.yaml
echo 'set the image version for the container'
sed -i "s#imagetemp#$SVNVERSION#g" $IMAGE_NM.yaml
echo 'set the k8s service port and container port'
sed -i "s#service-port#$SERVICEPORT#g" $IMAGE_NM.yaml
# expose the branch-specific external port
echo "set the external port"
sed -i "s#k8s-port#$TESTPORT#g" $IMAGE_NM.yaml

# delete the old deployment
DEPLOY_NM=`kubectl get deployment | grep "$DEPLOYMENTNM" | awk '{print $1}'`
if [ -n "$DEPLOY_NM" ];then
    kubectl delete -f $IMAGE_NM.yaml
    echo "$DEPLOY_NM delete"
fi

# apply the k8s manifest
echo "apply the k8s manifest"
kubectl create -f $IMAGE_NM.yaml

The k8s YAML file has to be written in advance. For reusability it is written as a template with placeholder variables; this shell script substitutes the required values with sed to produce an executable manifest, and running that manifest brings up a container for the branch's code.
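The substitution step can be seen in isolation; placeholder names are taken from the template, while the scratch file and values here are hypothetical:

```shell
# a two-line excerpt of the manifest template (hypothetical scratch file)
cat > demo.yaml <<'EOF'
  name: k8sservicenm
  nodePort: k8s-port
EOF
IMAGE_NM="datamgt-develop"   # hypothetical branch image name
TESTPORT=30081               # hypothetical branch port
# same in-place substitutions k8sport.sh performs on the real template
sed -i "s#k8sservicenm#$IMAGE_NM#g" demo.yaml
sed -i "s#k8s-port#$TESTPORT#g" demo.yaml
cat demo.yaml
```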

The YAML template:

apiVersion: v1
kind: Service
metadata:
  name: k8sservicenm
  labels:
    app: k8sservicenm
spec:
  type: NodePort
  selector:
    app: k8sservicenm
  ports:
  - name: http
    port: service-port
    nodePort: k8s-port
    targetPort: service-port
---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: deploymentnm
  labels:
    app: deploymentnm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8sservicenm
  template:
    metadata:
      labels:
        app: k8sservicenm
    spec:
      nodeSelector:
        name: node-x
      containers:
      - name: k8sservicenm
        image: k8sservicenm:imagetemp
        ports:
        - containerPort: service-port
      imagePullSecrets:
        - name: registry-179

At the end of the whole stage I added a post block that runs always.sh, to make sure resources are released even if the build or service start fails midway.

#!/bin/bash -l

# deployment name (job name, with "/" replaced by "-")
DEPLOYMENTNM=${1//\//-}
# image tag
TEMPTAG=$2
# delete the running deployment
DEPLOY_NM=`kubectl get deployment | grep "$DEPLOYMENTNM" | awk '{print $1}'`

if [ -n "$DEPLOY_NM" ] && [ -f "docker_shell/$DEPLOYMENTNM.yaml" ];then
    kubectl delete -f docker_shell/$DEPLOYMENTNM.yaml
    echo "$DEPLOY_NM deleted!!!"
    docker rmi $DEPLOYMENTNM:$TEMPTAG
    echo "$DEPLOYMENTNM:$TEMPTAG image deleted!!!"
else
    echo "$DEPLOY_NM not deleted!!!"
fi

Next comes the master image build and service start, which is almost identical to the branch version. The difference is that the master service is never shut down: it acts as the test environment and must stay running, so the kubectl command that launches it changes from create to apply. I didn't want to add a branch check inside the file, so I duplicated the script and changed just that one command.
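For what it's worth, the single-script alternative mentioned above (one branch check instead of two near-identical files) could be sketched like this; the function name is hypothetical:

```shell
# pick the kubectl verb from the branch name instead of duplicating
# the whole script (sketch of the alternative, not what we run)
k8s_verb() {
    if [ "$1" = "master" ]; then
        echo apply    # master service stays up; apply updates it in place
    else
        echo create   # branch services are created fresh each run
    fi
}

k8s_verb master      # -> apply
k8s_verb feature-x   # -> create
# usage in the script would be: kubectl $(k8s_verb "$BRANCH_NAME") -f $IMAGE_NM.yaml
```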

The JMeter tests come next. The team writes test scripts per module, so there is an interactive prompt asking for the script name before JMeter runs. Again there are two paths, master and branch; on a branch, a failing test tears down the newly started service container. Before the tests run, a MySQL JDBC jar is copied into the JMeter lib directory, because the test scripts restore database content and need the driver.

// automated API tests
stage('Branch API test') {
	when { not { branch 'master' } }
	steps {
		echo "starting Branch API test ......"
		container('jnlp-slave') {
			// run the test script matching the given service name
			script {
				def input_result = input message: 'Please enter the service name', ok: 'OK', parameters: [string(defaultValue: '', description: 'Service name to test:', name: 'ServiceName')]
				if (input_result == '') {
					echo "Please enter a valid service name!"
					return false
				}
				// service name
				ServiceName = input_result
			}
			sh """
				sudo cp docker_shell/mysql-connector-java-5.1.47.jar /usr/local/apache-jmeter-5.0/lib
				sleep 20s
				jmeter -Jprotocol=http -Jip=${TESTIP} -Jport=${testport} -n -t jmeter/${ServiceName}.jmx -l jmeter/${ServiceName}.jtl -e -o jmeter/ResultReport
			  """
			perfReport sourceDataFiles: "jmeter/${ServiceName}.jtl", errorFailedThreshold: 5
		}
	}
	post {
		// run on failure
		failure {
			echo "Branch ${BRANCH_NAME}: removing service......"
			container('jnlp-slave') {
				sh """
				docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
				"""
			}
		}
	}
}

The test reports are then archived on the master server for easy review; this version adds a graphical analysis, as shown:

(screenshot: JMeter graphical result report)

On the branch path the job then pauses: a prompt announces that the branch's service is up and gives the address for manual testing or front-end integration. Once testing is done, clicking confirm ends the job and shuts the branch service down, releasing its resources. There is a post block here too, so resources are released whether the stage succeeds or fails.

// manual verification of development branches
stage('UT test') {
  when { not { branch 'master' } }
  steps {
    echo "I am branch ......"
    script {
      if ("$BRANCH_NAME" != 'develop') {
        emailext(
            subject: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) development branch, manual test notification",
            body: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) is ready for manual testing.\nPlease finish this job once testing is done!!!",
            recipientProviders: [[$class: 'RequesterRecipientProvider'], [$class: 'DevelopersRecipientProvider']]
        )
        input message: "Please test at the following address:\nhttp://${TESTIP}:${testport}\nConfirm once the tests pass", ok: 'Confirm'
      }
    }
  }
  post {
    // always run
    always {
      echo "Branch ${BRANCH_NAME}: removing service......"
      container('jnlp-slave') {
        script {
          if ("$BRANCH_NAME" != 'develop') {
            sh """
            docker_shell/always.sh ${JOB_NAME} ${BUILD_NUMBER}
            """
          }
        }
      }
    }
  }
}

On master, the image is tagged and pushed to the registry instead:

// push the image
stage('Push Image') {
  when { branch 'master' }
  steps {
    echo "starting Push Image ......"
    // split param.server to get the parameters: IP, username, password
    container('jnlp-slave') {
      timeout(5) {
        script {
          emailext(
              subject: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) release tag input notification",
              body: "Pipeline '${JOB_NAME}' (${BUILD_NUMBER}) needs a release tag.\nPlease go to ${env.BUILD_URL} to enter it and complete acceptance testing",
              recipientProviders: [[$class: 'RequesterRecipientProvider'], [$class: 'DevelopersRecipientProvider']]
          )
          def input_result = input message: 'Please enter the tag', ok: 'OK', parameters: [string(defaultValue: '', description: 'Tag:', name: 'TAG')]
          //echo "you input is " + input_result + ", to do sth"
          if (input_result == '') {
            echo "No tag entered !!"
            return false
          }
          // the tag
          gittag = input_result
        }
      }
      withCredentials([usernamePassword(credentialsId: '******-***-***-****-**********', passwordVariable: 'GIT_PASSWORD', usernameVariable: 'GIT_USERNAME')]) {
        sh """
        echo ${gittag}
        git config --global user.email "jenkins@xxxxxx.com"
        git config --global user.name "jenkins"
        git tag -a ${gittag} -m 'Release version ${gittag}'
        git push http://${GIT_USERNAME}:${GIT_PASSWORD}@172.17.1.xxx/X/znz/zc-ht/datamgt.git --tags
        echo 'git tag added successfully'
        docker_shell/pushimage.sh ${gittag} ${BUILD_NUMBER} ${JOB_NAME}
        """
      }
    }
  }
}

The thinking behind tagging and pushing is this: we use branch-based development, so each team member's feature lives on their own branch, and no single branch has the complete code. Branch CI therefore only builds a temporary image; when the branch run ends its container is destroyed, and the stale image is cleaned up on the next start. When a branch is merged to master and master CI passes all the checks, the image is tagged and pushed to the registry (we run our own Harbor). I had tried reading the image list from Harbor to drive deployments to test or production, but found no good way to do it, so I settled on a compromise: tag the code in GitLab, and use the same tag on the image pushed to the registry. That yields an image-tag list, and for any tag you can also find the corresponding code in GitLab, pinpointing exactly which code a production version runs, which is very effective for locating and fixing live bugs.
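The resulting naming convention maps one git tag to one registry image (all values below are hypothetical):

```shell
# a git tag entered in the Push Image stage becomes the Harbor image tag,
# so the registry listing doubles as a release list (hypothetical values)
GITTAG="v1.2.0"
HARBOR_REGISTRY="172.17.1.xxx/xxxxx/"
IMAGE_NM="datamgt-master"
echo "${HARBOR_REGISTRY}${IMAGE_NM}:${GITTAG}"   # -> 172.17.1.xxx/xxxxx/datamgt-master:v1.2.0
```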

Finally, the image push script:

#!/bin/bash -l
this_dir=`pwd`
echo "$this_dir ,this is pwd"
echo "$0 ,this is $0"
dirname $0|grep "^/" >/dev/null
if [ $? -eq 0 ];then
    this_dir=`dirname $0`
else
    dirname $0|grep "^." >/dev/null
    retval=$?
    if [ $retval -eq 0 ];then
        this_dir=`dirname $0|sed "s#^.#$this_dir#"`
    else
        this_dir=`dirname $0|sed "s#^#$this_dir/#"`
    fi
fi

#version
SVNVERSION=$1
#tempversion
TMP_VER=$2
# image name (job name, with "/" replaced by "-")
IMAGE_NM=${3//\//-}
# deployment name
DEPLOYMENTNM="msa-$IMAGE_NM"

HARBOR_IP='172.17.1.xxx'
HARBOR_USER='jenkins'
HARBOR_USER_PASSWD='********'
HARBOR_REGISTRY='172.17.1.xxx/xxxxx/'


# check that the deployment exists
DEPLOY_NM=`kubectl get deployment | grep "$DEPLOYMENTNM" | awk '{print $1}'`
if [ -n "$DEPLOY_NM" ]; then
    # check that the image exists
    IMAGE_ID=`docker images | grep "$IMAGE_NM" | awk '{print $3}'`
    if [ -n "$IMAGE_ID" ];then
        # log in to the image registry
        echo "registry login"
        docker login -u ${HARBOR_USER} -p ${HARBOR_USER_PASSWD} ${HARBOR_IP}
        # tag the new image
        echo "Make Tag to $IMAGE_ID $IMAGE_NM"
        docker tag "$IMAGE_NM":${TMP_VER} "$HARBOR_REGISTRY""$IMAGE_NM":${SVNVERSION}
        echo "Make Tag successful"
        # push the image
        echo "Push $IMAGE_NM"
        docker push "$HARBOR_REGISTRY""$IMAGE_NM":${SVNVERSION}
        echo "Push $IMAGE_NM successful"
        # log out of the registry
        docker logout ${HARBOR_IP}
    else
        # tagging failed
        echo "Make Tag Failed"
        exit 1
    fi
else
    # tagging failed
    echo "Make Tag Failed"
    exit 1
fi

That's the Kubernetes-backed pipeline script. Pipeline scripts can take many forms and there is no fixed formula; the design should follow your own business needs. I hope this helps.
