Eyes grow clearer for the tears they have shed; hearts grow kinder for the storms they have weathered.

First, a word on why Jenkins comes up while learning Kubernetes. Compare the diagrams below.
Traditional CI/CD:
[figure: traditional CI/CD pipeline]
The ultimate goal of CI/CD is to reduce manual intervention and improve manageability.
Non-containerized CI/CD:
[figure: non-containerized CI/CD pipeline]
Containerized CI/CD:
[figure: containerized CI/CD pipeline]
Containerization keeps environments highly consistent, and the same image can be deployed to any environment.

Jenkins and Kubernetes CI/CD:
[figure: Jenkins with Kubernetes CI/CD pipeline]
Jenkins calls the Kubernetes API (which is why we later configure the Kubernetes endpoint in Jenkins) to create a Pod. The whole git → Jenkins → Maven flow, pulling the code, building, and pushing the image, runs inside that Pod, and when the job finishes the Pod is automatically released and destroyed.
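For a concrete picture, the throwaway agent Pod that the Kubernetes plugin creates for a build looks roughly like the sketch below. The container names, images, and the build namespace are illustrative assumptions based on the setup later in this article, not an exact copy of what the plugin generates:

apiVersion: v1
kind: Pod
metadata:
  generateName: jenkins-agent-     # the plugin generates a unique name per build
  namespace: build                 # the namespace Jenkins is later allowed to create Pods in
spec:
  restartPolicy: Never             # the Pod is disposable and is deleted when the build finishes
  containers:
  - name: jnlp                     # inbound agent container that connects back to the Jenkins master
    image: jenkins/jnlp-slave:alpine
  - name: maven                    # git clone, mvn build, and image push steps run here
    image: maven:3.6-jdk-8
    command: ["cat"]               # keep the container alive so build steps can exec into it
    tty: true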

Deploying Jenkins to the Kubernetes cluster

1. Deploy persistent storage volumes

One note: a previous article covered deploying the cluster with kubeadm; this article reuses that environment, with one master and two nodes.
master:192.168.26.10
node1:192.168.26.11
node2:192.168.26.12
Installing NFS
Here node1 provides the NFS service (in production it should be a dedicated server). Run the following on node1.

# yum -y install nfs-utils rpcbind

Multiple NFS directories are created for mounting, because once a PVC releases its binding to a PV, that PV (with the default Retain reclaim policy) still cannot be claimed by another PVC.

# mkdir /data/{nfs1,nfs2,nfs3,nfs4,nfs5,nfs6,nfs7,nfs8,nfs9,nfs10} -pv && chmod 777 /data/nfs*
# vim /etc/exports
/data/nfs1 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash) 
/data/nfs2 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash) 
/data/nfs3 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash) 
/data/nfs4 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash) 
/data/nfs5 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/data/nfs6 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/data/nfs7 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/data/nfs8 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/data/nfs9 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/data/nfs10 *(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)

Enable and start NFS

# exportfs -rv
# systemctl enable rpcbind nfs-server
# systemctl start nfs-server rpcbind
# rpcinfo -p
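Optionally, verify from the master or the other node that the exports are visible; a quick check, assuming nfs-utils is also installed on that node:

# yum -y install nfs-utils      # client tools, only if not already installed
# showmount -e node1            # should list /data/nfs1 through /data/nfs10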

Using persistent volumes
You can use the explain subcommands below to study how PVs are defined. These are not steps you need to run; just hit Enter on them to see the built-in help.
#kubectl explain PersistentVolume
#kubectl explain PersistentVolume.spec
#kubectl explain PersistentVolume.spec.accessModes

On the master node, create the PVs from a YAML file. (Note that PVs are cluster-level resources and do not belong to any namespace. Because name resolution is already configured in this cluster, the server field below uses the hostname node1.)

# cat nfs-pv.yaml   # contents shown below; the file is in the volume/ subdirectory
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/nfs1
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/nfs2
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/nfs3
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/nfs4
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/nfs5
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv006
  labels:
    name: pv006
spec:
  nfs:
    path: /data/nfs6
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv007
  labels:
    name: pv007
spec:
  nfs:
    path: /data/nfs7
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv008
  labels:
    name: pv008
spec:
  nfs:
    path: /data/nfs8
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv009
  labels:
    name: pv009
spec:
  nfs:
    path: /data/nfs9
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv010
  labels:
    name: pv010
spec:
  nfs:
    path: /data/nfs10
    server: node1
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi

Create and view the PVs

[root@master volume]# kubectl apply -f nfs-pv.yaml --record
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
persistentvolume/pv004 created
persistentvolume/pv005 created
persistentvolume/pv006 created
persistentvolume/pv007 created
persistentvolume/pv008 created
persistentvolume/pv009 created
persistentvolume/pv010 created

[root@master volume]# kubectl  get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   16s
pv002   1Gi        RWO,RWX        Retain           Available                                   16s
pv003   1Gi        RWO,RWX        Retain           Available                                   16s
pv004   1Gi        RWO,RWX        Retain           Available                                   16s
pv005   1Gi        RWO,RWX        Retain           Available                                   16s
pv006   1Gi        RWO,RWX        Retain           Available                                   16s
pv007   2Gi        RWO,RWX        Retain           Available                                   16s
pv008   1Gi        RWO,RWX        Retain           Available                                   16s
pv009   1Gi        RWO,RWX        Retain           Available                                   16s
pv010   1Gi        RWO,RWX        Retain           Available                                   16s
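As noted above, once one of these PVs is bound and its PVC is later deleted, the PV moves to the Released state and is not reused automatically. A minimal sketch of making such a PV claimable again by clearing its claimRef (pv001 here is only an example name):

# kubectl patch pv pv001 -p '{"spec":{"claimRef": null}}'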

#############################################
Note: the following steps do not need to be performed.

Create a YAML file for the PVC. (Note: if you follow these steps and the Pod stays stuck in the Pending state, change the accessModes: ["ReadWriteMany"] field below to accessModes: ["ReadWriteOnce"].)

# cat volume/nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 2Gi

Create and view the PVC

[root@master volume]# kubectl apply -f nfs-pvc.yaml --record
persistentvolumeclaim/mypvc created
[root@master volume]# kubectl  get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv007    2Gi        RWO,RWX                       5s

#############################################################

2. Deploy Jenkins

Create the NGINX ingress controller

[root@master /]# cat mandatory.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10

---

Create the resources from the YAML file

[root@master /]# kubectl  apply -f mandatory.yaml 

# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP             NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-689498bc7c-xmf8j   1/1     Running   0          9s    10.244.2.226   node2   <none>           <none>
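Optionally, check the controller's startup logs to confirm it came up cleanly; this is just a sanity check, not a required step:

# kubectl -n ingress-nginx logs deployment/nginx-ingress-controller --tail=20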

Create the Jenkins StatefulSet. (In the Jenkins deployment YAML below, the StatefulSet already defines a PVC template, so the PVC is created automatically; the earlier step of creating a PVC from a YAML file should therefore be skipped.)

# cat stateful/jenkins.yaml  
# a dedicated namespace for isolation
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins

---
# Service that exposes Jenkins
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: agent
    port: 50000
    protocol: TCP

---
# Ingress that routes requests for different URL paths to different backend Services
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
  - http:
      paths:
      - path: /jenkins
        backend:
          serviceName: jenkins
          servicePort: 80

---
# StatefulSet that creates stable Pods with unique identities; this is the core Jenkins configuration
apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: jenkins
  namespace: jenkins
spec:                                      # core StatefulSet configuration
  selector:
    matchLabels:                           # label selector
      app: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:                          # update strategy
    type: RollingUpdate                    # rolling update
  template:                                # Pod template
    metadata:                              # metadata
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10              # termination grace period in seconds
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        imagePullPolicy: Always                     # image pull policy
        ports:
        - containerPort: 8080
        - containerPort: 50000
        resources:                                # resource requests and limits
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:                                   # environment variables for the container
        - name: JENKINS_OPTS
          value: --prefix=/jenkins
        - name: LIMITS_MEMORY
          valueFrom:
            resourceFieldRef:                    # reference a container resource field
              resource: limits.memory           # the container's memory limit
              divisor: 1Mi
        - name: JAVA_OPTS
          value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
        volumeMounts:                           # volume mounts
        - name: jenkins-home
          mountPath: /var/jenkins_home
        livenessProbe:                           # liveness probe
          httpGet:
            path: /jenkins/login
            port: 8080
          initialDelaySeconds: 60                # delay before the first probe
          timeoutSeconds: 5                                      
          failureThreshold: 12 # ~2 minutes
        readinessProbe:                           # readiness probe
          httpGet:
            path: /jenkins/login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12 # ~2 minutes
      securityContext:                  # security context
        fsGroup: 1000
  volumeClaimTemplates:        # PVC template; a PVC is created automatically for the replica
  - metadata:
      name: jenkins-home
    spec:
      accessModes: [ "ReadWriteOnce" ]          # single-node read-write access
      resources:
        requests:
          storage: 2Gi

Next, create the resources and check their status:

# kubectl apply -f jenkins.yaml --record
namespace/jenkins created
service/jenkins created
ingress.extensions/jenkins created
statefulset.apps/jenkins created
# kubectl get pods -n jenkins 
NAME        READY   STATUS    RESTARTS   AGE
jenkins-0   0/1     Running   0          28s

Check again after a moment:
# kubectl get pods -n jenkins
NAME        READY   STATUS    RESTARTS   AGE
jenkins-0   1/1     Running   0          7m55s

Troubleshooting

For notes on using StatefulSets, readers can refer to this article:
https://www.cnblogs.com/wn1m/p/11289079.html
Note:

If jenkins-0 shows a Pending status here, the PVC has not been bound to a PV. If no PV in the cluster satisfies the PVC's requirements, the PVC stays Pending indefinitely until an administrator creates a matching PV. Once a PV is bound to a PVC, it is exclusively owned by that PVC and cannot be bound to any other PVC.

After the StatefulSet is created, its PVCs are named "<PVC template name>-<StatefulSet name>-<ordinal>" and should be in the Bound state.

If binding failed, describing the PVC reports "no persistent volumes available for this claim and no storage class is set", as shown below:

[root@master edu-kubernetes]# kubectl describe pvc
Name:          mypvc
Namespace:     default
StorageClass:  
Status:        Pending
Volume:        
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"mypvc","namespace":"default"},"spec":{"accessModes"...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      
Access Modes:  
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason         Age               From                         Message
  ----    ------         ----              ----                         -------
  Normal  FailedBinding  5s (x3 over 35s)  persistentvolume-controller  no persistent volumes available for this claim and no storage class is set
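When this happens, it can help to compare the claim's requested size and access modes against what the existing PVs offer; an optional check, not part of the original steps:

# kubectl get pv -o custom-columns=NAME:.metadata.name,CAPACITY:.spec.capacity.storage,MODES:.spec.accessModes,STATUS:.status.phase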

If the Pod above is in the Running state, you can proceed with the following steps.

# kubectl get pvc -n jenkins
NAME                     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jenkins-home-jenkins-0   Bound    pv007    2Gi        RWO,RWX                       50s

# kubectl get ingress -n jenkins
NAME      HOSTS   ADDRESS   PORTS   AGE
jenkins   *                 80      84s



To make the ingress controller reachable from outside the cluster, expose it with a NodePort Service:

[root@master /]# cat service-nodeport.yaml 
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30080
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 30043
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

# kubectl  apply -f service-nodeport.yaml
service/ingress-nginx created

# kubectl  get svc -n ingress-nginx
NAME            TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx   NodePort   10.111.117.150   <none>        80:30080/TCP,443:30043/TCP   40s

# curl http://Node:Port/jenkins

# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-689498bc7c-xmf8j   1/1     Running   0          7m34s   10.244.2.226   node2   <none>           <none>

Open Jenkins in a browser: http://node1:30080/jenkins
[screenshot]
Retrieve the administrator password:

[root@master /]# kubectl get pods -n jenkins
NAME        READY   STATUS    RESTARTS   AGE
jenkins-0   1/1     Running   0          131m

[root@master /]# kubectl  exec -it jenkins-0 cat /var/jenkins_home/secrets/initialAdminPassword -n jenkins
82841678e8f845edbdf45c579cf9eb55

Enter the retrieved administrator password in the Jenkins UI and finish the setup wizard:
[screenshots: Jenkins setup wizard]

At this point Jenkins is deployed on Kubernetes, but no RBAC permissions have been added yet; that is handled in the next step, configuring Jenkins to work with Kubernetes.

3. Configure the Jenkins Kubernetes plugin

Install the Blue Ocean plugin.
Install the Kubernetes plugin; clicking Test Connection at this point should report a Forbidden-type error.

Manage Jenkins -> Configure System:
Add a new cloud at the bottom
[screenshot]
Select: Kubernetes
[screenshots]
Look up the Kubernetes API address:

[root@master stateful]# kubectl cluster-info
Kubernetes master is running at https://192.168.1.200:6443
KubeDNS is running at https://192.168.1.205:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

Configure a ServiceAccount for the Jenkins Kubernetes plugin

~# cat serviceaccount/jenkins.yml    # explained below; this is the earlier stateful/jenkins.yaml with a ServiceAccount and RBAC added
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins

---

apiVersion: v1
kind: Namespace
metadata:
  name: build

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
  namespace: build
rules:
- apiGroups: [""]
  resources: ["pods", "pods/exec", "pods/log"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]

---

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  namespace: build
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins

---

apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    app: jenkins
  ports:
  - name: http
    port: 80
    targetPort: 8080
    protocol: TCP
  - name: agent
    port: 50000
    protocol: TCP

---

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  namespace: jenkins
  annotations:
    kubernetes.io/ingress.class: "nginx"
    ingress.kubernetes.io/ssl-redirect: "false"
    ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-request-buffering: "off"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
  - http:
      paths:
      - path: /jenkins
        backend:
          serviceName: jenkins
          servicePort: 80

---

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: jenkins
  namespace: jenkins
spec:
  selector:
    matchLabels:
      app: jenkins
  serviceName: jenkins
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      containers:
      - name: jenkins
        image: jenkins/jenkins:lts-alpine
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        - containerPort: 50000
        resources:
          limits:
            cpu: 1
            memory: 1Gi
          requests:
            cpu: 0.5
            memory: 500Mi
        env:
        - name: JENKINS_OPTS
          value: --prefix=/jenkins
        - name: LIMITS_MEMORY
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
              divisor: 1Mi
        - name: JAVA_OPTS
          value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85
        volumeMounts:
        - name: jenkins-home
          mountPath: /var/jenkins_home
        livenessProbe:
          httpGet:
            path: /jenkins/login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12 # ~2 minutes
        readinessProbe:
          httpGet:
            path: /jenkins/login
            port: 8080
          initialDelaySeconds: 60
          timeoutSeconds: 5
          failureThreshold: 12 # ~2 minutes
      securityContext:
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: jenkins-home
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 2Gi

This creates a second namespace, build, and a ServiceAccount named jenkins in the jenkins namespace.
It creates a Role in the build namespace that grants the permissions builds need, plus a RoleBinding.
The RoleBinding attaches that Role to the ServiceAccount in the jenkins namespace.
As a result, Jenkins can create Pods in the build namespace but cannot do anything in its own jenkins namespace, which gives reasonable assurance that problems inside builds will not affect Jenkins itself. Defining a ResourceQuota and LimitRange for these two namespaces would make the setup more robust still (see the sketch below).
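A minimal sketch of such limits for the build namespace; the object names and numbers here are illustrative assumptions, not values from this article:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: build-quota
  namespace: build
spec:
  hard:
    pods: "10"                # cap the number of concurrent build Pods
    requests.cpu: "4"
    requests.memory: 4Gi
    limits.cpu: "8"
    limits.memory: 8Gi
---
apiVersion: v1
kind: LimitRange
metadata:
  name: build-limits
  namespace: build
spec:
  limits:
  - type: Container
    default:                  # applied when a container declares no limits
      cpu: 500m
      memory: 512Mi
    defaultRequest:           # applied when a container declares no requests
      cpu: 250m
      memory: 256Mi

With a CPU and memory quota in place, every container in the namespace must declare requests and limits, which is exactly what the LimitRange defaults provide.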

[figure]

# kubectl apply -f serviceaccount/jenkins.yml --record 
namespace/jenkins unchanged
namespace/build created
serviceaccount/jenkins created
role.rbac.authorization.k8s.io/jenkins created
rolebinding.rbac.authorization.k8s.io/jenkins created
service/jenkins unchanged
ingress.extensions/jenkins unchanged
statefulset.apps/jenkins configured

# kubectl -n jenkins rollout status sts jenkins
Waiting for 1 pods to be ready...
statefulset rolling update complete 1 pods at revision jenkins-5d6c46d5d...

Note:
sts is short for statefulset

Open in a browser: http://node2:30080/jenkins

Get the password
A new login is required, and I had not saved the password earlier:

# kubectl -n jenkins exec jenkins-0 -it -- cat /var/jenkins_home/secrets/initialAdminPassword

Continue the add-cloud configuration from before:
This time fill in the namespace and click Test Connection.

[screenshot]

Fill in the Jenkins URL (JNLP address):
When we create a job that uses Kubernetes Pods, an extra container is added to the agent Pod. That container uses JNLP to communicate with the Jenkins master.
We need to specify a valid address that JNLP can use to connect to the master. Because the agent Pods run in the build namespace while the master runs in the jenkins namespace, the longer DNS name that includes both the Service name (jenkins) and the namespace (jenkins) must be used. On top of that, the master is configured to serve requests under the root path /jenkins.
In short, the full address the agent Pods use to reach the Jenkins master has the form http://[SERVICE_NAME].[NAMESPACE]/[PATH]. Since all three elements are jenkins, the actual address is http://jenkins.jenkins/jenkins.
Type it into the Jenkins URL field and click the Save button.
[screenshot]
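A quick way to confirm that this DNS name resolves and answers from the build namespace; an optional check, assuming the curlimages/curl image can be pulled:

# kubectl -n build run jnlp-check --rm -it --restart=Never --image=curlimages/curl -- curl -sI http://jenkins.jenkins/jenkins/login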

This completes the configuration.
