Reflections

I recently came across a nice DevOps project on GitHub. I expected to finish it in about two days, but having had little prior exposure to this area, it took nearly five full days to step through every pitfall. I hit plenty of strange problems, dug through a lot of reference material, and since few people around me know this field well, it took several detours before I finally finished this "assignment". All of the services are deployed as pods, so some basic k8s knowledge is recommended; it makes troubleshooting much less painful.

DevOps Concepts

Automation is a key part of DevOps. By automating builds, tests, releases, and similar steps, we free up developers' hands and improve efficiency. This is also the most important part of what we call continuous integration (CI) and continuous deployment (CD). Automation reduces human error, and once configured correctly, it keeps executing correctly. It removes repetitive work so developers can spend their time on more meaningful things. Auto-generating code during development, starting services per environment, automated packaging, deployment, releases, alerting, and so on all fall under the scope of automation.

From my experience so far, GitLab's runner and Jenkins's agent are the same concept. There are already many excellent tools for automating delivery pipelines, the most widely used combination being gitlab + Jenkins, covering everything from packaging to release and rollback. This article, however, is not about Jenkins but about gitlab-runner. Why use gitlab-runner instead of Jenkins?

  1. Curiosity: I wanted to try a different tool. Since there is another option, why not give it a shot? It might turn out to be great.

  2. Comparison: Jenkins is powerful, has countless plugins, and can do almost anything, but for a single, simple need I do not always want all those extra features. Small and focused fits better here.

  3. Kinship: GitLab and gitlab-runner come from the same family, so they work together naturally.

Hands-on Walkthrough


Components to Deploy and Their Roles

Almost all components are deployed on k8s; only Harbor is deployed locally.

postgresql image version: postgres:12.6-alpine

The PostgreSQL database backs GitLab for high availability: it stores GitLab's user information, project data, and so on, preventing data loss if the GitLab service goes down. The default GitLab database config file is /var/opt/gitlab/gitlab-rails/etc/database.yml

Reference: https://www.jianshu.com/p/872c1e62021a

redis image version: redis:6.2.0-alpine3.13

Redis serves as GitLab's cache database.

gitlab image version: gitlab/gitlab-ce:13.8.6-ce.1

GitLab is an open-source repository management system: a web service built on top of Git as the code management tool.

gitlab-runner image version: gitlab/gitlab-runner:v13.10.0

GitLab-Runner works together with GitLab CI. Typically, every project in GitLab defines its own integration script to automate software integration tasks. When the project's repository changes, for example when someone pushes code, GitLab notifies GitLab CI. GitLab CI then finds the runners associated with that project and tells them to pull the latest code and run the predefined script.
 So GitLab-Runner is simply the thing that executes integration scripts. Think of runners as workers and GitLab CI as their dispatch center: every worker registers with GitLab CI and declares which project it serves; when that project changes, GitLab CI notifies the matching workers to run the integration script.


dind (docker in docker) image version: docker:19-dind

dind provides the Docker side of the CI pipeline. Put simply, it runs a docker daemon inside a docker container and exposes it as a service. Every running container is a process managed by that docker daemon; images and containers live in an isolated environment, keeping the host clean.

dind is generally set up in one of two ways (see the sketch below):

One is to reuse the host's docker.sock: with docker run -v /var/run/docker.sock:/var/run/docker.sock, the host's socket is mounted into the container, so docker commands run inside the container actually call the host's docker daemon. This is fast, but less secure.
The other is to start a docker:dind container a, then start a docker client container b whose docker host points at the daemon inside container a.
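
A minimal sketch of both modes, assuming the stock docker:19 images and the plaintext port 2375 (the names dind-a and dind-net are placeholders):

# mode 1: reuse the host daemon via its socket (fast, weaker isolation)
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:19 docker ps

# mode 2: a dedicated dind daemon (container a) plus a client (container b)
docker network create dind-net
docker run -d --name dind-a --privileged --network dind-net -e DOCKER_TLS_CERTDIR="" docker:19-dind
docker run --rm --network dind-net -e DOCKER_HOST=tcp://dind-a:2375 docker:19 docker info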

Node Planning

Node role    Node IP
master       10.0.0.110
Node01       10.0.0.51
Node02       10.0.0.16

1. Deploy NFS (for k8s data persistence)

# install nfs-utils
yum -y install nfs-utils

# create the NFS export directory
mkdir /nfs_dir
chown nobody.nobody /nfs_dir

# configure the NFS server
echo '/nfs_dir *(rw,sync,no_root_squash)' > /etc/exports

# restart the services
systemctl restart rpcbind.service
systemctl restart nfs-utils.service 
systemctl restart nfs-server.service 

# enable the NFS server on boot
systemctl enable nfs-server.service

# verify the NFS server is reachable
# showmount -e 10.0.0.110                 
Export list for 10.0.0.110:
/nfs_dir *
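
Before moving on, it is worth checking from a worker node that the export can actually be mounted; a quick sanity check (the /mnt mount point here is arbitrary):

# on node01/node02
yum -y install nfs-utils
showmount -e 10.0.0.110
mount -t nfs 10.0.0.110:/nfs_dir /mnt && touch /mnt/.rw-test && rm -f /mnt/.rw-test && umount /mnt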

2. Deploy nginx-ingress-controller

#[root@master yaml]# cat ingress-coll.yaml 
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-controller
  labels:
    app: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-controller
  labels:
    app: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-controller
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-controller
    namespace: ingress-nginx

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ingress-nginx
  name: nginx-ingress-lb
  namespace: ingress-nginx
spec:
  # DaemonSet need:
  # ----------------
  type: ClusterIP
  # ----------------
  # Deployment need:
  # ----------------
#  type: NodePort
  # ----------------
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: metrics
    port: 10254
    protocol: TCP
    targetPort: 10254
  selector:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  keep-alive: "75"
  keep-alive-requests: "100"
  upstream-keepalive-connections: "10000"
  upstream-keepalive-requests: "100"
  upstream-keepalive-timeout: "60"
  allow-backend-server-header: "true"
  enable-underscores-in-headers: "true"
  generate-request-id: "true"
  http-redirect-code: "301"
  ignore-invalid-headers: "true"
  log-format-upstream: '{"@timestamp": "$time_iso8601","remote_addr": "$remote_addr","x-forward-for": "$proxy_add_x_forwarded_for","request_id": "$req_id","remote_user": "$remote_user","bytes_sent": $bytes_sent,"request_time": $request_time,"status": $status,"vhost": "$host","request_proto": "$server_protocol","path": "$uri","request_query": "$args","request_length": $request_length,"duration": $request_time,"method": "$request_method","http_referrer": "$http_referer","http_user_agent":  "$http_user_agent","upstream-sever":"$proxy_upstream_name","proxy_alternative_upstream_name":"$proxy_alternative_upstream_name","upstream_addr":"$upstream_addr","upstream_response_length":$upstream_response_length,"upstream_response_time":$upstream_response_time,"upstream_status":$upstream_status}'
  max-worker-connections: "65536"
  worker-processes: "2"
  proxy-body-size: 20m
  proxy-connect-timeout: "10"
  proxy_next_upstream: error timeout http_502
  reuse-port: "true"
  server-tokens: "false"
  ssl-ciphers: ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA
  ssl-protocols: TLSv1 TLSv1.1 TLSv1.2
  ssl-redirect: "false"
  worker-cpu-affinity: auto

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app: ingress-nginx

---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
  annotations:
    component.version: "v0.30.0"
    component.revision: "v1"
spec:
  # Deployment need:
  # ----------------
#  replicas: 1
  # ----------------
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
        scheduler.alpha.kubernetes.io/critical-pod: ""
    spec:
      # DaemonSet need:
      # ----------------
      hostNetwork: true
      # ----------------
      serviceAccountName: nginx-ingress-controller
      priorityClassName: system-node-critical
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - ingress-nginx
              topologyKey: kubernetes.io/hostname
            weight: 100
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: type
                operator: NotIn
                values:
                - virtual-kubelet
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-beijing.aliyuncs.com/acs/aliyun-ingress-controller:v0.30.0.2-9597b3685-aliyun
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --enable-dynamic-certificates=true
            - --v=2
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
#          resources:
#            limits:
#              cpu: "1"
#              memory: 2Gi
#            requests:
#              cpu: "1"
#              memory: 2Gi
          volumeMounts:
          - mountPath: /etc/localtime
            name: localtime
            readOnly: true
      volumes:
      - name: localtime
        hostPath:
          path: /etc/localtime
          type: File
      nodeSelector:
        test/ingress-controller-ready: "true"
      tolerations:
      - operator: Exists

      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          mount -o remount rw /proc/sys
          sysctl -w net.core.somaxconn=65535
          sysctl -w net.ipv4.ip_local_port_range="1024 65535"
          sysctl -w fs.file-max=1048576
          sysctl -w fs.inotify.max_user_instances=16384
          sysctl -w fs.inotify.max_user_watches=524288
          sysctl -w fs.inotify.max_queued_events=16384
        image: registry.cn-beijing.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: Always
        name: init-sysctl
        securityContext:
          privileged: true
          procMount: Default

---
## Deployment need for aliyun'k8s:
#apiVersion: v1
#kind: Service
#metadata:
#  annotations:
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-id: "lb-xxxxxxxxxxxxxxxxxxx"
#    service.beta.kubernetes.io/alibaba-cloud-loadbalancer-force-override-listeners: "true"
#  labels:
#    app: nginx-ingress-lb
#  name: nginx-ingress-lb-local
#  namespace: ingress-nginx
#spec:
#  externalTrafficPolicy: Local
#  ports:
#  - name: http
#    port: 80
#    protocol: TCP
#    targetPort: 80
#  - name: https
#    port: 443
#    protocol: TCP
#    targetPort: 443
#  selector:
#    app: ingress-nginx
#  type: LoadBalancer

Start the deployment:

# kubectl  apply -f aliyun-ingress-nginx.yaml 
namespace/ingress-nginx created
serviceaccount/nginx-ingress-controller created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-controller created
service/nginx-ingress-lb created
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
daemonset.apps/nginx-ingress-controller created

# If we now list the pods, we find nothing at all. Why is that?
# kubectl -n ingress-nginx get pod
Note that in the YAML above I used a node selector: only nodes carrying the label I specify are allowed to schedule and run these pods
      nodeSelector:
        test/ingress-controller-ready: "true"

# so let's label the nodes
# kubectl label node master test/ingress-controller-ready=true
master labeled
# kubectl label node node01 test/ingress-controller-ready=true
node01 labeled
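
Once the labels are applied, the DaemonSet pods should come up on those nodes. Because they run with hostNetwork, the controller's health endpoint can be probed directly on a labeled node (assuming port 10254 is reachable):

kubectl -n ingress-nginx get pod -o wide
curl -s http://10.0.0.110:10254/healthz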

3. Deploy Harbor (docker-compose)

!!! Plan the nodes carefully: nodes 1 and 2 run the ingress-controller and occupy port 80, while node 3 runs Harbor
10.0.0.110 git.test.com ## node 1, used to access gitlab
10.0.0.51 flask-test.devops.com # node 2, serves the flask-test app ingress
10.0.0.16 harbor.test.com ## node 3, runs harbor


In production there are generally two ways to install Harbor: one is to start the official offline package with docker-compose; the other is to run Harbor on k8s via a helm chart. Both work, but based on my experience, I recommend not deploying Harbor on k8s.

## create a directory and download the Harbor offline package
mkdir /data && cd /data
wget https://github.com/goharbor/harbor/releases/download/v2.2.0/harbor-offline-installer-v2.2.0.tgz
tar xf harbor-offline-installer-v2.2.0.tgz && rm harbor-offline-installer-v2.2.0.tgz

## edit the Harbor configuration
cd harbor
cp harbor.yml.tmpl harbor.yml
    5 hostname: harbor.test.com
    17   certificate: /data/harbor/ssl/tls.cert
    18   private_key: /data/harbor/ssl/tls.key
    34 harbor_admin_password: 123.com

## create the certificate for the Harbor access domain
mkdir /data/harbor/ssl && cd /data/harbor/ssl
openssl genrsa -out tls.key 2048
openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=*.test.com

## install docker-compose
Official docs: https://docs.docker.com/compose/install/
curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
#If install.sh later fails with a docker-compose error, copy the docker-compose binary into the same directory as install.sh.


## run the installer
./install.sh

## push an image to harbor
echo '10.0.0.16 harbor.test.com' >> /etc/hosts
docker tag nginx:latest  harbor.test.com/library/nginx:latest
docker push harbor.test.com/library/nginx:latest

#run on the remaining nodes
mkdir -p /etc/docker/certs.d/harbor.test.com/
scp 10.0.0.16:/data/harbor/ssl/tls.cert /etc/docker/certs.d/harbor.test.com/
echo '10.0.0.16 harbor.test.com' >> /etc/hosts
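
On any of those nodes you can then confirm the registry is usable, logging in with the admin password set in harbor.yml:

docker login harbor.test.com -u admin -p 123.com
docker pull harbor.test.com/library/nginx:latest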

4. Deploy PostgreSQL

# create the persistence directories
#mkdir -p /nfs_dir/{gitlab_etc_ver130806,gitlab_log_ver130806,gitlab_opt_ver130806,gitlab_postgresql_data_ver130806}

#create the namespace
#kubectl create namespace gitlab-ver130806

cat 3postgres.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-postgresql-data-ver130806
  labels:
    type: gitlab-postgresql-data-ver130806
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab_postgresql_data_ver130806
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-postgresql-data-ver130806-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-postgresql-data-ver130806
---
apiVersion: v1
kind: Service
metadata:
  name: postgresql
  labels:
    app: gitlab
    tier: postgreSQL
spec:
  ports:
    - port: 5432
  selector:
    app: gitlab
    tier: postgreSQL

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgresql
  labels:
    app: gitlab
    tier: postgreSQL
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
      tier: postgreSQL
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: gitlab
        tier: postgreSQL
    spec:
      #nodeSelector:
      #  gee/disk: "500g"
      containers:
        - image: postgres:12.6-alpine
          name: postgresql
          env:
            - name: POSTGRES_USER
              value: gitlab
            - name: POSTGRES_DB
              value: gitlabhq_production
            - name: POSTGRES_PASSWORD
              value: deusepg
            - name: TZ
              value: Asia/Shanghai
          ports:
            - containerPort: 5432
              name: postgresql
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - exec pg_isready -U gitlab -h 127.0.0.1 -p 5432 -d gitlabhq_production
            initialDelaySeconds: 110
            timeoutSeconds: 5
            failureThreshold: 6
          readinessProbe:
            exec:
              command:
              - sh
              - -c
              - exec pg_isready -U gitlab -h 127.0.0.1 -p 5432 -d gitlabhq_production
            initialDelaySeconds: 20
            timeoutSeconds: 3
            periodSeconds: 5
#          resources:
#            requests:
#              cpu: 100m
#              memory: 512Mi
#            limits:
#              cpu: "1"
#              memory: 1Gi
          volumeMounts:
            - name: postgresql
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgresql
          persistentVolumeClaim:
            claimName: gitlab-postgresql-data-ver130806-pvc
 
---
#kubectl -n gitlab-ver130806 create -f 3postgres.yaml

Start the deployment

kubectl -n gitlab-ver130806 create -f 3postgres.yaml
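
Once the pod passes its probes, check that the database answers; pg_isready is the same check the probes use (targeting deploy/<name> assumes a reasonably recent kubectl):

kubectl -n gitlab-ver130806 get pods -l tier=postgreSQL
kubectl -n gitlab-ver130806 exec deploy/postgresql -- pg_isready -U gitlab -d gitlabhq_production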

5. Deploy Redis

[root@master yaml]# cat 4redis.yaml 
---
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: gitlab
    tier: backend
spec:
  ports:
    - port: 6379
      targetPort: 6379
  selector:
    app: gitlab
    tier: backend
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  labels:
    app: gitlab
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
      tier: backend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: gitlab
        tier: backend
    spec:
      #nodeSelector:
      #  gee/disk: "500g"
      containers:
        - image: redis:6.2.0-alpine3.13
          name: redis
          command:
            - "redis-server"
          args:
            - "--requirepass"
            - "useredis"
#          resources:
#            requests:
#              cpu: "1"
#              memory: 2Gi
#            limits:
#              cpu: "1"
#              memory: 2Gi
          ports:
            - containerPort: 6379
              name: redis
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - "redis-cli ping"
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
              - sh
              - -c
              - "redis-cli ping"
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 1
            successThreshold: 1
            failureThreshold: 3
      initContainers:
      - command:
        - /bin/sh
        - -c
        - |
          ulimit -n 65536
          mount -o remount rw /sys
          echo never > /sys/kernel/mm/transparent_hugepage/enabled
          mount -o remount rw /proc/sys
          echo 2000 > /proc/sys/net/core/somaxconn
          echo 1 > /proc/sys/vm/overcommit_memory
        image: registry.cn-beijing.aliyuncs.com/acs/busybox:v1.29.2
        imagePullPolicy: IfNotPresent
        name: init-redis
        resources: {}
        securityContext:
          privileged: true
          procMount: Default
          
---
# kubectl -n gitlab-ver130806  create  -f 4redis.yaml
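
A quick check that Redis is up and the password works (PONG expected):

kubectl -n gitlab-ver130806 exec deploy/redis -- redis-cli -a useredis ping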

6. Deploy GitLab

First, customize the image with a Dockerfile

vim Dockerfile

FROM gitlab/gitlab-ce:13.8.6-ce.0

RUN rm /etc/apt/sources.list \
    && echo 'deb http://apt.postgresql.org/pub/repos/apt/ xenial-pgdg main' > /etc/apt/sources.list.d/pgdg.list \
    && wget --no-check-certificate -O - https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add -
COPY sources.list /etc/apt/sources.list
RUN apt-get update -yq && \
    apt-get install -y vim iproute2 net-tools iputils-ping curl wget software-properties-common unzip postgresql-client-12  --assume-yes apt-utils && \
    rm -rf /var/cache/apt/archives/*

RUN ln -svf /usr/bin/pg_dump /opt/gitlab/embedded/bin/pg_dump

Prepare the sources.list file

deb http://mirrors.aliyun.com/ubuntu/ xenial main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu xenial-security main
deb-src http://mirrors.aliyun.com/ubuntu xenial-security main
deb http://mirrors.aliyun.com/ubuntu xenial-security universe
deb-src http://mirrors.aliyun.com/ubuntu xenial-security universe

Build the image

docker build -t gitlab/gitlab-ce:13.8.6-ce.1 .
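
Before deploying, a quick smoke test that the PostgreSQL 12 client made it into the image (overriding the image's entrypoint):

docker run --rm --entrypoint pg_dump gitlab/gitlab-ce:13.8.6-ce.1 --version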

Deploy gitlab

[root@master yaml]# cat 5gitlab.yaml 
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-etc-ver130806
  labels:
    type: gitlab-etc-ver130806
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab_etc_ver130806
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-etc-ver130806-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-etc-ver130806
# pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-log-ver130806
  labels:
    type: gitlab-log-ver130806
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab_log_ver130806
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-log-ver130806-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-log-ver130806
      
# pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-opt-ver130806
  labels:
    type: gitlab-opt-ver130806
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab_opt_ver130806
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-opt-ver130806-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-opt-ver130806
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    app: gitlab
    tier: frontend
spec:
  ports:
    - name: gitlab-ui
      port: 80
      protocol: TCP
      targetPort: 80
    - name: gitlab-ssh
      port: 22
      protocol: TCP
      targetPort: 22
  selector:
    app: gitlab
    tier: frontend
  type: NodePort
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gitlab
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-cb-ver130806
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: gitlab
    namespace: gitlab-ver130806
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab
  labels:
    app: gitlab
    tier: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: gitlab
        tier: frontend
    spec:
      serviceAccountName: gitlab
      containers:
        - image: gitlab/gitlab-ce:13.8.6-ce.1
          name: gitlab
#          resources:
#            requests:
#              cpu: 400m
#              memory: 4Gi
#            limits:
#              cpu: "800m"
#              memory: 8Gi
          securityContext:
            privileged: true
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: GITLAB_OMNIBUS_CONFIG
              value: |
                postgresql['enable'] = false
                gitlab_rails['db_username'] = "gitlab"
                gitlab_rails['db_password'] = "deusepg"
                gitlab_rails['db_host'] = "postgresql"
                gitlab_rails['db_port'] = "5432"
                gitlab_rails['db_database'] = "gitlabhq_production"
                gitlab_rails['db_adapter'] = 'postgresql'
                gitlab_rails['db_encoding'] = 'utf8'
                redis['enable'] = false
                gitlab_rails['redis_host'] = 'redis'
                gitlab_rails['redis_port'] = '6379'
                gitlab_rails['redis_password'] = 'useredis'
                gitlab_rails['gitlab_shell_ssh_port'] = 22
                external_url 'http://git.test.com/'
                nginx['listen_port'] = 80
                nginx['listen_https'] = false
                #-------------------------------------------
                gitlab_rails['gitlab_email_enabled'] = true
                gitlab_rails['gitlab_email_from'] = 'admin@test.com'
                gitlab_rails['gitlab_email_display_name'] = 'test'
                gitlab_rails['gitlab_email_reply_to'] = 'gitlab@test.com'
                gitlab_rails['gitlab_default_can_create_group'] = true
                gitlab_rails['gitlab_username_changing_enabled'] = true
                gitlab_rails['smtp_enable'] = true
                gitlab_rails['smtp_address'] = "smtp.exmail.qq.com"
                gitlab_rails['smtp_port'] = 465
                gitlab_rails['smtp_user_name'] = "gitlab@test.com"
                gitlab_rails['smtp_password'] = "testsendmail"
                gitlab_rails['smtp_domain'] = "exmail.qq.com"
                gitlab_rails['smtp_authentication'] = "login"
                gitlab_rails['smtp_enable_starttls_auto'] = true
                gitlab_rails['smtp_tls'] = true
                #-------------------------------------------
                # disable prometheus
                prometheus['enable'] = false
                # disable grafana
                grafana['enable'] = false
                # reduce memory usage
                unicorn['worker_memory_limit_min'] = "200 * 1 << 20"
                unicorn['worker_memory_limit_max'] = "300 * 1 << 20"
                # reduce sidekiq concurrency
                sidekiq['concurrency'] = 16
                # reduce the postgresql shared buffer cache
                postgresql['shared_buffers'] = "256MB"
                # reduce postgresql max connections
                postgresql['max_connections'] = 8
                # reduce worker processes (workers = CPU cores + 1)
                unicorn['worker_processes'] = 2
                nginx['worker_processes'] = 2
                puma['worker_processes'] = 2
                # puma['per_worker_max_memory_mb'] = 850
                # keep backup files for 3 days
                gitlab_rails['backup_keep_time'] = 259200
                #-------------------------------------------
          ports:
            - containerPort: 80
              name: gitlab
          livenessProbe:
            exec:
              command:
              - sh
              - -c
              - "curl -s http://127.0.0.1/-/health|grep -w 'GitLab OK'"
            initialDelaySeconds: 180
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          readinessProbe:
            exec:
              command:
              - sh
              - -c
              - "curl -s http://127.0.0.1/-/health|grep -w 'GitLab OK'"
            initialDelaySeconds: 120
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 3
          volumeMounts:
            - mountPath: /etc/gitlab
              name: gitlab1
            - mountPath: /var/log/gitlab
              name: gitlab2
            - mountPath: /var/opt/gitlab
              name: gitlab3
            - mountPath: /etc/localtime
              name: tz-config

      volumes:
        - name: gitlab1
          persistentVolumeClaim:
            claimName: gitlab-etc-ver130806-pvc
        - name: gitlab2
          persistentVolumeClaim:
            claimName: gitlab-log-ver130806-pvc
        - name: gitlab3
          persistentVolumeClaim:
            claimName: gitlab-opt-ver130806-pvc
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai

      securityContext:
        runAsUser: 0
        fsGroup: 0
---
#kubectl -n gitlab-ver130806 create -f 5gitlab.yaml

Deploy the GitLab TLS ingress

#vim 6gitlab-tls.yaml
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: gitlab
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "false"
    nginx.ingress.kubernetes.io/proxy-body-size: "20m"
spec:
  tls:
  - hosts:
    - git.test.com
    secretName: mytls
  rules:
  - host: git.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: gitlab
            port:
              number: 80

#Generate a self-signed TLS certificate, create the secret, and bind the certificate to the ingress.
# openssl genrsa -out tls.key 2048
# openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=*.test.com
# kubectl -n gitlab-ver130806 create secret tls mytls --cert=tls.cert --key=tls.key 
#kubectl -n gitlab-ver130806 create -f 6gitlab-tls.yaml
##Note: to access it, add a hosts entry on your local machine

Test the GitLab connection

#test and verify the gitlab connection
#[root@master yaml]# curl git.test.com
<html><body>You are being <a href="http://git.test.com/users/sign_in">redirected</a>.</body></html>
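
Since the ingress terminates TLS with the self-signed certificate, HTTPS can be checked as well (-k skips certificate verification; git.test.com must resolve to an ingress node):

curl -kIs https://git.test.com/users/sign_in | head -1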

7. Deploy gitlab-runner (docker)

tag is docker

#  mkdir -p /nfs_dir/{gitlab-runner1-ver130806-docker,gitlab-runner2-ver130806-share}

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-runner1-ver130806-docker
  labels:
    type: gitlab-runner1-ver130806-docker
spec:
  capacity:
    storage: 0.1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab-runner1-ver130806-docker
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-runner1-ver130806-docker
  namespace: gitlab-ver130806
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 0.1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-runner1-ver130806-docker


---
# https://docs.gitlab.com/runner/executors

#concurrent = 30
#check_interval = 0

#[session_server]
#  session_timeout = 1800

#[[runners]]
#  name = "gitlab-runner1-ver130806-docker"
#  url = "http://git.test.com"
#  token = "xxxxxxxxxxxxxxxxxxxxxx"
#  executor = "kubernetes"
#  [runners.kubernetes]
#    namespace = "gitlab-ver130806"
#    image = "docker:stable"
#    helper_image = "gitlab/gitlab-runner-helper:x86_64-9fc34d48-pwsh"
#    privileged = true
#    [[runners.kubernetes.volumes.pvc]]
#      name = "gitlab-runner1-ver130806-docker"
#      mount_path = "/mnt"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner1-ver130806-docker
  namespace: gitlab-ver130806
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner1-ver130806-docker
  template:
    metadata:
      labels:
        name: gitlab-runner1-ver130806-docker
    spec:
      hostAliases:
      - ip: "172.27.205.161" #等同于hosts解析,需要通过kubectl get endpoints -A | grep gitlat 查看gitlab endpoints 80端口 pod地址,才可以访问git.test.com域名。
        hostnames:
        - "git.test.com"
      serviceAccountName: gitlab
      containers:
      - args:
        - run
        image: gitlab/gitlab-runner:v13.10.0
        name: gitlab-runner1-ver130806-docker
        volumeMounts:
        - mountPath: /etc/gitlab-runner
          name: config
        - mountPath: /etc/ssl/certs
          name: cacerts
          readOnly: true
      restartPolicy: Always
      volumes:
      - persistentVolumeClaim:
          claimName: gitlab-runner1-ver130806-docker
        name: config
      - hostPath:
          path: /usr/share/ca-certificates/mozilla
        name: cacerts

Create the gitlab-runner

kubectl -n gitlab-ver130806 create -f 7gitlab-runer.yaml

8. Deploy gitlab-runner (shared)

tag is share

#[root@master yaml]# cat 8gitlab-share.yaml 
---
# gitlab-ci-multi-runner register

#                   Active  √ Paused Runners don't accept new jobs
#                Protected     This runner will only run on pipelines triggered on protected branches
#        Run untagged jobs  √ Indicates whether this runner can pick jobs without tags
# Lock to current projects     When a runner is locked, it cannot be assigned to other projects

# pv
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gitlab-runner2-ver130806-share
  labels:
    type: gitlab-runner2-ver130806-share
spec:
  capacity:
    storage: 0.1Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    path: /nfs_dir/gitlab-runner2-ver130806-share
    server: 10.0.0.110

# pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-runner2-ver130806-share
  namespace: gitlab-ver130806
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 0.1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      type: gitlab-runner2-ver130806-share


---
# https://docs.gitlab.com/runner/executors

#concurrent = 30
#check_interval = 0

#[session_server]
#  session_timeout = 1800

#[[runners]]
#  name = "gitlab-runner2-ver130806-share"
#  url = "http://git.test.com"
#  token = "xxxxxxxxxxxxxxxx"
#  executor = "kubernetes"
#  [runners.kubernetes]
#    namespace = "gitlab-ver130806"
#    image = "registry.cn-beijing.aliyuncs.com/acs/busybox/busybox:v1.29.2"
#    helper_image = "gitlab/gitlab-runner-helper:x86_64-9fc34d48-pwsh"
#    privileged = false
#    [[runners.kubernetes.volumes.pvc]]
#      name = "gitlab-runner2-v1230-share"
#      mount_path = "/mnt"

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitlab-runner2-ver130806-share
  namespace: gitlab-ver130806
spec:
  replicas: 1
  selector:
    matchLabels:
      name: gitlab-runner2-ver130806-share
  template:
    metadata:
      labels:
        name: gitlab-runner2-ver130806-share
    spec:
      hostAliases:
      - ip: "172.27.205.161" #等同于hosts解析,需要通过kubectl get endpoints -A | grep gitlat 查看gitlab endpoints 80端口 pod地址,才可以访问git.test.com域名。
        hostnames:
        - "git.test.com"
      serviceAccountName: gitlab
      containers:
      - args:
        - run
        image: gitlab/gitlab-runner:v13.10.0
        name: gitlab-runner2-ver130806-share
        volumeMounts:
        - mountPath: /etc/gitlab-runner
          name: config
        - mountPath: /etc/ssl/certs
          name: cacerts
          readOnly: true
      restartPolicy: Always
      volumes:
      - persistentVolumeClaim:
          claimName: gitlab-runner2-ver130806-share
        name: config
      - hostPath:
          path: /usr/share/ca-certificates/mozilla
        name: cacerts

9. Register the gitlab-runners with GitLab

First, register gitlab-runner-docker

#note: the tag entered at registration differs between the two runners


#exec into the pod
kubectl -n gitlab-ver130806  exec -it gitlab-runner1-ver130806-docker-76975b6884-s2j92  bash

#inside the pod, run gitlab-runner register and answer the prompts (get the URL and token from GitLab's runner settings page)
Enter the GitLab instance URL (for example, https://gitlab.com/):
http://git.test.com/
Enter the registration token:
L1upvreKioaDZZkLVhEu
Enter a description for the runner:
[gitlab-runner1-ver130806-docker-76975b6884-s2j92]: gitlab-runner1-ver130806-docker
Enter tags for the runner (comma-separated):
docker
Registering runner... succeeded                     runner=L1upvreK

Register gitlab-runner-share

#exec into the pod
kubectl -n gitlab-ver130806  exec -it gitlab-runner2-ver130806-share-585c6f95b-tw2gr  bash

#inside the pod, run gitlab-runner register and answer the prompts (get the URL and token from GitLab's runner settings page)
Enter the GitLab instance URL (for example, https://gitlab.com/):
http://git.test.com/
Enter the registration token:
L1upvreKioaDZZkLVhEu
Enter a description for the runner:
[gitlab-runner2-ver130806-share-585c6f95b-tw2gr]: gitlab-runner2-ver130806-share
Enter tags for the runner (comma-separated):
share
Registering runner... succeeded                     runner=L1upvreK
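
After both registrations, you can verify what was written into each runner's /etc/gitlab-runner/config.toml:

kubectl -n gitlab-ver130806 exec deploy/gitlab-runner1-ver130806-docker -- gitlab-runner list
kubectl -n gitlab-ver130806 exec deploy/gitlab-runner2-ver130806-share -- gitlab-runner list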

Then open the runners settings page in GitLab to configure them (note: mine are already set up and running)


10. Tune GitLab

Add in-cluster DNS resolution

kubectl -n kube-system edit configmaps coredns  -o yaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        log 
        rewrite stop {      # add this block
          name regex git.test.com gitlab.gitlab-ver130806.svc.cluster.local
          answer name gitlab.gitlab-ver130806.svc.cluster.local git.test.com
        }
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2022-04-16T18:43:13Z"
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data: {}
    manager: kubectl-create
    operation: Update
    time: "2022-04-16T18:43:13Z"
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        f:Corefile: {}
    manager: kubectl-edit
    operation: Update
    time: "2022-04-18T06:07:28Z"
  name: coredns
  namespace: kube-system
  resourceVersion: "269899"
  uid: c8a81905-0d6b-4590-ba2b-64203a221ca4
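
Verify from inside the cluster that git.test.com now resolves to the gitlab service (busybox:1.28 is used here because nslookup in newer busybox images is unreliable):

kubectl run dnstest --rm -it --image=busybox:1.28 --restart=Never -- nslookup git.test.com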

Add an SSH port forward

kubectl get endpoints -A | grep gitlab
gitlab-ver130806 gitlab 172.27.205.161:22,172.27.205.161:80

From the output above you can see the service proxies ports 22 and 80 of the gitlab pod: 80 is the web UI port served by gitlab's nginx, and 22 is the SSH port developers need for pulling code, e.g. with git clone xxxx.

gitlab-ver130806 gitlab NodePort 10.106.196.221 80:32506/TCP,22:32079/TCP 2d6h

Looking at gitlab's svc you can see the service type is NodePort, so running git clone git@git.test.com:root/test.git fails to pull code; a local port forward is needed to make it work.

1. Be sure to move the host's own SSH listening port first, otherwise you may lose remote access
vim /etc/ssh/sshd_config
17 Port 10022
#after the change, restart sshd and check that the new config works
systemctl restart sshd

2. Configure the port forward (most methods found online do not work; this one is tested)
iptables -t nat -I OUTPUT -p tcp -o lo --dport 22 -j REDIRECT --to-ports 32079
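
With the redirect in place, local SSH traffic to git.test.com:22 lands on the gitlab NodePort. A quick test, assuming your SSH public key is already added to your GitLab account:

ssh -T git@git.test.com
git clone git@git.test.com:root/test.git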

11. Install dind


# if pip install inside dind gets killed with exit code 137 (128+9), the pod's resource limits (cpu, memory) may need to be raised

# with only a docker client available, these variables point it at the dind service so it works normally:
#dindSvc=$(kubectl -n kube-system get svc dind |awk 'NR==2{print $3}')
#export DOCKER_HOST="tcp://${dindSvc}:2375/"
#export DOCKER_DRIVER=overlay2
#export DOCKER_TLS_CERTDIR=""


---
# SVC
kind: Service
apiVersion: v1
metadata:
  name: dind
  namespace: kube-system
spec:
  selector:
    app: dind
  ports:
    - name: tcp-port
      port: 2375
      protocol: TCP
      targetPort: 2375

---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dind
  namespace: kube-system
  labels:
    app: dind
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dind
  template:
    metadata:
      labels:
        app: dind
    spec:  # expose docker's port 2375 via the host network, which makes things a bit faster
      hostNetwork: true
      containers:
      - name: dind
        image: docker:19-dind
        lifecycle:
          postStart: # docker login to the registry on container start, since a private registry requires login first
            exec:
              command: ["/bin/sh", "-c", "docker login harbor.test.com -u 'admin' -p '123.com'"]
           # when this pod is deleted, the preStop hook below gives kube-proxy time to flush its rules
          preStop: # sleep 5s on exit; otherwise the apiserver may not yet have noticed the pod is gone and requests could still be sent to dind
            exec:
              command: ["/bin/sh", "-c", "sleep 5"]
        ports:
        - containerPort: 2375
#        resources:
#          requests:
#            cpu: 200m
#            memory: 256Mi
#          limits:
#            cpu: 0.5
#            memory: 1Gi
        readinessProbe:
          tcpSocket:
            port: 2375
          initialDelaySeconds: 10
          periodSeconds: 30
        livenessProbe:
          tcpSocket:
            port: 2375
          initialDelaySeconds: 10
          periodSeconds: 30
        securityContext:  # building and pushing images needs broad privileges, hence this configuration
            privileged: true
        env: 
          - name: DOCKER_HOST 
            value: tcp://localhost:2375 # as I understand it, clients connect to the daemon over port 2375 (layer 4) with the overlay2 driver
          - name: DOCKER_DRIVER 
            value: overlay2
          - name: DOCKER_TLS_CERTDIR 
            value: '' # docker expects TLS certs by default; leaving this empty disables TLS, otherwise it errors
        volumeMounts: 
          - name: docker-graph-storage 
            mountPath: /var/lib/docker
          - name: tz-config
            mountPath: /etc/localtime
           # kubectl -n kube-system create secret generic harbor-ca --from-file=harbor-ca=/data/harbor/ssl/tls.cert
          - name: harbor-ca
            mountPath: /etc/docker/certs.d/harbor.test.com/ca.crt
            subPath: harbor-ca
       # kubectl -n kube-system create secret docker-registry devops-secret --docker-server=harbor.test.com --docker-username=admin --docker-password=123.com --docker-email=admin@test.com
      hostAliases:
      - hostnames:
        - harbor.test.com     # docker must reach the harbor registry for build, pull, and push, so resolve the host inside the pod.
        ip: 10.0.0.16
      imagePullSecrets:
      - name: devops-secret
      volumes:
#      - emptyDir:
#          medium: ""
#          sizeLimit: 10Gi
      - hostPath: # without this, layers built inside dind are ephemeral and lost after each build; caching them on the host speeds up later builds.
          path: /var/lib/container/docker
        name: docker-graph-storage
      - hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
        name: tz-config
      - name: harbor-ca
        secret: # mount the registry CA from this secret
          secretName: harbor-ca
          defaultMode: 0600
#
#        kubectl taint node 10.0.0.110 Ingress=:NoExecute
#        kubectl describe node 10.0.0.110 |grep -i taint
#        kubectl taint node 10.0.0.110 Ingress:NoExecute-
      nodeSelector:
        kubernetes.io/hostname: "node02"
      tolerations: # tolerate taints, so the pod can be scheduled even onto NoSchedule nodes
      - operator: Exists
# kubectl -n kube-system create secret generic harbor-ca --from-file=harbor-ca=/data/harbor/ssl/tls.cert
 # kubectl create secret docker-registry devops-secret --docker-server=harbor.test.com --docker-username=admin --docker-password=123.com --docker-email=admin@test.com
 # kubectl create -f 9dind.yaml

12. Putting the CI/CD pipeline together

Containerize the k8s command-line binary kubectl for later use

#copy the kubectl binary into the same directory as the Dockerfile
mkdir docker-image/ && cd docker-image/

which  kubectl
#/usr/local/bin/kubectl
cp /usr/local/bin/kubectl ./

#prepare the Dockerfile
FROM alpine:3.13

MAINTAINER boge

ENV TZ "Asia/Shanghai"

RUN sed -ri 's+dl-cdn.alpinelinux.org+mirrors.aliyun.com+g' /etc/apk/repositories \
 && apk add --no-cache curl tzdata ca-certificates \
 && cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
 && apk upgrade \
 && rm -rf /var/cache/apk/*

COPY kubectl  /usr/local/bin/
RUN chmod +x /usr/local/bin/kubectl

ENTRYPOINT ["kubectl"]
CMD ["help"]

#build the image
docker build -t harbor.test.com/library/kubectl:v1.20 .

Push the image to harbor
docker push  harbor.test.com/library/kubectl:v1.20
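
A quick check that the containerized kubectl runs; the image's entrypoint is kubectl, so only arguments need to be passed:

docker run --rm harbor.test.com/library/kubectl:v1.20 version --client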

Create a repository in GitLab


Clone the repository from a terminal and push a test file

#Git global setup
git config --global user.name "Administrator"
git config --global user.email "admin@example.com"

#Create a new repository
git clone git@git.test.com:root/test.git
cd test
touch README.md
git add README.md
git commit -m "add README"
git push -u origin master

Prepare the Python Flask app

Prepare the Flask code files and upload them to the GitLab repository

app.py

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello, 李爽! current version tag:22.04.20.20'

@app.route('/gg/<username>')
def hello(username):
    return 'welcome' + ': ' + username + '!'

Prepare the Python Dockerfile

Note: push the base image to Harbor first, otherwise the build will be slow; Python 3.5 and below are end-of-life and not compatible.

FROM harbor.test.com/library/python:3.7-slim-stretch

WORKDIR /kae/app

COPY requirements.txt .

RUN  sed -i 's/deb.debian.org/ftp.cn.debian.org/g' /etc/apt/sources.list \
  && sed -i 's/security.debian.org/ftp.cn.debian.org/g' /etc/apt/sources.list \
  && apt-get update -y \
  && apt-get install -y wget gcc libsm6 libxext6 libglib2.0-0 libxrender1 make \
  && apt-get clean && apt-get autoremove -y && rm -rf /var/lib/apt/lists/*
RUN pip install --no-cache-dir -i https://mirrors.aliyun.com/pypi/simple -r requirements.txt \
    && rm requirements.txt

COPY . .

EXPOSE 5000
HEALTHCHECK CMD curl --fail http://localhost:5000 || exit 1

ENTRYPOINT ["gunicorn", "app:app", "-c", "gunicorn_config.py"]

gunicorn_config.py

bind = '0.0.0.0:5000'
graceful_timeout = 3600
timeout = 1200
max_requests = 1200
workers = 1
worker_class = 'gevent'
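
The repository's requirements.txt is not shown above; since gunicorn_config.py uses the gevent worker class, a minimal assumed version would look like this (adjust to your app's real dependencies):

# hypothetical minimal requirements.txt
cat > requirements.txt <<'EOF'
flask
gunicorn
gevent
EOF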

Configure the following variables under the repository's CI/CD variable settings

Type       Key                Value                                      State / Masked
Variable   DOCKER_USER        admin                                      all toggles off
Variable   DOCKER_PASS        123.com
Variable   REGISTRY_URL       harbor.test.com
Variable   REGISTRY_NS        product
File       KUBE_CONFIG_TEST   contents of the k8s RBAC kube config file


#note the KUBE_CONFIG_TEST variable

The RBAC config-generation script below is based on one I saw from boge earlier
kubectl create namespace test

[root@master test]# cd ..
[root@master cicd]# ls
docker-compose  docker-image  gitlab.tar  gitrunner.tar  harbor  kube_config  monitoring  rbac  test  tls  yaml
[root@master cicd]# cd rbac/
You have new mail in /var/spool/mail/root
[root@master rbac]# ls
all_permission_rbac.sh  kube_config  readonly_permission_rbac.sh
[root@master rbac]# cat all_permission_rbac.sh 
#!/bin/bash
#
# This Script based on  https://jeremievallee.com/2018/05/28/kubernetes-rbac-namespace-user.html
# K8s'RBAC doc:         https://kubernetes.io/docs/reference/access-authn-authz/rbac
# Gitlab'CI/CD doc:     https://docs.gitlab.com/ee/user/permissions.html#running-pipelines-on-protected-branches
#
# In honor of the remarkable Windson

BASEDIR="$(dirname "$0")"
folder="$BASEDIR/kube_config"

echo -e "All namespaces is here: \n$(kubectl get ns|awk 'NR!=1{print $1}')"
echo "endpoint server if local network you can use $(kubectl cluster-info |awk '/Kubernetes/{print $NF}')"

namespace=$1
endpoint=$(echo "$2" | sed -e 's,https\?://,,g')

if [[ -z "$endpoint" || -z "$namespace" ]]; then
    echo "Use "$(basename "$0")" NAMESPACE ENDPOINT";
    exit 1;
fi

if ! kubectl get ns|awk 'NR!=1{print $1}'|grep -w "$namespace";then kubectl create ns "$namespace";else echo "namespace: $namespace was exist." ;fi

echo "---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: $namespace-user
  namespace: $namespace
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: $namespace-user-full-access
  namespace: $namespace
rules:
- apiGroups: ['', 'extensions', 'apps', 'metrics.k8s.io']
  resources: ['*']
  verbs: ['*']
- apiGroups: ['batch']
  resources:
  - jobs
  - cronjobs
  verbs: ['*']
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: $namespace-user-view
  namespace: $namespace
subjects:
- kind: ServiceAccount
  name: $namespace-user
  namespace: $namespace
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: $namespace-user-full-access
---
# https://kubernetes.io/zh/docs/concepts/policy/resource-quotas/
apiVersion: v1
kind: ResourceQuota
metadata:
  name: $namespace-compute-resources
  namespace: $namespace
spec:
  hard:
    pods: "10"
    services: "10"
    persistentvolumeclaims: "5"
    requests.cpu: "1"
    requests.memory: 2Gi
    limits.cpu: "2"
    limits.memory: 4Gi" | kubectl apply -f -
kubectl -n $namespace describe quota $namespace-compute-resources
mkdir -p $folder
tokenName=$(kubectl get sa $namespace-user -n $namespace -o "jsonpath={.secrets[0].name}")
token=$(kubectl get secret $tokenName -n $namespace -o "jsonpath={.data.token}" | base64 --decode)
certificate=$(kubectl get secret $tokenName -n $namespace -o "jsonpath={.data['ca\.crt']}")

echo "apiVersion: v1
kind: Config
preferences: {}
clusters:
- cluster:
    certificate-authority-data: $certificate
    server: https://$endpoint
  name: $namespace-cluster
users:
- name: $namespace-user
  user:
    as-user-extra: {}
    client-key-data: $certificate
    token: $token
contexts:
- context:
    cluster: $namespace-cluster
    namespace: $namespace
    user: $namespace-user
  name: $namespace
current-context: $namespace" > $folder/$namespace.kube.conf

[root@master rbac]# sh all_permission_rbac.sh  test  https://10.0.0.110:6443
All namespaces is here: 
default
devops
gitlab-ver130806
go-test
ingress-nginx
kube-node-lease
kube-public
kube-system
openebs
rabbitmq
redis
test
yang
endpoint server if local network you can use https://10.0.0.110:6443
go-test
test
namespace: test was exist.
serviceaccount/test-user unchanged
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/test-user-full-access unchanged
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/test-user-view unchanged
resourcequota/test-compute-resources configured
Name:                   test-compute-resources
Namespace:              test
Resource                Used  Hard
--------                ----  ----
limits.cpu              600m  2
limits.memory           1Gi   4Gi
persistentvolumeclaims  0     5
pods                    2     10
requests.cpu            600m  1
requests.memory         1Gi   2Gi

## A kube_config directory is generated under the local directory; copy the generated file's contents into this variable.


Prepare the project's automation config file, .gitlab-ci.yml

[root@master test]# ls -a
.  ..  app.py  Dockerfile  .git  .gitlab-ci.yml  gunicorn_config.py  .project-name-canary.yaml  .project-name.yaml  README.md  requirements.txt

[root@master test]# cat .gitlab-ci.yml 
stages:
  - build
  - deploy
  - rollback

# tag name need: 20.11.21.01
variables:
  namecb: "flask-test"
  svcport: "5000"
  replicanum: "2"
  ingress: "flask-test.devops.com"
  certname: "mytls"
  CanarylIngressNum: "20"

.deploy_k8s: &deploy_k8s |
  if [ $CANARY_CB -eq 1 ];then cp -arf .project-name-canary.yaml ${namecb}-${CI_COMMIT_TAG}.yaml; sed -ri "s+CanarylIngressNum+${CanarylIngressNum}+g" ${namecb}-${CI_COMMIT_TAG}.yaml; sed -ri "s+NomalIngressNum+$(expr 100 - ${CanarylIngressNum})+g" ${namecb}-${CI_COMMIT_TAG}.yaml ;else cp -arf .project-name.yaml ${namecb}-${CI_COMMIT_TAG}.yaml;fi
  sed -ri "s+projectnamecb.devops.com+${ingress}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+projectnamecb+${namecb}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+5000+${svcport}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+replicanum+${replicanum}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+mytls+${certname}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+mytagcb+${CI_COMMIT_TAG}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  sed -ri "s+harbor.devops.com/library+${IMG_URL}+g" ${namecb}-${CI_COMMIT_TAG}.yaml
  cat ${namecb}-${CI_COMMIT_TAG}.yaml
  [ -d ~/.kube ] || mkdir ~/.kube
  echo "$KUBE_CONFIG" > ~/.kube/config
  if [ $NORMAL_CB -eq 1 ];then if kubectl get deployments.|grep -w ${namecb}-canary &>/dev/null;then kubectl delete deployments.,svc ${namecb}-canary ;fi;fi
  kubectl apply -f ${namecb}-${CI_COMMIT_TAG}.yaml --record
  echo
  echo
  echo "============================================================="
  echo "                    Rollback Indx List"
  echo "============================================================="
  kubectl rollout history deployment ${namecb}|tail -5|awk -F"[ =]+" '{print $1"\t"$5}'|sed '$d'|sed '$d'|sort -r|awk '{print $NF}'|awk '$0=""NR".   "$0'

.rollback_k8s: &rollback_k8s |
  [ -d ~/.kube ] || mkdir ~/.kube
  echo "$KUBE_CONFIG" > ~/.kube/config
  last_version_command=$( kubectl rollout history deployment ${namecb}|tail -5|awk -F"[ =]+" '{print $1"\t"$5}'|sed '$d'|sed '$d'|tail -${ROLL_NUM}|head -1 )
  last_version_num=$( echo ${last_version_command}|awk '{print $1}' )
  last_version_name=$( echo ${last_version_command}|awk '{print $2}' )
  kubectl rollout undo deployment ${namecb} --to-revision=$last_version_num
  echo $last_version_num
  echo $last_version_name
  kubectl rollout history deployment ${namecb}


build:
  stage: build
  retry: 2
  variables:
    # use dind.yaml to deploy the dind service on k8s
    DOCKER_HOST: tcp://10.102.241.137:2375/
    DOCKER_DRIVER: overlay2
    DOCKER_TLS_CERTDIR: ""
  ##services:
    ##- docker:dind
  before_script:
    - docker login ${REGISTRY_URL} -u "$DOCKER_USER" -p "$DOCKER_PASS"
  script:
    - docker pull ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:latest || true
    - docker build --network host --cache-from ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:latest --tag ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:$CI_COMMIT_TAG --tag ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:latest .
    - docker push ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:$CI_COMMIT_TAG
    - docker push ${REGISTRY_URL}/${REGISTRY_NS}/${namecb}:latest
  after_script:
    - docker logout ${REGISTRY_URL}
  tags:
    - "docker"
  only:
    - tags





#--------------------------K8S DEPLOY--------------------------------------------------

devops-deploy:
  stage: deploy
  image: harbor.test.com/library/kubectl:v1.20
  variables:
    KUBE_CONFIG: "$KUBE_CONFIG_TEST"
    IMG_URL: "${REGISTRY_URL}/${REGISTRY_NS}"
    NORMAL_CB: 1
  script:
    - *deploy_k8s
  when: manual
  only:
    - tags

# canary start
devops-canary-deploy:
  stage: deploy
  image: harbor.test.com/library/kubectl:v1.20
  variables:
    KUBE_CONFIG: "$KUBE_CONFIG_TEST"
    IMG_URL: "${REGISTRY_URL}/${REGISTRY_NS}"
    CANARY_CB: 1
  script:
    - *deploy_k8s
  when: manual
  only:
    - tags
# canary end

devops-rollback-1:
  stage: rollback
  image: harbor.test.com/library/kubectl:v1.20
  variables:
    KUBE_CONFIG: "$KUBE_CONFIG_TEST"
    ROLL_NUM: 1
  script:
    - *rollback_k8s
  when: manual
  only:
    - tags


devops-rollback-2:
  stage: rollback
  image: harbor.test.com/library/kubectl:v1.20
  variables:
    KUBE_CONFIG: "$KUBE_CONFIG_TEST"
    ROLL_NUM: 2
  script:
    - *rollback_k8s
  when: manual
  only:
    - tags


devops-rollback-3:
  stage: rollback
  image: harbor.test.com/library/kubectl:v1.20
  variables:
    KUBE_CONFIG: "$KUBE_CONFIG_TEST"
    ROLL_NUM: 3
  script:
    - *rollback_k8s
  when: manual
  only:
    - tags
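
With the pipeline defined, pushing a version tag (format like 20.11.21.01, per the comment in the file) triggers the build job automatically; the deploy and rollback jobs are manual gates. For example:

git add . && git commit -m "update app"
git tag 22.04.20.21
git push origin --tags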

Prepare the k8s deployment template file .project-name.yaml

Note: create the Harbor image-pull secret in K8S ahead of time; the command is:

kubectl -n test create secret docker-registry devops-secret --docker-server=harbor.test.com --docker-username=admin --docker-password=123.com --docker-email=admin@test.com

[root@master test]# cat .project-name.yaml 
---
# SVC
kind: Service
apiVersion: v1
metadata:
  labels:
    kae: "true"
    kae-app-name: projectnamecb
    kae-type: app
  name: projectnamecb
spec:
  selector:
    kae: "true"
    kae-app-name: projectnamecb
    kae-type: app
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 5000
#      nodePort: 12345
#  type: NodePort

---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    kae: "true"
    kae-app-name: projectnamecb
    kae-type: app
  name: projectnamecb
spec:
  tls:
  - hosts:
    - projectnamecb.devops.com
    secretName: mytls
  rules:
  - host: projectnamecb.devops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: projectnamecb
          servicePort: 80

---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: projectnamecb
  labels:
    kae: "true"
    kae-app-name: projectnamecb
    kae-type: app
spec:
  replicas: replicanum
  selector:
    matchLabels:
      kae-app-name: projectnamecb
  template:
    metadata:
      labels:
        kae: "true"
        kae-app-name: projectnamecb
        kae-type: app
    spec:
      containers:
      - name: projectnamecb
        image: harbor.devops.com/library/projectnamecb:mytagcb
        env:
          - name: TZ
            value: Asia/Shanghai
        ports:
        - containerPort: 5000
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            cpu: 0.3
            memory: 0.5Gi
          limits:
            cpu: 0.3
            memory: 0.5Gi
      imagePullSecrets:
      - name: devops-secret

Prepare the K8S canary deployment template file .project-name-canary.yaml

[root@master test]# cat .project-name-canary.yaml 
---
# SVC
kind: Service
apiVersion: v1
metadata:
  labels:
    kae: "true"
    kae-app-name: projectnamecb-canary
    kae-type: app
  name: projectnamecb-canary
spec:
  selector:
    kae: "true"
    kae-app-name: projectnamecb-canary
    kae-type: app
  ports:
    - name: http-port
      port: 80
      protocol: TCP
      targetPort: 5000
#      nodePort: 12345
#  type: NodePort

---
# Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    kae: "true"
    kae-app-name: projectnamecb-canary
    kae-type: app
  name: projectnamecb
  annotations:
    nginx.ingress.kubernetes.io/service-weight: |
        projectnamecb: NomalIngressNum, projectnamecb-canary: CanarylIngressNum
spec:
  tls:
  - hosts:
    - projectnamecb.devops.com
    secretName: mytls
  rules:
  - host: projectnamecb.devops.com
    http:
      paths:
      - path: /
        backend:
          serviceName: projectnamecb
          servicePort: 80
      - path: /
        backend:
          serviceName: projectnamecb-canary
          servicePort: 80

---
# Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: projectnamecb-canary
  labels:
    kae: "true"
    kae-app-name: projectnamecb-canary
    kae-type: app
spec:
  replicas: replicanum
  selector:
    matchLabels:
      kae-app-name: projectnamecb-canary
  template:
    metadata:
      labels:
        kae: "true"
        kae-app-name: projectnamecb-canary
        kae-type: app
    spec:
      containers:
      - name: projectnamecb-canary
        image: harbor.devops.com/library/projectnamecb:mytagcb
        env:
          - name: TZ
            value: Asia/Shanghai
        ports:
        - containerPort: 5000
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        livenessProbe:
          httpGet:
            scheme: HTTP
            path: /
            port: 5000
          initialDelaySeconds: 10
          periodSeconds: 5
          timeoutSeconds: 3
          successThreshold: 1
          failureThreshold: 3
        resources:
          requests:
            cpu: 0.3
            memory: 0.5Gi
          limits:
            cpu: 0.3
            memory: 0.5Gi
      imagePullSecrets:
      - name: devops-secret

Finally, after you change code and push a new version tag, the CI/CD pipeline is triggered automatically.


Summary

When I worked through this stack I only half understood it. I hit all kinds of odd problems, read dozens of references, and one day debugged until 5 a.m.; from starting the k8s install on Saturday afternoon, it took until Wednesday to finish, and on Thursday I wrote these notes while it was still fresh. Since the notes were written after the fact, a small step here or there may be missing, but in my experience, once you have a rough mental model of the whole thing, solving the problems becomes much easier.

Let me also mention the problems you are likely to hit during deployment. From the binary install up to now, counting node reboots and accidentally deleted add-ons, I have run into roughly twenty issues. For those of us in ops, log analysis is crucial, and before analyzing logs you first need to understand the request flow.

The flow as I understand it: a developer runs git push to submit code to GitLab, GitLab notifies GitLab CI, and the GitLab configuration is split into three blocks: build, deploy, and rollback. Put simply, build covers the image build/push/pull steps; deploy publishes or updates the service, i.e. updates the pods; and rollback exists because unknown problems can appear during a release, and rolling back promptly limits the impact. The attentive reader will have noticed the line kubectl apply -f ${namecb}-${CI_COMMIT_TAG}.yaml --record: what gets released is essentially a Deployment object, so even if the environment breaks, it can be rolled back in time.

Another example: say we have two environments, test and production. For test we want releases to go straight through, while for production we want a manual gate before going live; that only requires removing (or keeping) the when: manual lines in .gitlab-ci.yml. In the setup above there are two kinds of secrets: the TLS secret for ingress access and the Harbor registry secret. Also remember to change the dind address in the .gitlab-ci.yml file, i.e. the DOCKER_HOST: tcp://10.102.241.137:2375/ setting, which you can look up with kubectl -n kube-system get svc dind. If you want to dig deeper into gitlab-ci, the official docs cover it; personally I found it fairly simple.
