Uninstalling the k8s cluster


1、Gracefully remove a Node

kubectl get node
kubectl cordon $node_name # mark the node unschedulable
kubectl drain $node_name # evict the workloads
kubectl delete node $node_name
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets

kubectl delete node --all

for service in kube-apiserver kube-controller-manager kubelet kube-proxy kube-scheduler; 
do
      systemctl stop $service
done

sudo kubeadm reset -f

# One-liner version
sudo rm -rf $HOME/.kube && rm -rf ~/.kube/ && rm -rf /etc/kubernetes/ && rm -rf /etc/systemd/system/kubelet.service.d && rm -rf /etc/systemd/system/kubelet.service && rm -rf /usr/bin/kube* && rm -rf /etc/cni && rm -rf /opt/cni && rm -rf /var/lib/etcd && rm -rf /var/etcd

# Step-by-step version
sudo rm -rf $HOME/.kube 
sudo rm -rf ~/.kube/
sudo rm -rf /etc/kubernetes/
sudo rm -rf /etc/systemd/system/kubelet.service.d
sudo rm -rf /etc/systemd/system/kubelet.service
sudo rm -rf /usr/bin/kube*
sudo rm -rf /etc/cni
sudo rm -rf /opt/cni
sudo rm -rf /var/lib/etcd
sudo rm -rf /var/etcd


yum clean all
yum remove kube*

docker images
docker rmi 

# Remove all containers and images

docker rm -f $(docker ps -a -q)
docker images

docker rmi $(docker images | grep "^<none>" | awk '{print $3}')  # remove untagged (dangling) images

docker rmi $(docker images -q)  # remove all images

Installing the k8s cluster (to be switched to an Ansible-based install later)

Century data center:

168、169、170、171、172、173、174、175、(176-181 are MySQL)182、183、184、185、186、187、188、189、190、197、198

85、86、87、88、89、90、91、92、93、94

1、Time synchronization

yum install -y chrony && systemctl enable --now chronyd

2、Set the hostname

hostnamectl set-hostname k8s-173
echo "192.168.10.203   $(hostname)" >> /etc/hosts

3、Disable the firewall, swap and SELinux

systemctl status firewalld.service 
systemctl stop firewalld.service
systemctl disable firewalld.service

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab
# sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
free -h

sestatus 
setenforce 0
sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

sestatus 

reboot
# a reboot is required for the SELinux change to take effect

4、Configure networking

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system

# Open ports 6443, 2379-2380, 10250, 10251, 10252, 30000-32767

# Check that the NIC configuration is correct; otherwise fix it and restart the network interface
cat /etc/sysconfig/network-scripts/ifcfg-enp0s3

5、Install Docker

yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum -y update
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce docker-ce-cli containerd.io
docker version
mkdir -p /etc/docker

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": ["https://xxx.mirror.aliyuncs.com"],
  "insecure-registries": ["harbor_ip:port"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "1"
  }
}
EOF


vim /lib/systemd/system/docker.service

# modify the ExecStart line (around line 14)
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --default-ulimit core=0:0

systemctl daemon-reload
systemctl restart docker
systemctl status docker
systemctl enable docker

docker info
systemctl status docker -l

# If the service fails to start
systemctl status docker.service
journalctl -xe

6、Configure the Kubernetes yum repo and install kubectl, kubelet and kubeadm

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubectl kubelet kubeadm

kubelet --version
kubeadm version
kubectl version --client

systemctl enable kubelet && systemctl start kubelet

7、Initialize the master

# List the images required for initialization
kubeadm config images list
# or
kubeadm config images list --kubernetes-version=v1.21.3


kubeadm init --kubernetes-version=1.21.3  \
--apiserver-advertise-address=192.168.1.24   \
--image-repository registry.aliyuncs.com/google_containers  \
--pod-network-cidr=172.16.0.0/16

# most recent run
kubeadm init --kubernetes-version=1.22.4  \
--apiserver-advertise-address=192.168.10.147   \
--image-repository registry.aliyuncs.com/google_containers  \
--pod-network-cidr=172.16.0.0/16

# Error
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.0: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.0 not found: manifest unknown: manifest unknown
, error: exit status 1

# Fix: pull coredns manually and retag it
kubeadm config images list

k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-controller-manager:v1.21.3
k8s.gcr.io/kube-scheduler:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/pause:3.4.1
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns/coredns:v1.8.0

docker pull coredns/coredns

docker tag coredns/coredns:latest registry.aliyuncs.com/google_containers/coredns:v1.8.0

# Run init again

kubeadm init --kubernetes-version=1.21.3  \
--apiserver-advertise-address=192.168.1.24   \
--image-repository registry.aliyuncs.com/google_containers  \
--pod-network-cidr=172.16.0.0/16

# What kubeadm init does at this point:
1、Generate certificates and keys for the CA, apiserver, apiserver-kubelet-client, front-proxy, etcd, etc.;
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-24 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.24]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-24 localhost] and IPs [192.168.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-24 localhost] and IPs [192.168.1.24 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key

2、Write the kubeconfig files into /etc/kubernetes
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file

3、Write the kubelet environment file and configuration file, then start the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet

4、Create static Pods for kube-apiserver, kube-controller-manager, kube-scheduler and etcd
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in 

5、Wait for the control plane to start, then store the kubeadm configuration in the kube-system namespace
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.503785 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-24 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-24 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

6、Bootstrap token
[bootstrap-token] Using token: 123456
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
7、Apply the essential addons CoreDNS and kube-proxy
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

8、Set up the kubeconfig, deploy a pod network, and join the other nodes;
as root you can simply run: export KUBECONFIG=/etc/kubernetes/admin.conf

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.24:6443 --token 123456 \
	--discovery-token-ca-cert-hash sha256:123456


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

8、Join the worker Nodes

# List existing tokens
kubeadm token list 
# If the token has expired (24h TTL), create a new one
kubeadm token create 

# Compute the discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
   openssl dgst -sha256 -hex | sed 's/^.* //'

kubeadm join 192.168.1.24:6443 --token 123 --discovery-token-ca-cert-hash sha256:123

# After the preflight checks kubeadm starts the kubelet and reports that the node joined the cluster
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
kubectl get nodes -o wide # shows all 29 machines; the master's ROLES column reads control-plane,master
kubectl get pods -o wide --all-namespaces # shows every pod and the node it runs on; each worker runs kube-proxy, and the master also runs etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy and coredns, all in the kube-system namespace

9、Install the network plugin

The nodes stay NotReady and the coredns pods never become Ready until a network plugin is installed.

172.16.0.0/16 means the last 16 bits of the 32-bit address form the variable (host) range;

curl https://docs.projectcalico.org/manifests/calico.yaml -O
vim calico.yaml

apiVersion: policy/v1  # change policy/v1beta1 to policy/v1
kind: PodDisruptionBudget

- name: CALICO_IPV4POOL_CIDR
  value: "172.16.0.0/16"
# save

# Create the resources and apply the configuration
kubectl apply -f calico.yaml

# Watch until the nodes and the coredns pods become Ready
watch kubectl get pods -n kube-system

10、Misc

kubectl command auto-completion

yum install bash-completion -y
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

pod


Deploying a Pod

1、Create a demo project:

DevOpsLab/K8sLab/k8s_demo.py

import datetime
import time

while 1:
    print(str(datetime.datetime.now())[:19])
    time.sleep(5)

2、Create the dockerfile

DevOpsLab/dockerfile

FROM harbor_ip:port/copyright_monitor/python:3.8
COPY ./ /usr/DevOpsLab/
WORKDIR /usr/DevOpsLab/K8sLab

3、Build the image and push it to harbor

docker build -t k8s-lab .
docker tag k8s-lab harbor_ip:port/copyright_monitor/k8s-lab
docker push harbor_ip:port/copyright_monitor/k8s-lab

4、Create the image-pull secret

kubectl create secret docker-registry harbor-root --namespace=default  \
--docker-server=harbor_ip:port \
--docker-username=root \
--docker-password='123'

5、Write the pod manifest

Resource names must not contain underscores.

vim k8s_pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: k8s-lab
spec:
  containers:
    - name: k8s-lab
      image: harbor_ip:port/copyright_monitor/k8s-lab
      imagePullPolicy: Always
      command: ["python","-u","k8s_demo.py"]
  imagePullSecrets:
  - name: harbor-root

kubectl apply -f k8s_pod.yaml

Check the pod

# If READY stays 0/1 and STATUS is CrashLoopBackOff, troubleshoot with
kubectl describe pod k8s-lab
kubectl logs -f k8s-lab


# Check which node the pod is running on
kubectl get pod -o wide
# Then log in to that node; docker ps and docker images show the container and the image it uses
# The container name looks like k8s_k8s-lab_k8s-lab_default_684751c4-c
# i.e. k8s_{containerName}_{podFullName}_{namespace}_{podUID}_{restartCount}

# Exec into the pod
kubectl exec -it podname -c container_name -- /bin/sh

At this point the pod is deployed.

Pod parameters

apiVersion: v1                  #required; version, e.g. v1; must appear in kubectl api-versions
kind: Pod                       #required; Pod
metadata:                       #required; metadata
  name: string                  #required; Pod name
  namespace: string             #Pod's namespace, defaults to "default"
  labels:                       #custom labels (key: value map)
    key: value
  annotations:                  #custom annotations (key: value map)
    key: value
spec:                           #required; detailed definition of the Pod's containers
  containers:                   #required; list of containers
  - name: string                #required; container name, must conform to RFC 1035
    image: string               #required; container image name
    imagePullPolicy: [ Always|Never|IfNotPresent ]  #image pull policy: Always = always pull; IfNotPresent = use the local image if present, otherwise pull; Never = only use the local image
    command: [string]           #container startup command list; if unset, the entrypoint baked into the image is used
    args: [string]              #arguments for the startup command
    workingDir: string          #container working directory
    volumeMounts:               #volumes mounted into the container
    - name: string              #name of a shared volume defined in the pod's volumes[] section
      mountPath: string         #absolute mount path inside the container, fewer than 512 characters
      readOnly: boolean         #read-only or not
    ports:                      #list of ports to expose
    - name: string              #port name
      containerPort: int        #port the container listens on
      hostPort: int             #port to listen on on the host, defaults to the same as containerPort
      protocol: string          #port protocol, TCP or UDP, default TCP
    env:                        #environment variables to set before the container runs
    - name: string              #variable name
      value: string             #variable value
    resources:                  #resource limits and requests
      limits:                   #resource limits
        cpu: string             #CPU limit in cores, maps to docker run --cpu-shares
        memory: string          #memory limit, e.g. Mib/Gib, maps to docker run --memory
      requests:                 #resource requests
        cpu: string             #CPU request, initial amount available when the container starts
        memory: string          #memory request, initial amount available when the container starts
    livenessProbe:              #health check; the container is restarted after several failed probes; use exactly one of exec, httpGet or tcpSocket per container
      exec:                     #exec-style probe
        command: [string]       #command or script to run
      httpGet:                  #httpGet-style probe, requires path and port
        path: string
        port: number
        host: string
        scheme: string
        httpHeaders:
        - name: string
          value: string
      tcpSocket:                #tcpSocket-style probe
        port: number
      initialDelaySeconds: 0    #seconds after container start before the first probe
      timeoutSeconds: 0         #probe response timeout in seconds, default 1
      periodSeconds: 0          #probe interval in seconds, default 10
      successThreshold: 0
      failureThreshold: 0
    securityContext:
      privileged: false
  restartPolicy: [Always | Never | OnFailure] #pod restart policy: Always = always restart however the containers exit; OnFailure = restart only on a non-zero exit code; Never = never restart
  nodeSelector: object          #schedule the pod onto nodes carrying these labels, given as key: value
  imagePullSecrets:             #secrets used when pulling images, given by name
  - name: string
  hostNetwork: false            #whether to use the host network namespace, default false
  volumes:                      #shared volumes defined at pod level (many volume types exist)
  - name: string                #volume name
    emptyDir: {}                #emptyDir volume: a temporary directory that shares the pod's lifetime; empty value
  - name: string
    hostPath:                   #hostPath volume: mounts a directory from the host the pod runs on
      path: string              #host directory that will be mounted into the container
  - name: string
    secret:                     #secret volume: mounts a predefined Secret into the container
      secretName: string
      items:
      - key: string
        path: string
  - name: string
    configMap:                  #configMap volume: mounts a predefined ConfigMap into the container
      name: string
      items:
      - key: string
        path: string

1、Basic parameters (required)

apiVersion: v1 # API version; different object kinds may use different APIs
kind: Pod  # object kind: Pod
metadata:  # metadata
  name: string # Pod name
  namespace: string # namespace the Pod belongs to
spec: # specification of the resource content
  containers: # container list
    - name: string # container name
      image: string # container image

2、Labels and annotations

metadata:
  labels:        # key: value map
    key: value
    ...
  annotations:   # key: value map
    key: value
    ...
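
For instance, a small sketch (the names and values here are made up for illustration):

metadata:
  name: video-complete
  labels:
    app: video-complete
    tier: crawler
  annotations:
    description: "scrapy-redis worker that completes video metadata"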

3、Common container parameters

spec:
  containers:
    - name: string # container name
# image
      image: string
      imagePullPolicy: [Always| Never | IfNotPresent] 
      # image pull policy. Always: always pull; Never: never pull; IfNotPresent: pull only if the image is not present locally
# startup command
      command: [string] 
      # container startup command list, equivalent to ENTRYPOINT in a Dockerfile; there is only one. If unset, the image's own entrypoint is used. Example: ["/bin/sh","-c"]
      args: [string] 
      # arguments for the startup command, equivalent to CMD in a Dockerfile. Example: ["-c"]
# container working directory
      workingDir: string
# environment variables
      env:
        - name: string # variable name
          value: * # variable value
        - name: string 
          valueFrom: # take the value from another source
            configMapKeyRef: # read it from a ConfigMap
              name: string # the ConfigMap
              key: string # the key inside the ConfigMap whose value is assigned to the variable
# ports  
      ports: # list of ports to expose
        - name: string # port name
          containerPort: int  # container port
          hostPort: int # port to listen on on the host; defaults to the same as containerPort, usually left unset
          protocol: string # port protocol, TCP or UDP, default TCP
# mounts          
      volumeMounts: # mount volumes (defined under volumes) into the container
        - name: string # name of the defined volume
          mountPath: string # absolute mount path inside the container (fewer than 512 characters)
          readOnly: boolean # read-only or not

4、Resource quotas

If CPU usage exceeds the limit the pod is throttled; if memory usage exceeds the limit the pod is killed;
if requests are not set they default to the limits;

resources:
  limits: # resource limits
    cpu: string # CPU limit. Either a plain core count or the unit m (millicores).
                # 0.5 means half a core.
                # A machine's total CPU is cores x 1000m: on a 2-core machine the total is 2000m, so a 100m limit is 100/2000 = 5%; 0.5 core = 500m.
    memory: string # memory limit.
                   # units: a bare integer means bytes, or use suffixes such as Ki/Mi/Gi (also K/M/G/T/P)
  requests: # resource requests, i.e. the initial resources requested at container start; usually equal to limits and can be omitted
    cpu: string
    memory: string
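
A concrete sketch with illustrative values: request half a core and 256Mi, cap at one core and 512Mi.

resources:
  requests:
    cpu: 500m        # 0.5 core reserved at scheduling time
    memory: 256Mi
  limits:
    cpu: "1"         # throttled above one core
    memory: 512Mi    # the container is OOM-killed above this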

5、Volumes

Temporary volumes, persistent volumes, host directories, config files and secrets can all be defined and made available inside the containers.

spec:
  volumes: # there are many volume types; some common ones are listed below
    - name: string # volume name
      emptyDir: {} # a temporary directory whose lifetime matches the pod's
    - name: string 
      hostPath: # mount a directory from the host
        path: string   # host directory to mount
    - name: string
      nfs:
        server: string # NFS server IP address
        path: string # exported directory to mount
    - name: string
      persistentVolumeClaim: # use an already created persistent volume
        claimName: string # name of the PersistentVolumeClaim
    - name: string
      configMap: # mount a ConfigMap into the container
        name: string # ConfigMap name
        items: # keys to use; each value is written to a file. Several keys may be listed; when referenced by volumeMounts these files end up together under the mount directory, or can be mounted as a single file
          - key: string
            path: string  # file name
    - name: string
      secret: # mount a Secret into the container
        secretName: string
        items:
          - key: string
            path: string

Referencing a defined volume

spec:
  containers:
  ....
    volumeMounts:
       - name: main-conf
         mountPath: /etc/nginx/nginx.conf
         subPath: nginx.conf

6、Health checks

livenessProbe: # the container is restarted if the probe fails
   exec: # run a command or script inside the container; exit code 0 counts as success
     command: [string]
   httpGet: # HTTP GET against the container IP at the given port and path; a 2xx or 3xx response code counts as success
     path: string # request path (URI), e.g. /index.html
     port: number # port
     host: string # host name, defaults to the container IP, usually unset
     scheme: string # protocol, defaults to HTTP, usually unset
     httpHeaders: # custom request headers
       - name: string # header name
         value: string # header value
   tcpSocket: # TCP check against a port; if the port accepts the connection the probe succeeds
     port: number
# probe tuning parameters
   initialDelaySeconds: number # seconds after container start before the first probe
   timeoutSeconds: number # probe response timeout
   periodSeconds: number # probe interval
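
A concrete sketch (path, port and timings are illustrative):

livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10   # wait 10s after the container starts
  periodSeconds: 15         # probe every 15s
  timeoutSeconds: 3         # fail a probe that gets no response within 3s
  failureThreshold: 3       # restart the container after 3 consecutive failures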

7、Other settings

spec:
  restartPolicy: [Always|Never|OnFailure] # restart policy; OnFailure: restart only when the pod exits with a non-zero code
  nodeSelector: # schedule onto nodes carrying this label; label the node first
    key: value # label a node with: kubectl label nodes <node-name> key=value
  imagePullSecrets: # credentials used when pulling images; store them in a Secret first
    - name: string
  hostNetwork: false # whether to use the host network, default false

Pod spec reference: https://zhuanlan.zhihu.com/p/108018157

K8s primer: https://www.cnblogs.com/dukuan/p/11400344.html

Dashboards: kuboard, dashboard, lens

Dashboards


dashboard

0、If an old dashboard exists, delete it first

kubectl -n kubernetes-dashboard  delete $(kubectl -n kubernetes-dashboard  get pod -o name | grep dashboard)
# or
kubectl delete -f recommended.yaml 

1、Download the dashboard manifest

Check which dashboard version matches your k8s version at https://github.com/kubernetes/dashboard/releases

mkdir -p /usr/k8s_yaml/dashboard

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.3.1/aio/deploy/recommended.yaml

This may fail with:
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443… failed: Connection refused.

If so, look up the real IP of raw.githubusercontent.com at https://www.ipaddress.com/, add it to /etc/hosts, then re-run the wget command.

vim /etc/hosts

185.199.108.133 raw.githubusercontent.com
185.199.109.133 raw.githubusercontent.com
185.199.110.133 raw.githubusercontent.com
185.199.111.133 raw.githubusercontent.com

2、What the manifest defines

Open recommended.yaml; it configures the following:

A Namespace named kubernetes-dashboard;

all other resources are created inside this Namespace;

A ServiceAccount named kubernetes-dashboard;
A Service named kubernetes-dashboard;
A Secret named kubernetes-dashboard-certs;
A Secret named kubernetes-dashboard-csrf;
A Secret named kubernetes-dashboard-key-holder;
A ConfigMap named kubernetes-dashboard-settings;
A Role named kubernetes-dashboard;
A ClusterRole named kubernetes-dashboard;
A RoleBinding named kubernetes-dashboard;
A ClusterRoleBinding named kubernetes-dashboard;
A Deployment named kubernetes-dashboard;
A Deployment (and Service) named dashboard-metrics-scraper;

3、Modify the manifest

Change the externally exposed port

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # add
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # add; expose externally on port 30000
  selector:
    k8s-app: kubernetes-dashboard

Change the token expiry

          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            - --token-ttl=43200  # add; tokens now expire after 12 hours

4、Configure certificates (optional; not needed here)

mkdir dashboard-certs

cd dashboard-certs/

openssl genrsa -out dashboard.key 2048

openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'

openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt -days 36500

kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

5、Create the resources

kubectl create -f recommended.yaml

# This fails with: The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Fix:
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile

# Create again and check the rollout:
kubectl create -f recommended.yaml
kubectl get pods --all-namespaces
kubectl get pods -A  -o wide
kubectl get service -n kubernetes-dashboard  -o wide
kubectl get pod,svc -n kubernetes-dashboard # status of the dashboard pods and services
kubectl describe pod podname -n kubernetes-dashboard

6、Create a dashboard admin account

vim dashboard-admin.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f dashboard-admin.yaml

7、Grant permissions

vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard

kubectl create -f dashboard-admin-bind-cluster-role.yaml

8、Get the login token

kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')

9、Access the dashboard

Open https://ip:NodePort
Enter the token

kuboard

Reference: https://www.kuboard.cn/install/v3/install-in-k8s.html#%E5%AE%89%E8%A3%85
1、Install

kubectl apply -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
watch kubectl get pods -n kuboard

Visit http://your-node-ip-address:30080

2、Reset the admin password

kubectl exec -it -n kuboard $(kubectl get pods -n kuboard | grep kuboard-v3 | awk '{print $1}') -- /bin/bash

kuboard-admin reset-password

3、Uninstall

kubectl delete -f https://addons.kuboard.cn/kuboard/kuboard-v3.yaml
rm -rf /usr/share/kuboard

lens

Q: The kubeconfig only contains the internal IP; how do I import it into Lens?

A: Forward the apiserver out over SSL; a simple nginx layer-4 (stream) proxy facing the outside is enough.

Splitting the project into modules and deploying them on k8s


Application categories

The workloads we currently run fall into these categories:

1、Script programs: only one copy needs to run, they change fairly often, and their status must be visible in real time, e.g. schedulers, statistics jobs, monitoring jobs;

2、Services deployed in many copies: they must run on many machines, e.g. crawlers, data analysis, feature extraction, frame extraction;

3、HTTP APIs: exposed to internal or external callers, e.g. system and service interfaces;

Services deployed in many copies gain the most from k8s (deployment, scaling in and out, and live monitoring all become much easier), so we experiment first with one distributed service: the video-information completion service.

Deploying a Deployment (VideoComplete)

1、Project description

VideoComplete is a scrapy-redis project that fills in missing video information;

2、List the project's Python dependencies

Create a folder named video-complete and put the VideoComplete project inside it;
create requirements.txt under video-complete/VideoComplete and edit it:

scrapy==2.5.0
redis==3.5.3
scrapy_redis==0.7.1
kafka-python==2.0.2
beautifulsoup4

3、Build the image

Create a dockerfile under video-complete and edit it:

FROM harbor_ip:port/copyright_monitor/python:3.8
COPY ./ /usr/VideoComplete/
RUN pip install --no-cache-dir -r requirements.txt -i  https://pypi.tuna.tsinghua.edu.cn/simple/
WORKDIR /usr/VideoComplete/VideoComplete

4、Build the image and push it to the private harbor registry

Create build_and_push_image.sh under video-complete:

#!/bin/bash -ilex

echo "Start build image..."
# shellcheck disable=SC2164

docker build -t video-complete .

echo "Build  image successful!"

echo "Start push image..."

docker tag video-complete harbor_ip:port/copyright_monitor/video-complete

docker push harbor_ip:port/copyright_monitor/video-complete

echo "Push image successful!"

5、Copy everything to the server and run the build-and-push script

yum install dos2unix

dos2unix build_and_push_image.sh

bash build_and_push_image.sh

6、Write the yaml and deploy a Deployment to k8s

Reference: https://blog.csdn.net/lixinkuan328/article/details/103993274

What a Deployment does:

  • Defines a desired number of Pods; the controller keeps the actual number equal to it
  • Defines the rollout strategy; the controller updates Pods according to it and keeps the number of unavailable Pods within the allowed range during an update
  • Supports rolling back a bad release

How a Deployment works (a minimal sketch of the desired-state spec follows this list):

  • The actual state is continuously compared with the desired state, and any difference is reconciled
  • Actual state: container and node status reported by kubelet heartbeats, metrics from the monitoring system, and data collected by the controller itself;
    desired state: the yaml submitted by the user, stored in etcd;
  • The controller fetches the actual pods (matched by labels etc.), compares the count with replicas (the desired value) in the yaml, and decides whether to create or delete pods;
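
A minimal sketch of that desired state (names and image are illustrative): the controller keeps comparing the live pods that match the selector against replicas and creates or deletes pods until they agree.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3                 # desired state recorded in etcd
  selector:
    matchLabels:
      app: demo
  template:                   # pod template used whenever a pod must be created
    metadata:
      labels:
        app: demo
    spec:
      containers:
        - name: demo
          image: harbor_ip:port/copyright_monitor/k8s-lab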

How to deploy onto specific nodes:

Method 1: nodeSelector

  • kubectl get node --show-labels shows each node's labels (several key=value pairs separated by commas)
  • You can label a node with kubectl label nodes <node-name> key=value, or just use one of the default labels
  • Set nodeSelector (at the same level as containers in the yaml) to that label so the pod is scheduled onto the matching node (sketch below)
  • Note: this only targets a single node/label; to target several specific nodes use node affinity (method 2)
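
A sketch of method 1 (the label key and value are illustrative); nodeSelector sits at the same level as containers in the pod spec:

spec:
  nodeSelector:
    disktype: ssd        # only nodes labeled disktype=ssd are considered
  containers:
    - name: k8s-lab
      image: harbor_ip:port/copyright_monitor/k8s-lab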

Method 2: node affinity

The pod can be restricted to a chosen set of nodes.

Reference: https://kubernetes.io/zh/docs/concepts/scheduling-eviction/assign-pod-node/

There are two flavors:

a hard requirement (requiredDuringSchedulingIgnoredDuringExecution) and
a soft preference (preferredDuringSchedulingIgnoredDuringExecution)

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - k8s-11
            - k8s-12
            - k8s-13

If we want 3 replicas (replicas: 3) spread across 3 different machines, the spec above only restricts scheduling to those three nodes; it does not guarantee one replica per node, because one node may receive two replicas while another gets none. For that we also need pod anti-affinity:

spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - video-complete
          topologyKey: "kubernetes.io/hostname"
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - k8s-11
            - k8s-12
            - k8s-13

Method 3: set nodeName in the yaml (at the same level as containers)
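
A sketch of method 3 (the node name is illustrative); nodeName bypasses the scheduler entirely and pins the pod to that node:

spec:
  nodeName: k8s-11       # run directly on this node
  containers:
    - name: k8s-lab
      image: harbor_ip:port/copyright_monitor/k8s-lab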

Define deployment.yaml

Everything before template defines the controller; everything inside template defines the pod;

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-video-complete
spec:
  replicas: 3
  selector:
    matchLabels:
      app: video-complete
  template:
    metadata:
      labels:
        app: video-complete
    spec:
      containers:
        - name: video-complete-74
          image: harbor_ip:port/copyright_monitor/video-complete
          imagePullPolicy: Always
          command: ["python","-u","start_0.74.py"]
        - name: video-complete-73
          image: harbor_ip:port/copyright_monitor/video-complete
          imagePullPolicy: Always
          command: [ "python","-u","start_0.73.py" ]
        - name: video-complete-72
          image: harbor_ip:port/copyright_monitor/video-complete
          imagePullPolicy: Always
          command: [ "python","-u","start_0.72.py" ]
        - name: video-complete-71
          image: harbor_ip:port/copyright_monitor/video-complete
          imagePullPolicy: Always
          command: [ "python","-u","start_0.71.py" ]
        - name: video-complete-7
          image: harbor_ip:port/copyright_monitor/video-complete
          imagePullPolicy: Always
          command: [ "python","-u","start_0.7.py" ]
      imagePullSecrets:
      - name: harbor-root
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - video-complete
              topologyKey: "kubernetes.io/hostname"
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - k8s-11
                      - k8s-12
                      - k8s-13

This guarantees the 3 replicas run only on those 3 nodes and that no two replicas share a node;

Create the Deployment

kubectl apply -f deployment.yaml
kubectl get deployments
kubectl get pods -o wide --show-labels  --all-namespaces
kubectl get pods -l app=video-complete --show-labels
kubectl delete deployments deployment-video-complete
kubectl delete -f deployment.yaml
kubectl describe deployment deployment-video-complete

kubectl get deployment showed READY 2/3, i.e. one pod had failed. kubectl get pods -o wide --all-namespaces
showed that deployment-video-complete-56fb54df4d-nm2ds on k8s-11 was READY 0/5, and kubectl describe pod deployment-video-complete-56fb54df4d-nm2ds reported:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "deployment-video-complete-56fb54df4d-nm2ds": operation timeout: context deadline exceeded

error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded

kubectl get pods -o wide --all-namespaces showed that the calico pod on k8s-11 was READY 0/1; in the end I fixed it by removing the calico plugin and reinstalling it

kubectl delete -f calico.yaml
kubectl apply -f calico.yaml

Then delete the deployment and apply it again; after that the rollout succeeded.

kubectl delete -f deployment.yaml
kubectl apply -f deployment.yaml

Deploying a Deployment (scrapyd-spider)

1、Project description

scrapyd-spider is a scrapyd project and the main data source of the business; a scrapyd service needs to run on every machine;

2、List the project's Python dependencies

Create a folder named scrapyd-spider and put ScrapydSpider inside it;

create requirements.txt under scrapyd-spider/ScrapydSpider and edit it:

attrs==19.3.0
Automat==20.2.0
beautifulsoup4==4.8.2
certifi==2019.11.28
cffi==1.14.0
chardet==3.0.4
constantly==15.1.0
cryptography==2.8
cssselect==1.1.0
demjson==2.2.4
fastdtw==0.3.4
hyperlink==19.0.0
idna==2.9
incremental==17.5.0
kafka-python==2.0.1
lxml==4.5.0
numpy==1.18.2
parsel==1.5.2
Protego==0.1.16
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
PyDispatcher==2.0.5
PyHamcrest==2.0.2
pymongo==3.10.1
PyMySQL==0.9.3
pyOpenSSL==19.1.0
python-dateutil==2.8.1
queuelib==1.5.0
redis==3.4.1
requests==2.23.0
Scrapy==2.0.1
scrapy-redis==0.6.8
scrapyd==1.2.1
selenium==3.141.0
service-identity==18.1.0
six==1.14.0
soupsieve==2.0
Twisted==20.3.0
urllib3==1.25.8
w3lib==1.21.0
zope.interface==5.0.1
pillow
mq_http_sdk
elasticsearch
emoji
gne
pypinyin
pyexecjs
pycryptodome

3、Build the image

Create a dockerfile under scrapyd-spider and edit it:

FROM harbor_ip:port/copyright_monitor/python:3.8

RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' >/etc/timezone

RUN wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add -

RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list'

RUN apt-get -y update && apt-get install -y google-chrome-stable && apt-get install -yqq unzip

RUN wget -O /tmp/chromedriver.zip http://chromedriver.storage.googleapis.com/`curl -sS chromedriver.storage.googleapis.com/LATEST_RELEASE`/chromedriver_linux64.zip

RUN unzip /tmp/chromedriver.zip chromedriver -d /usr/local/bin/

ENV DISPLAY=:99

COPY ./ /usr/scrapyd-spider


RUN pip install --no-cache-dir -r /usr/scrapyd-spider/ScrapydSpider/requirements.txt -i  https://pypi.tuna.tsinghua.edu.cn/simple/

RUN cp  /usr/scrapyd-spider/ScrapydSpider/default_scrapyd.conf /usr/local/lib/python3.8/site-packages/scrapyd/default_scrapyd.conf

WORKDIR /usr/scrapyd-spider/ScrapydSpider/

4、Create build_and_push_image.sh under scrapyd-spider and edit it:

#!/bin/bash -ilex

echo "Start build image..."
# shellcheck disable=SC2164

docker build -t scrapyd-spider .

echo "Build image successful!"

echo "Start push image..."

docker tag scrapyd-spider harbor_ip:port/copyright_monitor/scrapyd-spider

docker push harbor_ip:port/copyright_monitor/scrapyd-spider

echo "Push image successful!"


5、Run the script to build and push the image

dos2unix build_and_push_image.sh

bash build_and_push_image.sh

6、Write the yaml and deploy the Deployment to k8s

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-scrapyd-download-video
spec:
  replicas: 10
  selector:
    matchLabels:
      app: scrapyd-download-video
  template:
    metadata:
      labels:
        app: scrapyd-download-video
    spec:
      hostNetwork: true
      restartPolicy: Always
      containers:
        - name: scrapyd-download-video
          image: harbor_ip:port/copyright_monitor/scrapyd-download-video
          imagePullPolicy: Always
          command: ["scrapyd"]

      imagePullSecrets:
      - name: harbor-root
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: app
                    operator: In
                    values:
                      - scrapyd-download-video
              topologyKey: "kubernetes.io/hostname"

Run the deploy commands

kubectl apply -f deployment-scrapyd-download-video.yaml
kubectl get pods -o wide --all-namespaces

One pod's STATUS stays at ContainerCreating; inspect that pod:

kubectl describe pod deployment-scrapyd-download-video-6c847d4668-pbl7k

It reports the following errors:

Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "deployment-scrapyd-download-video-6c847d4668-pbl7k": operation timeout: context deadline exceeded

error determining status: rpc error: code = DeadlineExceeded desc = context deadline exceeded

On that node, check the system journal

journalctl -xe

which shows the error:

stream copy error: reading from a closed fifo

I did not find a proper fix; after deleting the deployment and applying it again the error went away;

Deploying an API service

Reference: https://blog.csdn.net/M2l0ZgSsVc7r69eFdTj/article/details/79988685?utm_medium=distribute.pc_relevant.none-task-blog-2defaultbaidujs_title~default-0.no_search_link&spm=1001.2101.3001.4242

Ways to expose it: hostNetwork, NodePort, LoadBalancer and Ingress
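
For reference, a NodePort Service sketch for exposing an in-cluster API (the name, selector and ports are assumptions, not taken from the project):

apiVersion: v1
kind: Service
metadata:
  name: api-service
spec:
  type: NodePort
  selector:
    app: api-server          # pods carrying this label receive the traffic
  ports:
    - port: 8000             # cluster-internal service port
      targetPort: 8000       # container port
      nodePort: 30090        # exposed on every node's IP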

docker container prune

Building a slim Python image

Three places can be trimmed:

pip3 install --no-cache-dir -r /tmp/requirements.txt -i https://mirrors.aliyun.com/pypi/simple/

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        python3-qrcode \
        python3-renderpm \
        python3-psycopg2 \
        python3-babel \
        python3-jinja2 \
        python3-pip \
        python3-wheel \
    && pip3 install --no-cache-dir -r /tmp/requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ \
    && rm -rf /var/lib/apt/lists/*

apt-get autoremove -y
apt-get remove -y python3-pip

K8s CI/CD options

gitlab CICD

argo

jenkins
jenkinsX

gogs

drone

Tekton

Using GitLab CI/CD for k8s CI/CD


CI/CD for k8s across two data centers

We currently have two data centers; my dual-data-center k8s CI/CD setup is:

Data center 1: 40 machines running a k8s cluster, GitLab and Prometheus; install and register a gitlab-runner on a machine in data center 1;

Data center 2: 15 machines running a separate k8s cluster (independent of data center 1's); install and register a gitlab-runner on the k8s-master machine;

Installing and registering gitlab-runner on the masters of both data centers

1、Install git on the machines that will run gitlab-runner

yum remove git

vim /etc/yum.repos.d/wandisco-git.repo

[wandisco-git]

name=Wandisco GIT Repository

baseurl=http://opensource.wandisco.com/centos/7/git/$basearch/

enabled=1

gpgcheck=1

gpgkey=http://opensource.wandisco.com/RPM-GPG-KEY-WANdisco

# save

rpm --import http://opensource.wandisco.com/RPM-GPG-KEY-WANdisco

yum -y install git
yum update git
git --version
# git version 2.31.1

2、Remove any old gitlab-runner

gitlab-runner stop
chkconfig gitlab-runner off
gitlab-runner uninstall

# clean up leftover files
rm -rf /etc/gitlab-runner
rm -rf /usr/local/bin/gitlab-runner
rm -rf /usr/bin/gitlab-runner
rm -rf /etc/sudoers.d/gitlab-runner

yum remove gitlab-runner

3、Install and register gitlab-runner (binary install)

https://docs.gitlab.com/runner/install/linux-manually.html

Binary install

curl -LJO "https://gitlab-runner-downloads.s3.amazonaws.com/latest/rpm/gitlab-runner_amd64.rpm"

rpm -i gitlab-runner_amd64.rpm

Register

sudo gitlab-runner register
# 1、Enter the GitLab URL (Menu => Admin => Overview => Runners)
# 2、Enter the registration token (Menu => Admin => Overview => Runners)
# 3、Enter a description (gitlab-runner-shared)
# 4、Enter a tag (gitlab-runner-shared)
# 5、Choose an executor (shell)

4、Add gitlab-runner to the docker group

Check whether gitlab-runner is allowed to access docker

sudo -u gitlab-runner -H docker info

Add the gitlab-runner user to the docker group and verify again

sudo usermod -aG docker gitlab-runner

sudo -u gitlab-runner -H docker info

cat /etc/group

sudo chmod a+rw /var/run/docker.sock

5、Set up passwordless SSH from the gitlab-runner user on the runner host to the other servers (the k8s-master hosts in the dual-data-center setup)

On the gitlab-runner host:

ps aux|grep gitlab-runner

su gitlab-runner
ssh-keygen -t rsa
cd  /home/gitlab-runner/.ssh
ls
# copy to root on this machine
cat /home/gitlab-runner/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys

# copy to the other machines
ssh-copy-id -i ~/.ssh/id_rsa.pub -p port username@remote_ip

6、Set gitlab-runner up as a system service that starts on boot

7、On the k8s master machine run chmod 666 /etc/kubernetes/admin.conf

CI/CD for the video-complete project

1、Create a new project named video-complete in GitLab

2、In the local video-complete folder, run

git init --initial-branch=main
git remote add origin git@gitlab_ip:Ezrealer/video-complete.git
git add .
git commit -m "Initial commit"
git push -u origin main

3、Refresh GitLab; the project has been pushed successfully

4、Create a .gitlab-ci.yml file in the project root and edit it

Ways to update a k8s deployment:

  • kubectl apply -f deployment.yaml (the yaml must actually change)
  • kubectl delete -f deployment.yaml && kubectl apply -f deployment.yaml (any update takes effect)
  • kubectl rollout restart deployment deployment-video-complete (only picks up image updates; needs imagePullPolicy: Always)
  • kubectl scale deployment XXXX --replicas=0 -n {namespace} && kubectl scale deployment XXXX --replicas=1 -n {namespace} (only picks up image updates; needs imagePullPolicy: Always)
  • kubectl set image deployment/nginx nginx=nginx:1.16.1 --record (only for image updates)
  • via kubectl patch deployment:
kubectl patch deployment <deployment-name> \
  -p '{"spec":{"template":{"spec":{"containers":[{"name":"<container-name>","env":[{"name":"RESTART_","value":"'$(date +%s)'"}]}]}}}}'

.gitlab-ci.yml:

variables:
  IMAGE_NAME: "${HARBOR_REPOSITORY}/copyright_monitor/video-complete:${CI_COMMIT_SHORT_SHA}"
stages:
  - build
  - deploy

build-and-push-image:
  stage: build
  script:
    - pwd
    - ls
    - echo ${IMAGE_NAME}
    - echo $HARBOR_PASSWORD | docker login -u $HARBOR_USERNAME --password-stdin $HARBOR_REPOSITORY
    - docker build -t ${IMAGE_NAME} .
    # - docker tag video-complete ${IMAGE_TAG_NAME}
    - docker push ${IMAGE_NAME}
    - sleep 5

  tags:
    - century-computer-room

deploy-to-k8s:
  stage: deploy
  script:
    - pwd
    - ls
    - echo ${IMAGE_NAME}
    - envsubst < deployment-video-complete.yaml | kubectl apply -f -
  tags:
    - ruide-computer-room

5、Passing .gitlab-ci.yml variables into deployment.yaml

https://blog.csdn.net/kingwinstar/article/details/116976310
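
The idea, as a sketch (assuming the manifest references the same variable name exported by the CI job): leave ${IMAGE_NAME} as a placeholder in deployment-video-complete.yaml and let envsubst fill it in before piping to kubectl, as the deploy-to-k8s job above does.

# deployment-video-complete.yaml (excerpt)
spec:
  template:
    spec:
      containers:
        - name: video-complete
          image: ${IMAGE_NAME}   # replaced by envsubst in the deploy-to-k8s job
          imagePullPolicy: Always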

More CI/CD tips

https://blog.csdn.net/weixin_36572983/article/details/86680515

Images and containers


References

https://www.cnblogs.com/wuchangblog/default.html

https://www.cnblogs.com/diantong/

Building an image, step by step

First, the base image: which OS it is based on, e.g. CentOS 7 or something else.

Second, the middleware image: a service image on top of the base, such as nginx or tomcat.

Third, the project image: built on top of the service image with your project packaged in, so the project runs inside that service image.

Usually ops builds these images in advance and developers use them directly; the image must match the target deployment environment.

Three kinds of data a container produces (see the volume sketch below)

Initial data needed at start-up, e.g. configuration files;

Temporary data produced while running, which may need to be shared between several containers;

Persistent data produced while running;
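
A sketch mapping the three kinds of data onto volume types (names are illustrative): a ConfigMap for start-up configuration, an emptyDir for shared temporary data, and a PersistentVolumeClaim for data that must outlive the pod.

spec:
  volumes:
    - name: app-config
      configMap:
        name: app-config           # initial configuration needed at start-up
    - name: scratch
      emptyDir: {}                 # temporary data shared between the pod's containers
    - name: app-data
      persistentVolumeClaim:
        claimName: app-data-pvc    # persistent data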
