First, to make the following experiments easier, restore the k8s string that Section 6 changed in the web-demo project's src/main/java/com/mooc/demo/controller/DemoController.java back to hello, then rebuild the image hub.lzxlinux.cn/kubernetes/web:latest and push it to the registry hub.lzxlinux.cn.


Ingress-Nginx

We used Ingress for simple service discovery earlier, but several problems remain; that setup is still far from ready for production use.

Changing the deployment method

Previously the ingress controller was deployed as a Deployment, which is not a good fit. A DaemonSet works better:

# kubectl get deployments -n ingress-nginx

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
default-http-backend       1/1     1            1           12d
nginx-ingress-controller   3/3     3            3           12d
# kubectl get deploy -n ingress-nginx nginx-ingress-controller -o yaml > nginx-ingress-controller.yaml

# vim nginx-ingress-controller.yaml
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  revisionHistoryLimit: 2147483647
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          hostPort: 443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          procMount: Default
          runAsUser: 33
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      hostNetwork: true
      nodeSelector:
        app: ingress
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: nginx-ingress-serviceaccount
      serviceAccountName: nginx-ingress-serviceaccount
      terminationGracePeriodSeconds: 30
# kubectl delete deploy -n ingress-nginx nginx-ingress-controller

# kubectl apply -f nginx-ingress-controller.yaml

# kubectl get ds -n ingress-nginx

NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
nginx-ingress-controller   3         3         0       3            0           app=ingress     24s

# kubectl get pods -n ingress-nginx -o wide

NAME                                    READY   STATUS    RESTARTS   AGE     IP             NODE    NOMINATED NODE   READINESS GATES
default-http-backend-5c9bb94849-mlvjq   1/1     Running   4          13d     172.10.5.248   node3   <none>           <none>
nginx-ingress-controller-s46x6          1/1     Running   0          3m52s   192.168.1.55   node2   <none>           <none>
nginx-ingress-controller-scz5c          1/1     Running   0          3m52s   192.168.1.56   node3   <none>           <none>
nginx-ingress-controller-vw8nz          1/1     Running   0          3m52s   192.168.1.54   node1   <none>           <none>

nginx-ingress-controller runs on any node that carries the label app: ingress:

kubectl label nodes <nodename> app=ingress
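For the three-node cluster used here, labeling would look roughly like the sketch below (node names are taken from the pod listing above):

# kubectl label nodes node1 app=ingress
# kubectl label nodes node2 app=ingress
# kubectl label nodes node3 app=ingress

# kubectl get nodes -L app              # the extra column confirms the label is in place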

Layer-4 (TCP) proxying

When a TCP service needs to be exposed to the outside world, how can it be discovered through Ingress?

Redeploy the web-demo project:

# vim web-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: dev
spec:
  selector:
    matchLabels:
      app: web-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web-demo
        image: hub.lzxlinux.cn/kubernetes/web:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo
  namespace: dev
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-demo
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-demo
          servicePort: 80
# kubectl apply -f web-dev.yaml

# kubectl get all -n dev

NAME                             READY   STATUS    RESTARTS   AGE
pod/dubbo-demo-598bff497-64lxn   1/1     Running   0          2m18s
pod/web-demo-786c69fdf4-2h9fh    1/1     Running   0          7s

NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
service/web-demo   ClusterIP   10.101.14.46   <none>        80/TCP    7s

NAME                         READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/dubbo-demo   1/1     1            1           2m18s
deployment.apps/web-demo     1/1     1            1           8s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/dubbo-demo-598bff497   1         1         1       2m18s
replicaset.apps/web-demo-786c69fdf4    1         1         1       8s

Open it in a browser:


Next, create the tcp-services ConfigMap, which maps exposed ports to backend services:

# kubectl get cm -n ingress-nginx

NAME                              DATA   AGE
ingress-controller-leader-nginx   0      13d
nginx-configuration               0      13d
tcp-services                      0      13d
udp-services                      0      13d

# vim tcp-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "30000": dev/web-demo:80              #暴露dev命令空间的web-demo服务
# kubectl apply -f tcp-config.yaml

On any node, check the port with netstat -lntp | grep 30000; you will see that port 30000 is now listening. Access the service through that port:
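A quick check from one node, sketched here (192.168.1.54 is node1's IP from the pod listing above; adjust for your environment):

# netstat -lntp | grep 30000              # the ingress controller now listens on 30000
# curl http://192.168.1.54:30000/hello?name=test              # plain HTTP over the exposed TCP port, no Host header needed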


This achieves the goal of layer-4 proxying.


Customizing the ingress-nginx configuration
# vim nginx-cm.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app: ingress-nginx
data:
  proxy-body-size: "64m"
  proxy-read-timeout: "180"
  proxy-send-timeout: "180"
# kubectl apply -f nginx-cm.yaml

Pick any node and exec into the ingress-nginx container to inspect the generated nginx configuration:

# docker ps |grep ingress-nginx

# docker exec -it fe1 bash

www-data@node1:/etc/nginx$ cat nginx.conf|grep "64m"
			client_max_body_size                    "64m";
			client_max_body_size                    "64m";
			
www-data@node1:/etc/nginx$ cat nginx.conf|grep "180"
			proxy_send_timeout                      180s;
			proxy_read_timeout                      180s;
			proxy_send_timeout                      180s;
			proxy_read_timeout                      180s;
		listen 18080 default_server reuseport backlog=511;
		listen [::]:18080 default_server reuseport backlog=511;

As shown, the custom settings have taken effect.


Configuring HTTPS on the Ingress
# openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout lzxlinux.key -out lzxlinux.crt -subj "/CN=*.lzxlinux.cn/O=*.lzxlinux.cn"

# kubectl create secret tls lzxlinux-tls --key lzxlinux.key --cert lzxlinux.crt -n dev

# kubectl get secrets -n dev

NAME                  TYPE                                  DATA   AGE
default-token-6jlgk   kubernetes.io/service-account-token   3      71m
lzxlinux-tls          kubernetes.io/tls                     2      7s

# vim nginx-ingress-controller.yaml
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx
        - --annotations-prefix=nginx.ingress.kubernetes.io
        - --default-ssl-certificate=dev/lzxlinux-tls                # specify the default certificate
# kubectl apply -f nginx-ingress-controller.yaml

# vim web-dev.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-demo
  namespace: dev
spec:
  selector:
    matchLabels:
      app: web-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web-demo
        image: hub.lzxlinux.cn/kubernetes/web:latest
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-demo
  namespace: dev
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-demo
  type: ClusterIP
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-demo
          servicePort: 80
  tls:
    - hosts:
      - web.lzxlinux.cn
      secretName: lzxlinux-tls
# kubectl apply -f web-dev.yaml

Open it in a browser:


Because the certificate is self-signed, the browser flags the site as not secure, but the HTTPS configuration itself works.
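The same can be checked from the command line; this is just a sketch, with -k skipping certificate verification since the cert is self-signed:

# curl -k https://web.lzxlinux.cn/hello?name=test

# curl -kv https://web.lzxlinux.cn/ 2>&1 | grep subject              # should show the CN=*.lzxlinux.cn subject of the default certificate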


Session affinity

Create another Deployment using the springboot-web image, so the two backends are easy to tell apart.

# vim springboot-web.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: springboot-web-demo
  namespace: dev
spec:
  selector:
    matchLabels:
      app: web-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: web-demo
    spec:
      containers:
      - name: web-demo
        image: hub.lzxlinux.cn/kubernetes/springboot-web:latest
        ports:
        - containerPort: 8080
# kubectl apply -f springboot-web.yaml

# kubectl get pods -n dev

NAME                                  READY   STATUS    RESTARTS   AGE
dubbo-demo-598bff497-64lxn            1/1     Running   0          95m
springboot-web-demo-f9747858b-wspnn   1/1     Running   0          16s
web-demo-786c69fdf4-2h9fh             1/1     Running   0          93m

# while sleep 1; do curl -k "https://web.lzxlinux.cn/hello?name=test"; echo ""; done

Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hello test! This is my dubbo service! This is the Web Service CI/CD!

Because this Deployment also carries the label app: web-demo, the web-demo Service now load-balances across both backends, and the two responses alternate roughly half and half. Next, make the session stick:

# vim web-demo-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/session-cookie-hash: sha1
    nginx.ingress.kubernetes.io/session-cookie-name: route              # custom cookie name
  name: web-demo
  namespace: dev
spec:
  rules:
  - host: web.lzxlinux.cn
    http:
      paths:
      - backend:
          serviceName: web-demo
          servicePort: 80
        path: /
  tls:
    - hosts:
      - web.lzxlinux.cn
      secretName: lzxlinux-tls
# kubectl apply -f web-demo-ingress.yaml


Now the same backend answers no matter how often you refresh the page, until you close the browser or delete the cookie.
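The stickiness can also be confirmed with curl by replaying the route cookie that the controller sets (a sketch; the cookie-jar path is arbitrary):

# curl -k -c /tmp/route.cookie "https://web.lzxlinux.cn/hello?name=test"              # the first request receives the route cookie

# for i in 1 2 3; do curl -k -b /tmp/route.cookie "https://web.lzxlinux.cn/hello?name=test"; echo ""; done              # replaying the cookie keeps hitting the same pod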


Traffic control
# kubectl edit ds -n ingress-nginx nginx-ingress-controller             # upgrade the controller image version

image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0

# kubectl create ns canary

# vim web-canary-a.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary-a
  namespace: canary
spec:
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector:
    matchLabels:
      app: web-canary-a
  replicas: 1
  template:
    metadata:
      labels:
        app: web-canary-a
    spec:
      containers:
      - name: web-canary-a
        image: hub.lzxlinux.cn/kubernetes/web:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 8080
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 2
          successThreshold: 1
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /hello?name=test
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 20
          periodSeconds: 10
          failureThreshold: 2
          successThreshold: 1
          timeoutSeconds: 5
      imagePullSecrets:
      - name: hub-secret
---
apiVersion: v1
kind: Service
metadata:
  name: web-canary-a
  namespace: canary
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: web-canary-a
  type: ClusterIP
# kubectl apply -f web-canary-a.yaml

# cp web-canary-a.yaml web-canary-b.yaml

# sed -i 's/web-canary-a/web-canary-b/g' web-canary-b.yaml

# sed -i 's/web:latest/springboot-web:latest/g' web-canary-b.yaml

# kubectl apply -f web-canary-b.yaml

# kubectl get pods -n canary

NAME                           READY   STATUS    RESTARTS   AGE
web-canary-a-cd4fc8895-6n79q   1/1     Running   0          2m38s
web-canary-b-554cdcc49-s7j64   1/1     Running   0          100s

# vim ingress-common.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-a
  namespace: canary
spec:
  rules:
  - host: canary.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-a
          servicePort: 80
# kubectl apply -f ingress-common.yaml

Add a local DNS entry to the hosts file on the Windows workstation:

192.168.1.54 canary.lzxlinux.cn


The request reaches the web-canary-a service. Next, bring the web-canary-b service online and route 10% of the traffic to it:

# vim ingress-weight.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: canary.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80
# kubectl apply -f ingress-weight.yaml

# echo '192.168.1.54 canary.lzxlinux.cn' >> /etc/hosts

# while sleep 1; do curl 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!

# kubectl edit -f ingress-weight.yaml -n canary             # make the change below to route 90% of the traffic to web-canary-b

nginx.ingress.kubernetes.io/canary-weight: "90"

# while sleep 1; do curl 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!

That is precise, weight-based traffic control. Traffic can also be directed selectively with a cookie:

# vim ingress-cookie.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
spec:
  rules:
  - host: canary.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80
# kubectl apply -f ingress-cookie.yaml

# while sleep 1; do curl 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!

# while sleep 1; do curl --cookie "web-canary=always" 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!

This achieves cookie-based targeted traffic control, which is convenient for testing. The same can be done with a request header:

# vim ingress-header.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
spec:
  rules:
  - host: canary.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80
# kubectl apply -f ingress-header.yaml

# while sleep 1; do curl --cookie "web-canary=always" 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!
Hello test! This is my dubbo service! This is the Web Service CI/CD!

# while sleep 1; do curl -H "web-canary: always" 'http://canary.lzxlinux.cn/hello?name=test'; echo ""; done

Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!
Hi test! Cicd for the springboot-web-demo project in k8s!

As shown, the header-based approach behaves much like the cookie-based one.

Weight, cookie, and header rules can be combined; the precedence is header > cookie > weight. A combined example is sketched below.
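A sketch of the web-canary-b ingress using all three at once (the annotation values are illustrative; a matching header wins first, then the cookie, and only the remaining traffic is split by weight):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: web-canary-b
  namespace: canary
  annotations:
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-by-header: "web-canary"
    nginx.ingress.kubernetes.io/canary-by-cookie: "web-canary"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: canary.lzxlinux.cn
    http:
      paths:
      - path: /
        backend:
          serviceName: web-canary-b
          servicePort: 80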


Shared storage

Stateless applications migrate to Kubernetes easily; stateful applications, however, raise the question of how their data is stored.

PV

PV stands for Persistent Volume. PVs are created statically by a cluster administrator or dynamically through a StorageClass. A PV is a resource in the cluster, just as a node is, and it persists with a lifecycle independent of any individual Pod that uses it.

  • Provisioning modes:
Static:

A cluster administrator creates a number of PVs up front. They carry the details of the real backing storage, exist in the Kubernetes API, and are ready for cluster users to consume.

Dynamic:

When none of the administrator's static PVs matches a user's PVC, the cluster may try to dynamically provision a dedicated volume for that PVC.

This mode relies on a StorageClass: the PVC must request a StorageClass, and that StorageClass must already have been created and configured by an administrator for dynamic provisioning to happen.
  • Reclaim policies:
Retain  keep the volume; when the PVC is deleted the PV still exists and must be reclaimed manually

Delete  delete the PV together with the external storage asset (e.g. AWS EBS, GCE PD, Azure Disk)

Recycle scrub the volume (rm -rf /thevolume/*) and make the PV available to a new PVC
  • Access modes:
ReadWriteOnce   the volume can be mounted read-write by a single node

ReadOnlyMany    the volume can be mounted read-only by many nodes

ReadWriteMany   the volume can be mounted read-write by many nodes

Note: a volume can only be mounted with one access mode at a time, even if it supports several.

  • Example:
apiVersion: v1
kind: PersistentVolume              # kind: PV
metadata:
  name: nfs             # PV name
spec:
  capacity:
    storage: 10Gi               # storage capacity
  accessModes:
    - ReadWriteOnce             # read-write, mountable by a single node
  persistentVolumeReclaimPolicy: Recycle                # reclaim policy
  storageClassName: nfs             # storage class name
  nfs:
    path: "/tmp"                # NFS export path
    server: 172.22.1.2

PVC

PVC stands for Persistent Volume Claim, a user's request for storage. It is analogous to a Pod: Pods consume node resources, while PVCs consume PV resources. A Pod can request specific amounts of CPU and memory; a PVC can request a specific volume size and access mode.

A PVC and a PV bind one to one. The PV must satisfy the PVC's requirements (storage size, access mode, and so on), and their storageClassName must match.

  • Example:
apiVersion: v1
kind: PersistentVolumeClaim             #类型PVC
metadata:
  name: nfsclaim                #PVC名
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs
  resources:
    requests:
      storage: 10Gi
  • Example of a Pod using a PVC:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"
        name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim

The Pod accesses storage by using the PVC as a volume. The PVC must live in the same namespace as the Pod that uses it; the cluster looks the PVC up in the Pod's namespace and uses it to obtain the PV bound to it.

A Pod can use several PVCs, and one PVC can serve several Pods at the same time, but a PVC binds to only one PV, and a PV maps to only one kind of backend storage. A sketch of a Pod mounting two PVCs follows.
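A minimal sketch of a Pod that mounts two PVCs at once (the claim names html-claim and log-claim are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: mypod-multi
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
      - mountPath: "/var/www/html"              # content served by nginx, from the first claim
        name: html
      - mountPath: "/var/log/nginx"              # nginx logs, from the second claim
        name: logs
  volumes:
    - name: html
      persistentVolumeClaim:
        claimName: html-claim
    - name: logs
      persistentVolumeClaim:
        claimName: log-claim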



StorageClass

A StorageClass defines a class of storage and creates PVs automatically to satisfy PVCs, providing dynamic binding between PVs and PVCs.

A PV and a PVC can only bind if their storage classes match. Every PV and PVC has a storageClassName field, which may be left empty; a PVC with an empty storageClassName requests a PV whose storageClassName is also empty, i.e. a PV with no class.

Every StorageClass has provisioner, parameters, and reclaimPolicy fields, which are used when the StorageClass needs to dynamically provision a PersistentVolume.

PVs dynamically created by a StorageClass take the reclaim policy specified in its reclaimPolicy field, which can be Delete or Retain. If no reclaimPolicy is specified when the StorageClass object is created, it defaults to Delete. PVs that were created manually and are managed through a StorageClass keep whatever reclaim policy they were assigned at creation.

  • Example:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate

A PV can be made expandable: when allowVolumeExpansion is set to true, users can resize a volume by editing the corresponding PVC object, as sketched below.
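A minimal sketch of such a resize, assuming a PVC named nfsclaim (as in the earlier example) that is bound through an expandable StorageClass; the new size is illustrative:

# kubectl patch pvc nfsclaim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'              # bump the request; the volume and PVC status grow once the plugin finishes expanding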


StatefulSet

In Kubernetes, a Deployment is suited to one class of application:

Stateless

Pods are interchangeable

No ordering requirements

A StatefulSet, on the other hand, is suited to another class of application:

Ordering matters

Each replica has its own distinct persistent storage

A StatefulSet is the workload API object for managing stateful applications. It manages the deployment and scaling of a set of Pods and provides guarantees about the ordering and uniqueness of those Pods.

Like a Deployment, a StatefulSet manages Pods that are based on an identical container spec. The Pods are created from the same spec but are not interchangeable: each has a persistent identifier that it keeps across any rescheduling.

  • Example:
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "my-storage-class"
      resources:
        requests:
          storage: 1Gi

Above, volumeClaimTemplates is a template for creating PVCs: one PVC is created automatically per replica (binding a PV in turn), which gives each Pod stable storage.
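Assuming my-storage-class can actually provision volumes, the StatefulSet above ends up with one PVC per replica, named <template>-<statefulset>-<ordinal>; a sketch of checking them:

# kubectl get pvc              # expect www-web-0, www-web-1 and www-web-2, each requesting 1Gi

# kubectl get pv              # one dynamically provisioned PV bound to each claim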

