Why do we need data volumes?

Files in a container are stored on disk only temporarily, which creates problems for any important application running in a container.

• Problem 1: when a container is upgraded or crashes, the kubelet recreates it, and all files inside the container are lost.

• Problem 2: multiple containers running in the same Pod often need to share files.

The Kubernetes Volume abstraction solves both of these problems.
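As a minimal sketch of Problem 2 (all names here are hypothetical, not from the manifest below), an emptyDir volume lets two containers in one Pod share files for the lifetime of the Pod:

```yaml
# Hypothetical example: two containers share files through an emptyDir volume.
apiVersion: v1
kind: Pod
metadata:
  name: shared-files-demo        # hypothetical name
spec:
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /data/msg; sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  - name: reader
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data           # same volume, so /data/msg is visible here
  volumes:
  - name: shared
    emptyDir: {}                 # exists as long as the Pod exists
```

Unlike hostPath, an emptyDir is deleted when the Pod is removed, so it is suited to sharing, not persistence.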

hostPath volume: mounts a file or directory from the node's filesystem (the node the Pod is scheduled on) into a container in the Pod.

Use case: a container in the Pod needs to access files on the host machine.

In the YAML file:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: htf-unify-data-service
  namespace: prod
  labels:
    app: htf-unify-data-service
  annotations:
    deployment.kubernetes.io/revision: "1"
spec:
  selector:
    matchLabels:
      app: htf-unify-data-service
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
  template:
    metadata:
      labels:
        app: htf-unify-data-service
    spec:
      nodeSelector:
        type: network
      containers:
        - name: htf-unify-data-service
          image: k8s-registry.qhtx.local/haitong/htf-unify-data-service-0.0.6-snapshot:7574
          imagePullPolicy: Always
          resources:
            requests:
              cpu: 200m
              memory: 1000Mi
            limits:
              cpu: 8000m
              memory: 10000Mi
          ports:
            - name: http
              containerPort: 8080
              protocol: TCP
          env:
            - name: LANG
              value: en_US.utf8
            - name: LC_ALL
              value: en_US.utf8
          volumeMounts:
          - name: varlibdockercontainers
            mountPath: /var/lib/docker/containers
          - name: timezone
            mountPath: /etc/localtime
          - name: logs
            mountPath: /root/logs
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: logs
        hostPath:
          path: /root/logs
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: htf-unify-data-service
  name: htf-unify-data-service
  namespace: prod
spec:
  ports:
  - name: htf-unify-data-service
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: htf-unify-data-service
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

In this manifest:

          volumeMounts:
          - name: logs
            mountPath: /root/logs   # path inside the container
      volumes:
      - name: logs
        hostPath:
          path: /root/logs          # path on the host (the node)
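Note that this manifest leaves the hostPath `type` field unset, so Kubernetes performs no check on the host path before mounting it. A variant (shown here only as an illustrative sketch, not part of the original manifest) makes the intent explicit:

```yaml
      volumes:
      - name: logs
        hostPath:
          path: /root/logs
          type: DirectoryOrCreate   # create /root/logs on the node if it does not exist
```

Other `type` values such as `Directory` or `File` instead fail the mount if the path does not already exist in the expected form.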

To see which node each Pod of the application was scheduled to, run:

[root@qhtx-k8s-master-001 yaml]# kubectl get pods -n prod -o wide
NAME                                                   READY   STATUS    RESTARTS   AGE    IP             NODE                NOMINATED NODE   READINESS GATES
htf-unify-data-service-c44d655cc-fjfbk                 1/1     Running   0          38m    10.226.97.9    qhtx-k8s-node-002   <none>           <none>
htf-unify-data-service-c44d655cc-wprk8                 1/1     Running   0          38m    10.226.96.16   qhtx-k8s-node-001   <none>           <none>

This shows the application's Pods are running on nodes qhtx-k8s-node-002 and qhtx-k8s-node-001.

[root@qhtx-k8s-node-001 ~]# cd /root/logs/
[root@qhtx-k8s-node-001 logs]# ls
nacos

The /root/logs directory is created automatically on qhtx-k8s-node-001 and qhtx-k8s-node-002, and files the container writes under that path appear on the host. Even after the container is destroyed, the files remain on the node.
