Volumes (storage mounts)

EmptyDir

An emptyDir volume is used to share data among containers in the same Pod; it is created empty when the Pod is assigned to a node and removed together with the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:1.15.12-alpine
    name: test-container
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
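Since the point of emptyDir is cross-container sharing, a sketch with two containers mounting the same volume makes the use case clearer (the container names and the busybox sidecar here are illustrative, not from the original example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-data-pod
spec:
  containers:
  - image: nginx:1.15.12-alpine
    name: writer
    volumeMounts:
    - mountPath: /cache        # both containers see the same directory
      name: cache-volume
  - image: busybox
    name: reader
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /data         # the mount path may differ per container
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
```

A file written under /cache by the writer container is immediately visible under /data in the reader container.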

hostPath

Mounts a file or directory from the host node's filesystem into the Pod.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:1.15.12-alpine
    name: test-container
    volumeMounts:
    - mountPath: /etc/timezone
      name: timezone
  volumes:
  - name: timezone
    hostPath: 
      path: /etc/timezone
      type: File
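The type field controls the check performed before mounting: File requires an existing file, Directory an existing directory, and DirectoryOrCreate/FileOrCreate create the path if it is missing. A sketch using DirectoryOrCreate (the /data/app-logs path and volume name are illustrative):

```yaml
  volumes:
  - name: app-logs
    hostPath:
      path: /data/app-logs
      type: DirectoryOrCreate   # created on the node if it does not already exist
```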

Install NFS

#install the NFS client on every node
yum install -y nfs-utils

#install the NFS server packages on one master node
yum install -y nfs-utils rpcbind

#configure export permissions on the NFS server node
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports
mkdir -p /nfs/data
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03

systemctl enable rpcbind --now
systemctl enable nfs-server --now
#reload the export configuration
exportfs -r

#list the exported directories
exportfs
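The options in the exports line above mean: rw (read-write), sync (commit writes to disk before replying), no_root_squash (do not map client root to the anonymous user), insecure (accept requests from ports above 1024). As a quick sanity check, the path and option list of an exports-style line can be separated with shell parameter expansion:

```shell
# parse an /etc/exports-style line into its path and option list
line='/nfs/data/ *(insecure,rw,sync,no_root_squash)'
path=${line%% *}                   # everything before the first space
opts=${line#*\(}; opts=${opts%\)}  # text between the parentheses
echo "$path"   # → /nfs/data/
echo "$opts"   # → insecure,rw,sync,no_root_squash
```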

On the worker nodes

#mount the NFS server's shared directory at the local path /nfs/data
mkdir -p /nfs/data

#mount the directory exported by the master node
mount -t nfs 192.168.99.200:/nfs/data /nfs/data
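The mount command above does not survive a reboot; to make the mount permanent, an /etc/fstab entry can be added on each client (a sketch, assuming the same server address):

```
192.168.99.200:/nfs/data  /nfs/data  nfs  defaults  0 0
```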

Mounting NFS in a Pod

Write the manifest: vi deploy-nfs.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx:1.15.12-alpine
    name: test-container
    volumeMounts:
    - mountPath: /opt
      name: nfs-volume
  volumes:
  - name: nfs-volume
    nfs: 
      server: 192.168.99.200
      path: /nfs/data/test-dp

Note that the path must be a directory that already exists under the exported /nfs/data (e.g. mkdir -p /nfs/data/test-dp on the server).

PV & PVC

Drawbacks of mounting storage directly this way

  1. The directory must be created manually in advance
  2. When the Pod is deleted, the directory is not removed automatically
  3. Disk usage cannot be limited, and permissions cannot be controlled

PV: Persistent Volume — a piece of storage, at a specified location, that holds data an application needs to persist
PVC: Persistent Volume Claim — a declaration of the persistent-volume specification an application needs

Create the PVs: vi pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 192.168.99.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 192.168.99.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 192.168.99.200

The same storageClassName can be used by multiple PVs.
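Also note the capacity units in the manifests above: a plain M suffix is decimal (10^6 bytes), while Mi/Gi are binary (2^20 and 2^30 bytes). The difference is easy to check:

```shell
# Kubernetes quantity suffixes: M is decimal, Mi/Gi are binary
echo $((10 * 1000 * 1000))          # 10M  → 10000000 bytes
echo $((10 * 1024 * 1024))          # 10Mi → 10485760 bytes
echo $((3 * 1024 * 1024 * 1024))    # 3Gi  → 3221225472 bytes
```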

kubectl apply -f pv.yaml

#list the PVs that were created
kubectl get persistentvolume

#all PVs should be in the Available state
kubectl get pv

PV reclaim policies

Retain: keep the volume; resources are reclaimed manually
Recycle: basic scrub of the volume (deprecated in favor of dynamic provisioning)
Delete: delete the underlying storage asset
Configured via the persistentVolumeReclaimPolicy field, e.g. persistentVolumeReclaimPolicy: Recycle
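For example, a PV that should keep its data after its claim is released sets the policy in its spec (a fragment in the style of the pv.yaml above):

```yaml
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # keep the volume's data after the PVC is deleted
  storageClassName: nfs
```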

PV access modes

ReadWriteOnce (RWO): read-write by a single node
ReadOnlyMany (ROX): read-only by many nodes
ReadWriteMany (RWX): read-write by many nodes

PV states

Available: a free resource not yet bound to any PVC
Bound: bound to a PVC
Released: the PVC has been deleted, but the resource has not yet been reclaimed for reuse
Failed: automatic reclamation failed


hostPath-type PV

kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: hostpath
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

Create a PVC: vi pvc.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pvc-claim
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi

Apply it

kubectl apply -f pvc.yaml

#Bound means the PV is bound; the CLAIM column shows which PVC is using the PV, e.g. default/task-pvc-claim
kubectl get pv

#delete the PVC
kubectl delete -f pvc.yaml

#the status changes to Released; with the Retain policy the PV's data is kept, and a PV in the Released state cannot be bound again
kubectl get pv

#re-apply the PVC; it binds to a different PV
kubectl apply -f pvc.yaml
kubectl get pv
kubectl get pvc

Create a Pod that uses the PVC

kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes: 
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pvc-claim
  containers: 
  - name: task-pv-container
    image: nginx
    ports: 
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage

Causes of PVC creation and mount failures

Reasons a PVC stays Pending:
The PVC requests more space than any PV provides
The PVC's storageClassName does not match any PV's
The PVC's accessModes do not match any PV's

Reasons a Pod mounting a PVC stays Pending:
The PVC was not created successfully / the PVC does not exist
The PVC and the Pod are not in the same Namespace
