Contents

What Is a Persistent Volume

Creating Static PVs

Dynamic PVs

What Is a Persistent Volume

PV

A PersistentVolume (PV) is a piece of network storage in the cluster, provisioned by an administrator. Like a node, a PV is a cluster-level resource. It is implemented as a volume plugin, much like an ordinary Volume, but its lifecycle is independent of any pod that uses it. The PV API object captures the implementation details of NFS, iSCSI, or a cloud provider's storage system.

PVC

A PersistentVolumeClaim (PVC) is a user's request for storage. It is analogous to a pod: pods consume node resources, while PVCs consume PV resources. Pods can request specific amounts of CPU and memory; PVCs can request a specific size and access mode (for example, mounted read-write by a single node or read-only by many).

There are two ways PVs are provisioned: statically and dynamically.
(1) Static: the cluster administrator creates a number of PVs up front. They carry the details of the real backing storage, exist in the Kubernetes API, and are available for cluster users to consume.
(2) Dynamic: when none of the administrator's static PVs match a user's PVC, the cluster may try to provision a volume specifically for that PVC. This provisioning is based on a StorageClass.

A PVC binds to a PV as a one-to-one mapping. If no matching PV is found, the PVC stays in the unbound state indefinitely.

In use:
The cluster inspects the PVC, finds the bound PV, and maps that PV into the pod. For a PV that supports several access modes, the user specifies which mode to use. Once a user has a bound PVC, the PV belongs to that user for as long as it is needed. The user schedules a pod and accesses the PV by referencing the PVC in the pod's volumes block, as the pod manifest later in this section shows.

On release:
When users are done with a PV, they delete the PVC object through the API. Once the PVC is deleted, the corresponding PV is considered released, but it cannot yet be handed to another PVC: the previous claimant's data is still on the volume and must be handled according to the reclaim policy.

Reclaiming:
A PV's reclaim policy tells the cluster what to do with the volume after it is released. Currently a PV can be Retained, Recycled, or Deleted. Retain allows the resource to be reclaimed manually. For volume types that support it, Delete removes both the PV object from Kubernetes and the corresponding external storage (an AWS EBS, GCE PD, Azure Disk, or Cinder volume). Dynamically provisioned volumes are always deleted.

To prepare for the lab, configure the NFS export directories.
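A minimal sketch of the server-side preparation, assuming the NFS server is k8s1 at 192.168.56.171 (matching the addresses used below); the export options are an assumption, adjust to taste:

[root@k8s1 ~]# mkdir -p /nfsdata/pv1 /nfsdata/pv2 /nfsdata/pv3
[root@k8s1 ~]# echo '/nfsdata *(rw,sync,no_root_squash)' > /etc/exports   # no_root_squash so the recycler pod can clean up as root
[root@k8s1 ~]# exportfs -ra               # reload the export table
[root@k8s1 ~]# showmount -e localhost     # confirm /nfsdata is exported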

Creating Static PVs

Edit pv.yaml to create the PVs (cluster-scoped resources, not namespaced):

[root@k8s2 pv]# vim pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 5Gi                           # storage size
  volumeMode: Filesystem
  accessModes:                             # access mode
  - ReadWriteOnce                          # read-write by a single node
  persistentVolumeReclaimPolicy: Recycle   # reclaim policy: scrub and reuse
  storageClassName: nfs
  nfs:                                     # NFS server export
    path: /nfsdata/pv1
    server: 192.168.56.171
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
spec:
  capacity:
    storage: 10Gi                          # different size
  volumeMode: Filesystem
  accessModes:
  - ReadWriteMany                          # read-write by many nodes
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv2
    server: 192.168.56.171
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
spec:
  capacity:
    storage: 15Gi                          # different size
  volumeMode: Filesystem
  accessModes:
  - ReadOnlyMany                           # read-only by many nodes
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv3
    server: 192.168.56.171

Apply it:

[root@k8s2 pv]# kubectl apply -f pv.yaml

List the three PVs: their sizes and access modes differ, everything else is the same, and all are in the Available state.

[root@k8s2 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    5Gi        RWO            Recycle          Available           nfs                     3m30s
pv2    10Gi       RWX            Recycle          Available           nfs                     2s
pv3    15Gi       ROX            Recycle          Available           nfs                     2s
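To double-check a single volume's class, reclaim policy, and NFS source before binding anything to it, describe it; the Source section should show the server and path from the manifest:

[root@k8s2 pv]# kubectl describe pv pv1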

Create PVCs to bind the volumes. A claim binds only to a PV whose capacity is at least the requested size and whose storage class and access mode match; if the request exceeds every PV's capacity, the claim will not bind.

[root@k8s2 pv]# vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  storageClassName: nfs
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  storageClassName: nfs
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 15Gi

[root@k8s2 pv]# kubectl apply -f pvc.yaml

List the PVCs: all bound.

[root@k8s2 pv]# kubectl get pvc
NAME   STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc1   Bound    pv1      5Gi        RWO            nfs            4m1s
pvc2   Bound    pv2      10Gi       RWX            nfs            2m4s
pvc3   Bound    pv3      15Gi       ROX            nfs            4s

List the PVs: bound as well.

[root@k8s2 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pv1    5Gi        RWO            Recycle          Bound    default/pvc1   nfs                     12m
pv2    10Gi       RWX            Recycle          Bound    default/pvc2   nfs                     8m37s
pv3    15Gi       ROX            Recycle          Bound    default/pvc3   nfs                     8m37s

Create a pod:

[root@k8s2 pv]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:    # the pod consumes pvc1
      claimName: pvc1

Apply it:

[root@k8s2 pv]# kubectl apply -f pod.yaml

Create a test page in the NFS export directory:

[root@k8s1 pv1]# echo pv1 > index.html

[root@k8s2 pv]# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP               NODE   NOMINATED NODE   READINESS GATES
test-pd   1/1     Running   0          12s   10.244.106.144   k8s4   <none>           <none>

Accessing the pod's address succeeds:

[root@k8s2 pv]# curl 10.244.106.144
pv1

Reclaim the resources in order: pod -> pvc -> pv.

[root@k8s2 pv]# kubectl delete pod test-pd

[root@k8s2 pv]# kubectl delete -f pvc.yaml

After the PVCs are deleted, the PVs are recycled for reuse and become Available again:

[root@k8s2 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv1    5Gi        RWO            Recycle          Available           nfs                     25m
pv2    10Gi       RWX            Recycle          Available           nfs                     22m
pv3    15Gi       ROX            Recycle          Available           nfs                     22m

Recycling a PV requires pulling an image, so import k8s.gcr.io/debian-base:v2.0.0 on the worker nodes ahead of time; the cluster launches a helper pod whose job is to scrub the volume.
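A hedged sketch of pre-loading that image on a node, assuming a containerd runtime and an image archive debian-base.tar exported somewhere beforehand (both assumptions; with Docker, docker load -i does the same job). While a volume is being scrubbed, the helper pod briefly appears in kubectl get pod:

[root@k8s4 ~]# ctr -n k8s.io images import debian-base.tar   # import into the k8s.io namespace the kubelet reads
[root@k8s4 ~]# crictl images | grep debian-base              # confirm the runtime sees it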

[root@k8s2 pv]# kubectl delete -f pv.yaml

Dynamic PVs

A large Kubernetes cluster may hold thousands of PVCs, which means operators would have to create that many PVs in advance. Worse, as projects evolve, new PVCs keep being submitted, so operators would have to keep adding new PVs that satisfy them, or new pods would fail to be created because their PVCs could not bind to a PV. And the storage obtained through a PVC may still fall short of an application's varied requirements for the storage device.

Different applications also differ in the storage performance they need, such as read/write speed and concurrency. To solve this, Kubernetes introduced another resource object: StorageClass. Through StorageClass definitions, an administrator can classify storage resources into types, say fast versus slow storage. From the class description, users can see each storage offering's characteristics at a glance and request storage that suits their application.

StorageClass

Upstream: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner

Upload the provisioner image to a registry the nodes can reach.
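One hedged way to do that: fetch the image on a machine with Internet access, retag it for a private registry, and push it (reg.example.com is a placeholder for your own registry):

[root@k8s1 ~]# docker pull registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s1 ~]# docker tag registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2 reg.example.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2
[root@k8s1 ~]# docker push reg.example.com/sig-storage/nfs-subdir-external-provisioner:v4.0.2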

Edit the nfs-client-provisioner manifest (deploy.yaml, downloadable from the project above); note that the image is not easy to pull.

[root@k8s2 nfs]# vim deploy.yaml

apiVersion: v1
kind: Namespace                      # create the namespace
metadata:
  labels:
    kubernetes.io/metadata.name: nfs-client-provisioner
  name: nfs-client-provisioner
---
apiVersion: v1
kind: ServiceAccount                 # create the service account
metadata:
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner  # run it in the new namespace
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner  # changed from the upstream default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: nfs-client-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
---                                  # don't forget the separator
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: nfs-client-provisioner  # run it in the new namespace
spec:
  replicas: 1                        # a single replica
  strategy:
    type: Recreate                   # recreate on update
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: sig-storage/nfs-subdir-external-provisioner:v4.0.2
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: k8s-sigs.io/nfs-subdir-external-provisioner
        - name: NFS_SERVER
          value: 192.168.56.171      # change to your NFS server
        - name: NFS_PATH
          value: /nfsdata            # change to your export path
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.56.171     # change to your NFS server
          path: /nfsdata
---
apiVersion: storage.k8s.io/v1
kind: StorageClass                   # the storage class
metadata:
  name: nfs-client
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "false"           # "true": archive the directory when the PVC is deleted; "false": delete it outright

Apply it:

[root@k8s2 nfs]# kubectl apply -f deploy.yaml

Check the pod:

[root@k8s2 nfs]# kubectl -n nfs-client-provisioner get pod
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-5df54dbfcc-m5cvp   1/1     Running   0          28s

Check the StorageClass:

[root@k8s2 nfs]# kubectl get sc
NAME         PROVISIONER                                   RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
nfs-client   k8s-sigs.io/nfs-subdir-external-provisioner   Delete          Immediate           false                  86s
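The manifest above already carries the is-default-class annotation; on a class created without it, the same effect can be achieved after the fact with a patch:

[root@k8s2 nfs]# kubectl patch storageclass nfs-client -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'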

No PVs need to be created this time; edit the claims directly. Because nfs-client is annotated as the default StorageClass, the storageClassName lines can stay commented out:

[root@k8s2 pv]# vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  #storageClassName: nfs-client
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  #storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  #storageClassName: nfs-client
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 30Gi

[root@k8s2 pv]# kubectl apply -f pvc.yaml
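To see what the claims produced, list them and the volumes; dynamically provisioned PVs get generated names of the form pvc-<uid> rather than hand-picked ones:

[root@k8s2 pv]# kubectl get pvc
[root@k8s2 pv]# kubectl get pv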

Three PVs are created automatically and bound to the claims, and the backing directories appear automatically under the NFS export.

Create test pages.
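A sketch for seeding the pages, assuming the provisioner's default directory naming of ${namespace}-${pvcName}-${pvName} (the exact names on your server will differ):

[root@k8s1 nfsdata]# ls                                          # one subdirectory per claim
[root@k8s1 nfsdata]# for d in default-pvc*; do echo $d > $d/index.html; done   # each page echoes its directory name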

Create pods that consume the claims:

[root@k8s2 pv]# vim pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc1
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-2
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc2
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pd-3
spec:
  containers:
  - image: nginx
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: vol1
  volumes:
  - name: vol1
    persistentVolumeClaim:
      claimName: pvc3

[root@k8s2 pv]# kubectl apply -f pod.yaml

Accessing the test pages succeeds.
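As in the static case, look up each pod's address and curl it (the IPs are cluster-specific):

[root@k8s2 pv]# kubectl get pod -o wide
[root@k8s2 pv]# curl <pod-ip>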

Deleting the pods leaves the PVs in place:

[root@k8s2 pv]# kubectl delete -f pod.yaml

Dynamically provisioned PVs are deleted automatically on reclaim, and their directories under the NFS export are removed as well:

[root@k8s2 pv]# kubectl delete -f pvc.yaml
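To confirm the cleanup, check both sides (with archiveOnDelete: "false" the per-claim directories are removed outright rather than archived):

[root@k8s2 pv]# kubectl get pv        # the dynamically provisioned volumes are gone
[root@k8s1 nfsdata]# ls               # the per-claim directories have disappeared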
