A Simple Way to Bind Kubernetes Persistent Volumes (PV) to Specific Claims (PVC)
On the NFS server, export the shared directories:
[root@V71 nfs_test]# cat /etc/exports
#/nfs_test 192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/nfs_test 192.168.0.0/16(rw,sync,all_squash)
/nfs2 192.168.0.0/16(rw,sync,all_squash)
/nfs3 192.168.0.0/16(rw,sync,all_squash)
[root@V71 nfs_test]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
Define two PVs and two PVCs, matched one-to-one. The `pv` label on each PV and the matching `selector` on its PVC form the key/value pair that binds them (highlighted in the original post):
[root@k8s1 pv]# cat pv2.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv2
  labels:
    pv: two
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.137.1
    path: /nfs2
[root@k8s1 pv]# cat pvc2.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc2
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 150Mi
  selector:
    matchLabels:
      pv: two
[root@k8s1 pv]# cat pv3.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv3
  labels:
    pv: three
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.137.1
    path: /nfs3
[root@k8s1 pv]# cat pvc3.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc3
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 150Mi
  selector:
    matchLabels:
      pv: three
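The `matchLabels` selector used above is shorthand; the same constraint can be written with `matchExpressions`, which also supports set-based operators such as `In` and `NotIn`. A sketch of pvc3's selector in that form (the label key `pv` and value `three` come from the manifests above):

```yaml
# Equivalent selector for pvc3, using matchExpressions instead of matchLabels
selector:
  matchExpressions:
    - key: pv
      operator: In
      values:
        - three
```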
Create the PVs and PVCs:
[root@k8s1 pv]# kubectl create -f pv2.yml
persistentvolume "pv2" created
[root@k8s1 pv]# kubectl create -f pvc2.yml
persistentvolumeclaim "pvc2" created
[root@k8s1 pv]# kubectl create -f pv3.yml
persistentvolume "pv3" created
[root@k8s1 pv]# kubectl create -f pvc3.yml
persistentvolumeclaim "pvc3" created
Listing the PVs now shows each one bound to exactly its intended PVC:
[root@k8s1 pv]# kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS   REASON   AGE
pv1    2Gi        RWO            Recycle          Bound    default/pvc1                           16h
pv2    200Mi      RWX            Retain           Bound    default/pvc2                           58s
pv3    200Mi      RWX            Retain           Bound    default/pvc3                           17s
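Once bound, a PVC can be mounted into a Pod by name. A minimal sketch that consumes pvc2, assuming a hypothetical Pod name `nfs-test` and mount path `/data` (neither is from the original post):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nfs-test             # hypothetical name, for illustration only
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data   # assumed mount point inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc2      # the PVC bound to pv2 above
```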