Kubernetes storage: local-storage, local-path-provisioner, and NFS mounts
PersistentVolume and PersistentVolumeClaim
Concepts
PersistentVolume (PV)
A PV is a piece of storage in the cluster that has been provisioned by an administrator. Just as nodes are cluster resources, PVs are resources in the cluster. PVs are volume plugins like Volumes, but they have a lifecycle independent of any individual Pod that uses them. This API object captures the details of the storage implementation, be that NFS, iSCSI, or a cloud-provider-specific storage system.
PersistentVolumeClaim (PVC)
A PVC is a request for storage by a user. It is similar to a Pod: Pods consume node resources, and PVCs consume PV resources. Pods can request specific levels of resources (CPU and memory); claims can request a specific size and access modes (for example, read/write on a single node or read-only on many nodes).
Binding
Each Pod states its storage requirements through a PVC. The PVC is matched against PVs by storageClassName and accessModes, and once a match is found the two are bound. A PVC-to-PV binding is exclusive, regardless of how it was established; Pod, PVC, and PV form a one-to-one mapping.
Access modes (accessModes)
- ReadWriteOnce — the volume can be mounted read/write by a single node
- ReadOnlyMany — the volume can be mounted read-only by many nodes
- ReadWriteMany — the volume can be mounted read/write by many nodes
On the command line, the access modes are abbreviated as:
RWO - ReadWriteOnce
ROX - ReadOnlyMany
RWX - ReadWriteMany
Reclaim policies
- Retain — manual reclamation
- Delete — the associated storage asset (such as an AWS EBS, GCE PD, Azure Disk, or OpenStack Cinder volume) is deleted
Phases
A volume can be in one of the following phases:
- Available — a free resource not yet bound to any claim
- Bound — the volume is bound to a claim
- Released — the claim has been deleted, but the resource has not yet been reclaimed by the cluster
- Failed — automatic reclamation of the volume has failed
Demonstration
Kubernetes consumes storage in two ways: static and dynamic provisioning.
Static: an administrator writes pv.yaml by hand and creates the backing path for each PV.
Dynamic: a PVC request provisions the PV, and its backing path, automatically.
Static provisioning
Create
local-storageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-storage
#no-provisioner means PVs are not created automatically
provisioner: kubernetes.io/no-provisioner
#WaitForFirstConsumer delays binding the PVC to the PV until a Pod that uses it is scheduled. With Local Persistent Volumes
#the PV and its PVC must end up on the same node as the Pod, otherwise scheduling fails.
volumeBindingMode: WaitForFirstConsumer
parameters:
#Whether to archive data on delete: "false" deletes the data, "true" (the default) archives it by renaming the path,
#prefixing the old path with archived-
archiveOnDelete: "true"
#reclaim policy: manual reclamation
reclaimPolicy: Retain
#allow volume expansion
allowVolumeExpansion: true
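Apply the StorageClass before creating the claim:
$ kubectl apply -f local-storageClass.yaml
storageclass.storage.k8s.io/local-storage created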
Create the PVC: static-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: static-pvc
spec:
selector:
matchLabels:
app: static-pv
accessModes:
- ReadWriteOnce
#storageClassName here must match static-pv.yaml, and the selector labels must match as well, for the claim to bind to the PV
storageClassName: local-storage
resources:
requests:
storage: 1Gi
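Apply the claim:
$ kubectl apply -f static-pvc.yaml
persistentvolumeclaim/static-pvc created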
[root@master pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
static-pvc Pending local-storage 3s
#Because local-storageClass.yaml uses WaitForFirstConsumer and no Pod has consumed the claim yet, the PVC stays Pending
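The pending reason can be confirmed with kubectl describe (output abridged; the exact wording may vary by Kubernetes version):
$ kubectl describe pvc static-pvc
...
Events:
  Type    Reason                Message
  Normal  WaitForFirstConsumer  waiting for first consumer to be created before binding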
Create the PV: static-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
name: static-pv
labels:
app: static-pv
spec:
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
persistentVolumeReclaimPolicy: Retain
#Must match the StorageClass name in local-storageClass.yaml; a PV that does not use this class would not wait for a Pod and could bind as soon as it is created
storageClassName: local-storage
#The local field marks this as a Local Persistent Volume; path is the local disk path backing the PV: /static
#nodeAffinity pins this PV to the node1 node
local:
path: /static
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- node1
$ kubectl apply -f static-pv.yaml
persistentvolume/static-pv created
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
static-pv 1Gi RWO Retain Available local-storage 3s
Create the Pod
apiVersion: v1
kind: Pod
metadata:
name: static-pod
spec:
containers:
- image: mynginx:v1
name: mynginx
command:
- "/bin/sh"
args:
- "-c"
- "echo 'hello k8s local storage' > /mnt/SUCCESS && sleep 36000 || exit 1"
volumeMounts:
- mountPath: /mnt
name: example-volume
volumes:
- name: example-volume
persistentVolumeClaim:
#Must match metadata.name in static-pvc.yaml; the Pod consumes that PVC
claimName: static-pvc
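Apply the Pod (the manifest is assumed here to be saved as static-pod.yaml):
$ kubectl apply -f static-pod.yaml
pod/static-pod created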
At this point the Pod stays stuck in ContainerCreating. kubectl describe reports that /static, the path field from static-pv.yaml, does not exist: the static directory had never been created at the root of the master node.
$ kubectl describe pod static-pod
...
Warning FailedMount 8s (x5 over 15s) kubelet, master MountVolume.NewMounter initialization failed for volume "static-pv" : path "/static" does not exist
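Create the directory on the node named in the error:
$ mkdir -p /static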
With the directory in place, the Pod runs and the PVC binds:
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
static-pod 1/1 Running 0 4m37s
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
static-pvc Bound static-pv 1Gi RWO local-storage 6m10s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
static-pv 1Gi RWO Retain Bound default/static-pvc local-storage 13m
$ cat /static/SUCCESS
hello k8s local storage
Reclaiming
Because the reclaim policy is Retain (manual reclamation), deleting the Pod and PVC does not delete the data on local storage, and the PV moves to the Released phase.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
static-pv 1Gi RWO Retain Released default/static-pvc local-storage 3m45s
$ ls /static/SUCCESS
/static/SUCCESS
A Released PV cannot be bound again because it still carries its previous binding record in claimRef:
$ kubectl get pv static-pv -o yaml
...
claimRef:
apiVersion: v1
kind: PersistentVolumeClaim
name: static-pvc
namespace: default
resourceVersion: "41242"
uid: f49e6358-1c09-4cb1-b594-1f279747359f
...
$ kubectl edit pv static-pv
#After deleting the claimRef above, the PV returns to Available
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
static-pv 1Gi RWO Retain Available local-storage 9m7s
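The same cleanup can also be done non-interactively by patching the claimRef away; a minimal sketch:
$ kubectl patch pv static-pv -p '{"spec":{"claimRef":null}}'
persistentvolume/static-pv patched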
Dynamic provisioning
Create
| Role | IP |
|---|---|
| master | 192.168.234.132 |
| node1 | 192.168.234.134 |
| NFS shared storage | 192.168.234.133 |
Local storage: local-path-provisioner
local-path-provisioner.yaml
apiVersion: v1
kind: Namespace
metadata:
name: local-path-storage
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: local-path-provisioner-role
rules:
- apiGroups: [ "" ]
resources: [ "nodes", "persistentvolumeclaims", "configmaps" ]
verbs: [ "get", "list", "watch" ]
- apiGroups: [ "" ]
resources: [ "endpoints", "persistentvolumes", "pods" ]
verbs: [ "*" ]
- apiGroups: [ "" ]
resources: [ "events" ]
verbs: [ "create", "patch" ]
- apiGroups: [ "storage.k8s.io" ]
resources: [ "storageclasses" ]
verbs: [ "get", "list", "watch" ]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: local-path-provisioner-bind
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
name: local-path-provisioner-service-account
namespace: local-path-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: local-path-provisioner
namespace: local-path-storage
spec:
replicas: 1
selector:
matchLabels:
app: local-path-provisioner
template:
metadata:
labels:
app: local-path-provisioner
spec:
serviceAccountName: local-path-provisioner-service-account
containers:
- name: local-path-provisioner
image: rancher/local-path-provisioner:v0.0.22
imagePullPolicy: IfNotPresent
command:
- local-path-provisioner
- --debug
- start
- --config
- /etc/config/config.json
volumeMounts:
- name: config-volume
mountPath: /etc/config/
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: config-volume
configMap:
name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: local-path
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
name: local-path-config
namespace: local-path-storage
data:
config.json: |-
{
"nodePathMap":[
{
"node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
"paths":["/opt/local-path-provisioner"]
}
]
}
setup: |-
#!/bin/sh
set -eu
mkdir -m 0777 -p "$VOL_DIR"
teardown: |-
#!/bin/sh
set -eu
rm -rf "$VOL_DIR"
helperPod.yaml: |-
apiVersion: v1
kind: Pod
metadata:
name: helper-pod
spec:
containers:
- name: helper-pod
image: busybox
imagePullPolicy: IfNotPresent
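Deploy the provisioner and check that its Pod is running in the local-path-storage namespace:
$ kubectl apply -f local-path-provisioner.yaml
$ kubectl -n local-path-storage get pod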
By default, volumes are created under /opt/local-path-provisioner; if in production the largest filesystem is not under /opt, change this path. Directories are created following the rule pv-namespace-pvc.
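For example, to provision under a hypothetical /data filesystem instead, change nodePathMap in the local-path-config ConfigMap:
"nodePathMap":[
    {
        "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
        "paths":["/data/local-path-provisioner"]
    }
]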
The PVC that consumes it:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: local-pvc
spec:
accessModes:
- ReadWriteOnce #rancher/local-path-provisioner does not support ReadWriteMany here
storageClassName: local-path #must match the StorageClass name in local-path-provisioner.yaml
resources:
requests:
storage: 50Gi
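A Pod that consumes this claim follows the same pattern as in the static example (a sketch; the Pod name is assumed, and mynginx:v1 is the image used earlier):
apiVersion: v1
kind: Pod
metadata:
  name: local-path-pod
spec:
  containers:
  - image: mynginx:v1
    name: mynginx
    volumeMounts:
    - mountPath: /mnt
      name: data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: local-pvc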
On the NFS shared storage server (192.168.234.133):
$ yum -y install nfs-utils
#start and enable the rpcbind and nfs services
$ systemctl restart rpcbind && systemctl enable rpcbind
$ systemctl restart nfs && systemctl enable nfs
$ cat /etc/exports
/opt 192.168.234.132(insecure,rw,sync,no_root_squash)
#reload the export configuration
$ exportfs -r
#verify the export
$ exportfs
/opt 192.168.234.132
On the master node:
#install nfs-utils so that the showmount command is available
$ yum -y install nfs-utils
#the following output means the NFS export can be mounted
$ showmount -e 192.168.234.133
Export list for 192.168.234.133:
/opt 192.168.234.132
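Optionally, a manual mount from the master verifies end-to-end access (a quick check, using /mnt as a temporary mount point):
$ mount -t nfs 192.168.234.133:/opt /mnt
$ umount /mnt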
The NFS provisioner is an automatic volume provisioner that uses an existing, already-configured NFS server to dynamically provision Kubernetes Persistent Volumes through Persistent Volume Claims.
Most Kubernetes clusters enforce RBAC, so create a ServiceAccount with the necessary permissions and bind it to the NFS provisioner created below.
nfs-rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
name: nfs-client-provisioner
namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
resources: ["persistentvolumeclaims"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
resources: ["storageclasses"]
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources: ["events"]
verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: default
roleRef:
kind: ClusterRole
name: nfs-client-provisioner-runner
apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
namespace: default
rules:
- apiGroups: [""]
resources: ["endpoints"]
verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: leader-locking-nfs-client-provisioner
subjects:
- kind: ServiceAccount
name: nfs-client-provisioner
namespace: default
roleRef:
kind: Role
name: leader-locking-nfs-client-provisioner
apiGroup: rbac.authorization.k8s.io
nfs-storageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: share-storage
annotations:
storageclass.kubernetes.io/is-default-class: "true" #make NFS the default storage class
provisioner: nfs-client #must match the value of PROVISIONER_NAME in nfs-provisioner.yaml
parameters:
#pathPattern: "${.PVC.namespace}-${.PVC.name}" #optionally sets the directory name created on the NFS server; the default is ${namespace}-${pvcName}-${pvName}
archiveOnDelete: "true" #whether to archive (back up) the PV's data when the PV is deleted
reclaimPolicy: Retain #reclaim policy: manual reclamation
allowVolumeExpansion: true #allow volume expansion
nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nfs-client-provisioner
labels:
app: nfs-client-provisioner
namespace: default
spec:
replicas: 1
strategy:
type: Recreate
selector:
matchLabels:
app: nfs-client-provisioner
template:
metadata:
labels:
app: nfs-client-provisioner
spec:
serviceAccountName: nfs-client-provisioner #must match the ServiceAccount in nfs-rbac.yaml
containers:
- name: nfs-client-provisioner
image: quay.io/external_storage/nfs-client-provisioner:latest
volumeMounts:
- name: nfs-client-root
mountPath: /persistentvolumes
env:
- name: PROVISIONER_NAME
value: nfs-client #must match the provisioner in nfs-storageClass.yaml
- name: NFS_SERVER
value: 192.168.234.133 #real NFS server IP
- name: NFS_PATH
value: /opt #exported directory
volumes:
- name: nfs-client-root
nfs:
server: 192.168.234.133 #real NFS server IP
path: /opt #exported directory
$ kubectl apply -f nfs-storageClass.yaml
$ kubectl apply -f nfs-rbac.yaml
$ kubectl apply -f nfs-provisioner.yaml
$ kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 11m
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-75bc8db759-bhz56 1/1 Running 0 8m51s
#no PV or PVC exists at this point
$ kubectl get pv
No resources found in default namespace.
$ kubectl get pvc
No resources found in default namespace.
Create the PVC: nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: nfs-pvc
spec:
accessModes:
- ReadWriteOnce
storageClassName: share-storage #must match the StorageClass name in nfs-storageClass.yaml
resources:
requests:
storage: 2Gi
$ kubectl apply -f nfs-pvc.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be 2Gi RWO share-storage 107s
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be 2Gi RWO Retain Bound default/nfs-pvc share-storage 109s
Notice that, unlike the static case, we did not have to create the PV by hand: the PVC request provisioned it automatically. At the same time a directory was created under /opt on the NFS server (192.168.234.133), named following the pattern ${namespace}-${pvcName}-${pvName}.
$ ls /opt
default-nfs-pvc-pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be
Create the Pod: nfs-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: nfs-pod
spec:
nodeName: master
containers:
- image: mynginx:v1
name: mynginx
command:
- "/bin/sh"
args:
- "-c"
- "echo 'hello k8s local storage' > /mnt/SUCCESS && sleep 36000 || exit 1"
volumeMounts:
- mountPath: /mnt
name: example-volume
volumes:
- name: example-volume
persistentVolumeClaim:
claimName: nfs-pvc
$ kubectl apply -f nfs-pod.yaml
On the NFS server:
$ ls default-nfs-pvc-pvc-0e543a5b-e9df-46ab-b778-f769f96f6ea4/SUCCESS
default-nfs-pvc-pvc-0e543a5b-e9df-46ab-b778-f769f96f6ea4/SUCCESS
Reclaiming
Reclamation works much as in the static case. Because the reclaim policy is Retain (manual reclamation), deleting the Pod and PVC does not delete the data on the backing storage, and the PV becomes Released. In that state it cannot be bound again; creating the PVC anew provisions a new PV. As with static provisioning, the PV only returns to Available once its claimRef has been removed.
#after the Pod and PVC are deleted, the PV becomes Released
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be 2Gi RWO Retain Released default/nfs-pvc share-storage 4h25m
#on the NFS server
$ ls default-nfs-pvc-pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be
SUCCESS
#re-create the PVC
$ kubectl apply -f nfs-pvc.yaml
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
nfs-pvc Bound pvc-97c8bca2-be92-49f6-9d76-126c7e8c4479 2Gi RWO share-storage 6s
#the old PV is Released and cannot be re-bound, so a new PV is provisioned
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-97c8bca2-be92-49f6-9d76-126c7e8c4479 2Gi RWO Retain Bound default/nfs-pvc share-storage 4s
pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be 2Gi RWO Retain Released default/nfs-pvc share-storage 62m
#on the NFS server
$ ll
total 0
drwxrwxrwx. 3 root root 18 Mar 15 17:31 default-nfs-pvc-pvc-97c8bca2-be92-49f6-9d76-126c7e8c4479
drwxrwxrwx. 2 root root 6 Mar 15 21:58 default-nfs-pvc-pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be
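As in the static case, the Released PV can be returned to Available by clearing its claimRef, for example:
$ kubectl patch pv pvc-f990a82a-14a0-4292-aeb3-9f34a68f01be -p '{"spec":{"claimRef":null}}'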