Using NAS storage in Kubernetes: dynamic PV/PVC provisioning with NFS
At work, a project team deploys stateful workloads with StatefulSet and needs to persist data for several middleware applications. Because the applications run multiple replicas, static PVs cannot meet the requirement, so dynamic provisioning of persistent volumes is needed. Our dev/test environment stores data on NAS, and the NFS server and clients are already installed. Dynamic provisioning requires the NFS tooling to be installed in advance; once the NFS export is configured, proceed with the following steps:
First, create a StorageClass in the Kubernetes cluster where the project runs. Before creating the StorageClass you need to deploy the provisioner, and before deploying the provisioner you need to create its RBAC resources, as follows:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
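Assuming the manifests above are saved as rbac.yaml (a hypothetical filename), they can be applied and sanity-checked roughly like this:

```shell
# Apply the RBAC objects (ServiceAccount, ClusterRole/Binding, Role/Binding)
kubectl apply -f rbac.yaml

# Verify the ServiceAccount exists in the namespace the provisioner will run in
kubectl get serviceaccount nfs-client-provisioner -n default
kubectl get clusterrolebinding run-nfs-client-provisioner
```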
Next, create the provisioner Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.10.10.60
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60
            path: /ifs/kubernetes
PROVISIONER_NAME: the name of the provisioner; it must be referenced when creating the StorageClass.
NFS_SERVER: the address of the NFS server.
NFS_PATH: the path exported by the NFS server.
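Before relying on dynamic provisioning, it is worth confirming that the export is reachable from the nodes and that the provisioner Pod came up cleanly; a rough check, using the addresses from the manifest above:

```shell
# Check that the NFS server actually exports the configured path
showmount -e 10.10.10.60

# Confirm the provisioner Pod is running and inspect its logs for errors
kubectl get pods -l app=nfs-client-provisioner -n default
kubectl logs deployment/nfs-client-provisioner -n default
```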
Then create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs # or choose another name, must match deployment's env PROVISIONER_NAME
parameters:
  archiveOnDelete: "false"
reclaimPolicy: Delete
provisioner: must match PROVISIONER_NAME in the Deployment.
reclaimPolicy: three policies are supported: Delete, Retain, and Recycle.
Retain: after the PV is deleted, the data on the backend storage remains; to remove it completely you must delete the backend volume manually.
Delete: deletes both the PV released by the PVC and the backend storage volume.
Recycle: keeps the PV but scrubs the data on it (deprecated).
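Besides volumeClaimTemplates in a StatefulSet, the StorageClass can also be consumed by a standalone PVC to verify provisioning works; a minimal sketch (the claim name test-claim is hypothetical):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage  # the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```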
Once everything is deployed, create a StatefulSet to test:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-1-14-0
  namespace: poc-5
spec:
  podManagementPolicy: OrderedReady
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      project.cpaas.io/name: poc
      service.cpaas.io/name: statefulset-nginx-1-14-0
  serviceName: ''
  template:
    metadata:
      labels:
        project.cpaas.io/name: poc
        service.cpaas.io/name: statefulset-nginx-1-14-0
    spec:
      containers:
        - image: 'pocharbor.zybank.com.cn/library/nginx.1.14.0:20200426'
          imagePullPolicy: IfNotPresent
          name: nginx-1-14-0
          resources:
            limits:
              cpu: 500m
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 500Mi
          volumeMounts:
            - mountPath: /tmp
              name: test
      restartPolicy: Always
  updateStrategy:
    rollingUpdate:
      partition: 0
    type: RollingUpdate
  volumeClaimTemplates:
    - metadata:
        annotations:
          volume.beta.kubernetes.io/storage-class: managed-nfs-storage
        name: test
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
The key part is volumeClaimTemplates: the StorageClass name is set in the annotations, and once the StatefulSet is created the PVCs and PVs are provisioned automatically.
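If provisioning succeeds, one PVC per replica should appear, named &lt;claim template name&gt;-&lt;statefulset name&gt;-&lt;ordinal&gt;, each bound to an automatically created PV; roughly:

```shell
# Each replica gets its own claim: test-nginx-1-14-0-0, test-nginx-1-14-0-1, ...
kubectl get pvc -n poc-5
kubectl get pv
```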
While using this setup we ran into a PVC creation failure with the following error:
waiting for a volume to be created, either by external provisioner "dmc-uat8.com/nfs" or manually created by system administrator
After investigating, the cause was that the ServiceAccount could not be found in the namespace where the provisioner runs: the namespace used when creating the ServiceAccount in the first step did not match the ServiceAccount namespace listed in the RoleBinding and ClusterRoleBinding. Once the namespaces were made consistent, the PVC was created successfully.
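A quick way to spot this kind of mismatch is to compare the ServiceAccount's actual namespace against the namespace listed under subjects in the bindings; a rough check:

```shell
# Find where the ServiceAccount actually lives
kubectl get serviceaccounts --all-namespaces | grep nfs-client-provisioner

# Compare against the namespace referenced by the binding's subjects
kubectl get clusterrolebinding run-nfs-client-provisioner -o yaml | grep -A 3 'subjects:'
```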