1. Install the NFS client

For the detailed steps, see my other post: setting up MySQL in a Kubernetes cluster with persistent storage.
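
For quick reference, the client install usually amounts to a single package on each node (package names vary slightly by distro):

# RHEL/CentOS
yum install -y nfs-utils
# Debian/Ubuntu
apt-get install -y nfs-common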

2. Deploy the nfs-client-provisioner plugin

2.1 Configure authorization (RBAC)
vim rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

2.2 Deployment

Configure the mount to point at the NFS export 192.168.0.109:/data/nfs/dynamic.

vim nfs-client-provisioner.yaml

kind: Deployment
apiVersion: apps/v1 
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs.provisioner  # change to your own name; must match the provisioner field in storageclass.yaml
            - name: NFS_SERVER 
              value: 192.168.0.109
            - name: NFS_PATH
              value: /data/nfs/dynamic
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.109
            path: /data/nfs/dynamic
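
Before applying it, you can sanity-check from a node that the export is reachable (uses the NFS client tools from step 1):

showmount -e 192.168.0.109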

2.3 Create the StorageClass
vim storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-sc
provisioner: nfs.provisioner
parameters:
  archiveOnDelete: "true" # when "true", the directory backing a deleted PVC is renamed with an "archived-" prefix instead of being removed
allowVolumeExpansion: true
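
Optionally, mark the class as the cluster default so that PVCs without an explicit storageClassName use it; this is the standard default-class annotation:

kubectl patch storageclass nfs-sc -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'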

Apply each of the YAML files above:

 kubectl apply -f storageclass.yaml 
 kubectl apply -f rbac.yaml
 kubectl apply -f nfs-client-provisioner.yaml

Check the StorageClass: kubectl get sc
Check the Deployment: kubectl get deploy
Check the pods: kubectl get pod. If the pod status is Running, the provisioner was created successfully; for any other status, run kubectl describe pod <podName> to investigate.
At this point the NFS provisioner is ready. Next we create PVCs so that PVs are provisioned dynamically and storage is allocated automatically. NFS dynamic provisioning is used here with the two access modes that matter most for it, ReadWriteOnce (RWO) and ReadWriteMany (RWX), and the examples below cover both.

  • ReadWriteOnce: the volume can be mounted read/write by a single node
  • ReadOnlyMany: the volume can be mounted read-only by many nodes
  • ReadWriteMany: the volume can be mounted read/write by many nodes

A volume can only be mounted with one access mode at a time, even if it supports several. For example, a GCEPersistentDisk can be mounted as ReadWriteOnce by a single node, or as ReadOnlyMany by many nodes, but not both at the same time.

Note:
On Kubernetes v1.20 and above, provisioning may fail with: waiting for a volume to be created, either by external provisioner "nfs-diy" or manually created by system administrator (the provisioner name in the message matches your StorageClass).

The cause is that from v1.20 the apiserver disables selfLink by default, which this older provisioner still depends on. It can be re-enabled by adding - --feature-gates=RemoveSelfLink=false to the apiserver manifest. (The RemoveSelfLink feature gate was removed in v1.24, so on v1.24+ this workaround no longer exists; use the maintained nfs-subdir-external-provisioner image instead.)

cat /etc/kubernetes/manifests/kube-apiserver.yaml
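After the edit, the relevant part of the manifest looks roughly like this (only the feature-gates line is new; the other flags stay as your cluster already has them):

spec:
  containers:
  - command:
    - kube-apiserver
    # ... existing flags ...
    - --feature-gates=RemoveSelfLink=false   # added line; the kubelet restarts the static pod automatically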

3. Usage examples
3.1 Deployment+RWO

vim 1-deployment-rwo.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-deploy-rwo
spec:
  storageClassName: "nfs-sc" # use the StorageClass created above
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy-rwo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-deploy-rwo
  template:
    metadata:
      labels:
        app: nginx-deploy-rwo
    spec:
      containers:
      - image: nginx:stable-alpine
        name: nginx-deploy-rwo
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-deploy-rwo # name of the PVC defined above
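
Apply the file:

kubectl apply -f 1-deployment-rwo.yaml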

Check the PVC with kubectl get pvc; because provisioning is dynamic, a PV has been created automatically as well (kubectl get pv).
The reclaim policy of the automatically created PV is Delete.
Mount test

# exec into the pod
kubectl exec -it nginx-deploy-rwo-77d89c8d6-ndt67 -- sh

# create a file in the persisted directory
echo "hello,1-deployment-rwo" > /usr/share/nginx/html/1-deployment-rwo.html

Check the file in the export directory on the NFS server.
Directories whose names start with archived- belong to PVCs deleted in earlier tests; their data is still retained on the server. Entering the matching directory confirms that what was written in the pod has landed on the host.
3.2 StatefulSet+RWO

A StatefulSet mounts one volume per replica through volumeClaimTemplates; with storageClassName set, the PVCs and PVs are created automatically.

vim 2-nginx-sfs-rwo.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sfs-rwo
spec:
  selector:
    matchLabels:
      app: nginx-sfs-rwo
  serviceName: "nginx-sfs-rwo"
  replicas: 3 
  template:
    metadata:
      labels:
        app: nginx-sfs-rwo
    spec:
      containers:
      - name: nginx-sfs-rwo
        image: nginx:stable-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-sc"
      resources:
        requests:
          storage: 5Gi

kubectl apply -f 2-nginx-sfs-rwo.yaml
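Each replica gets its own claim, named <claim-template>-<pod>, so listing the PVCs should show three bound claims, data-nginx-sfs-rwo-0 through data-nginx-sfs-rwo-2:

kubectl get pvc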
Mount test

# exec into nginx-sfs-rwo-0
kubectl exec -it nginx-sfs-rwo-0 -- sh
echo "hello,this is nginx-sfs-rwo-0" > /usr/share/nginx/html/index.html

# exec into nginx-sfs-rwo-1
kubectl exec -it nginx-sfs-rwo-1 -- sh
echo "hello,this is nginx-sfs-rwo-1" > /usr/share/nginx/html/index.html

Then access nginx-sfs-rwo-0 and nginx-sfs-rwo-1; each pod serves its own page.
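A quick way to compare the replicas from the command line (nginx:stable-alpine ships BusyBox wget rather than curl):

kubectl exec -it nginx-sfs-rwo-0 -- wget -qO- http://localhost/
kubectl exec -it nginx-sfs-rwo-1 -- wget -qO- http://localhost/

Each pod returns its own message, confirming the replicas are backed by separate volumes.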
3.3 Deployment+RWX

Multiple pods mount the same volume, so every pod can read and write the same content.

vim 3-nginx-rwx.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-rwx
spec:
  storageClassName: "nfs-sc" # use the StorageClass created above
  accessModes:
    - ReadWriteMany  
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rwx
  labels:
    app: nginx-rwx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-rwx
  template:
    metadata:
      labels:
        app: nginx-rwx
    spec:
      containers:
      - image: nginx:stable-alpine
        name: nginx-rwx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nginx-rwx
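
Apply it and wait for the three replicas to come up:

kubectl apply -f 3-nginx-rwx.yaml
kubectl get pod -l app=nginx-rwx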

Mount test

# exec into the first pod
kubectl exec -it nginx-rwx-59c9659ff7-76kg7 -- sh
echo "hello,this is nginx-rwx-0" > /usr/share/nginx/html/index.html

Access another pod in the Deployment; the same test file is served there.
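For example (pod names are generated; pick a different one from kubectl get pod):

kubectl exec -it <another-nginx-rwx-pod> -- wget -qO- http://localhost/
# should print: hello,this is nginx-rwx-0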
Check the files on the NFS server.
3.4 StatefulSet+RWX
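
By analogy with section 3.2, a minimal sketch of this variant changes only the access mode in the claim template; each replica still gets its own (now RWX-capable) PVC. The nginx-sfs-rwx names, including the file name, are illustrative:

vim 4-nginx-sfs-rwx.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sfs-rwx
spec:
  selector:
    matchLabels:
      app: nginx-sfs-rwx
  serviceName: "nginx-sfs-rwx"
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx-sfs-rwx
    spec:
      containers:
      - name: nginx-sfs-rwx
        image: nginx:stable-alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteMany" ]   # the only change from 2-nginx-sfs-rwo.yaml
      storageClassName: "nfs-sc"
      resources:
        requests:
          storage: 5Gi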
