Configuring nfs-provisioner is relatively simple:
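Before deploying the provisioner, the NFS server (192.168.2.220 in the manifests below) must export the backing directory. A minimal sketch of the server-side setup, assuming a stock nfs-utils install; adjust the export options to your environment:

```shell
# On the NFS server (192.168.2.220) -- assumed setup, not part of the original article
mkdir -p /data/nfs_provisioner
echo '/data/nfs_provisioner *(rw,sync,no_root_squash)' >> /etc/exports
exportfs -arv                         # re-export everything in /etc/exports
systemctl enable --now nfs-server     # make sure the NFS service is running
```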

RBAC configuration

[root@k8s-1 nfs-provisioner]# cat sa.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
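Once the RBAC objects are applied, the bindings can be spot-checked with `kubectl auth can-i` (a quick sanity test, not part of the original article; both commands should print `yes`):

```shell
# ClusterRoleBinding: the ServiceAccount may manage PVs cluster-wide
kubectl auth can-i create persistentvolumes \
  --as=system:serviceaccount:default:nfs-client-provisioner

# RoleBinding: leader-election writes to endpoints in the namespace
kubectl auth can-i update endpoints \
  --as=system:serviceaccount:default:nfs-client-provisioner -n default
```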

Deployment configuration

[root@k8s-1 nfs-provisioner]# cat deploy.yaml

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-beijing.aliyuncs.com/mydlq/nfs-subdir-external-provisioner:v4.0.0
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-provisioner # must match the provisioner field of the StorageClass below
            - name: NFS_SERVER
              value: 192.168.2.220
            - name: NFS_PATH
              value: /data/nfs_provisioner
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.2.220
            path: /data/nfs_provisioner
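Note that the `NFS_SERVER`/`NFS_PATH` env vars and the `nfs` volume must point at the same export. Before applying, it is worth confirming the export is visible from the worker nodes (showmount ships with nfs-utils):

```shell
# Run on a worker node; the export list should include /data/nfs_provisioner
showmount -e 192.168.2.220
```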

StorageClass configuration

[root@k8s-1 nfs-provisioner]# cat storageclass.yaml 

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
  name: nfs-storage
provisioner: nfs-provisioner
volumeBindingMode: Immediate
reclaimPolicy: Delete
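Because the class is annotated as the cluster default, a PVC that omits `storageClassName` will also land on it. A minimal standalone claim for testing the class directly (the name `test-claim` is hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs-storage
  resources:
    requests:
      storage: 1Mi
```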

Applying the manifests

[root@k8s-1 nfs-provisioner]# kubectl apply -f sa.yaml 

[root@k8s-1 nfs-provisioner]# kubectl apply -f deploy.yaml

[root@k8s-1 nfs-provisioner]# kubectl apply -f  storageclass.yaml

[root@k8s-1 nfs-provisioner]# kubectl get pod  |grep nfs
nfs-client-provisioner-7b6b68d687-pc2hz   1/1     Running   0               75m

Testing:

[root@k8s-1 nfs-provisioner]# cat nginx_sts_pvc.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-storage"
      resources:
        requests:
          storage: 10Mi

[root@k8s-1 nfs-provisioner]# kubectl apply -f nginx_sts_pvc.yaml 
service/nginx created
statefulset.apps/web created

[root@k8s-1 nfs-provisioner]# kubectl get pod -A |grep web
default       web-0                                     1/1     Running   0               44s
default       web-1                                     1/1     Running   0               41s

The PV and PVC for each replica are created automatically:

[root@k8s-1 nfs-provisioner]# kubectl get pv,pvc |grep web
persistentvolume/pvc-1a4130ff-4c42-48b2-af27-2eac30b90e96   10Mi       RWO            Delete           Bound    default/www-web-0      nfs-storage             13h
persistentvolume/pvc-a41fd05f-94ec-4b56-8232-f65512486508   10Mi       RWO            Delete           Bound    default/www-web-1      nfs-storage             13h
persistentvolumeclaim/www-web-0      Bound    pvc-1a4130ff-4c42-48b2-af27-2eac30b90e96   10Mi       RWO            nfs-storage    13h
persistentvolumeclaim/www-web-1      Bound    pvc-a41fd05f-94ec-4b56-8232-f65512486508   10Mi       RWO            nfs-storage    13h
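On the NFS server, nfs-subdir-external-provisioner creates one subdirectory per PV, named `${namespace}-${pvcName}-${pvName}`, so the backing data can be inspected directly (path assumed from the manifests above):

```shell
# On the NFS server: one directory per provisioned volume,
# e.g. default-www-web-0-pvc-1a4130ff-... and default-www-web-1-pvc-a41fd05f-...
ls /data/nfs_provisioner
```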

[root@k8s-1 nfs-provisioner]# kubectl get svc |grep nginx
nginx        ClusterIP   None             <none>        80/TCP         3m43s
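The curl checks that follow resolve each pod through the headless Service (`<pod>.<service>.<namespace>.svc.cluster.local`) and return the pod's own name, which implies each pod's `index.html` was seeded with its hostname beforehand, e.g.:

```shell
# Assumed seeding step: write each pod's hostname into its NFS-backed web root
for i in 0 1; do
  kubectl exec web-$i -- sh -c 'echo $HOSTNAME > /usr/share/nginx/html/index.html'
done
```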

[root@k8s-1 nfs-provisioner]# curl  web-1.nginx.default.svc.cluster.local
web-1
[root@k8s-1 nfs-provisioner]# curl  web-0.nginx.default.svc.cluster.local
web-0

The deployment succeeded.

Reference: https://www.kubebiz.com/wow/nfs-client-provisioner?k8sv=v1.24
