K8s Series: Installing an NFS File System (to Provide Dynamic PV Creation for k8s)
1.1 Install nfs-server

# Run this command on every machine (including the master).
yum install -y nfs-utils

The /nfs/data directory below can be customized. It is the directory that node machines use to sync PV data to the master.

# Run the following on the master; paste it directly, or put it in a shell script.
echo "/nfs/data/ *(insecure,rw,sync,no_root_squash)" > /etc/exports


# On the master: create the shared directory (the nfs services are started in the next step).
mkdir -p /nfs/data


# On the master: enable and start rpcbind and nfs-server.
systemctl enable rpcbind
systemctl enable nfs-server
systemctl start rpcbind
systemctl start nfs-server

# Reload the export configuration.
exportfs -r


# Check that the export took effect.
exportfs

Verification: if /nfs/data appears in the output of exportfs, the setup succeeded.

1.2 Configure the nfs-client (optional)
This step mounts the master's /nfs/data on each node, so that data under /nfs/data on a node is synced to the master. Copy all of the commands below and run them on every node.

# Run on every node; replace the IP below with your own master's IP.
# Note: a machine can also mount its own export, i.e. share between two directories on one host.
showmount -e 192.168.110.181

mkdir -p /nfs/data

# Run on every node; replace the IP below with your own master's IP.
# Note: if a machine mounts its own export, replace the last "/nfs/data" with a different local
# directory; the mount point cannot be the same directory as the exported one, since one machine
# cannot have two identical folders at the same path.
mount -t nfs 192.168.110.181:/nfs/data /nfs/data
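The mount above does not survive a reboot. A hedged sketch of a matching /etc/fstab entry (assuming the same master IP and directory as in this guide; adjust both to your environment):

```
# Hypothetical /etc/fstab entry; _netdev delays mounting until the network is up.
192.168.110.181:/nfs/data  /nfs/data  nfs  defaults,_netdev  0 0
```

After adding the line, mount -a should pick it up without errors.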

1.3 Configure the default storage class for dynamic PV creation
Replace the two IPs below with the IP of your NFS server. Here the master doubles as the NFS server, so simply use the master's IP.

On the NFS server, create the sc.yaml file: vi sc.yaml

Paste the manifest below into sc.yaml.

At this point, running kubectl get sc or kubectl get storageclass on the master returns nothing ("No resources found").

Run on the master: kubectl apply -f sc.yaml

## Create a StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: k8s-sigs.io/nfs-subdir-external-provisioner
parameters:
  archiveOnDelete: "true"  ## whether to archive the PV contents when the PV is deleted

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/nfs-subdir-external-provisioner:v4.0.2
          # resources:
          #    limits:
          #      cpu: 10m
          #    requests:
          #      cpu: 10m
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s-sigs.io/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 172.31.0.4 ## replace with your own NFS server address
            - name: NFS_PATH
              value: /nfs/data  ## the directory exported by the NFS server
      volumes:
        - name: nfs-client-root
          nfs:
            server: 172.31.0.4
            path: /nfs/data
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io    

1.4 Verify
Run kubectl get sc. You should now see a default storage class, nfs-storage (default), which means the setup succeeded. The test below is optional and can be skipped.

Test the dynamic PV creation capability (optional):

Run kubectl get pod -A and check that the nfs-client-provisioner-322342c323 pod is Running.
Creating and binding a PVC:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  # Use kubectl get sc to look up the name of the default storage class (usually nfs-storage).
  # This field can also be omitted, in which case the claim binds to the default storage class.
  storageClassName: nfs-storage

Create pvc.yaml and paste the manifest above into it:

vi pvc.yaml

kubectl apply -f pvc.yaml

# the PVC should now be in Bound status
kubectl get pvc

# a PV has been created automatically at this point
kubectl get pv      
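To confirm end-to-end that the claim is backed by NFS, you can mount the PVC in a throwaway pod. A minimal sketch (the pod name and image are illustrative, not from the original guide; only claimName must match the PVC above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-nfs-pod        # hypothetical name
spec:
  containers:
    - name: app
      image: nginx          # any image with a shell would do
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: nginx-pvc   # the PVC created above
```

After kubectl apply -f, files written under /usr/share/nginx/html in the pod should appear under /nfs/data on the NFS server (inside the directory the provisioner created for this PVC).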