Configuring nfs-client-provisioner for Kubernetes

Reference: K8S in Practice (6) | Configuring NFS Dynamic Volumes for Persistent Storage

1. Environment

  • kubelet version: Kubernetes v1.22.0
  • nfs: nfs-utils-1.3.0-0.68.el7.2.x86_64

2. Introduction

In this article, Kubernetes uses NFS remote storage to provide a dynamic storage service for its pods. Pod creators do not need to care how or where the data is stored; they only need to request the amount of space they want.

The overall flow is:

  1. Set up the NFS server.
  2. Create a ServiceAccount, which governs the permissions the NFS provisioner has inside the Kubernetes cluster.
  3. Create a StorageClass, which handles PVCs, invokes the NFS provisioner to do the actual work, and binds PVs to PVCs.
  4. Create the NFS provisioner. It does two things: it creates mount points (volumes) under the NFS shared directory, and it creates PVs and associates them with those NFS mount points.

3. NFS Installation

NFS server installation

yum install nfs-utils -y

# Start the services
# Order matters: start rpcbind first, then the NFS server
systemctl start rpcbind
systemctl start nfs

# Enable at boot
systemctl enable rpcbind
systemctl enable nfs

# Create the shared directory
mkdir -p /data/nfs

# Open up permissions on the shared directory
chmod -R 777 /data/nfs

# Edit the configuration file
vim /etc/exports

# Add the shared directory
/data/nfs *(rw,sync,no_root_squash,no_all_squash)

# Restart the services
systemctl restart rpcbind
systemctl restart nfs

# Check service status
systemctl status rpcbind
systemctl status nfs

# List the exported directories
showmount -e nfs_server_ip

# Check the RPC services
rpcinfo -p nfs_server_ip
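The exports line above follows the `path client(options)` format. A quick way to sanity-check it without touching a live server is to validate a copy of the line, as in this hedged sketch (the temp file is purely illustrative):

```shell
# A minimal sanity check of the /etc/exports line format used above;
# it runs against a temporary copy, so it is safe to try on any machine.
exports_line='/data/nfs *(rw,sync,no_root_squash,no_all_squash)'
tmp=$(mktemp)
echo "$exports_line" > "$tmp"
# Expect: absolute path, whitespace, then client(option,option,...)
if grep -Eq '^/[^ ]+ [^ ]+\([a-z_,0-9=]+\)$' "$tmp"; then
  echo "exports line OK"
fi
rm -f "$tmp"
```

On a real server, `exportfs -v` after restarting the services shows the options actually in effect.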

4. Deploying nfs-provisioner

4.1 Create the RBAC authorization

nfs-rbac.yaml

---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: kube-system
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: kube-system
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

4.2 Create the StorageClass

nfs-storage.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"  #---make this the default StorageClass
provisioner: nfs-client  #---name of the dynamic provisioner; must match the PROVISIONER_NAME variable set in the provisioner Deployment below
parameters:
  archiveOnDelete: "true"  #---"false": deleting the PVC deletes the data; "true": the data is kept (archived)
mountOptions:
  - hard        # use a hard mount
  - nfsvers=4   # NFS protocol version; set it to match your NFS server version
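One detail worth double-checking before deploying: the `provisioner` field here must match the `PROVISIONER_NAME` environment variable in the provisioner Deployment (defined in the next subsection); both are `nfs-client` in this article. A trivial sketch of that consistency check:

```shell
# Consistency-check sketch (no cluster needed). These two values are copied
# from the manifests in this article; if you rename one, rename the other.
sc_provisioner="nfs-client"   # StorageClass .provisioner (nfs-storage.yaml)
env_provisioner="nfs-client"  # Deployment env PROVISIONER_NAME (nfs-provisioner-deploy.yaml)
if [ "$sc_provisioner" = "$env_provisioner" ]; then
  echo "provisioner names match"
else
  echo "MISMATCH: dynamic provisioning will not trigger" >&2
  exit 1
fi
```

If the names differ, PVCs referencing this StorageClass stay Pending because no provisioner claims them.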

4.3 Create the nfs-client-provisioner to provision persistent volumes (PVs) automatically

  • PVs created automatically are stored on the NFS share under directories named ${namespace}-${pvcName}-${pvName}
  • When such a PV is reclaimed, its directory is kept on the NFS server, renamed to archived-${namespace}-${pvcName}-${pvName}
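The naming scheme above can be sketched with plain shell substitution; the values are the ones from this article's test PVC and its bound volume:

```shell
# Illustrative values taken from this article's test PVC and its bound PV
namespace=default
pvcName=test-claim
pvName=pvc-0e32355e-12ca-4171-8b7c-d935a4aba080
# Directory created on the NFS share while the PV is in use
echo "${namespace}-${pvcName}-${pvName}"
# Directory name after the PV is reclaimed (with archiveOnDelete: "true")
echo "archived-${namespace}-${pvcName}-${pvName}"
```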

nfs-provisioner-deploy.yaml

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate  #---upgrade strategy: delete then recreate (the default is rolling update)
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-client  #---name of the nfs-provisioner; the StorageClass must use this same name
            - name: NFS_SERVER
              value: 192.168.1.13  #---NFS server address; keep consistent with the volumes section
            - name: NFS_PATH
              value: /data/nfs  #---NFS server directory; keep consistent with the volumes section
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.13  #---NFS server address
            path: /data/nfs  #---NFS server directory

4.4 Create a test PVC

test-claim.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: nfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

4.5 Deploy

# Create the ServiceAccount and RBAC objects
kubectl create -f nfs-rbac.yaml
# Create the StorageClass
kubectl create -f nfs-storage.yaml
# Create the nfs-client-provisioner
kubectl create -f nfs-provisioner-deploy.yaml
# Create the test PVC
kubectl create -f test-claim.yaml

Once everything is created, run the following commands to verify.

Check the ServiceAccount

kubectl get sa -A | grep nfs-client
# output
# kube-system            nfs-client-provisioner               1         7d6h

Check the StorageClass

kubectl get sc
# output
# nfs-storage (default)   nfs-client    Delete          Immediate           false                  5h55m

Check the PVC

kubectl get pvc -A | grep test-claim
# output
# default     test-claim                     Bound    pvc-0e32355e-12ca-4171-8b7c-d935a4aba080   10Gi       RWX            nfs-storage    7s

Note: if test-claim stays in the Pending state, check the logs of the nfs-client-provisioner pod.

5. Troubleshooting

5.1 Pulling image quay.io/external_storage/nfs-client-provisioner:latest times out

The image referenced above cannot be pulled in some network environments. In that case, download it through a proxy, then load it onto the node where nfs-client-provisioner is deployed.

If you still cannot obtain it, download it from the link below.

Link: https://pan.baidu.com/s/1qdRsI28AqVDxBTbY0PdQVA?pwd=y5f7  Extraction code: y5f7

Loading the image

If the underlying container runtime is Docker, load it with:

docker load -i nfs-client-provisioner.tar

If the runtime is containerd, load it with:

ctr -n=k8s.io image import nfs-client-provisioner.tar

5.2 selfLink was empty, can’t make reference

Kubernetes 1.20 removed support for selfLink, so once deployed, the nfs-provisioner reports the following error when a PVC is created:

Official reference: issues_25

provision "default/test-claim" class "nfs-storage": unexpected error getting claim reference: selfLink was empty, can't make reference

Solution

Per the official workaround, add the following flag to the kube-apiserver startup parameters:

--feature-gates=RemoveSelfLink=false

Since this cluster was deployed from binaries, edit the startup parameters in the kube-apiserver.service file directly and append the flag at the end.

The complete kube-apiserver.service is as follows:

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
      --v=2  \
      --logtostderr=true  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --insecure-port=0  \
      --advertise-address=192.168.1.7 \
      --service-cluster-ip-range=192.168.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://192.168.1.7:2379,https://192.168.1.8:2379,https://192.168.1.9:2379 \
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --feature-gates=RemoveSelfLink=false

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target

Then restart kube-apiserver on all master nodes (since the service file changed, reload systemd first):

systemctl daemon-reload 
systemctl restart kube-apiserver

5.3 mkdir /persistentvolumes/*: permission denied

This error is usually caused by file permissions on the NFS server: NFS maps client operations to the nfsnobody user and group.

The NFS export path is /data/nfs, so change the owner and group of that directory to nfsnobody:

chown -R nfsnobody:nfsnobody /data/nfs

Running the command above resolves the error.

5.4 mount nfs: mounting * failed, reason given by server: No such file or directory

This problem occurs when the PVC was declared in the old format shown below:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: nfs-storage # old-style declaration; with this setup the storage class is no longer specified under spec but has moved to annotations
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi

It needs to be changed to the following format:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: nfs-storage
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
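As a quick local guard against regressing to the old format, you can grep the manifest for the annotation before applying it; this hedged sketch writes the manifest to a temp file purely for illustration:

```shell
# Local check (no cluster needed) that a PVC manifest uses the annotation
# form this setup expects; the temp file stands in for test-claim.yaml.
manifest=$(mktemp)
cat > "$manifest" <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: nfs-storage
EOF
if grep -q 'volume.beta.kubernetes.io/storage-class' "$manifest"; then
  echo "annotation form in use"
fi
rm -f "$manifest"
```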