k8s clusters running on cloud providers (such as GKE) have fairly complete storage volume support, while self-hosted k8s clusters tend to be more troublesome in this respect. After some investigation, using nfs volumes turns out to be a simple, workable solution for a self-hosted cluster. The nfs server can live outside the k8s cluster, which makes it easy to manage the cluster's volumes and files centrally.

This article covers:

Installing and configuring an nfs server

Connecting to the nfs shared folder with an nfs client

Creating nfs volumes manually in a k8s cluster

The test environment for this article is ubuntu/Debian; for centos and similar systems, only the installation and configuration of nfs differ slightly.
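For reference, the centos/RHEL equivalents of the apt commands below are a sketch like the following (assuming the stock nfs-utils package and the nfs-server systemd unit):

```shell
# CentOS/RHEL: install the NFS server bits and start the service at boot
sudo yum install -y nfs-utils
sudo systemctl enable --now nfs-server
```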

Installing and configuring the nfs server

sudo apt-get update
sudo apt install nfs-kernel-server
sudo mkdir -p /mnt/sharedfolder
sudo chown nobody:nogroup /mnt/sharedfolder
sudo chmod 777 /mnt/sharedfolder
sudo nano /etc/exports

Step 1: Install nfs-kernel-server

sudo apt-get update
sudo apt install nfs-kernel-server

Step 2: Create the export directory

The export directory is the directory shared with nfs clients; it can be any directory on the linux host. Here we use a newly created directory.

sudo mkdir -p /mnt/sharedfolder


Step 3: Grant clients access to the server in the nfs exports file

Edit the /etc/exports file:

sudo vi /etc/exports


Append entries to the file; different kinds of access can be granted.

Format for granting access to a single client:

/mnt/sharedfolder clientIP(rw,sync,no_subtree_check)

Format for granting access to multiple clients:

/mnt/sharedfolder client1IP(rw,sync,no_subtree_check)

/mnt/sharedfolder client2IP(rw,sync,no_subtree_check)

Format for granting access to an entire subnet of clients:

/mnt/sharedfolder subnetIP/24(rw,sync,no_subtree_check)


Example:

This configuration grants the client 192.168.0.101 read-write access:

/mnt/sharedfolder 192.168.0.101(rw,sync,no_subtree_check)

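The options in parentheses control how the share behaves. A commented sketch of the common /etc/exports options (the no_root_squash line is an optional variant, not part of the configuration above):

```
# rw                allow both read and write requests
# sync              reply only after changes are committed to stable storage
# no_subtree_check  disable subtree checking; avoids problems when files are renamed
/mnt/sharedfolder 192.168.0.101(rw,sync,no_subtree_check)

# By default, root on a client is mapped to nobody (root_squash).
# Add no_root_squash only if clients genuinely need root access to the share.
/mnt/sharedfolder 192.168.0.0/24(rw,sync,no_subtree_check,no_root_squash)
```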

Step 4: Export the shared directory

Run the following command to export the shared directory:

sudo exportfs -a


Restart the nfs-kernel-server service so that all of the configuration takes effect:

sudo systemctl restart nfs-kernel-server

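To confirm the export is active, the server's current export list can be inspected. A quick check (the 192.168.100.5 address is the example server ip used later in this article):

```shell
# On the server: list the active exports and their effective options
sudo exportfs -v

# From any machine with nfs-common installed: query the server's export list
showmount -e 192.168.100.5
```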

Using an nfs client to connect to the nfs shared folder

You can test the nfs server by connecting from the win10 file explorer, or from a linux client.

Here we mount the nfs share from another ubuntu machine on the local network:

Step 1: Install nfs-common

The nfs-common package contains the software an nfs client needs:

sudo apt-get update
sudo apt-get install nfs-common

Step 2: Create a mount point for the nfs share

sudo mkdir -p /mnt/sharedfolder_client


Step 3: Mount the share on the client

Mount command format:

sudo mount serverIP:/exportFolder_server /mnt/mountfolder_client

With the configuration above, the mount command is:

sudo mount 192.168.100.5:/mnt/sharedfolder /mnt/sharedfolder_client

When configuring this yourself, fill in the actual nfs server ip address.
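A mount made this way does not survive a reboot. To remount automatically at boot, an /etc/fstab entry can be added on the client; a sketch using the same server address and paths as above:

```
# /etc/fstab — _netdev delays mounting until the network is up
192.168.100.5:/mnt/sharedfolder  /mnt/sharedfolder_client  nfs  defaults,_netdev  0  0
```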

Step 4: Test the connection

Copy a file into the shared directory; it should then be visible on the other machines.

Creating nfs volumes manually in a k8s cluster

Create an nfs-backed pv

nfs.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
  labels:
    name: mynfs # name can be anything
spec:
  storageClassName: manual # same storage class as pvc
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.7 # ip address of nfs server
    path: "/srv/nfs/mydata2" # path to directory

Deploy nfs.yaml:

$ kubectl apply -f nfs.yaml
$ kubectl get pv,pvc
persistentvolume/nfs-pv 100Mi RWX Retain Available

Create the pvc

Create the persistent volume claim file and deploy it. Note that accessModes must match the pv created earlier.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany # must be the same as PersistentVolume
  resources:
    requests:
      storage: 50Mi

Deploy it:

$ kubectl apply -f nfs_pvc.yaml
$ kubectl get pvc,pv
persistentvolumeclaim/nfs-pvc Bound nfs-pv 100Mi RWX

Create a pod

Create a simple nginx deployment that uses this pvc, nfs-pod.yaml:

apiVersion: apps/v1 # extensions/v1beta1 is deprecated and removed in k8s 1.16+
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: nfs-test
        persistentVolumeClaim:
          claimName: nfs-pvc # same name as the pvc that was created
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: nfs-test # must match the volume name above
          mountPath: /usr/share/nginx/html # mount path inside the container

Deploy nginx:

$ kubectl apply -f nfs_pod.yaml
$ kubectl get po
nfs-nginx-6cb55d48f7-q2bvd 1/1 Running

Common problem: pod creation fails because the nfs mount helper is missing

Creating a pod that uses an nfs volume in k8s fails with an error.

Cause: the packages needed to mount nfs shares are not installed on the node.

root@k8s0:~# kubectl describe pod/nfs-nginx-766d4bf45f-n7dlt
Name:         nfs-nginx-766d4bf45f-n7dlt
Namespace:    default
Priority:     0
Node:         k8s2/172.16.2.102
Start Time:   Fri, 10 Jul 2020 18:04:58 +0800
Labels:       app=nginx
              pod-template-hash=766d4bf45f
Annotations:  cni.projectcalico.org/podIP: 192.168.109.86/32
Status:       Running
IP:           192.168.109.86
IPs:
  IP:  192.168.109.86
Controlled By:  ReplicaSet/nfs-nginx-766d4bf45f
Containers:
  nginx:
    Container ID:   docker://88299398d40ead29e991e57c6bad5d0e6d0396c21c2e69b0d2afb4ab7cce6044
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:21f32f6c08406306d822a0e6e8b7dc81f53f336570e852e25fbe1e3e3d0d0133
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Fri, 10 Jul 2020 18:17:00 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /usr/share/nginx/html from nfs-test (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-mhtqt (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  nfs-test:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  nfs-pvc
    ReadOnly:   false
  default-token-mhtqt:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-mhtqt
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age  From               Message
  ----     ------       ---  ----               -------
  Normal   Scheduled         default-scheduler  Successfully assigned default/nfs-nginx-766d4bf45f-n7dlt to k8s2
  Warning  FailedMount  21m  kubelet, k8s2      MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv --scope -- mount -t nfs 172.16.100.105:/mnt/sharedfolder /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv
Output: Running scope as unit run-r3892d691a70441eb975bc53bb7aeca72.scope.
mount: wrong fs type, bad option, bad superblock on 172.16.100.105:/mnt/sharedfolder,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
  Warning  FailedMount  21m  kubelet, k8s2      MountVolume.SetUp failed for volume "nfs-pv" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv --scope -- mount -t nfs 172.16.100.105:/mnt/sharedfolder /var/lib/kubelet/pods/9c0b53d9-581c-4fc4-a286-7c4a8d470e74/volumes/kubernetes.io~nfs/nfs-pv
Output: Running scope as unit run-r8774f015f759436d843d408eb6c941ec.scope.

Solution:

On ubuntu/debian, run this on each k8s node to install nfs client support:

sudo apt-get install nfs-common


Some time after the installation completes, the pod runs normally.

Verifying that k8s is using the nfs volume correctly

Create a test page named index.html in the nginx pod:

$ kubectl exec -it nfs-nginx-6cb55d48f7-q2bvd -- bash
# fill in the index.html content for testing
$ vi /usr/share/nginx/html/index.html
this should hopefully work

You can verify that the same file now exists on the nfs server, and that nginx can read it:

$ ls /srv/nfs/mydata/
$ cat /srv/nfs/mydata/index.html
this should hopefully work

# expose the nginx deployment as a NodePort service so it is reachable from a browser
$ kubectl expose deploy nfs-nginx --port 80 --type NodePort
$ kubectl get svc
nfs-nginx NodePort 10.102.226.40 80:32669/TCP

Open a browser and enter the node ip and NodePort.

In this example: 192.168.99.157:32669
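The same check can be done from the command line with curl, using the node ip and NodePort from the example above; the response body should be the index.html content created earlier:

```shell
# fetch the test page through the NodePort service
curl http://192.168.99.157:32669/
```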

Delete everything that was deployed, and verify that the test file still exists in our directory:

$ kubectl delete deploy nfs-nginx
$ kubectl delete pvc nfs-pvc
$ kubectl delete svc nfs-nginx
$ ls /srv/nfs/mydata/
index.html
