Set up an NFS server and mount it in K8s as a StorageClass
(and set the StorageClass as the default)
I. Set up an NFS server on Ubuntu
1. Install the NFS server
sudo apt install nfs-kernel-server
2. Create the NFS shared directory
Create a directory that the NFS server will share with clients; it will be referenced in the NFS configuration file:
sudo mkdir -p /data/nfs-demo
3. Edit the NFS server configuration
Open the configuration file /etc/exports and declare the shared directory and its export options:
/data/nfs-demo *(rw,sync,no_root_squash,no_subtree_check)
/data/nfs-demo: the directory exported by the NFS server
*: any host may access the export; a specific IP or subnet can be used instead
rw: clients mounting this export get read-write access
sync: writes are committed to memory and disk before the server replies
no_root_squash: remote root users are not mapped to an anonymous user and keep full root access to the export
no_subtree_check: disables subtree checking, which avoids problems when a subdirectory of a filesystem is exported
4. Restart the NFS server
sudo service nfs-kernel-server restart
or
sudo /etc/init.d/nfs-kernel-server restart
That completes the NFS server setup on Ubuntu. The server's shared directories can be listed with:
showmount -e localhost
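If the export is active, the output should look roughly like this (the exact wording can vary slightly by distribution):

```
Export list for localhost:
/data/nfs-demo *
```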
Mount the directory on 10.101.17.11
On the client, install the NFS client tools if they are missing (sudo apt install nfs-common), create the mount point, and mount the share:
sudo mkdir -p /data/nfs-demo
sudo mount -t nfs 10.101.17.13:/data/nfs-demo /data/nfs-demo
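The mount above does not survive a reboot. If the client should remount the share automatically, an entry along these lines can be added to /etc/fstab (the options shown are a common baseline, not taken from the original setup):

```
# NFS share from 10.101.17.13, mounted at boot; _netdev defers the mount until networking is up
10.101.17.13:/data/nfs-demo  /data/nfs-demo  nfs  defaults,_netdev  0  0
```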
II. Create a StorageClass in K8s backed by NFS
Create the namespace:
kubectl create namespace nfs
1. Configure the service account and its permissions with the following manifest
rbac.yaml: # the only thing that needs changing is the namespace; set it according to your environment
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs  # set according to your environment; same below
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
2. Create the NFS provisioner
nfs-provisioner.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: nfs  # must match the namespace in the RBAC manifests
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage  # provisioner name; must match the provisioner field in nfs-StorageClass.yaml
            - name: NFS_SERVER
              value: 10.101.17.13  # NFS server IP address
            - name: NFS_PATH
              value: /data/nfs-demo  # exported NFS path
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.101.17.13  # NFS server IP address
            path: /data/nfs-demo  # exported NFS path
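Before moving on to the StorageClass, it is worth confirming that the provisioner pod actually starts (command and sample output are illustrative; the pod name suffix will differ):

```
kubectl get pods -n nfs
NAME                                      READY   STATUS    RESTARTS   AGE
nfs-client-provisioner-xxxxxxxxxx-xxxxx   1/1     Running   0          30s
```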
3. Create the StorageClass for the NFS-backed volumes
nfs-StorageClass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: nfs-storage  # must match the PROVISIONER_NAME environment variable in the provisioner Deployment
parameters:
  archiveOnDelete: "false"
4. Apply the manifests
kubectl apply -f rbac.yaml
kubectl apply -f nfs-provisioner.yaml
kubectl apply -f nfs-StorageClass.yaml
III. Configure the default StorageClass
List the StorageClasses in the cluster:
kubectl get storageclass
Mark the current default StorageClass (here openebs-hostpath) as non-default:
kubectl patch storageclass openebs-hostpath -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
Mark a StorageClass as the default:
kubectl patch storageclass managed-nfs-storage -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
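After patching, kubectl get storageclass should show the (default) marker next to managed-nfs-storage, roughly like this (columns abbreviated; the openebs provisioner name will depend on your installation):

```
NAME                            PROVISIONER        ...
managed-nfs-storage (default)   nfs-storage        ...
openebs-hostpath                openebs.io/local   ...
```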
IV. Test
Test the PVC
test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
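The volume.beta.kubernetes.io/storage-class annotation is the legacy way of selecting a class. On current clusters the same request is usually written with spec.storageClassName instead; a sketch of the equivalent claim:

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```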
kubectl create -f test-claim.yaml
kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Bound    pvc-40afffdb-13e1-429a-bf15-6798487694a8   1Mi        RWX            managed-nfs-storage   5s
If instead the PVC stays in Pending, no PV has been created and the claim cannot bind:
NAME         STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS          AGE
test-claim   Pending                                      managed-nfs-storage   9s
1. Check the error with kubectl describe pvc test-claim; the events include:
waiting for a volume to be created, either by external provisioner "nfs-storage" or manually created by system administrator
2. Check the provisioner pod's log with kubectl logs pod (nfs-client-provisioner); it contains:
unexpected error getting claim reference: selfLink was empty, can't make reference
3. Searching for the message from step 2 turns up a known issue; following the steps in this post resolves it: https://blog.csdn.net/qq_41793064/article/details/123111934
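For background, the selfLink error appears because this legacy provisioner image depends on an API field that Kubernetes stopped populating in v1.20. Two commonly reported workarounds (both should be verified against your cluster version; the flag below is a sketch, not the exact content of the linked post):

On clusters up to v1.23, re-enable the field by adding a flag to the kube-apiserver static pod manifest (/etc/kubernetes/manifests/kube-apiserver.yaml):

```
spec:
  containers:
    - command:
        - kube-apiserver
        # re-enable the deprecated selfLink field (the flag is removed entirely in v1.24)
        - --feature-gates=RemoveSelfLink=false
```

Alternatively, replace the provisioner image with its maintained successor, registry.k8s.io/sig-storage/nfs-subdir-external-provisioner, which does not rely on selfLink.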
Test the pod
test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
    - name: test-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim
kubectl create -f test-pod.yaml
kubectl get pods
NAME       READY   STATUS    RESTARTS   AGE
test-pod   0/1     Pending   0          9s
The pod may sit in Pending briefly while the volume is bound. Once it runs, it exits immediately, so its STATUS should settle at Completed, and a SUCCESS file should appear inside the PVC's subdirectory under /data/nfs-demo on the NFS server.
Clean up
kubectl delete -f test-pod.yaml
kubectl delete -f test-claim.yaml