Setting Up an NFS StorageClass in Kubernetes
I. StorageClass Overview
1. Concept
A StorageClass is an abstraction over back-end storage resources. It hides storage details from the PVCs that users create, which both frees users from worrying about those details and spares administrators the manual work of managing PVs: the system creates and binds PVs automatically, giving dynamic provisioning. StorageClass-based dynamic provisioning is steadily becoming the standard storage-management model on cloud platforms.
For the currently supported classes, see: https://kubernetes.io/zh-cn/docs/concepts/storage/storage-classes/
2. NFS StorageClass Overview
The principles of NFS itself are not repeated here; look them up if needed. The key thing to understand in an NFS StorageClass is the provisioner. So what is a provisioner?
The provisioner is a required field of a StorageClass. It is the component that automatically provisions storage and can be thought of as the back-end storage driver. Kubernetes ships no internal provisioner for NFS, but an external one can be used. A provisioner must conform to the volume development specification (CSI). This document uses the external provisioner for NFS.
3. How a Pod requests a PV
Step by step (a minimal Pod example for step ① is sketched after this list):
① The Pod mounts a PVC.
② The PVC references a StorageClass to request a PV.
③ The StorageClass locates its provisioner and asks it to provision the PV.
④ The NFS provisioner creates the PV the PVC needs and hands it to the Pod as storage.
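To make step ① concrete, here is a minimal sketch of a Pod mounting a PVC by name (busybox-test is an illustrative name; pvc-test is the claim created in section III):
apiVersion: v1
kind: Pod
metadata:
  name: busybox-test              # illustrative name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data        # the PVC-backed volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pvc-test       # step ①: the Pod mounts the PVC by name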
II. Setting Up the NFS Service
OS: CentOS 7.9
NFS server IP: 172.19.58.188
NFS client IPs: 172.19.58.189, 172.19.58.190
1. Check whether NFS is already installed
[root@k8s-master ~]# rpm -qa | grep nfs
[root@k8s-master ~]# rpm -qa | grep rpcbind
[root@k8s-master ~]#
2. Install NFS on the server
[root@k8s-master ~]# yum -y install nfs-utils rpcbind
3. Server-side configuration
[root@k8s-master ~]# mkdir -p /data/nfs
[root@k8s-master storageclass]# vim /etc/exports
/data/nfs 172.19.58.0/16(rw,no_root_squash,no_all_squash,sync)
Common options:
Option           Meaning
rw               read-write
ro               read-only
sync             data is flushed to the NFS server's disk before the write returns; safe, but slower
async            data is buffered in memory and written to disk when idle; faster, but data can be lost on a crash
root_squash      requests from a client's root user are mapped to the anonymous user on the server
no_root_squash   requests from a client's root user are mapped to root on the server
all_squash       requests from any client account are mapped to the anonymous user on the server
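As an illustration of combining these options, a read-only export that squashes every client user to the anonymous account could look like this (the path and subnet here are hypothetical):
/data/backup 10.0.0.0/24(ro,all_squash,sync)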
Apply the configuration:
[root@k8s-master ~]# exportfs -r
Start the rpcbind and nfs services (rpcbind must be up before nfs):
[root@k8s-master ~]# systemctl start rpcbind
[root@k8s-master ~]# systemctl start nfs
With the NFS server configured, first verify on the server itself that the export is visible, using the showmount command:
[root@k8s-master storageclass]# showmount -e localhost
Export list for localhost:
/data/nfs 172.19.58.0/16
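The same check can be run from each client against the server's address before mounting; the export list shown should match the server-side output above:
[root@k8s-node1 ~]# showmount -e 172.19.58.188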
4. NFS client configuration
[root@k8s-node1 ~]# yum -y install nfs-utils
[root@k8s-node2 ~]# yum -y install nfs-utils
Create the mount points:
[root@k8s-node1 ~]# mkdir -p /data/nfs
[root@k8s-node2 ~]# mkdir -p /data/nfs
To improve stability, mount explicitly over TCP (older NFS implementations defaulted to UDP):
[root@k8s-node1 ~]# mount -t nfs 172.19.58.188:/data/nfs /data/nfs -o proto=tcp -o nolock
[root@k8s-node2 ~]# mount -t nfs 172.19.58.188:/data/nfs /data/nfs -o proto=tcp -o nolock
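A quick end-to-end check of the mounts (the test file name is illustrative): write a file from one client and confirm it appears on the server:
[root@k8s-node1 ~]# touch /data/nfs/test-from-node1
[root@k8s-master ~]# ls /data/nfs
test-from-node1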
Add the mount to /etc/fstab so it is restored at boot:
[root@k8s-node1 ~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Tue Nov 23 09:19:43 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=8af70500-d87e-4b91-b88e-dff450511d3f / ext4 defaults 1 1
172.19.58.188:/data/nfs /data/nfs nfs defaults 0 0
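To validate the fstab entry without rebooting, remount everything from fstab on each client (a sketch):
[root@k8s-node1 ~]# umount /data/nfs
[root@k8s-node1 ~]# mount -a
[root@k8s-node1 ~]# mount | grep /data/nfs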
III. Setting Up the StorageClass
Note that a StorageClass is a cluster-scoped resource; deploying the provisioner in the default namespace does not stop workloads in other namespaces from using the class.
1. Create the provisioner
If your Kubernetes version is 1.20.x, add the following to the command section of /etc/kubernetes/manifests/kube-apiserver.yaml:
- --feature-gates=RemoveSelfLink=false
Otherwise provisioning fails; on Kubernetes v1.20.13 the provisioner reports: "unexpected error getting claim reference: selfLink was empty, can't make reference"
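For orientation, the flag goes alongside the existing API-server arguments; a trimmed sketch of the manifest (real manifests carry many more flags, which vary per cluster):
spec:
  containers:
    - command:
        - kube-apiserver
        - --advertise-address=172.19.58.188      # existing flags vary by cluster
        - --feature-gates=RemoveSelfLink=false   # the added line
Since this is a static Pod manifest, the kubelet restarts the API server automatically once the file is saved.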
Download the YAML file:
[root@k8s-master tmp]# wget https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/nfs-client/deploy/deployment.yaml
[root@k8s-master tmp]# vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1                    # for high availability, raise this to 3 (or any odd number >= 3)
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          #image: quay.io/external_storage/nfs-subdir-external-provisioner:v4.0.2  # this image supports creating subdirectories per dynamic PV; for testing, either image works
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME          # must match the name in the StorageClass, or the class cannot find the provisioner
              value: fuseim.pri/ifs
            - name: ENABLE_LEADER_ELECTION    # allow leader election for HA; can be omitted when replicas is 1
              value: "True"
            - name: NFS_SERVER
              value: 10.10.10.60              # change to the NFS server address: 172.19.58.188
            - name: NFS_PATH                  # change to the shared path: /data/nfs
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.10.60               # change to the NFS server address: 172.19.58.188
            path: /ifs/kubernetes             # change to the shared path: /data/nfs
[root@k8s-master tmp]# kubectl apply -f deployment.yaml
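After applying, check that the provisioner pod comes up; note that it cannot start until the ServiceAccount created in the next step exists:
[root@k8s-master tmp]# kubectl get pods -l app=nfs-client-provisioner
[root@k8s-master tmp]# kubectl logs -l app=nfs-client-provisioner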
Create the service account and bind its roles (the Deployment above references this ServiceAccount, so its pod cannot start until these objects exist):
[root@k8s-master storageclass]# vim nfs-client-sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["create", "delete", "get", "list", "watch", "patch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master storageclass]# kubectl apply -f nfs-client-sa.yaml
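A quick sanity check that the account and its binding exist:
[root@k8s-master storageclass]# kubectl get serviceaccount nfs-client-provisioner
[root@k8s-master storageclass]# kubectl get clusterrolebinding run-nfs-client-provisioner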
Create the StorageClass:
[root@k8s-master storageclass]# vim nfs-client-class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: course-nfs-storage
provisioner: fuseim.pri/ifs   # must be the same name as PROVISIONER_NAME
# The two lines below are absent by default; they enable per-application
# subdirectories on the NFS share:
#parameters:
#  pathPattern: "${.PVC.namespace}/${.PVC.annotations.nfs.io/storage-path}"
[root@k8s-master data]# kubectl apply -f nfs-client-class.yaml
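If the commented pathPattern parameters are enabled (supported by the nfs-subdir-external-provisioner image mentioned earlier, not by the older nfs-client-provisioner image), a PVC can choose its backing subdirectory through the nfs.io/storage-path annotation; a hypothetical sketch:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myapp-data                     # illustrative name
  annotations:
    nfs.io/storage-path: "myapp"       # the directory becomes <namespace>/myapp on the share
spec:
  storageClassName: course-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi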
Mark it as the default StorageClass:
[root@k8s-master data]# kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Check:
[root@k8s-master storageclass]# kubectl get storageclass
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
course-nfs-storage (default) fuseim.pri/ifs Delete Immediate false 9d
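Once the class is the default, a PVC that names no storage class at all is still served by it; a minimal sketch (the name is illustrative):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-default-test    # illustrative; no storage-class annotation needed
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi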
2. Create a dynamic PV to test
[root@k8s-master storageclass]# vim test-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-test
  annotations:
    volume.beta.kubernetes.io/storage-class: "course-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
[root@k8s-master storageclass]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/pvc-test created
[root@k8s-master data]# kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default pvc-test Bound pvc-5ff7ea1b-be85-4dea-8303-c5d89ecc38ac 1Gi RWX course-nfs-storage 4m57s
kuboard prometheus-k8s-db-prometheus-k8s-0 Bound pvc-9236af94-7ca9-45c1-b2f6-cea89954b475 40Gi RWO course-nfs-storage 6d5h
kuboard prometheus-k8s-db-prometheus-k8s-1 Bound pvc-47a3fe8f-8b94-4b23-812d-a54ca47fe303 40Gi RWO course-nfs-storage 6d5h
kuboard storage-kuboard-loki-0 Bound pvc-bd5e08ec-2cf0-4880-8eb0-492570fac701 20Gi RWX course-nfs-storage 6d4h
[root@k8s-master data]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-47a3fe8f-8b94-4b23-812d-a54ca47fe303 40Gi RWO Delete Bound kuboard/prometheus-k8s-db-prometheus-k8s-1 course-nfs-storage 6d5h
pvc-5ff7ea1b-be85-4dea-8303-c5d89ecc38ac 1Gi RWX Delete Bound default/pvc-test course-nfs-storage 5m23s
pvc-9236af94-7ca9-45c1-b2f6-cea89954b475 40Gi RWO Delete Bound kuboard/prometheus-k8s-db-prometheus-k8s-0 course-nfs-storage 6d5h
pvc-bd5e08ec-2cf0-4880-8eb0-492570fac701 20Gi RWX Delete Bound kuboard/storage-kuboard-loki-0 course-nfs-storage 6d4h
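Back on the NFS server, the provisioner creates one directory per provisioned volume, named ${namespace}-${pvcName}-${pvName}; for the test claim above the listing should contain an entry similar to:
[root@k8s-master ~]# ls /data/nfs
default-pvc-test-pvc-5ff7ea1b-be85-4dea-8303-c5d89ecc38ac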