Scenario

Normally, GlusterFS can be consumed by Kubernetes with the help of Heketi.
However, consider the following scenario:

  • GlusterFS was installed before Kubernetes, and the machines no longer have spare disks to dedicate to Heketi
  • You have no permission to create new volumes; you can only use the volumes that have been provided to you

For this scenario, the following architecture can be used: the existing GlusterFS volume is mounted on server A and server B, re-exported over NFS, and consumed from Kubernetes through a Service/Endpoints pair plus an nfs-client provisioner.

Deployment steps
  • On server A and server B, install and start the NFS service
yum -y install nfs-utils rpcbind
systemctl start rpcbind
systemctl start nfs
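If the export should come back automatically after a reboot, the two services can also be enabled at boot. This is plain systemd usage; note that on some distributions the NFS unit is called nfs-server rather than nfs:

systemctl enable rpcbind
systemctl enable nfs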
  • On server A and server B, configure the NFS exports
# /ai-data/nfs-data is the mount point of the GlusterFS volume on my servers
# NFS support must be enabled on the GlusterFS volume
# add the following line to /etc/exports
/ai-data/nfs-data *(rw,fsid=0,no_root_squash)
exportfs -rv
# if the command above prints the export without errors, the configuration is working

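As a quick sanity check, the export can also be queried from any machine that can reach the servers; showmount ships with nfs-utils, and the IPs below are server A and server B from this example:

showmount -e 192.168.0.213
showmount -e 192.168.0.214
# both should list /ai-data/nfs-data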

  • Configure the Kubernetes Service
    YAML:
apiVersion: v1
kind: Service
metadata:
  name: nfs-server-svc
spec:
  ports:
    - name: tcp
      port: 2049
      protocol: TCP
    - name: udp
      port: 2049
      protocol: UDP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: nfs-server-svc
  namespace: default
subsets:
  - addresses:
      - ip: 192.168.0.213 # server A
    ports:
      - name: tcp
        port: 2049
        protocol: TCP
      - name: udp
        port: 2049
        protocol: UDP
  - addresses:
      - ip: 192.168.0.214 # server B
    ports:
      - name: tcp
        port: 2049
        protocol: TCP
      - name: udp
        port: 2049
        protocol: UDP


The Service above can also be exposed as a NodePort; if you have no need for access from outside the cluster, you can skip that.
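Note that the Service above has no selector, so Kubernetes will not populate its Endpoints automatically; the manually created Endpoints object is what points it at the two NFS servers. After applying the manifests you can read back the cluster IP that the provisioner will use (the file name nfs-server-svc.yaml is just an assumption for this example):

kubectl apply -f nfs-server-svc.yaml
kubectl get svc nfs-server-svc        # note the CLUSTER-IP, e.g. 10.96.64.180
kubectl get endpoints nfs-server-svc  # should list 192.168.0.213:2049 and 192.168.0.214:2049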

  • Deploy the nfs-client provisioner
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner-1215
  labels:
    app: nfs-client-provisioner-new
  # replace with namespace where provisioner is deployed
  namespace: ha-test-env # pay attention to the namespace
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner-1215
  template:
    metadata:
      labels:
        app: nfs-client-provisioner-1215
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          securityContext:
            privileged: true
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifsnew # note this name; it must match the provisioner of the StorageClass below
            - name: NFS_SERVER
              value: 10.96.64.180 # the cluster IP of the nfs-server-svc Service
            - name: NFS_PATH
              value: /  # must be the root path, because NFSv4 (fsid=0) is used
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.96.64.180
            path: / # must be the root path, because NFSv4 (fsid=0) is used
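The Deployment above references a ServiceAccount named nfs-client-provisioner, but its manifest is not shown. Below is a minimal sketch of the ServiceAccount plus the RBAC it typically needs, adapted from the upstream nfs-client-provisioner examples; the object names are conventional, and the namespace must match the one used above:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: ha-test-env
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: ha-test-env
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io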
  • Configure the StorageClass
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage-1215
provisioner: fuseim.pri/ifsnew
parameters:
  onDelete: delete
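Optionally, this class can be made the cluster default so that PVCs without an explicit storageClassName use it; this is the standard default-class annotation, not something specific to this provisioner:

kubectl annotate storageclass nfs-storage-1215 \
  storageclass.kubernetes.io/is-default-class="true"
kubectl get storageclass   # the class should now be marked (default)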
  • Run a test
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: magpie-claim
spec:
  storageClassName: nfs-storage-1215
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi    # request 100Mi of space
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-pvc-deployment
  labels:
    app: mg-tornado
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mg-tornado
  template:
    metadata:
      labels:
        app: mg-tornado
    spec:
      nodeSelector:
        use: magpie    # adjust or remove to match the labels on your nodes
      containers:
      - name: busybox
        image: busybox
        # keep the container running so we can exec into it for the test
        command: ["sh", "-c", "sleep 3600"]
        volumeMounts:
        - mountPath: /db
          name: db
      volumes:
      - name: db
        persistentVolumeClaim:
          claimName: magpie-claim   # must match the name of the PVC defined above
  • Exec into the container and write some data to the mount point inside it; if the write succeeds, the whole chain is working (see the commands below)
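For example, assuming the two manifests above were saved as test-pvc.yaml (the file name is an assumption):

kubectl apply -f test-pvc.yaml
kubectl get pvc magpie-claim   # STATUS should become Bound once the provisioner creates the PV
# exec into the pod of the test Deployment and write to the NFS-backed mount
kubectl exec -it deploy/test-pvc-deployment -- sh -c 'echo hello > /db/test && cat /db/test'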