I. EmptyDir

With emptyDir, the volume is created when the Pod is assigned to a Node and exists for as long as that Pod keeps running on the Node. When the Pod is removed from the Node, the emptyDir is deleted with it and its data is permanently lost.

1. Create an example

# cat pod-emptydir.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: my-demo
  namespace: default
  labels:
    name: myapp
    tier: appfront
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    ports:
    - name: http
      containerPort: 80
    volumeMounts:
    - name: empty1
      mountPath: /usr/share/nginx/html/
  - name: busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: empty1
      mountPath: /data/
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do echo $(date) >> /data/index.html; sleep 10; done"
  volumes:
  - name: empty1
    emptyDir: {}

This example creates two containers: one appends the current date to index.html, and we then check whether requesting the nginx HTML page returns those dates, verifying that the two containers share data through the mounted emptyDir.

2. Verify

# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
my-demo   2/2     Running   0          2m    10.42.0.1   k8s-5   <none>           <none>
# curl 10.42.0.1
Tue Jul 2 16:49:39 UTC 2019
Tue Jul 2 16:49:49 UTC 2019
Tue Jul 2 16:49:59 UTC 2019
Tue Jul 2 16:50:09 UTC 2019
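
As an extra check (a suggested command, not part of the original output), the shared file can also be read from the busybox container and should show the same timestamped lines that nginx serves:

# kubectl exec my-demo -c busybox -- cat /data/index.html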

II. HostPath

hostPath mounts a file or directory from the Node's filesystem into the Pod. Use it when a Pod needs access to files on the Node; the data on the host is not lost when the Pod is deleted.

1. Create an example

# cat pod-hostpath.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-hostpath
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: volume2
      mountPath: /usr/share/nginx/html
  volumes:
    - name: volume2
      hostPath:
        path: /data/pod/volume2
        type: DirectoryOrCreate

type: DirectoryOrCreate means the directory is created if it does not already exist. Other supported types include Directory, FileOrCreate, File, BlockDevice, and so on.
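
To illustrate one of the other types (a hypothetical fragment, not used in this walkthrough), FileOrCreate mounts a single file and creates an empty one on the host if it does not exist:

  volumes:
    - name: nginx-conf
      hostPath:
        path: /data/pod/nginx.conf
        type: FileOrCreate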

# kubectl get pod -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP          NODE    NOMINATED NODE   READINESS GATES
pod-hostpath   1/1     Running   0          5m49s   10.40.0.1   k8s-4   <none>           <none>

2. Switch to the node where the pod is running and write a file into the local directory

[root@K8S-4 ~]# touch /data/pod/volume2/index.html
[root@K8S-4 ~]# echo $(date) >> /data/pod/volume2/index.html

3. Verify the data

# curl 10.40.0.1
Wed Jul 3 01:10:34 CST 2019

4. Delete the Pod and check whether the data is still there

# kubectl delete -f pod-hostpath.yaml 
pod "pod-hostpath" deleted
[root@K8S-4 ~]# cat /data/pod/volume2/index.html
Wed Jul 3 01:10:34 CST 2019

III. NFS

An NFS share can be mounted into a Pod, and data on NFS persists. When the Pod is deleted, the content is not removed; the volume is simply unmounted. This means data can be prepared on NFS in advance and passed between Pods, and an NFS share can be mounted read-write by multiple pods at the same time.

1. Install and configure nfs on any one of the nodes

[root@K8S-5 ~]# yum install -y nfs-utils
[root@K8S-5 ~]# mkdir -p /data/nfs
[root@K8S-5 ~]# vim /etc/exports
/data/nfs 20.0.20.0/24(rw,no_root_squash)
[root@K8S-5 ~]# systemctl start nfs
[root@K8S-5 ~]# showmount -e
Export list for K8S-5:
/data/nfs 20.0.20.0/24

2. Test the mount from another node

[root@K8S-4 ~]#  yum install -y nfs-utils
[root@K8S-4 ~]# mount -t nfs K8S-5:/data/nfs /mnt
[root@K8S-4 ~]# df -h
...
K8S-5:/data/nfs           50G  2.3G   48G   5% /mnt

3. Create an example

# cat pod-nfs.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-nfs
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: nfs
      mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs
      nfs:
        path: /data/nfs
        server: K8S-5
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod-nfs   1/1     Running   0          8s    10.42.0.1   k8s-5   <none>           <none>

The nfs client packages must be installed on the node where the pod runs for the mount to succeed.

4. Create a test file on the nfs server and test

[root@K8S-5 ~]# echo $(date) >> /data/nfs/index.html
# curl 10.42.0.1
Wed Jul 3 01:52:11 CST 2019

IV. PV & PVC

  • A PersistentVolume (PV) is a cluster-level storage resource backed by a volume plugin. Its lifecycle is independent of any Pod (it is not deleted when a Pod is deleted), and it does not belong to any Namespace.
  • A PersistentVolumeClaim (PVC) is a user's request for storage. A PVC consumes PV resources and can request a specific size and access modes; it belongs to a Namespace, and only Pods in that same Namespace can reference it.
Here is an example that uses PV and PVC backed by nfs:

1. Configure the nfs storage

[root@K8S-5 nfs]# mkdir v{1,2,3}
[root@K8S-5 nfs]# ls
v1  v2  v3
[root@K8S-5 nfs]# vim /etc/exports
/data/nfs/v1 20.0.20.0/24(rw,no_root_squash)
/data/nfs/v2 20.0.20.0/24(rw,no_root_squash)
/data/nfs/v3 20.0.20.0/24(rw,no_root_squash)
[root@K8S-5 nfs]# exportfs -arv
exporting 20.0.20.0/24:/data/nfs/v3
exporting 20.0.20.0/24:/data/nfs/v2
exporting 20.0.20.0/24:/data/nfs/v1
[root@K8S-5 nfs]# showmount -e
Export list for K8S-5:
/data/nfs/v3 20.0.20.0/24
/data/nfs/v2 20.0.20.0/24
/data/nfs/v1 20.0.20.0/24

2. Define the PVs

# cat pv-nfs.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/nfs/v1
    server: K8S-5
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/nfs/v2
    server: K8S-5
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/nfs/v3
    server: K8S-5
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  capacity:
    storage: 3Gi
# kubectl apply -f pv-nfs.yaml 
persistentvolume/pv001 created
persistentvolume/pv002 created
persistentvolume/pv003 created
# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                   19s
pv002   2Gi        RWO            Retain           Available                                   19s
pv003   3Gi        RWO,RWX        Retain           Available                                   19s
  • Access modes:
    ReadWriteOnce - the volume can be mounted read/write by a single node
    ReadOnlyMany - the volume can be mounted read-only by many nodes
    ReadWriteMany - the volume can be mounted read/write by many nodes
  • On the command line the access modes are abbreviated as:
    RWO - ReadWriteOnce
    ROX - ReadOnlyMany
    RWX - ReadWriteMany
  • A volume can only be mounted with one access mode at a time, even if it supports several.

  • The reclaim policy of a PV determines what happens to the underlying volume after its PVC is released. The current policies are Retain, Recycle, and Delete.
  • To change the reclaim policy of a PV, run the following command (a concrete example follows this list):
    kubectl patch pv <pv_name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
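
For instance, to switch pv002 defined above to the Recycle policy (an illustrative command, not run in this walkthrough):

# kubectl patch pv pv002 -p '{"spec":{"persistentVolumeReclaimPolicy":"Recycle"}}'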

3. Define a PVC

# cat pod-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: default
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-pvc
  namespace: default
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
    volumeMounts:
    - name: nfsvolume
      mountPath: /usr/share/nginx/html
  volumes:
    - name: nfsvolume
      persistentVolumeClaim:
        claimName: mypvc
# kubectl apply -f pod-pvc.yaml 
persistentvolumeclaim/mypvc created
pod/pod-pvc created
# kubectl get pv
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM           STORAGECLASS   REASON   AGE
pv001   1Gi        RWO,RWX        Retain           Available                                           6m34s
pv002   2Gi        RWO            Retain           Available                                           6m34s
pv003   3Gi        RWO,RWX        Retain           Bound       default/mypvc                           6m34s
# kubectl get pvc
NAME    STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc   Bound    pv003    3Gi        RWO,RWX                       14s
# kubectl get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod-pvc   1/1     Running   0          4s    10.42.0.1   k8s-5   <none>           <none>

4. Access test

Create index.html on the nfs server and write some data into it

[root@K8S-5 nfs]# echo $(date) >> /data/nfs/v3/index.html
# curl 10.42.0.1
Wed Jul 3 18:42:33 CST 2019

V. StorageClass

A StorageClass is an abstract definition of a storage resource. It hides the backend details behind a user's PVC request and removes the manual work of managing PVs: the system creates and binds PVs automatically, which enables dynamic provisioning.
For example, suppose 1TB of space on a storage system is made available to Kubernetes. When a user requests a 10Gi PVC, a request is sent to the storage backend through its REST API to create a 10Gi image, and a 10Gi PV is then defined in the cluster and bound to that PVC for mounting.

Configuration example:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: standard
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
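
A PVC can then request dynamically provisioned storage simply by naming the class (a minimal sketch, assuming the "standard" class above exists in the cluster):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dynamic-pvc
spec:
  storageClassName: standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi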

VI. Configure a GlusterFS dynamic storage example

Reference: GlusterFS Kubernetes

1. Prepare the environment
① Node information

# kubectl get node -o wide
NAME    STATUS   ROLES    AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION          CONTAINER-RUNTIME
k8s-1   Ready    master   34d   v1.14.2   20.0.20.101   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-2   Ready    <none>   33d   v1.14.2   20.0.20.102   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-3   Ready    <none>   33d   v1.14.2   20.0.20.103   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-4   Ready    <none>   15h   v1.14.2   20.0.20.104   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6
k8s-5   Ready    <none>   15h   v1.14.2   20.0.20.105   <none>        CentOS Linux 7 (Core)   3.10.0-693.el7.x86_64   docker://18.9.6

② Set labels

The deployment uses k8s-3, k8s-4, and k8s-5 as the three GlusterFS nodes. Label these three nodes so that the GlusterFS pods will be scheduled onto them.

[root@K8S-1 ~]# kubectl label node k8s-3 storagenode=glusterfs
node/k8s-3 labeled
[root@K8S-1 ~]# kubectl label node k8s-4 storagenode=glusterfs
node/k8s-4 labeled
[root@K8S-1 ~]# kubectl label node k8s-5 storagenode=glusterfs
node/k8s-5 labeled
[root@K8S-1 ~]# kubectl get node -L storagenode
NAME    STATUS   ROLES    AGE   VERSION   STORAGENODE
k8s-1   Ready    master   34d   v1.14.2   
k8s-2   Ready    <none>   33d   v1.14.2   
k8s-3   Ready    <none>   33d   v1.14.2   glusterfs
k8s-4   Ready    <none>   16h   v1.14.2   glusterfs
k8s-5   Ready    <none>   16h   v1.14.2   glusterfs

③ Install the GFS client

To use or mount glusterfs properly from the kubernetes cluster, glusterfs-fuse must be installed on the corresponding nodes:

# yum install -y glusterfs glusterfs-fuse

④ Create a simulated disk

Managing GFS with Heketi requires a blank disk on each of the corresponding nodes; here a loop device is used to simulate that disk. Note that a loop device may be detached after the operating system reboots, which would leave GlusterFS unable to work.

[root@K8S-2 ~]# mkdir -p /home/glusterfs
[root@K8S-2 ~]# cd /home/glusterfs/
[root@K8S-2 glusterfs]# dd if=/dev/zero of=gluster.disk bs=1024 count=$(( 1024 * 1024 * 20 ))
20971520+0 records in
20971520+0 records out
21474836480 bytes (21 GB) copied, 1264.7 s, 17.0 MB/s
[root@K8S-2 glusterfs]# losetup -l
NAME       SIZELIMIT OFFSET AUTOCLEAR RO BACK-FILE
/dev/loop0         0      0         0  0 /home/glusterfs/gluster.disk
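
The transcript above shows the loop device already attached; the attach step itself would look like this (a sketch assuming /dev/loop0 is free; it has to be repeated, e.g. via /etc/rc.local, after every reboot):

# losetup /dev/loop0 /home/glusterfs/gluster.disk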

⑤ Download gluster-kubernetes

git clone https://github.com/gluster/gluster-kubernetes.git

2. Deploy GFS

[root@K8S-1 ~]# cd gluster-kubernetes/deploy/kube-templates/
[root@K8S-1 kube-templates]# cat glusterfs-daemonset.yaml 
---
kind: DaemonSet
apiVersion: extensions/v1beta1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs: pod
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: IfNotPresent
        name: glusterfs
        env:
        # alternative for /dev volumeMount to enable access to *all* devices
        - name: HOST_DEV_DIR
          value: "/mnt/host-dev"
        # set GLUSTER_BLOCKD_STATUS_PROBE_ENABLE to "1" so the
        # readiness/liveness probe validate gluster-blockd as well
        - name: GLUSTER_BLOCKD_STATUS_PROBE_ENABLE
          value: "1"
        - name: GB_GLFS_LRU_COUNT
          value: "15"
        - name: TCMU_LOGDIR
          value: "/var/log/glusterfs/gluster-block"
        resources:
          requests:
            memory: 100Mi
            cpu: 100m
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-host-dev
          mountPath: "/mnt/host-dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-block-sys-class
          mountPath: "/sys/class"
        - name: glusterfs-block-sys-module
          mountPath: "/sys/module"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        - name: kernel-modules
          mountPath: "/lib/modules"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh readiness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - "if command -v /usr/local/bin/status-probe.sh; then /usr/local/bin/status-probe.sh liveness; else systemctl status glusterd.service; fi"
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 50
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-host-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-block-sys-class
        hostPath:
          path: "/sys/class"
      - name: glusterfs-block-sys-module
        hostPath:
          path: "/sys/module"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"
      - name: kernel-modules
        hostPath:
          path: "/lib/modules"
[root@K8S-1 kube-templates]# kubectl create -f glusterfs-daemonset.yaml 
daemonset.extensions/glusterfs created
[root@K8S-1 kube-templates]# kubectl get pod
NAME              READY   STATUS    RESTARTS   AGE
glusterfs-l9cvv   0/1     Running   0          2m33s
glusterfs-mxnmd   1/1     Running   0          2m33s
glusterfs-w9gwt   0/1     Running   0          2m33s

3. Deploy Heketi

Heketi is a framework that provides a RESTful API for managing GlusterFS volumes and supports managing multiple GlusterFS clusters.

① Create the topology.json file

Before Heketi can manage the GFS cluster, it has to be given the GlusterFS cluster topology. The GFS nodes and their devices are defined in topology.json.

[root@K8S-1 kube-templates]# cd ..
[root@K8S-1 deploy]# cp topology.json.sample topology.json
[root@K8S-1 deploy]# vim topology.json
[root@K8S-1 deploy]# cat topology.json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-3"
              ],
              "storage": [
                "20.0.20.103"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-4"
              ],
              "storage": [
                "20.0.20.104"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "K8S-5"
              ],
              "storage": [
                "20.0.20.105"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/loop0"
          ]
        }
      ]
    }
  ]
}

② Create Heketi with gk-deploy from the gluster-kubernetes project

The script uses the kubectl get pod --show-all option in one place; current kubectl versions have removed --show-all, so it needs to be deleted from the script.
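
One way to strip the flag before running the script (a suggested command, assuming --show-all only appears in kubectl get invocations inside gk-deploy):

# sed -i 's/ --show-all//g' gk-deploy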

[root@K8S-1 deploy]# ./gk-deploy
......
[Y]es, [N]o? [Default: Y]: 
Using Kubernetes CLI.
Using namespace "default".
Checking for pre-existing resources...
  GlusterFS pods ... found.
  deploy-heketi pod ... not found.
  heketi pod ... not found.
  gluster-s3 pod ... not found.
Creating initial resources ... serviceaccount/heketi-service-account created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view created
clusterrolebinding.rbac.authorization.k8s.io/heketi-sa-view labeled
OK
secret/heketi-config-secret created
secret/heketi-config-secret labeled
service/deploy-heketi created
deployment.extensions/deploy-heketi created
Waiting for deploy-heketi pod to start ... OK
Creating cluster ... ID: f93b7411594f33075a762c1f11c48b9e
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node k8s-3 ... ID: 80faa4b3ab2ad0af09161151e02fce7c
Adding device /dev/loop0 ... OK
Creating node k8s-4 ... ID: 50351cd51945c4094c4cbfb6dbd9ed0c
Adding device /dev/loop0 ... OK
Creating node k8s-5 ... ID: 71a2df81f463f7733d8af18ff0e38a7d
Adding device /dev/loop0 ... OK
heketi topology loaded.
Saving /tmp/heketi-storage.json
secret/heketi-storage-secret created
endpoints/heketi-storage-endpoints created
service/heketi-storage-endpoints created
job.batch/heketi-storage-copy-job created
service/heketi-storage-endpoints labeled
pod "deploy-heketi-865f55765-j6ghj" deleted
service "deploy-heketi" deleted
deployment.apps "deploy-heketi" deleted
replicaset.apps "deploy-heketi-865f55765" deleted
job.batch "heketi-storage-copy-job" deleted
secret "heketi-storage-secret" deleted
service/heketi created
deployment.extensions/heketi created
Waiting for heketi pod to start ... OK
heketi is now running and accessible via http://10.40.0.1:8080 . To run
administrative commands you can install 'heketi-cli' and use it as follows:

  # heketi-cli -s http://10.40.0.1:8080 --user admin --secret '<ADMIN_KEY>' cluster list

You can find it at https://github.com/heketi/heketi/releases . Alternatively,
use it from within the heketi pod:

  # /usr/bin/kubectl -n default exec -i heketi-85dbbbb55-cvfsp -- heketi-cli -s http://localhost:8080 --user admin --secret '<ADMIN_KEY>' cluster list

For dynamic provisioning, create a StorageClass similar to this:

---
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: glusterfs-storage
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.40.0.1:8080"

Deployment complete!

③ Enter the Heketi container and view the topology information

[root@K8S-1 deploy]# kubectl exec -ti heketi-85dbbbb55-cvfsp -- /bin/bash
[root@heketi-85dbbbb55-cvfsp /]# heketi-cli topology info

Cluster Id: f93b7411594f33075a762c1f11c48b9e

    File:  true
    Block: true

    Volumes:

    Name: heketidbstorage
    Size: 2
    Id: d9ea60134a1f2ab16f0973034ab31110
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Mount: 20.0.20.104:heketidbstorage
    Mount Options: backup-volfile-servers=20.0.20.105,20.0.20.103
    Durability Type: replicate
    Replica: 3
    Snapshot: Disabled

        Bricks:
            Id: 40719676ab880160ce872e3812775bdd
            Path: /var/lib/heketi/mounts/vg_aeb3d8cd14f81464471a9b8bb4a29b99/brick_40719676ab880160ce872e3812775bdd/brick
            Size (GiB): 2
            Node: 80faa4b3ab2ad0af09161151e02fce7c
            Device: aeb3d8cd14f81464471a9b8bb4a29b99

            Id: 9e6f91cfab59db365567cf6a394f3393
            Path: /var/lib/heketi/mounts/vg_5625e95ac4aa99f058e1953aafd74426/brick_9e6f91cfab59db365567cf6a394f3393/brick
            Size (GiB): 2
            Node: 50351cd51945c4094c4cbfb6dbd9ed0c
            Device: 5625e95ac4aa99f058e1953aafd74426

            Id: ff85ed6fc05c6dffbd6cae6609aff9b4
            Path: /var/lib/heketi/mounts/vg_ba5eb17eee4105a54764a6dcc0323f39/brick_ff85ed6fc05c6dffbd6cae6609aff9b4/brick
            Size (GiB): 2
            Node: 71a2df81f463f7733d8af18ff0e38a7d
            Device: ba5eb17eee4105a54764a6dcc0323f39

    Nodes:

    Node Id: 50351cd51945c4094c4cbfb6dbd9ed0c
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-4
    Storage Hostnames: 20.0.20.104
    Devices:
        Id:5625e95ac4aa99f058e1953aafd74426   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:9e6f91cfab59db365567cf6a394f3393   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_5625e95ac4aa99f058e1953aafd74426/brick_9e6f91cfab59db365567cf6a394f3393/brick

    Node Id: 71a2df81f463f7733d8af18ff0e38a7d
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-5
    Storage Hostnames: 20.0.20.105
    Devices:
        Id:ba5eb17eee4105a54764a6dcc0323f39   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:ff85ed6fc05c6dffbd6cae6609aff9b4   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_ba5eb17eee4105a54764a6dcc0323f39/brick_ff85ed6fc05c6dffbd6cae6609aff9b4/brick

    Node Id: 80faa4b3ab2ad0af09161151e02fce7c
    State: online
    Cluster Id: f93b7411594f33075a762c1f11c48b9e
    Zone: 1
    Management Hostnames: k8s-3
    Storage Hostnames: 20.0.20.103
    Devices:
        Id:aeb3d8cd14f81464471a9b8bb4a29b99   Name:/dev/loop0          State:online    Size (GiB):19      Used (GiB):2       Free (GiB):17      
            Bricks:
                Id:40719676ab880160ce872e3812775bdd   Size (GiB):2       Path: /var/lib/heketi/mounts/vg_aeb3d8cd14f81464471a9b8bb4a29b99/brick_40719676ab880160ce872e3812775bdd/brick

4. Define a StorageClass

# cat glusterfs-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-volume
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.40.0.1:8080"
  restauthenabled: "false"
# kubectl get storageclass
NAME             PROVISIONER               AGE
gluster-volume   kubernetes.io/glusterfs   13s
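
If Heketi authentication were enabled, the StorageClass would also need to carry credentials; a sketch of that variant (restuser and the secret name here are hypothetical):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-volume
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://10.40.0.1:8080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"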

5. Define a PVC

# cat glusterfs-pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: glusterfs-pvc
spec:
  storageClassName: gluster-volume
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Here you can see that the system has automatically created a PV:

# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS     REASON   AGE
persistentvolume/pvc-db9abc87-9e0a-11e9-a2f3-00505694834d   1Gi        RWX            Delete           Bound    default/glusterfs-pvc   gluster-volume            9s

NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
persistentvolumeclaim/glusterfs-pvc   Bound    pvc-db9abc87-9e0a-11e9-a2f3-00505694834d   1Gi        RWX            gluster-volume   26s

6. Define a Pod that uses the PVC

# cat pod-usepvc.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: busybox
spec:
  containers:
    - image: busybox
      command:
        - sleep
        - "3600"
      name: busybox
      volumeMounts:
        - mountPath: /usr/share/busybox
          name: mypvc
  volumes:
    - name: mypvc
      persistentVolumeClaim:
        claimName: glusterfs-pvc
# kubectl exec -ti busybox -- /bin/sh
/ # df -h
shm                      64.0M         0     64.0M   0% /dev/shm
20.0.20.103:vol_7852b88167b5f961d1f7869674851490
                       1020.1M     42.8M    977.3M   4% /usr/share/busybox
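
A quick write/read inside the mount point confirms the gluster volume is usable (a suggested check, not part of the original session):

/ # echo hello > /usr/share/busybox/test.txt
/ # cat /usr/share/busybox/test.txt
hello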