Dynamic Storage with a Kubernetes StorageClass
Environment: Ubuntu 16.04 + Kubernetes 1.15 + an NFS storage node
To satisfy users' dynamic storage provisioning needs, we use a StorageClass.
First, deploy an NFS server on the machine with IP 192.168.69.132.
The /etc/exports configuration file is as follows:
/data/v6 *(rw,sync,no_subtree_check)
/data/v5 *(rw,sync,no_subtree_check)
/data/v4 *(rw,sync,no_subtree_check)
/data/v3 *(rw,sync,no_subtree_check)
/data/v2 *(rw,sync,no_subtree_check)
/data/v1 *(rw,sync,no_subtree_check)
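Each line in /etc/exports pairs an exported directory with a client specification and a parenthesized option list. As a quick illustration of that format (a hypothetical parser written for this post, not part of any NFS tooling), the six lines above can be split into (path, clients, options) triples:

```python
# Sketch: parse the /etc/exports lines above into (path, clients, options)
# triples, just to illustrate the format. Not a full exports parser.
EXPORTS = """\
/data/v6 *(rw,sync,no_subtree_check)
/data/v5 *(rw,sync,no_subtree_check)
/data/v4 *(rw,sync,no_subtree_check)
/data/v3 *(rw,sync,no_subtree_check)
/data/v2 *(rw,sync,no_subtree_check)
/data/v1 *(rw,sync,no_subtree_check)
"""

def parse_exports(text):
    entries = []
    for line in text.splitlines():
        path, spec = line.split()                      # "/data/v6" and "*(rw,...)"
        clients, opts = spec.rstrip(")").split("(")    # "*" and "rw,sync,..."
        entries.append((path, clients, opts.split(",")))
    return entries

for path, clients, opts in parse_exports(EXPORTS):
    print(path, clients, opts)
```

Here `*` allows any client, `rw` grants read-write access, `sync` forces writes to disk before replying, and `no_subtree_check` disables subtree checking. After editing the file, reload the export table (e.g. with exportfs -ra) before proceeding.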
Create the PVs: pv-storageclass-demo.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv001
  labels:
    name: pv001
spec:
  nfs:
    path: /data/v1
    server: 192.168.69.132
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  storageClassName: stateful-storage
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv002
  labels:
    name: pv002
spec:
  nfs:
    path: /data/v2
    server: 192.168.69.132
  accessModes: ["ReadWriteMany"]
  storageClassName: stateful-storage
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv003
  labels:
    name: pv003
spec:
  nfs:
    path: /data/v3
    server: 192.168.69.132
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  storageClassName: stateful-storage
  capacity:
    storage: 5Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv004
  labels:
    name: pv004
spec:
  nfs:
    path: /data/v4
    server: 192.168.69.132
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  storageClassName: stateful-storage
  capacity:
    storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv005
  labels:
    name: pv005
spec:
  nfs:
    path: /data/v5
    server: 192.168.69.132
  accessModes: ["ReadWriteMany","ReadWriteOnce"]
  storageClassName: stateful-storage
  capacity:
    storage: 10Gi
---
kubectl apply -f pv-storageclass-demo.yaml
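The five PV manifests differ only in name, NFS path, access modes, and capacity, so they can also be generated from a small table. The script below is a sketch written for this post (plain string formatting, so no YAML library is needed); it reproduces the manifest file above:

```python
# Sketch: generate the five PV manifests above from a table of
# (name, nfs path, access modes, size). Plain string formatting is
# used so no third-party YAML library is required.
PV_TEMPLATE = """\
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {name}
  labels:
    name: {name}
spec:
  nfs:
    path: {path}
    server: 192.168.69.132
  accessModes: {modes}
  storageClassName: stateful-storage
  capacity:
    storage: {size}
"""

VOLUMES = [
    ("pv001", "/data/v1", '["ReadWriteMany","ReadWriteOnce"]', "5Gi"),
    ("pv002", "/data/v2", '["ReadWriteMany"]',                 "5Gi"),
    ("pv003", "/data/v3", '["ReadWriteMany","ReadWriteOnce"]', "5Gi"),
    ("pv004", "/data/v4", '["ReadWriteMany","ReadWriteOnce"]', "10Gi"),
    ("pv005", "/data/v5", '["ReadWriteMany","ReadWriteOnce"]', "10Gi"),
]

# Join the documents with the standard YAML document separator.
manifest = "---\n".join(
    PV_TEMPLATE.format(name=n, path=p, modes=m, size=s)
    for n, p, m, s in VOLUMES
)
print(manifest)
```

Writing the output to pv-storageclass-demo.yaml and running kubectl apply -f on it has the same effect as applying the hand-written file.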
Create the StorageClass: storage-class-demo.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: stateful-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
kubectl apply -f storage-class-demo.yaml
Note that kubernetes.io/no-provisioner means no volumes are provisioned on demand: this class simply groups the pre-created PVs above, and WaitForFirstConsumer delays PVC binding until a pod that uses the claim is actually scheduled.
Create the Service and StatefulSet: stateful-demo.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: myapp-svc-stateful
  labels:
    app: myapp-svc-stateful
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: myapp-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp
spec:
  serviceName: myapp-svc-stateful
  replicas: 3
  selector:
    matchLabels:
      app: myapp-pod
  template:
    metadata:
      labels:
        app: myapp-pod
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v1
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: myappdata
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: myappdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      # storageClassName: "gluster-dynamic"
      storageClassName: "stateful-storage"
      resources:
        requests:
          storage: 2Gi
kubectl apply -f stateful-demo.yaml
Checking the bindings (e.g. with kubectl get pvc) shows that myapp-0 is bound to pv003, and pv003 maps to the directory /data/v3 on the NFS server. Create an index.html file in that directory with the content: "this is pv003, index!!!"
Access the pod:
curl 10.244.1.174
This returns the content of index.html from pv003.
Note: the storageClassName: stateful-storage set on each PV must match the name of the StorageClass resource.
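Why can a 2Gi ReadWriteOnce claim bind pv001/pv003/pv004/pv005 but never pv002? A PV is eligible only if its storageClassName matches, its access modes include every mode the claim requests, and its capacity covers the request. The snippet below is a simplified sketch of that eligibility filter (not the actual controller code, which also applies further rules such as preferring the smallest sufficient PV):

```python
# Simplified sketch of static PV binding eligibility: a PV matches a
# claim when the PV offers every requested access mode and its capacity
# covers the requested size. (All PVs here share storageClassName
# stateful-storage, so the class check is omitted.)
def gi(size):
    """Parse a size such as '5Gi' into an integer number of GiB."""
    return int(size.rstrip("Gi"))

PVS = {
    "pv001": ({"ReadWriteMany", "ReadWriteOnce"}, "5Gi"),
    "pv002": ({"ReadWriteMany"},                  "5Gi"),
    "pv003": ({"ReadWriteMany", "ReadWriteOnce"}, "5Gi"),
    "pv004": ({"ReadWriteMany", "ReadWriteOnce"}, "10Gi"),
    "pv005": ({"ReadWriteMany", "ReadWriteOnce"}, "10Gi"),
}

def eligible(pvs, requested_modes, requested_size):
    return sorted(
        name for name, (modes, size) in pvs.items()
        if requested_modes <= modes and gi(size) >= gi(requested_size)
    )

# The volumeClaimTemplates above request 2Gi with ReadWriteOnce.
print(eligible(PVS, {"ReadWriteOnce"}, "2Gi"))
# → ['pv001', 'pv003', 'pv004', 'pv005']
```

pv002 is excluded because it only offers ReadWriteMany, not the ReadWriteOnce mode the claim requests; which of the remaining four each replica actually gets is decided by the controller at binding time.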