K8s Basics - Pod Data Persistence
Pod Data Persistence
- The Volume abstraction in Kubernetes provides the ability to mount external storage into a Pod's containers.
- A Pod must define both a volume source (spec.volumes) and a mount point (spec.containers.volumeMounts) before it can use a Volume.
Common volume types
- Local volumes: hostPath, emptyDir
- Network volumes: nfs, ceph (cephfs, rbd), glusterfs
- Public cloud: aws, azure
- Kubernetes resources: downwardAPI, configMap, secret
emptyDir
Creates an empty volume and mounts it into the Pod's containers. When the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in the same Pod.
Similar to Docker's volume type.
YAML configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: write
    image: centos
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
Location of the volume data on the node:
/var/lib/kubelet/pods/POD_ID/volumes/kubernetes.io~empty-dir/data
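To confirm that the two containers really share the emptyDir volume, you can tail the read container's log and watch the numbers written by the write container appear. A minimal sketch, assuming the Pod above has been saved to a file (the filename emptydir-pod.yaml is an assumption):

```shell
# Apply the Pod manifest (filename is an assumption)
kubectl apply -f emptydir-pod.yaml
# The read container runs "tail -f /data/hello", so its log streams
# the numbers the write container appends to the shared volume
kubectl logs -f my-pod -c read
```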
hostPath
Mounts a file or directory from the node's filesystem into the Pod's containers.
Use case: containers in a Pod need access to host files,
e.g. log-collection agents, monitoring agents, or containers that call the host's JDK or Maven environment.
Usually used together with a DaemonSet controller.
Similar to Docker's bind-mount type.
YAML configuration:
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 36000
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory
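The `type` field controls how kubelet validates the host path before mounting: `Directory` requires the directory to already exist. Other values from the upstream hostPath documentation can be substituted; a sketch (the path shown is an example, not from this tutorial):

```yaml
  volumes:
  - name: data
    hostPath:
      path: /var/log/app        # example path, an assumption
      type: DirectoryOrCreate   # create the directory (mode 0755) if missing;
                                # other values: File, FileOrCreate, Socket,
                                # CharDevice, BlockDevice, or "" (no check)
```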
NFS
Install NFS:
yum install nfs-utils -y
Configure the export:
cat > /etc/exports << EOF
/data/NFS/wwwroot *(rw,sync,no_root_squash)
EOF
Start NFS:
systemctl start nfs
Verify the mount:
# List the NFS exports
showmount -e 192.168.104.200
Export list for 192.168.104.200:
/data/NFS/wwwroot *
# Mount the shared directory
mount -t nfs 192.168.104.200:/data/NFS/wwwroot /mnt/wwwroot/
Every node that mounts the share must also have nfs-utils installed.
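A manual `mount` does not survive a reboot. To make the mount persistent you can add an /etc/fstab entry instead; a sketch using the server and mount point from above (`_netdev` delays the mount until the network is up):

```
192.168.104.200:/data/NFS/wwwroot  /mnt/wwwroot  nfs  defaults,_netdev  0  0
```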
YAML configuration:
nginx-pv.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
      volumes:
      - name: data
        nfs:
          server: 192.168.104.200
          path: /data/NFS/wwwroot
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
Apply the manifest:
kubectl apply -f nginx-pv.yaml
Verify:
Create an index.html in the NFS shared directory, then access nginx in the Pod; if it serves the index.html from the NFS share, the volume is working.
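The verification step can be sketched with two commands; the NodePort value 30080 is an assumption, so check `kubectl get svc nginx-service` for the port actually allocated:

```shell
# Create a test page directly on the NFS export
echo "hello from nfs" > /data/NFS/wwwroot/index.html
# Fetch it through the NodePort service (port is an assumption)
curl http://192.168.104.200:30080/
```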
PersistentVolume (PV)
**PersistentVolume (PV):** an abstraction over the creation and use of storage resources, so that storage can be managed as a cluster resource.
PV provisioning can be:
- static
- dynamic
**PersistentVolumeClaim (PVC):** lets users request storage without caring about the underlying Volume implementation details.
Create PVs
deploy-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-01
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/NFS/wwwroot/01
    server: 192.168.104.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-02
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  nfs:
    path: /data/NFS/wwwroot/02
    server: 192.168.104.200
Apply and verify:
kubectl apply -f deploy-pvc.yaml
persistentvolume/pv-01 created
persistentvolume/pv-02 created
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-01 10Gi RWX Retain Available 5s
pv-02 5Gi RWX Retain Available 5s
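The RECLAIM POLICY column shows Retain, the default for manually created PVs: the volume and its data are kept after the PVC is deleted and must be cleaned up by an administrator. The policy can be set explicitly in the PV spec; a sketch:

```yaml
spec:
  persistentVolumeReclaimPolicy: Retain   # or Delete; Recycle is deprecated
```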
Create a PVC to consume a PV
nginx-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-www
---
# Create the PVC
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-www
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
When creating a PVC you do not reference a PV directly; the control plane automatically binds the PVC to a PV whose capacity and access modes satisfy the request.
Check PV consumption:
kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-01 10Gi RWX Retain Available 14m
pv-02 5Gi RWX Retain Bound default/pvc-www 14m
kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-www Bound pv-02 5Gi RWX 50s
Verify:
Create an index.html in NFS shared directory 02; if it can be accessed through the container, the binding works.
A PV can be bound to only one PVC, and the PV's capacity does not have to equal the PVC's request (here the 2Gi request was bound to the 5Gi pv-02).
Dynamic PV Provisioning with NFS
The core of the Dynamic Provisioning mechanism is the StorageClass API object.
A StorageClass declares a storage plugin (provisioner) that is used to create PVs automatically.
Storage plugins that support dynamic provisioning in Kubernetes:
https://kubernetes.io/docs/concepts/storage/storage-classes
Create a dynamic PV
Deploy the nfs-client plugin, available at:
https://github.com/kubernetes-retired/external-storage/tree/master/nfs-client/deploy
Download class.yaml, deployment.yaml, and rbac.yaml from the deploy directory, then:
- change the image in deployment.yaml to lizhenliang/nfs-client-provisioner:latest
- change the NFS server address and shared path
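The fields to edit in deployment.yaml look roughly like this, a sketch of the upstream nfs-client-provisioner manifest with the values replaced by the NFS server used in this tutorial:

```yaml
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs     # must match the provisioner in class.yaml
        - name: NFS_SERVER
          value: 192.168.104.200
        - name: NFS_PATH
          value: /data/NFS/wwwroot
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.104.200
          path: /data/NFS/wwwroot
```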
Check the StorageClass:
kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 16m
Consume the dynamic PV
nginx-dynamic-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-www02
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-www02
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 3Gi
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: default
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  type: NodePort
Check the results:
kubectl get pv
kubectl get pvc
kubectl get sc