k8s Installation & Deployment 8 - Disk Management
Install NFS first; see the earlier post "Installing NFS on CentOS 7".
Mounting data the native way
Write the manifest: vi deploy-dir.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-pv-demo
  name: nginx-pv-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-pv-demo
  template:
    metadata:
      labels:
        app: nginx-pv-demo
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        nfs:
          server: 192.168.99.200
          path: /nfs/data/nginx-pv
# Create the directory on the NFS server first, then create index.html and check the matching file inside the pod
mkdir -p /nfs/data/nginx-pv
kubectl apply -f deploy-dir.yaml
cd /nfs/data/nginx-pv
echo "hello" > index.html
kubectl get pods -A
kubectl exec -it nginx-pv-demo-56bd85668f-p9jwj -- /bin/bash
cat /usr/share/nginx/html/index.html
curl http://127.0.0.1
exit
cd ~
kubectl delete -f deploy-dir.yaml
Drawbacks of mounting NFS directly
- The directory must be created by hand beforehand
- The directory is not removed when the pod is deleted
- There is no way to limit how much disk space is used
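The second drawback can be seen with a local stand-in; the sketch below uses a temporary directory instead of a real NFS export (an assumption for illustration only):

```shell
# Temporary directory standing in for /nfs/data/nginx-pv (not a real NFS mount).
dir=$(mktemp -d)
echo "hello" > "$dir/index.html"

# Deleting the Deployment removes pods, but touches nothing under the export:
# the file written above survives and must be cleaned up by hand.
survived="no"
[ -f "$dir/index.html" ] && survived="yes"
echo "file survived pod deletion: $survived"

rm -rf "$dir"   # the manual cleanup step the cluster never performs
```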
PV & PVC
PV: Persistent Volume, stores the data an application needs persisted at a designated location
PVC: Persistent Volume Claim, declares the spec of the persistent volume the application wants to use
# On the NFS server (master node)
mkdir -p /nfs/data/01
mkdir -p /nfs/data/02
mkdir -p /nfs/data/03
Create the PVs: vi pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01-10m
spec:
  capacity:
    storage: 10M
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/01
    server: 192.168.99.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv02-1gi
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/02
    server: 192.168.99.200
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv03-3gi
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: /nfs/data/03
    server: 192.168.99.200
kubectl apply -f pv.yaml
# Inspect the newly created PVs
kubectl get persistentvolume
# All of them should be in the Available state
kubectl get pv
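Note the capacity units: 10M is a decimal suffix (10^6 bytes), while Mi and Gi are binary suffixes (2^20 and 2^30 bytes). A quick check of the three capacities in bytes:

```shell
# Kubernetes quantity suffixes: M = 10^6 bytes; Mi = 2^20; Gi = 2^30.
m10=$((10 * 1000 * 1000))          # pv01-10m -> 10,000,000 bytes
gi1=$((1024 * 1024 * 1024))        # pv02-1gi -> 1,073,741,824 bytes
gi3=$((3 * 1024 * 1024 * 1024))    # pv03-3gi -> 3,221,225,472 bytes
echo "$m10 $gi1 $gi3"
```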
PVC creation and binding
Create the PVC: vi pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nginx-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
kubectl apply -f pvc.yaml
# Bound means the PV is claimed; the CLAIM column shows which PVC holds it, e.g. default/nginx-pvc
kubectl get pv
# Delete the PVC
kubectl delete -f pvc.yaml
# The status changes to Released; the Retain policy preserves the PV's data, and a Released PV cannot be bound again as-is
kubectl get pv
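A Released PV with the Retain policy keeps a claimRef pointing at the deleted PVC, which is what blocks re-binding. A common workaround is to clear that reference with a JSON patch; the sketch below only prints the command (pv02-1gi is assumed to be the PV that was bound in this walkthrough) so it can be reviewed before running:

```shell
# JSON patch that removes the stale claimRef so the PV returns to Available.
patch='[{"op":"remove","path":"/spec/claimRef"}]'

# Printed rather than executed; run it only after double-checking the PV name.
cmd="kubectl patch pv pv02-1gi --type json -p '$patch'"
echo "$cmd"
```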
# Re-apply the claim; it binds a different PV
kubectl apply -f pvc.yaml
kubectl get pv
kubectl get pvc
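Why did a 200Mi claim bind a 1Gi volume? The control plane binds an Available PV whose storageClassName and accessModes match and whose capacity covers the request, generally preferring the smallest adequate volume. A rough shell illustration of that selection rule (not the real controller logic; sizes approximated in MiB, with 10M rounded down to 9 MiB):

```shell
request=200                      # PVC request, in MiB
best=""
best_size=0
# name:capacity-in-MiB for the three PVs defined above
for pv in pv01-10m:9 pv02-1gi:1024 pv03-3gi:3072; do
  name=${pv%%:*}
  size=${pv##*:}
  # keep the smallest PV that is still big enough for the request
  if [ "$size" -ge "$request" ]; then
    if [ -z "$best" ] || [ "$size" -lt "$best_size" ]; then
      best=$name
      best_size=$size
    fi
  fi
done
echo "nginx-pvc binds: $best"
```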
Create a Pod bound to the PVC
Bind the Pod to the PVC: vi deploy-pvc.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-deploy-pvc
  name: nginx-deploy-pvc
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-deploy-pvc
  template:
    metadata:
      labels:
        app: nginx-deploy-pvc
    spec:
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        persistentVolumeClaim:
          claimName: nginx-pvc
kubectl apply -f deploy-pvc.yaml
kubectl get pvc
kubectl get pvc,pv
# Add a file on the PV side, then check that the change shows up inside the Pod
echo 123456 > /nfs/data/03/index.html
kubectl get pods -A
kubectl exec -it nginx-deploy-pvc-79fc8558c7-n8d8p -- /bin/bash
cat /usr/share/nginx/html/index.html
exit
# Install jq, a JSON processor
yum install -y epel-release
yum install -y jq
# List which PVCs every Pod uses
kubectl get pods --all-namespaces -o=json | jq -c '.items[] | {name: .metadata.name, namespace: .metadata.namespace, claimName: .spec.volumes[] | select(has("persistentVolumeClaim")).persistentVolumeClaim.claimName}'
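When jq is not installed, a crude grep pipeline is enough for a quick look. The JSON below is a trimmed, hypothetical stand-in for real kubectl get pods -o json output:

```shell
# Trimmed sample of kubectl's JSON output (hypothetical pod name).
json='{"items":[{"metadata":{"name":"nginx-deploy-pvc-abc","namespace":"default"},"spec":{"volumes":[{"name":"html","persistentVolumeClaim":{"claimName":"nginx-pvc"}}]}}]}'

# Pull every claimName value; crude compared to jq, but dependency-free.
claims=$(printf '%s' "$json" | grep -o '"claimName":"[^"]*"' | cut -d'"' -f4)
echo "$claims"
```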