Linux: K8s Storage (Data Persistence)
K8s Storage
1. What are the main categories of K8s storage?
Ephemeral storage, semi-persistent storage, and persistent storage.
2. emptyDir
Generally, emptyDir serves as scratch space: for microservices that do not need to persist their data, emptyDir is a perfectly good storage choice for the pod.
2.1 What is emptyDir
When a pod uses emptyDir as its storage, at pod startup an empty volume is carved out of the disk of the node the pod is scheduled to. It starts out completely empty; data produced by the pod's containers after startup is written into it, so it acts as a temporary volume that the containers in the pod can read and write. Once the pod goes away, this temporary volume on the node is destroyed along with it.
2.2 Uses of emptyDir
- Scratch space, when the data produced by the pod's containers does not need persistent storage
- Checkpointing, so a long computation can resume after a crash instead of starting over
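By default an emptyDir lives on the node's disk, but it can also be backed by RAM. A minimal sketch (the pod and volume names here are illustrative, not part of the walkthrough below):

```yaml
kind: Pod
apiVersion: v1
metadata:
  name: memory-scratch        # illustrative name
spec:
  volumes:
  - name: cache
    emptyDir:
      medium: Memory          # back the volume with tmpfs instead of the node's disk
      sizeLimit: 64Mi         # cap the volume; growing past this can get the pod evicted
  containers:
  - name: app
    image: busybox
    args: ["/bin/sh", "-c", "sleep 30000"]
    volumeMounts:
    - mountPath: /cache
      name: cache
```

Note that a memory-backed emptyDir counts against the container's memory usage.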
3. HostPath
3.1 What is HostPath
- A hostPath volume maps a file or directory from the node's filesystem into the pod. When using a hostPath volume you can also set the type field; supported types include files, directories, and more.
- HostPath is the equivalent of Docker's -v directory mapping, except that in K8s pods can drift: when a pod is rescheduled to another node, it will not reach across nodes to read the original directory. That is why HostPath only counts as semi-persistent storage.
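The type field mentioned above controls what the kubelet expects to find at the host path before mounting it. A brief sketch (the volume name and path are illustrative):

```yaml
volumes:
- name: host-logs
  hostPath:
    path: /var/log
    type: Directory           # fail the mount if /var/log does not already exist on the node
# Other supported values include DirectoryOrCreate (create the directory if it
# is missing), File, FileOrCreate, Socket, CharDevice, and BlockDevice.
```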
3.2 Uses of HostPath
- When a running container needs access to Docker internals, hostPath can map the relevant server directory into the container
4. PV and PVC
- PV: storage provided by a system external to the K8s cluster, usually a pre-configured piece of storage space (e.g. a directory in a filesystem). The PV is the producer.
- PVC: when an application needs persistence, it requests space from a PV through a claim. The PVC is the consumer.
- PV and PVC bind one-to-one. Once a PV is taken by some PVC, its status shows Bound, and no other PVC can use that PV. A PVC that cannot find a suitable PV stays in the Pending state. Once bound to a PV, the PVC acts as a storage volume and can then be used by multiple Pods (whether a PVC actually supports access from multiple Pods depends on the accessModes defined).
Part Two: Examples
1. emptyDir
1.1 Create the YAML file
[root@master yaml]# vim emptydir.yaml
kind: Pod
apiVersion: v1
metadata:
  name: emptydir-consumer
spec:
  volumes:
  - name: shared-volume
    emptyDir: {}
  containers:
  - name: emptydir
    image: busybox
    volumeMounts:
    - mountPath: /empty_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - echo "hello world" > /empty_dir/hello.txt; sleep 30000
  - name: consumer
    image: busybox
    volumeMounts:
    - mountPath: /consumer_dir
      name: shared-volume
    args:
    - /bin/sh
    - -c
    - cat /consumer_dir/hello.txt; sleep 30000
[root@master yaml]# kubectl apply -f emptydir.yaml
pod/emptydir-consumer created
1.2 View the container logs
[root@master yaml]# kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
emptydir-consumer   2/2     Running   0          2m18s
[root@master yaml]# kubectl logs emptydir-consumer
error: a container name must be specified for pod emptydir-consumer, choose one of: [emptydir consumer]
[root@master yaml]# kubectl logs emptydir-consumer consumer
hello world
1.3 Verify how emptyDir works
Check which node the pod is running on:
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
emptydir-consumer 2/2 Running 0 87s 10.244.1.3 node02 <none> <none>
Inspect the container's details on node02.
PS: 04ee0cd5f3c6 is the ID of the consumer container created for this pod on the node.
[root@node02 ~]# docker inspect 04ee0cd5f3c6
......
"Mounts": [
    {
        "Type": "bind",
        "Source": "/var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume",
        "Destination": "/consumer_dir",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    },
......
[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
[root@node02 shared-volume]# ls
hello.txt
Delete the pod, then check whether the files on the node still exist:
master
[root@master yaml]# ls
emptydir.yaml
[root@master yaml]# kubectl delete -f emptydir.yaml
pod "emptydir-consumer" deleted
node02
[root@node02 ~]# cd /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume
-bash: cd: /var/lib/kubelet/pods/7d0f9cf7-8673-442e-b3c1-46afe6f4fa18/volumes/kubernetes.io~empty-dir/shared-volume: No such file or directory
2. HostPath
2.1 Create the YAML file
[root@master yaml]# mkdir -p /data/hostpath
[root@master yaml]# vim hostpath.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-volume
    hostPath:
      path: "/data/hostpath"
  containers:
  - name: httpd
    image: httpd
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: share-volume
    args:
    - /bin/bash
    - -c
    - echo "hello httpd" > /usr/share/nginx/html/index.html; sleep 30000
[root@master yaml]# kubectl apply -f hostpath.yaml
pod/pod created
2.2 Check the pod
Check which node the pod was scheduled to:
[root@master yaml]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod 1/1 Running 0 2m51s 10.244.2.4 node01 <none> <none>
On node01, check whether the mapped directory contains the files:
[root@node01 ~]# ls /data/hostpath/
hello.txt index.html
2.3 Verify HostPath
Delete the pod:
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 5m9s
[root@master yaml]# kubectl delete pod pod
pod "pod" deleted
On node01, check whether the mapped files still exist:
[root@node01 ~]# ls /data/hostpath/
hello.txt index.html
3. Creating a PV and PVC backed by NFS

| master | node01 | node02 | NFS |
| --- | --- | --- | --- |
| 192.168.1.40 | 192.168.1.41 | 192.168.1.42 | 192.168.1.43 |
3.1 Install NFS
PS: note that nfs-utils must be installed on every server (the nodes need it to mount the export).
[root@nfs ~]# yum -y install nfs-utils rpcbind
[root@nfs ~]# mkdir /nfsdata
[root@nfs ~]# vim /etc/exports
/nfsdata *(rw,sync,no_root_squash)
[root@nfs ~]# systemctl start nfs
[root@nfs ~]# systemctl start rpcbind
[root@nfs ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@nfs ~]# systemctl enable rpcbind
[root@nfs ~]# showmount -e
Export list for nfs:
/nfsdata *
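The options in the export line above are what make the share usable by the cluster; an annotated version:

```
/nfsdata *(rw,sync,no_root_squash)
#  *               any client may mount the export (a subnet such as 192.168.1.0/24 can be used instead)
#  rw              clients may read and write
#  sync            writes are committed to disk before the server replies
#  no_root_squash  remote root keeps root privileges, so kubelet and containers can create files as root
```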
3.2 Create a PV bound to the NFS export
[root@master yaml]# vim pv.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    path: /nfsdata/pv1
    server: 192.168.1.43
[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv created
PS: create the backing directory on the NFS server
[root@nfs ~]# cd /nfsdata/
[root@nfs nfsdata]# ls
[root@nfs nfsdata]# mkdir pv1
Access modes supported by a PV:
- ReadWriteOnce: the volume can be mounted read-write by a single node.
- ReadOnlyMany: the volume can be mounted read-only by many nodes.
- ReadWriteMany: the volume can be mounted read-write by many nodes.
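A PV may advertise several access modes at once; the mode actually used is chosen when a claim binds. A sketch of what the accessModes list of an NFS-backed PV could look like (NFS generally supports all three modes):

```yaml
accessModes:
- ReadWriteOnce
- ReadOnlyMany
- ReadWriteMany    # NFS allows read-write mounts from many nodes simultaneously
```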
3.3 Create a PVC that binds to the PV
[root@master yaml]# vim pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: nfs
[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc created
3.4 Check the result
[root@master yaml]# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv 1Gi RWO Recycle Bound default/pvc nfs 4m47s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pvc Bound pv 1Gi RWO nfs 2m24s
PS: a STATUS of Bound means the PVC has been bound to the PV.
3.5 Create a Pod that uses the PVC
[root@master yaml]# vim pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod
spec:
  volumes:
  - name: share-data
    persistentVolumeClaim:
      claimName: pvc
  containers:
  - name: pod
    image: busybox
    args:
    - /bin/sh
    - -c
    - sleep 30000
    volumeMounts:
    - mountPath: "/data"
      name: share-data
[root@master yaml]# kubectl apply -f pod.yaml
pod/pod created
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 5s
3.6 Verify that storage works
NFS
[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# echo "hello persistenVolume" > test.txt
Master
PS: /data/test.txt is the file under the storage mount point inside the container.
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 100s
[root@master yaml]# kubectl exec pod cat /data/test.txt
hello persistenVolume
4. Reclaiming PV space
spec:
......
  persistentVolumeReclaimPolicy: Recycle
......
PV reclaim policies:
- Recycle: scrubs the data and makes the PV available again automatically (deprecated in newer Kubernetes releases in favor of dynamic provisioning).
- Retain: the data must be cleaned up and the PV reclaimed manually.
- Delete: the backing storage asset is deleted; used with cloud storage backends.
[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv 1Gi RWO Recycle Bound default/pvc nfs 23m
Verify the PV reclaim policy.
4.1 Delete the pod and PVC resources
[root@master yaml]# kubectl delete pod pod
pod "pod" deleted
[root@master yaml]# kubectl delete pvc pvc
persistentvolumeclaim "pvc" deleted
4.2 Watch the PV being released
Bound = claimed by a PVC; Released = claim deleted, data being recycled; Available = ready to be claimed again
[root@master yaml]# kubectl get pv -w
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON
pv 1Gi RWO Recycle Bound default/pvc nfs 33m
pv 1Gi RWO Recycle Released default/pvc nfs 33m
pv 1Gi RWO Recycle Released nfs 33m
pv 1Gi RWO Recycle Available nfs 33m
4.3 Check on the NFS server
[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
4.4 Verify the Retain policy
Edit the PV's YAML file:
[root@master yaml]# vim pv.yaml
......
persistentVolumeReclaimPolicy: Retain
......
Apply the PV, PVC, and pod YAML files again:
[root@master yaml]# kubectl apply -f pv.yaml
persistentvolume/pv created
[root@master yaml]# kubectl apply -f pvc.yaml
persistentvolumeclaim/pvc created
[root@master yaml]# kubectl apply -f pod.yaml
pod/pod created
With the resources re-created, try deleting the PVC and Pod again and verify whether the data under the PV's directory still exists.
master
[root@master yaml]# kubectl get pod
NAME READY STATUS RESTARTS AGE
pod 1/1 Running 0 66s
[root@master yaml]# kubectl exec pod touch /data/test.txt
nfs
[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt
Delete the Pod and PVC again:
[root@master yaml]# kubectl delete pod pod
pod "pod" deleted
[root@master yaml]# kubectl delete pvc pvc
persistentvolumeclaim "pvc" deleted
Check the data stored in the PV's directory:
[root@nfs pv1]# pwd
/nfsdata/pv1
[root@nfs pv1]# ls
test.txt
5. Automatic PV and PVC creation
PS: use K8s with NFS to provision PVs and PVCs automatically; the namespace is test, the container runs mysql, image tag 5.7.
Environment:

| master | node01 | node02 | NFS |
| --- | --- | --- | --- |
| 192.168.1.40 | 192.168.1.41 | 192.168.1.42 | 192.168.1.43 |
- storageclass: creates PVs automatically
- volumeClaimTemplates: creates PVCs automatically
5.1 Set up NFS
Follow step 3.1 above.
5.2 Grant RBAC permissions
RBAC stands for Role-Based Access Control.
[root@master yaml]# vim rbac-rolebind.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: test
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: test
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["watch", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services", "endpoints"]
  verbs: ["get", "create", "list", "watch", "update"]
- apiGroups: ["extensions"]
  resources: ["podsecuritypolicies"]
  resourceNames: ["nfs-provisioner"]
  verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-provisioner
  namespace: test    # if you are not using a dedicated namespace, set this to default, otherwise the binding fails
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
[root@master yaml]# kubectl apply -f rbac-rolebind.yaml
namespace/test created
serviceaccount/nfs-provisioner created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/run-nfs-provisioner created
5.3 Create the NFS provisioner deployment
[root@master yaml]# vim nfs-deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  namespace: test
spec:
  replicas: 1
  strategy:
    type: Recreate                             # kill the old pod before starting a new one
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccount: nfs-provisioner          # use the ServiceAccount created above
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME               # provisioner name the StorageClass will reference
          value: test-www
        - name: NFS_SERVER
          value: 192.168.1.43
        - name: NFS_PATH                       # NFS export to provision volumes under
          value: /nfsdata
      volumes:                                 # mount the NFS export (server and path) into the container
      - name: nfs-client-root
        nfs:
          server: 192.168.1.43
          path: /nfsdata
[root@master yaml]# kubectl apply -f nfs-deployment.yaml
deployment.extensions/nfs-client-provisioner created
PS: what the nfs-client-provisioner image does: using the cluster's built-in NFS driver, it mounts the remote NFS server into a local directory, registers itself as a storage provisioner, and is then associated with a StorageClass resource. Each PVC it provisions gets its own subdirectory on the export, named ${namespace}-${pvcName}-${pvName}.
5.4 Create the StorageClass resource
[root@master yaml]# vim storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: storageclass
provisioner: test-www      # must match the PROVISIONER_NAME env value in the NFS deployment
reclaimPolicy: Retain      # reclaim policy
[root@master yaml]# kubectl apply -f storageclass.yaml
storageclass.storage.k8s.io/storageclass created
5.5 Create the Pod resources
PS: adding a volumeClaimTemplates field to the StatefulSet makes PVCs get created automatically.
[root@master yaml]# vim mysql.yaml
apiVersion: v1
kind: Service
metadata:
  name: mysql-svc
  namespace: test
  labels:
    app: mysql-svc
spec:
  type: NodePort
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql-pod
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-statefulset
  namespace: test
spec:
  serviceName: mysql-svc
  replicas: 1
  selector:
    matchLabels:
      app: mysql-pod
  template:
    metadata:
      labels:
        app: mysql-pod
    spec:
      containers:
      - name: mysql
        image: mysql:5.7
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: 123.com
        volumeMounts:
        - name: share-mysql
          mountPath: /var/lib/mysql
  volumeClaimTemplates:        # this field triggers automatic PVC creation
  - metadata:
      name: share-mysql
      annotations:             # selects the StorageClass; the name must match the one created above
        volume.beta.kubernetes.io/storage-class: storageclass
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 100Mi
[root@master yaml]# kubectl apply -f mysql.yaml
service/mysql-svc created
statefulset.apps/mysql-statefulset created
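The volume.beta.kubernetes.io/storage-class annotation used above is the legacy way of selecting a StorageClass; newer clusters express the same request with the spec.storageClassName field instead. A sketch of the equivalent claim template:

```yaml
volumeClaimTemplates:
- metadata:
    name: share-mysql
  spec:
    storageClassName: storageclass   # replaces the beta annotation
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 100Mi
```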
5.6 Check the pod, PV, and PVC
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 6m9s
[root@master yaml]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7 100Mi RWO Delete Bound test/share-mysql-mysql-statefulset-0 storageclass 2m31s
[root@master yaml]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
share-mysql-mysql-statefulset-0 Bound pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7 100Mi RWO storageclass 7m28s
5.7 Check whether the provisioned directory exists
[root@nfs nfsdata]# pwd
/nfsdata
[root@nfs nfsdata]# ls
test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7
5.8 Verify data storage
master
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 11m
[root@master yaml]# kubectl exec -it -n test mysql-statefulset-0 bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> create database test;
Query OK, 1 row affected (0.10 sec)
nfs
[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# pwd
/nfsdata/test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7
[root@nfs test-share-mysql-mysql-statefulset-0-pvc-d5d9079e-3f24-475b-86f5-e0ded774e7f7]# ls
auto.cnf client-cert.pem ibdata1 ibtmp1 private_key.pem server-key.pem
ca-key.pem client-key.pem ib_logfile0 mysql public_key.pem sys
ca.pem ib_buffer_pool ib_logfile1 performance_schema server-cert.pem test
5.9 Delete the pod and check whether the data survives re-creation
[root@master yaml]# kubectl get pod -n test
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Running 0 16m
[root@master yaml]# kubectl delete pod -n test mysql-statefulset-0
pod "mysql-statefulset-0" deleted
[root@master yaml]# kubectl get pod -n test -w
NAME READY STATUS RESTARTS AGE
mysql-statefulset-0 1/1 Terminating 0 49s
mysql-statefulset-0 0/1 Terminating 0 51s
mysql-statefulset-0 0/1 Terminating 0 52s
mysql-statefulset-0 0/1 Terminating 0 52s
mysql-statefulset-0 0/1 Pending 0 0s
mysql-statefulset-0 0/1 Pending 0 0s
mysql-statefulset-0 0/1 ContainerCreating 0 0s
mysql-statefulset-0 1/1 Running 0 1s
5.10 Log in again and check that the data is still there
[root@master yaml]# kubectl exec -it -n test mysql-statefulset-0 bash
root@mysql-statefulset-0:/# mysql -u root -p123.com
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.32 MySQL Community Server (GPL)
Copyright (c) 2000, 2020, Oracle and/or its affiliates. All rights reserved.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
| test |
+--------------------+
5 rows in set (0.01 sec)