【Kubernetes】PV + PVC + NFS in Practice
Data flow when using NFS
pod --> volume --> pvc --> pv --> nfs
The 3 PV access modes:
ROX (ReadOnlyMany)
RWX (ReadWriteMany)
RWO (ReadWriteOnce)
The 3 PV reclaim policies:
Retain: after the PVC is deleted, the PV still holds the data the PVC once used; it has to be cleaned up manually.
Recycle: deprecated; this policy is no longer used.
Delete: what exactly gets deleted? The delete action removes the PersistentVolume object itself from Kubernetes (and, for volume plugins that support it, the backing storage asset as well).
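The reclaim policy is set on the PV's spec. A minimal sketch of where the field lives (`example-pv` is a hypothetical name; the field values follow the Kubernetes API):

```yaml
# Hypothetical PV fragment; the point here is only persistentVolumeReclaimPolicy.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain   # Retain | Recycle (deprecated) | Delete
  nfs:
    path: "/web"
    server: 192.168.1.130
```

If the field is omitted, statically created PVs default to Retain.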
Experiment
1. Set up the NFS server
On the NFS machine, run in order:
yum install nfs-utils -y
service nfs restart
2. Configure the shared directory
[root@nfs /]# vim /etc/exports
Contents:
/web 192.168.1.0/24(rw,no_root_squash,sync)
3. Create the shared directory and an index.html page
[root@nfs /]# mkdir -p /web
[root@nfs /]# cd /web
[root@nfs web]# vim index.html
[root@nfs web]# cat index.html
welcome to jiangda website
3-10-18:00
3-10-18:45
[root@nfs web]# echo "3-23-22:00" >>index.html
4. Refresh NFS, i.e. re-export the shared directories
[root@nfs web]# exportfs -r
[root@nfs web]# exportfs -a
[root@nfs web]# exportfs -v
/web 192.168.1.0/24(sync,wdelay,hide,no_subtree_check,sec=sys,rw,secure,root_squash,all_squash)
[root@nfs web]# service nfs restart
Redirecting to /bin/systemctl restart nfs.service
exportfs -a          export all shared directories
exportfs -v          display the currently exported directories
exportfs -r          re-export all shared directories
service nfs restart  restart the NFS service
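As a sketch of how an /etc/exports entry breaks down into path, client spec, and option list, here is a small illustrative parser. It is deliberately simplified (one client spec per line, no quoted paths), so it is not a full exports(5) implementation:

```python
import re

def parse_exports_line(line):
    """Split one /etc/exports entry into (path, client, options).

    Simplified: assumes a single 'path client(opt,opt,...)' entry,
    no quoted paths and no multiple client specs per line.
    """
    m = re.match(r"^(\S+)\s+(\S+?)\((\S+)\)\s*$", line.strip())
    if not m:
        raise ValueError(f"unrecognized exports line: {line!r}")
    path, client, opts = m.groups()
    return path, client, opts.split(",")

# The export line used in this walkthrough:
path, client, options = parse_exports_line("/web 192.168.1.0/24(rw,no_root_squash,sync)")
print(path)     # /web
print(client)   # 192.168.1.0/24
print(options)  # ['rw', 'no_root_squash', 'sync']
```

Here `rw` allows read-write access, `no_root_squash` lets remote root keep root privileges on the share, and `sync` forces writes to disk before replying.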
Every node needs the nfs-utils package installed, otherwise it cannot mount the NFS share:
Open a session to every machine and run:
[root@jdmaster hpa]# yum install nfs-utils -y
Test whether your node can reach the NFS server and mount the shared directory. This step is optional; it is only a check:
[root@jdmaster hpa]# mkdir /sc
[root@jdmaster hpa]# mount 192.168.1.130:/web /sc
mount: wrong fs type, bad option, bad superblock on 192.168.2.130:/sc/web,
       missing codepage or helper program, or other error
       (for several filesystems (e.g. nfs, cifs) you might
       need a /sbin/mount.<type> helper program)
       In some cases useful info is found in syslog - try
       dmesg | tail or so.
This is the error you get when nfs-utils is not installed on the node.
A successful mount looks like this:
[root@jdmaster hpa]# df|grep web
192.168.1.130:/web 17811456 3647488 14163968 21% /sc
[root@jdmaster hpa]# cd /sc/
[root@jdmaster sc]# ls
index.html
[root@jdmaster sc]# cat index.html
welcome to jiangda website
3-10-18:00
3-10-18:45
3-23-22:00
5. Create a PV backed by the shared directory on the NFS server
[root@jdmaster ~]# mkdir pv
[root@jdmaster ~]# cd pv
[root@jdmaster pv]# vim nfs-pv.yaml
nfs-pv.yaml contents:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jd-nginx-pv
  labels:
    type: jd-nginx-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs        # class name used to match this PV with a PVC
  nfs:
    path: "/web"               # directory exported by the NFS server
    server: 192.168.1.130      # IP address of the NFS server
    readOnly: false            # mount read-write
Run:
[root@jdmaster pv]# kubectl apply -f nfs-pv.yaml
persistentvolume/jd-nginx-pv created
[root@jdmaster pv]# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
jd-nginx-pv 5Gi RWX Retain Available nfs 16s
6. Create a PVC that uses the PV
[root@jdmaster pv]# vim pvc-nfs.yaml
[root@jdmaster pv]# kubectl apply -f pvc-nfs.yaml
persistentvolumeclaim/jd-nginx-pvc created
[root@jdmaster pv]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
jd-nginx-pvc Bound jd-nginx-pv 5Gi RWX nfs 8s
[root@jdmaster pv]# cat pvc-nfs.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jd-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs        # bind to a PV whose storageClassName is nfs
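Why does this 1Gi claim bind to the 5Gi PV? The control plane matches a claim to a volume whose storageClassName matches, whose access modes cover the requested ones, and whose capacity is at least the requested size. A simplified illustration of that matching logic (not the real controller code, and ignoring selectors and volume mode):

```python
def pv_matches_claim(pv, claim):
    """Simplified PV/PVC matching: class name, access modes, capacity (in Gi)."""
    return (
        pv["storageClassName"] == claim["storageClassName"]
        and set(claim["accessModes"]) <= set(pv["accessModes"])
        and pv["capacityGi"] >= claim["requestGi"]
    )

# The PV and PVC from this walkthrough:
pv = {"storageClassName": "nfs", "accessModes": ["ReadWriteMany"], "capacityGi": 5}
claim = {"storageClassName": "nfs", "accessModes": ["ReadWriteMany"], "requestGi": 1}
print(pv_matches_claim(pv, claim))  # True: the 1Gi request binds to the 5Gi PV
```

Note that once bound, the claim gets the whole PV; `kubectl get pvc` above therefore reports a capacity of 5Gi even though only 1Gi was requested.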
7. Create a pod that uses the PVC
[root@jdmaster pv]# vim pod-nfs.yaml
[root@jdmaster pv]# kubectl apply -f pod-nfs.yaml
pod/sc-pv-pod-nfs created
[root@jdmaster pv]# kubectl get pod -o wide|grep nfs
sc-pv-pod-nfs 1/1 Running 0 26s 10.244.1.25 jdnode-2 <none> <none>
pod-nfs.yaml contents:
apiVersion: v1
kind: Pod
metadata:
  name: sc-pv-pod-nfs
spec:
  volumes:
    - name: jd-pv-storage-nfs
      persistentVolumeClaim:
        claimName: jd-nginx-pvc
  containers:
    - name: jd-pv-container-nfs
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: jd-pv-storage-nfs
8. Test access
[root@jdmaster pv]# curl 10.244.1.25
welcome to jiangda website
3-10-18:00
3-10-18:45
3-23-22:00
Success:
Improvements (switching to a Deployment and a Service)
1. Use a Deployment to manage the pods
Modify the pod-nfs.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: jd-nginx
spec:
  replicas: 5
  selector:
    matchLabels:
      app: jd-nginx
  template:
    metadata:
      labels:
        app: jd-nginx
    spec:
      volumes:
        - name: jd-pv-storage-nfs-1
          persistentVolumeClaim:
            claimName: jd-nginx-pvc
      containers:
        - name: jd-pv-container-nfs-1
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: jd-pv-storage-nfs-1
Run:
[root@jdmaster pv]# kubectl delete -f pod-nfs.yaml
deployment.apps "nginx-deployment" deleted
[root@jdmaster pv]# vim pod-nfs.yaml
[root@jdmaster pv]# kubectl apply -f pod-nfs.yaml
deployment.apps/nginx-deployment created
Success:
[root@jdmaster pv]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-deployment-566db6676f-5hfxk 1/1 Running 0 12s
nginx-deployment-566db6676f-7rkdk 1/1 Running 0 12s
nginx-deployment-566db6676f-l5lgq 1/1 Running 0 12s
nginx-deployment-566db6676f-swqmp 1/1 Running 0 12s
nginx-deployment-566db6676f-zwdgh 1/1 Running 0 12s
[root@jdmaster pv]#
2. Create a Service of type NodePort to expose the pods, so machines outside the cluster can reach them as well
[root@jdmaster pv]# vim my_service.yaml
my_service.yaml contents:
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-nfs
  labels:
    run: my-nginx-nfs
spec:
  type: NodePort
  ports:
    - port: 8070
      targetPort: 80
      protocol: TCP
      name: http
  selector:
    app: jd-nginx        # matches the pod label set in the Deployment template
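Three ports are in play in this manifest: port 8070 is the Service's ClusterIP port, targetPort 80 is the container port, and because no nodePort is specified one is auto-assigned from the default 30000-32767 range. A tiny sketch of that default-range check (the range is configurable via the API server's --service-node-port-range flag, so this assumes defaults):

```python
# Default --service-node-port-range on the kube-apiserver (assumed, not read from the cluster).
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

def in_default_nodeport_range(port):
    """True if a port falls inside the default NodePort allocation range."""
    return port in DEFAULT_NODE_PORT_RANGE

print(in_default_nodeport_range(31742))  # True: the NodePort assigned below
print(in_default_nodeport_range(8070))   # False: 8070 is the ClusterIP port, not a NodePort
```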
Run:
[root@jdmaster pv]# kubectl apply -f my_service.yaml
service/my-nginx-nfs created
[root@jdmaster pv]# kubectl describe svc my-nginx-nfs
Name: my-nginx-nfs
Namespace: default
Labels: run=my-nginx-nfs
Annotations: <none>
Selector: app=jd-nginx
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.1.156.135
IPs: 10.1.156.135
Port: http 8070/TCP
TargetPort: 80/TCP
NodePort: http 31742/TCP
Endpoints: 10.244.1.33:80,10.244.1.34:80,10.244.1.35:80 + 2 more...
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The service is up:
[root@jdmaster pv]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
my-nginx-nfs NodePort 10.1.156.135 <none> 8070:31742/TCP 2m
Visiting 192.168.1.7:31742 in a browser succeeds: