Verifying NFS + keepalived high availability with Kubernetes
Step 1: pick two machines for the HA pair.
Node 1: 10.1.234.101
Node 2: 10.1.234.102
VIP: 10.1.234.100
First confirm the VIP is reachable:
[root@k8s-master1 nfs]# ping 10.1.234.100
PING 10.1.234.100 (10.1.234.100) 56(84) bytes of data.
64 bytes from 10.1.234.100: icmp_seq=1 ttl=64 time=0.358 ms
64 bytes from 10.1.234.100: icmp_seq=2 ttl=64 time=0.361 ms
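The keepalived configuration itself is not shown above; a minimal sketch of what the MASTER side (10.1.234.101) might look like — interface name, router ID, priorities, and password are assumptions:

```
# /etc/keepalived/keepalived.conf on 10.1.234.101 (MASTER)
vrrp_instance VI_1 {
    state MASTER            # 10.1.234.102 would use: state BACKUP
    interface eth0          # assumed interface name
    virtual_router_id 51
    priority 100            # the BACKUP node uses a lower value, e.g. 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.1.234.100        # the VIP
    }
}
```

keepalived preempts by default, which matches the behavior observed later: when 10.1.234.101 comes back up, it reclaims the VIP from the backup.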
Use the VIP as the NFS server address in the provisioner Deployment:
[root@k8s-master1 nfs]# cat deployment.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: 10.1.234.100
            - name: NFS_PATH
              value: /ifs/kubernetes
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.1.234.100
            path: /ifs/kubernetes
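The StorageClass behind the `managed-nfs-storage` PVC seen below is not shown; it presumably references the provisioner name configured above. A sketch, with `archiveOnDelete` as an assumed parameter:

```
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME env var
parameters:
  archiveOnDelete: "false"    # assumed; supported by nfs-client-provisioner
```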
Deploy the application and check the pod and PVC:
[root@k8s-master1 nfs]# kubectl get pods,pvc -n edu
NAME READY STATUS RESTARTS AGE
pod/wenruo-crm-64dbc5d8c8-5q9pq 1/1 Running 0 24m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/wenruo-crm-pvc Bound pvc-938bcb8a-c431-4b09-80f7-7cabe70a195a 5Gi RWX managed-nfs-storage 24m
[root@k8s-master1 configmap_nginx]# curl wenruo-crm.rpdns.com
10
Everything works so far.
Now shut down 10.1.234.101 and access the service again; the request hangs, then fails:
[root@k8s-master1 configmap_nginx]# curl wenruo-crm.rpdns.com
curl: (52) Empty reply from server
Further requests return 404 and 503:
[root@k8s-master1 configmap_nginx]# curl wenruo-crm.rpdns.com
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.17.9</center>
</body>
</html>
[root@k8s-master1 configmap_nginx]# curl wenruo-crm.rpdns.com
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body>
<center><h1>503 Service Temporarily Unavailable</h1></center>
<hr><center>nginx/1.15.5</center>
</body>
</html>
Meanwhile the VIP itself still responds normally:
[root@k8s-master1 configmap_nginx]# ping 10.1.234.100
PING 10.1.234.100 (10.1.234.100) 56(84) bytes of data.
64 bytes from 10.1.234.100: icmp_seq=1 ttl=64 time=0.459 ms
64 bytes from 10.1.234.100: icmp_seq=2 ttl=64 time=0.244 ms
Check whether the web files were synced to 10.1.234.102 — confirmed, they were:
[root@localhost kubernetes]# cat /ifs/kubernetes/edu-wenruo-crm-pvc-pvc-938bcb8a-c431-4b09-80f7-7cabe70a195a/index.html
10
Now start 10.1.234.101 again.
Since its keepalived instance is configured as MASTER, it preempts and reclaims the VIP once it is back up.
Test again — service is restored:
[root@k8s-master1 configmap_nginx]# curl wenruo-crm.rpdns.com
10
Conclusion: keepalived failover of the NFS server is not effective for Kubernetes. A likely explanation is that moving the VIP does not move NFS session state: the kernel NFS client on each node holds TCP connections and file handles tied to the original server, so existing pod mounts hang or return errors instead of transparently switching to the backup.
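One possible mitigation — not part of the original test, an assumption worth validating — is to mount NFS with soft/timeo options so clients fail fast instead of blocking indefinitely. Kubernetes exposes these through `mountOptions` on a PersistentVolume (or StorageClass):

```
# Hypothetical PV showing NFS mount options; values are illustrative.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-example
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - soft        # return an I/O error instead of retrying forever
    - timeo=30    # retry timeout in tenths of a second (3s)
    - retrans=3   # retries before giving up
  nfs:
    server: 10.1.234.100
    path: /ifs/kubernetes
```

Note that `soft` mounts trade hangs for possible I/O errors during failover, so the application must tolerate transient failures.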