K8s Basics - Stateful Applications
Stateless vs. stateful applications
- Stateless application
If Node1 fails and the Pod is automatically restarted on Node2, it can keep serving traffic unchanged; that is a stateless application.
Typical workloads: Nginx, microservices, standalone jar applications
- Stateful application
If Node1 fails and the Pod is automatically restarted on Node2, it cannot simply resume service; that is a stateful application.
Typical workloads: MySQL master/slave, ZooKeeper clusters, etcd clusters
Issues to consider:
- data persistence
- IP address (network identity)
- startup order
In Kubernetes, stateful applications usually mean distributed applications.
Overview of the StatefulSet controller
- Deploys stateful applications
- Gives each Pod an independent lifecycle, maintaining Pod startup order and uniqueness
- Stable, unique network identifiers and persistent storage
- Ordered, graceful deployment, scaling, deletion, and termination
- Ordered rolling updates
Use case: distributed applications
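The ordering and uniqueness guarantees above follow a fixed naming rule: StatefulSet Pods are named `<statefulset-name>-<ordinal>`, created in ascending ordinal order and, by default, terminated in descending order. A minimal cluster-free sketch (the StatefulSet name and replica count here are illustrative only):

```shell
# Sketch of StatefulSet Pod naming and ordering (no cluster needed).
# Pods are named <statefulset-name>-<ordinal>, ordinal 0..replicas-1.
sts=nginx-sts        # illustrative StatefulSet name
replicas=3

echo "creation order (ordinal ascending):"
for i in $(seq 0 $((replicas - 1))); do
  echo "  ${sts}-${i}"
done

echo "default termination order (ordinal descending):"
for i in $(seq $((replicas - 1)) -1 0); do
  echo "  ${sts}-${i}"
done
```

With replicas=3 this prints nginx-sts-0, nginx-sts-1, nginx-sts-2 for creation and the reverse for termination; the real controller additionally waits for each Pod to be Running and Ready before starting the next.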
Stable network ID
- headless Service
Standard Service (normal service)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Headless Service (clusterIP: None)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
Example: StatefulSet controller + headless Service
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sts
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: http
Verification
kubectl get svc
kubectl get ep
# DNS resolution check
kubectl run -it --image busybox:1.28.4 --rm dns-test /bin/sh
# nslookup nginx-headless
Server: 10.0.0.10
Address 1: 10.0.0.10
Name: nginx-headless
Address 1: 10.244.1.51
Address 2: 10.244.2.95
Access addresses
# DNS name format
<pod_name>.<service_name>.<ns_name>.svc.cluster.local
# for example:
ping nginx-sts-1.nginx-headless.default.svc.cluster.local
PING nginx-sts-1.nginx-headless.default.svc.cluster.local (10.244.1.51): 56 data bytes
64 bytes from 10.244.1.51: seq=0 ttl=62 time=0.613 ms
64 bytes from 10.244.1.51: seq=1 ttl=62 time=0.329 ms
64 bytes from 10.244.1.51: seq=2 ttl=62 time=0.293 ms
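Because the DNS name format is fixed, each replica's stable address can be derived without querying the cluster. A minimal sketch, using the StatefulSet, Service, and namespace names from the example above:

```shell
# Build the stable DNS name of each StatefulSet Pod:
# <pod_name>.<service_name>.<ns_name>.svc.cluster.local
sts=nginx-sts
svc=nginx-headless
ns=default
replicas=2

for i in $(seq 0 $((replicas - 1))); do
  echo "${sts}-${i}.${svc}.${ns}.svc.cluster.local"
done
# prints:
# nginx-sts-0.nginx-headless.default.svc.cluster.local
# nginx-sts-1.nginx-headless.default.svc.cluster.local
```

These names survive Pod rescheduling: the ordinal (and thus the DNS name) stays the same even when the Pod's IP changes, which is exactly what the ping above relies on.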
Stable storage
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nginx-sts
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: "nginx-headless"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
              name: http
          volumeMounts:
            - name: pvc-www03
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
    - metadata:
        name: pvc-www03
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: "managed-nfs-storage"
        resources:
          requests:
            storage: 1Gi
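Each replica gets its own PVC from the template, named `<volumeClaimTemplate-name>-<pod-name>`, i.e. `<template>-<statefulset>-<ordinal>`. A quick cluster-free sketch of the names `kubectl get pvc` should report for the manifest above:

```shell
# PVCs created by volumeClaimTemplates are named:
# <volumeClaimTemplate-name>-<statefulset-name>-<ordinal>
tpl=pvc-www03
sts=nginx-sts
replicas=2

for i in $(seq 0 $((replicas - 1))); do
  echo "${tpl}-${sts}-${i}"
done
# prints:
# pvc-www03-nginx-sts-0
# pvc-www03-nginx-sts-1
```

Because the PVC name is derived from the ordinal, a recreated Pod reattaches to the same claim, which is how the persistence check below works.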
Verification:
kubectl get pvc
kubectl get pv
Looking at the NFS shared directory, you will see that a separate storage directory was created for each of the two Pods; in other words, each Pod's data storage is independent.
Create a file 1.txt in one Pod's data directory, delete that Pod, wait for it to be recreated automatically, and then check whether the file is still in the data directory. If it is, the Pod's storage is persistent.