Deploying a Consul Cluster on k8s
Consul, developed by HashiCorp in Go, is service discovery and registration software that supports multiple data centers and is distributed and highly available; it uses the Raft algorithm to keep service state consistent and supports health checks. In Kubernetes, however, when a node fails or runs short of resources, some of its pods are killed according to policy and new pods are created in their place, and the new pods have different IP addresses and names (hash suffixes). How, then, do we keep a replacement pod's identity the same as the original's? A StatefulSet can do this: it guarantees each pod a stable, unique network identity.
Deploy the consul cluster on k8s as a StatefulSet:
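The stable identity a StatefulSet provides can be sketched as follows. This is an illustrative Python snippet, not Kubernetes code; the names it uses (consul-server, public-service) come from the manifests below:

```python
# Illustrative sketch (not Kubernetes code): a StatefulSet names its pods by
# fixed ordinal, and its headless governing service publishes one DNS record
# per pod, so a rescheduled pod keeps the same name even though its IP changes.
def stable_dns_names(statefulset: str, service: str, namespace: str, replicas: int) -> list:
    return [
        f"{statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
        for i in range(replicas)
    ]

print(stable_dns_names("consul-server", "consul-server", "public-service", 3))
```

These are exactly the names the `-retry-join` flags in the StatefulSet below rely on.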
public-service-ns.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: public-service
consul-server.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: consul
  namespace: public-service
spec:
  rules:
  - host: consul.lzxlinux.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: consul-ui
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: consul-ui
  namespace: public-service
  labels:
    app: consul
    component: server
spec:
  selector:
    app: consul
    component: server
  ports:
  - name: http
    port: 80
    targetPort: 8500
---
apiVersion: v1
kind: Service
metadata:
  name: consul-dns
  namespace: public-service
  labels:
    app: consul
    component: dns
spec:
  selector:
    app: consul
  ports:
  - name: dns-tcp
    protocol: TCP
    port: 53
    targetPort: dns-tcp
  - name: dns-udp
    protocol: UDP
    port: 53
    targetPort: dns-udp
---
apiVersion: v1
kind: Service
metadata:
  name: consul-server
  namespace: public-service
  labels:
    app: consul
    component: server
spec:
  selector:
    app: consul
    component: server
  ports:
  - name: http
    port: 8500
    targetPort: 8500
  - name: dns-tcp
    protocol: TCP
    port: 8600
    targetPort: dns-tcp
  - name: dns-udp
    protocol: UDP
    port: 8600
    targetPort: dns-udp
  - name: serflan-tcp
    protocol: TCP
    port: 8301
    targetPort: 8301
  - name: serflan-udp
    protocol: UDP
    port: 8301
    targetPort: 8301
  - name: serfwan-tcp
    protocol: TCP
    port: 8302
    targetPort: 8302
  - name: serfwan-udp
    protocol: UDP
    port: 8302
    targetPort: 8302
  - name: server
    port: 8300
    targetPort: 8300
  publishNotReadyAddresses: true
  clusterIP: None
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-server-config
  namespace: public-service
data: {}
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: consul-server
  namespace: public-service
spec:
  selector:
    matchLabels:
      app: consul
      component: server
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul-server
  namespace: public-service
spec:
  serviceName: consul-server
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app: consul
      component: server
  template:
    metadata:
      labels:
        app: consul
        component: server
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "component"
                operator: In
                values:
                - server
            topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: consul:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8500
          name: http
        - containerPort: 8600
          name: dns-tcp
          protocol: TCP
        - containerPort: 8600
          name: dns-udp
          protocol: UDP
        - containerPort: 8301
          name: serflan
        - containerPort: 8302
          name: serfwan
        - containerPort: 8300
          name: server
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - "agent"
        - "-server"
        - "-advertise=$(POD_IP)"
        - "-bind=0.0.0.0"
        - "-bootstrap-expect=3"
        - "-datacenter=dc1"
        - "-config-dir=/consul/userconfig"
        - "-data-dir=/consul/data"
        - "-disable-host-node-id"
        - "-domain=cluster.local"
        - "-retry-join=consul-server-0.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-server-1.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-server-2.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-client=0.0.0.0"
        - "-ui"
        resources:
          limits:
            cpu: "100m"
            memory: "128Mi"
          requests:
            cpu: "100m"
            memory: "128Mi"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
        volumeMounts:
        - name: data
          mountPath: /consul/data
        - name: user-config
          mountPath: /consul/userconfig
      volumes:
      - name: user-config
        configMap:
          name: consul-server-config
      - name: data
        emptyDir: {}
      securityContext:
        fsGroup: 1000
  # volumeClaimTemplates:
  # - metadata:
  #     name: data
  #   spec:
  #     accessModes:
  #     - ReadWriteMany
  #     storageClassName: "gluster-heketi-2"
  #     resources:
  #       requests:
  #         storage: 10Gi
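A note on the -bootstrap-expect=3 flag above: Raft needs a majority of servers to elect a leader and commit writes, which is also why the PodDisruptionBudget sets minAvailable: 2. A quick sketch of the arithmetic:

```python
# Raft majority arithmetic for a consul server cluster.
def quorum(servers: int) -> int:
    """Votes needed to elect a leader and commit writes (a strict majority)."""
    return servers // 2 + 1

def fault_tolerance(servers: int) -> int:
    """Servers that can fail while the cluster stays available."""
    return servers - quorum(servers)

print(quorum(3), fault_tolerance(3))  # 2 1
```

With 3 servers, quorum is 2 and one server may be lost; growing to 5 servers would raise tolerance to two failures.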
consul-client.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: consul-client-config
  namespace: public-service
data: {}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: consul
  namespace: public-service
spec:
  selector:
    matchLabels:
      app: consul
      component: client
  template:
    metadata:
      labels:
        app: consul
        component: client
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "component"
                operator: In
                values:
                - client
            topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: consul:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8500
          name: http
        - containerPort: 8600
          name: dns-tcp
          protocol: TCP
        - containerPort: 8600
          name: dns-udp
          protocol: UDP
        - containerPort: 8301
          name: serflan
        - containerPort: 8302
          name: serfwan
        - containerPort: 8300
          name: server
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - "agent"
        - "-advertise=$(POD_IP)"
        - "-bind=0.0.0.0"
        - "-datacenter=dc1"
        - "-config-dir=/consul/userconfig"
        - "-data-dir=/consul/data"
        - "-disable-host-node-id=true"
        - "-domain=cluster.local"
        - "-retry-join=consul-server-0.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-server-1.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-server-2.consul-server.$(NAMESPACE).svc.cluster.local"
        - "-client=0.0.0.0"
        resources:
          limits:
            cpu: "50m"
            memory: "32Mi"
          requests:
            cpu: "50m"
            memory: "32Mi"
        lifecycle:
          preStop:
            exec:
              command:
              - /bin/sh
              - -c
              - consul leave
        volumeMounts:
        - name: data
          mountPath: /consul/data
        - name: user-config
          mountPath: /consul/userconfig
      volumes:
      - name: user-config
        configMap:
          name: consul-client-config
      - name: data
        emptyDir: {}
      securityContext:
        fsGroup: 1000
PodDisruptionBudget:
Kubernetes can create a PodDisruptionBudget (PDB) object for each application. A PDB limits the number of pods of a replicated application that can be down at the same time due to voluntary disruptions.
A PodDisruptionBudget is configured with one of two parameters:
minAvailable: the minimum number of application pods that must remain running, either as an absolute count or as a percentage of the total.
maxUnavailable: the maximum number of application pods that may be unavailable, again as a count or a percentage.
Note that minAvailable and maxUnavailable cannot be set together; only one of the two may be configured.
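The minAvailable arithmetic for this deployment can be sketched as follows (an illustration of the budget calculation, not the k8s eviction API):

```python
# PDB sketch: with minAvailable: 2 and replicas: 3 (the values used above),
# the eviction API allows at most one consul server pod to be disrupted at a time.
def allowed_disruptions(healthy: int, min_available: int) -> int:
    return max(0, healthy - min_available)

print(allowed_disruptions(3, 2))  # 1
```

If one server is already down (2 healthy), the budget drops to zero and further voluntary evictions are refused, protecting the Raft quorum.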
Deploy:
kubectl apply -f public-service-ns.yaml
kubectl apply -f consul-server.yaml
kubectl get svc -n public-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
consul-dns ClusterIP 10.110.235.63 <none> 53/TCP,53/UDP 85s
consul-server ClusterIP None <none> 8500/TCP,8600/TCP,8600/UDP,8301/TCP,8301/UDP,8302/TCP,8302/UDP,8300/TCP 85s
consul-ui ClusterIP 10.98.220.223 <none> 80/TCP 85s
kubectl get pod -n public-service
NAME READY STATUS RESTARTS AGE
consul-server-0 1/1 Running 0 110s
consul-server-1 1/1 Running 0 107s
consul-server-2 1/1 Running 0 92s
Check the cluster status:
kubectl exec -n public-service consul-server-0 -- consul members
Node Address Status Type Build Protocol DC Segment
consul-server-0 172.10.135.17:8301 alive server 1.8.3 2 dc1 <all>
consul-server-1 172.10.104.11:8301 alive server 1.8.3 2 dc1 <all>
consul-server-2 172.10.166.136:8301 alive server 1.8.3 2 dc1 <all>
Access the UI:
Add a hosts entry for consul.lzxlinux.com, then open consul.lzxlinux.com/ui.
You can see that consul-server-0 is the leader and the cluster is healthy.
Join the client agents:
kubectl apply -f consul-client.yaml
kubectl get pod -n public-service
NAME READY STATUS RESTARTS AGE
consul-8wx22 1/1 Running 0 40s
consul-glmgs 1/1 Running 0 10s
consul-server-0 1/1 Running 0 30m
consul-server-1 1/1 Running 0 30m
consul-server-2 1/1 Running 0 30m
consul-vxbj7 1/1 Running 0 61s
kubectl exec -n public-service consul-server-0 -- consul members
Node Address Status Type Build Protocol DC Segment
consul-server-0 172.10.135.17:8301 alive server 1.8.3 2 dc1 <all>
consul-server-1 172.10.104.11:8301 alive server 1.8.3 2 dc1 <all>
consul-server-2 172.10.166.136:8301 alive server 1.8.3 2 dc1 <all>
consul-8wx22 172.10.166.138:8301 alive client 1.8.3 2 dc1 <default>
consul-glmgs 172.10.135.19:8301 alive client 1.8.3 2 dc1 <default>
consul-vxbj7 172.10.104.13:8301 alive client 1.8.3 2 dc1 <default>
At this point, the consul cluster (3 servers, 3 clients) is fully deployed.
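As a small verification aid (a hypothetical helper, not part of consul), the `consul members` table above can be parsed to confirm the expected server/client split:

```python
def count_members(output: str) -> dict:
    """Count rows in `consul members` output by the Type column."""
    counts = {}
    for line in output.strip().splitlines()[1:]:  # skip the header row
        member_type = line.split()[3]              # Node, Address, Status, Type, ...
        counts[member_type] = counts.get(member_type, 0) + 1
    return counts

sample = """\
Node             Address              Status  Type    Build  Protocol  DC   Segment
consul-server-0  172.10.135.17:8301   alive   server  1.8.3  2         dc1  <all>
consul-8wx22     172.10.166.138:8301  alive   client  1.8.3  2         dc1  <default>"""
print(count_members(sample))  # {'server': 1, 'client': 1}
```

Against the full output above, this would report 3 servers and 3 clients.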
Deploying Consul on k8s
1. Preface
Consul, developed by HashiCorp in Go, is service discovery and registration software that supports multiple data centers and is distributed and highly available; it uses the Raft algorithm to keep service state consistent and supports health checks. In Kubernetes, however, when a node fails or runs short of resources, some of its pods are killed according to policy and new pods are created in their place, and the new pods have different IP addresses and names (hash suffixes). How, then, do we keep a replacement pod's identity the same as the original's? A StatefulSet can do this: it guarantees each pod a stable, unique network identity.
2. StatefulSet
A Deployment manages a set of identical pod replicas: all pods of the application are interchangeable, so they have no ordering among themselves and it does not matter which host they run on. When more are needed, the Deployment creates new pods from the pod template; when fewer are needed, it can kill any pod. In practice, though, not every application fits this model. Distributed applications in particular often have dependencies among their instances, such as master-slave or active-standby relationships. Data-storage applications are another case: each instance keeps its own copy of data on local disk, and once an instance is killed, even a rebuilt replacement has lost the mapping between instance and data, breaking the application. Applications whose instances are not interchangeable, or that depend on external data in this way, are called "stateful applications" (Stateful Applications).
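The naming difference described above can be illustrated with a short sketch (the random suffix here merely stands in for the hash suffixes the Deployment controller generates; this is not scheduler code):

```python
import random
import string

def deployment_pod_name(app: str) -> str:
    """Deployment pods get a throwaway random suffix; it changes on every recreate."""
    suffix = "".join(random.choices(string.ascii_lowercase + string.digits, k=5))
    return f"{app}-{suffix}"

def statefulset_pod_name(app: str, ordinal: int) -> str:
    """StatefulSet pods get a stable ordinal; a recreated pod keeps its name."""
    return f"{app}-{ordinal}"

print(statefulset_pod_name("consul", 0))  # consul-0
```

A rescheduled Deployment pod gets a fresh suffix, so nothing can address it by name; a rescheduled StatefulSet pod comes back as the same consul-0, which is what the consul servers below rely on.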
3. Writing statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: consul
spec:
  serviceName: consul
  replicas: 3
  selector:
    matchLabels:
      app: consul
  template:
    metadata:
      labels:
        app: consul
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: consul
        image: consul:latest
        args:
        - "agent"
        - "-server"
        - "-bootstrap-expect=3"
        - "-ui"
        - "-data-dir=/consul/data"
        - "-bind=0.0.0.0"
        - "-client=0.0.0.0"
        - "-advertise=$(PODIP)"
        - "-retry-join=consul-0.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-1.consul.$(NAMESPACE).svc.cluster.local"
        - "-retry-join=consul-2.consul.$(NAMESPACE).svc.cluster.local"
        - "-domain=cluster.local"
        - "-disable-host-node-id"
        env:
        - name: PODIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - containerPort: 8500
          name: ui-port
        - containerPort: 8443
          name: https-port
The core of this manifest is the DNS naming used by the -retry-join flags. Because the StatefulSet is named consul and has three replicas, its pods are named consul-0, consul-1 and consul-2. Even if a node fails, a restarted pod comes back under the same fixed name, so whatever IP the new pod receives, DNS still resolves the name to it correctly.
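The -retry-join arguments above follow mechanically from the ordinal naming; a sketch that regenerates them (assuming the default namespace, since this manifest sets none):

```python
# Regenerate the -retry-join flags from the StatefulSet name, governing
# service name, namespace and replica count (illustrative helper only).
def retry_join_args(statefulset: str, service: str, namespace: str, replicas: int) -> list:
    return [
        f"-retry-join={statefulset}-{i}.{service}.{namespace}.svc.cluster.local"
        for i in range(replicas)
    ]

for arg in retry_join_args("consul", "consul", "default", 3):
    print(arg)
```

Scaling the cluster to five servers would only require bumping replicas and regenerating these flags accordingly.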
4. Creating the Service
apiVersion: v1
kind: Service
metadata:
  name: consul
  labels:
    name: consul
spec:
  type: NodePort
  ports:
  - name: http
    port: 8500
    nodePort: 30850
    targetPort: 8500
  - name: https
    port: 8443
    nodePort: 30851
    targetPort: 8443
  selector:
    app: consul
5. Accessing the page
With the NodePort service above, the consul UI can be opened at http://<node-ip>:30850/ui on any cluster node.