ZooKeeper (zk) is commonly used as a service registry in microservice applications. The cluster elects its own leader internally, which makes configuration and deployment straightforward.

Points to note when deploying zk on Kubernetes:
1) The zk cluster elects its leader internally by majority vote, so deploy at least 3 instances;
2) zk needs persistent data storage, so PVs must be configured;
3) zk is deployed as a stateful workload (StatefulSet);
4) To get a real clustering effect, the zk instances must not run on the same host (node), so required pod anti-affinity (podAntiAffinity) is used to guarantee the pods are scheduled onto different hosts; see the pre-flight check after this list.
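
A consequence of points 1) and 4): the cluster needs at least three schedulable nodes, or the required anti-affinity leaves one or more pods stuck in Pending. A quick pre-flight check:

# At least 3 schedulable nodes are needed for the 3 zk pods
# (control-plane nodes carrying a NoSchedule taint do not count)
kubectl get nodes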

zk deployment manifests
1. Create the PVs
vi zk-pv.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/var/lib/zookeeper"
  persistentVolumeReclaimPolicy: Recycle

Apply the PV manifest:
kubectl apply -f zk-pv.yaml
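
hostPath volumes point at a directory on whichever node each pod eventually lands on. As a precaution (an assumption about the nodes, not something every setup needs), create the directory on each node up front, then confirm the PVs registered correctly:

# Run on every node that may host a zk pod
mkdir -p /var/lib/zookeeper
# All three PVs should report STATUS Available
kubectl get pv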

2. Deploy the zk instances
vi zookeeper.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: default
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888
--- 
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: default
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
  - name: client
    port: 2181
    nodePort: 31811
---
apiVersion: policy/v1beta1   # on Kubernetes >= 1.21 use policy/v1 instead
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: default
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: default
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: "zk-hs"
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk # has to match .spec.selector.matchLabels
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["zk"]}
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zk
        imagePullPolicy: Always
        image: chaotingge/zookeeper:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: datadir
      annotations:
        volume.beta.kubernetes.io/storage-class: "anything"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi

Apply the manifest:
kubectl apply -f zookeeper.yaml
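
The three pods start in parallel (podManagementPolicy: Parallel), each with a stable name (zk-0, zk-1, zk-2) and a stable DNS entry provided by the headless zk-hs Service. Some quick checks; the PVC names follow the StatefulSet convention datadir-<pod>:

# Watch all three pods reach Running / READY 1/1
kubectl get pods -l app=zk -w
# datadir-zk-0..2 should each be Bound to one of the PVs from step 1
kubectl get pvc
# The stable in-cluster DNS name of a replica, via the headless service
kubectl exec zk-0 -- hostname -f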

Key parts of the manifest, explained:

apiVersion: apps/v1
kind: StatefulSet  # StatefulSet instead of Deployment: a stateful workload with stable pod identities
metadata:
  name: zk
  namespace: default
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: "zk-hs"
  replicas: 3 # a cluster needs at least 3 instances
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:                # scheduling affinity settings
        podAntiAffinity:       # required anti-affinity: keeps the instances on different hosts
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["zk"]} # match rule: no two pods labeled app=zk may share a host
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: zk
        imagePullPolicy: Always
        image: chaotingge/zookeeper:kubernetes-zookeeper1.0-3.4.10
        resources:
          requests:
            # ... (remainder identical to the full manifest above)
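
To confirm the anti-affinity rule took effect, list the pods together with the nodes they were scheduled on; the NODE column should show three different hosts:

kubectl get pods -l app=zk -o wide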

Verify the cluster:

# Verify the cluster works: open a shell inside a pod
kubectl exec -it <pod-name> -- /bin/sh
# Check whether this zk instance is the leader or a follower
zkServer.sh status
# Start the zkCli client
zkCli.sh
# Create a znode, then check that the other instances replicate its data
create /test-zk "Yang douya"
# On the other instances, read it back: /test-zk should return "Yang douya"
get /test-zk
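
Each replica's role can also be checked without opening a shell (this assumes zkServer.sh is on the container's PATH, as the in-pod commands above imply); exactly one replica should report leader. In-cluster clients can reach the ensemble through the zk-cs Service at zk-cs.default.svc.cluster.local:2181, and external clients through the NodePort, where <nodeIP> is the address of any Kubernetes node:

# One pod reports Mode: leader, the other two Mode: follower
for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
# Reach the ensemble from outside the cluster via the zk-cs NodePort
zkCli.sh -server <nodeIP>:31811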