Create local directories for ZooKeeper persistence

Create three directories, one per PersistentVolume: mkdir -p /Users/renzhengxin/IdeaProjects/k8s/zookeeper/v34/data/{pv1,pv2,pv3}
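The {pv1,pv2,pv3} brace expansion creates all three directories in one call; a quick listing confirms pv1, pv2 and pv3 exist:

ls /Users/renzhengxin/IdeaProjects/k8s/zookeeper/v34/data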

Create the PersistentVolumes

Define the three PersistentVolumes: vim zookeeper-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper34-pv1
spec:
  storageClassName: zookeeper34
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/Users/renzhengxin/IdeaProjects/k8s/zookeeper/v34/data/pv1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper34-pv2
spec:
  storageClassName: zookeeper34
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/Users/renzhengxin/IdeaProjects/k8s/zookeeper/v34/data/pv2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zookeeper34-pv3
spec:
  storageClassName: zookeeper34
  capacity:
    storage: 200M
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/Users/renzhengxin/IdeaProjects/k8s/zookeeper/v34/data/pv3"

Create the PersistentVolumes: kubectl create -f zookeeper-pv.yaml
Check the volumes: kubectl get pv
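The three volumes should come up as Available under the zookeeper34 storage class. Exact columns depend on the kubectl version, but the output looks roughly like this:

NAME              CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   AGE
zookeeper34-pv1   200M       RWO            Retain           Available           zookeeper34    5s
zookeeper34-pv2   200M       RWO            Retain           Available           zookeeper34    5s
zookeeper34-pv3   200M       RWO            Retain           Available           zookeeper34    5s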

Create the headless Service

Define the Service: vim zookeeper-svc.yaml

apiVersion: v1
kind: Service
metadata:
  name: zookeeper34-svc
  labels:
    app: zookeeper34
spec:
  selector:
    app: zookeeper34
  clusterIP: None
  ports:
  - name: server
    port: 2888
  - name: leader-election
    port: 3888

Create the Service: kubectl create -f zookeeper-svc.yaml
Check the Service: kubectl get svc
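With clusterIP: None this is a headless Service: instead of a load-balanced virtual IP, it publishes a DNS record per pod. Once the StatefulSet below is running, each replica gets a stable name of the form <pod-name>.zookeeper34-svc.<namespace>.svc.cluster.local, which the ensemble members use to reach each other on the server (2888) and leader-election (3888) ports. Resolution can be checked from a throwaway pod (assuming the default namespace):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup zookeeper34-sts-0.zookeeper34-svc.default.svc.cluster.local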

Configure a PodDisruptionBudget

Define the PodDisruptionBudget: vim zookeeper-pdb.yaml

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zookeeper34-pdb
spec:
  selector:
    matchLabels:
      app: zookeeper34
  maxUnavailable: 1

Create the PDB: kubectl create -f zookeeper-pdb.yaml
Check the PDB: kubectl get pdb
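With three replicas, maxUnavailable: 1 means voluntary disruptions (for example kubectl drain during node maintenance) can evict at most one ZooKeeper pod at a time, so the ensemble never drops below its 2-of-3 quorum. kubectl get pdb should report one allowed disruption, roughly:

NAME              MIN AVAILABLE   MAX UNAVAILABLE   ALLOWED DISRUPTIONS   AGE
zookeeper34-pdb   N/A             1                 1                     5s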

Create the StatefulSet cluster

Define the StatefulSet: vim zookeeper-sts.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper34-sts
spec:
  selector:
    matchLabels:
      app: zookeeper34 # has to match .spec.template.metadata.labels
  serviceName: zookeeper34-svc
  replicas: 3 # by default is 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zookeeper34 # has to match .spec.selector.matchLabels
    spec:
      containers:
      - name: zookeeper34
        imagePullPolicy: IfNotPresent
        image: mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10
        resources:
          requests:
            memory: "500Mi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
        --servers=3 \
        --data_dir=/var/lib/zookeeper/data \
        --data_log_dir=/var/lib/zookeeper/data/log \
        --conf_dir=/opt/zookeeper/conf \
        --client_port=2181 \
        --election_port=3888 \
        --server_port=2888 \
        --tick_time=2000 \
        --init_limit=10 \
        --sync_limit=5 \
        --heap=512M \
        --max_client_cnxns=60 \
        --snap_retain_count=3 \
        --purge_interval=12 \
        --max_session_timeout=40000 \
        --min_session_timeout=4000 \
        --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: zookeeper-data
          mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
  - metadata:
      name: zookeeper-data
      annotations:
        volume.beta.kubernetes.io/storage-class: "zookeeper34"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 200M

Create the StatefulSet: kubectl create -f zookeeper-sts.yaml
Check the StatefulSet: kubectl get sts
Check the pods: kubectl get po
Check the PVs: kubectl get pv
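Each replica binds one of the PVs through volumeClaimTemplates, and the start-zookeeper script derives the ZooKeeper server id from the pod's ordinal. A quick sanity check (the path matches the --data_dir flag above) should print the ids 1, 2 and 3:

for i in 0 1 2; do
  kubectl exec zookeeper34-sts-$i -- cat /var/lib/zookeeper/data/myid
done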

Create an access entry point for the cluster

Define the Service: vim zookeeper-access-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: zookeeper34-access-service
  labels:
    name: zookeeper34-access-service
spec:
  selector:
    app: zookeeper34
  type: NodePort
  ports:
  - name: client
    targetPort: 2181
    port: 2181
    nodePort: 30811

Create the Service: kubectl create -f zookeeper-access-service.yaml
Check the Service: kubectl get svc
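Before pointing a client at it, the NodePort mapping can be sanity-checked with ZooKeeper's four-letter commands, which 3.4.x answers by default (this assumes the node is reachable as localhost, e.g. Docker Desktop):

echo ruok | nc localhost 30811

A healthy server replies imok.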

Access the cluster

If a ZooKeeper client is installed locally, connect through the NodePort:
./zkCli.sh -server localhost:30811
Test creating a znode:
create -e /test222 eee
get /test222
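Because the znode is created with -e, it is ephemeral and disappears when this zkCli session ends. While the session is still open, the write should be replicated to every ensemble member; it can be read back through another replica, for example:

kubectl exec zookeeper34-sts-1 -- zkCli.sh -server localhost:2181 get /test222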
