A walkthrough based on the detailed tutorial on the Kubernetes official site:

Running ZooKeeper, A Distributed System Coordinator

Prepare the YAML files

zookeeper-pv.yaml

kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk1
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk2
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk3
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
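The three PV manifests above differ only in their name, so instead of copy-pasting you can generate the file with a small loop (same spec as above; note that PersistentVolumes are cluster-scoped, so no namespace field is needed):

```shell
# Generate zookeeper-pv.yaml with three identical PVs, k8s-pv-zk1..3
for i in 1 2 3; do
  [ "$i" -gt 1 ] && echo '---'      # document separator between PVs
  cat <<EOF
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8s-pv-zk$i
  annotations:
    volume.beta.kubernetes.io/storage-class: "anything"
  labels:
    type: local
spec:
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/nfs/zookeeper"
  persistentVolumeReclaimPolicy: Recycle
EOF
done > zookeeper-pv.yaml

grep -c 'k8s-pv-zk' zookeeper-pv.yaml   # prints 3
```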

zookeeper-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zookeeper-pvc
  namespace: zookeeper
spec:
  storageClassName: anything    # must match the storage class of the PVs above
  resources:
    requests:
      storage: 2Gi              # requested PVC size
  accessModes:
  - ReadWriteOnce
  selector:
    matchLabels:
      type: local               # select the matching PVs by their label

zookeeper.yaml

The image below is swapped for a mirror reachable from inside China:
the original registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10 is replaced with mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10.

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: zookeeper
  labels:
    app: zk
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: zookeeper
  labels:
    app: zk
spec:
  ports:
  - port: 2181
    name: client
  selector:
    app: zk
---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: zookeeper
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
  namespace: zookeeper
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                    - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
      - name: kubernetes-zookeeper
        imagePullPolicy: Always
        image: "mirrorgooglecontainers/kubernetes-zookeeper:1.0-3.4.10"
        resources:
          requests:
            memory: "1Gi"
            cpu: "0.5"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        command:
        - sh
        - -c
        - "start-zookeeper \
          --servers=3 \
          --data_dir=/var/lib/zookeeper/data \
          --data_log_dir=/var/lib/zookeeper/data/log \
          --conf_dir=/opt/zookeeper/conf \
          --client_port=2181 \
          --election_port=3888 \
          --server_port=2888 \
          --tick_time=2000 \
          --init_limit=10 \
          --sync_limit=5 \
          --heap=512M \
          --max_client_cnxns=60 \
          --snap_retain_count=3 \
          --purge_interval=12 \
          --max_session_timeout=40000 \
          --min_session_timeout=4000 \
          --log_level=INFO"
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "zookeeper-ready 2181"
          initialDelaySeconds: 10
          timeoutSeconds: 5
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: anything   # match the pre-created PVs
      resources:
        requests:
          storage: 3Gi             # must not exceed the 3Gi PVs created above
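The start-zookeeper wrapper renders the long flag list above into a zoo.cfg (the generated file shows up in the pod logs further down). A rough local sketch of that mapping, with the server list derived from the StatefulSet's stable network IDs under the headless service zk-hs (simplified; the real script also writes myid, log4j config, etc.):

```shell
# Sketch: how the start-zookeeper flags map onto zoo.cfg keys
servers=3
cat > zoo.cfg <<EOF
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
EOF
# Pod zk-N gets the stable DNS name zk-N.zk-hs.zookeeper.svc.cluster.local
i=1
while [ "$i" -le "$servers" ]; do
  echo "server.$i=zk-$((i-1)).zk-hs.zookeeper.svc.cluster.local:2888:3888" >> zoo.cfg
  i=$((i+1))
done
cat zoo.cfg
```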

Create the resources

kubectl create namespace zookeeper
kubectl create -f zookeeper-pv.yaml
kubectl create -f zookeeper-pvc.yaml
kubectl create -f zookeeper.yaml

Check the results

kubectl get all -n zookeeper

After a short wait for the images to pull, all three pods should reach Running.
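All three pods matter because ZooKeeper commits writes by majority quorum, and the PodDisruptionBudget above (maxUnavailable: 1) keeps voluntary evictions from dropping the ensemble below that majority:

```shell
# Quorum math for the 3-node ensemble and the zk-pdb budget above
replicas=3
quorum=$((replicas / 2 + 1))                 # majority needed to serve writes
max_unavailable=1                            # from the PodDisruptionBudget
min_available=$((replicas - max_unavailable))
echo "quorum=$quorum min_available=$min_available"   # quorum=2 min_available=2
```

With min_available equal to the quorum size, a node drain can never take the cluster below a working majority.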

If a pod fails to come up, check its logs for the error:

kubectl logs pod/zk-0 -n zookeeper
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/data/log
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=60
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInteval=12
server.1=zk-0.zk-hs.zookeeper.svc.cluster.local:2888:3888
server.2=zk-1.zk-hs.zookeeper.svc.cluster.local:2888:3888
server.3=zk-2.zk-hs.zookeeper.svc.cluster.local:2888:3888
Creating ZooKeeper log4j configuration
mkdir: cannot create directory '/var/lib/zookeeper/data': Permission denied
chown: cannot access '/var/lib/zookeeper/data': No such file or directory
mkdir: cannot create directory '/var/lib/zookeeper/data': Permission denied
chown: invalid group: 'zookeeper:USER'
/usr/bin/start-zookeeper: line 176: /var/lib/zookeeper/data/myid: No such file or directory

The errors say the process has no permission to create the data directory, so the local persistence layer is the problem. Check the permissions on the NFS directory:

chmod -R 777 /nfs/zookeeper
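chmod -R 777 works but is blunt; since the pod runs with runAsUser: 1000 / fsGroup: 1000, a tighter alternative is to chown the export to UID/GID 1000. A local sketch of the fix, using a temp directory as a stand-in for the real /nfs/zookeeper export:

```shell
# Recreate the permission fix on a throwaway directory
dir=$(mktemp -d)/zookeeper
mkdir -p "$dir"
chmod -R 777 "$dir"         # or: chown -R 1000:1000 "$dir" (matches fsGroup)
stat -c '%a' "$dir"         # prints 777
```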

Then delete and re-apply zookeeper.yaml:

kubectl delete -f zookeeper.yaml -n zookeeper
kubectl apply -f zookeeper.yaml

Verify the cluster is working

Test writing data:

kubectl exec -it -n zookeeper zk-1 -- /bin/bash
zookeeper@zk-1:/$ zkCli.sh
[zk: localhost:2181(CONNECTED) 0] create /test abc
Created /test

Open a shell on another node and check that the data is there:

kubectl exec -it -n zookeeper zk-0 -- /bin/bash
zookeeper@zk-0:/$ zkCli.sh 
[zk: localhost:2181(CONNECTED) 0] get /test
abc
cZxid = 0x100000007
ctime = Tue Jan 09 15:38:36 UTC 2024
mZxid = 0x100000007
mtime = Tue Jan 09 15:38:36 UTC 2024
pZxid = 0x100000007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 3
numChildren = 0
[zk: localhost:2181(CONNECTED) 1] 

Being able to read the data back from another node shows the ensemble is replicating correctly and the deployment is healthy.
