Deploying a ZooKeeper Cluster on K8s
1. Create the PVs
The contents of pv.yaml are as follows:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zkpv01
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfsDir/zookeeper/zk01
    server: 10.211.55.20
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zkpv02
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfsDir/zookeeper/zk02
    server: 10.211.55.20
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: zkpv03
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  capacity:
    storage: 5Gi
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/nfsDir/zookeeper/zk03
    server: 10.211.55.20
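Before applying the manifest, the export directories referenced above must already exist on the NFS server, otherwise the pods will fail to mount their volumes. A minimal sketch, assuming 10.211.55.20 exports /data/nfsDir and pv.yaml is saved locally:

# On the NFS server (10.211.55.20): create one export directory per PV
mkdir -p /data/nfsDir/zookeeper/zk01 /data/nfsDir/zookeeper/zk02 /data/nfsDir/zookeeper/zk03

# On a host with kubectl access: create the PVs and confirm they show as Available
kubectl apply -f pv.yaml
kubectl get pv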
2. Create the Headless and Client Services
---
apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  namespace: zookeeper
  labels:
    app: zk
spec:
  selector:
    app: zk
  clusterIP: None
  type: ClusterIP
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
---
apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  namespace: zookeeper
  labels:
    app: zk
spec:
  selector:
    app: zk
  type: NodePort
  ports:
    - name: client
      port: 2181
      nodePort: 32181
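Both Services are created in the zookeeper namespace, which must exist before the manifests above are applied. The headless Service zk-hs gives every StatefulSet pod a stable DNS name of the form <pod-name>.zk-hs.zookeeper.svc.cluster.local, used for the 2888 (quorum) and 3888 (leader election) traffic, while zk-cs exposes client port 2181 outside the cluster on NodePort 32181. Assuming the two Service definitions are saved as service.yaml (the filename is illustrative):

kubectl create namespace zookeeper
kubectl apply -f service.yaml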
3. Create a PodDisruptionBudget
A Pod Disruption Budget (PDB) uses a labelSelector to limit how many of the matching pods may be unavailable at the same time during voluntary disruptions such as a node drain. Here maxUnavailable: 1 guarantees that at most one of the three ZooKeeper servers is evicted at a time, so the ensemble never loses quorum.
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
  namespace: zookeeper
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
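Note that policy/v1beta1 was removed in Kubernetes 1.25; on clusters running 1.21 or later, the stable API group should be used instead. The only change required is the apiVersion line:

apiVersion: policy/v1   # replaces policy/v1beta1 on Kubernetes >= 1.21
kind: PodDisruptionBudget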
4. Deploy the ZooKeeper cluster
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk-pod
  namespace: zookeeper
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: 'zk-hs'
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
        - name: zookeeper
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              cpu: '0.5'
              memory: "500Mi"
          ports:
            - name: client-port
              containerPort: 2181
            - name: server-port
              containerPort: 2888
            - name: leader-port
              containerPort: 3888
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/data/zookeeper/data \
              --data_log_dir=/data/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /data/zookeeper/
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: nfs
        resources:
          requests:
            storage: 3Gi
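After applying the manifest (the filename zk.yaml below is illustrative), each replica should come up with a unique server id written into the --data_dir configured above, which can be spot-checked directly:

kubectl apply -f zk.yaml

# Each pod's myid must be distinct (1, 2, 3) for the ensemble to form
for i in 0 1 2; do
  kubectl exec zk-pod-$i -n zookeeper -- cat /data/zookeeper/data/myid
done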
5. Check the ZooKeeper cluster status
[root@k8s-master01 ~]# kubectl get sts -n zookeeper
NAME     READY   AGE
zk-pod   3/3     31h
[root@k8s-master01 ~]# kubectl get pv,pvc -n zookeeper -o wide
NAME                      CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE   VOLUMEMODE
persistentvolume/zkpv01   5Gi        RWX            Retain           Bound    zookeeper/datadir-zk-pod-0   nfs                     31h   Filesystem
persistentvolume/zkpv02   5Gi        RWX            Retain           Bound    zookeeper/datadir-zk-pod-1   nfs                     31h   Filesystem
persistentvolume/zkpv03   5Gi        RWX            Retain           Bound    zookeeper/datadir-zk-pod-2   nfs                     31h   Filesystem

NAME                                     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE   VOLUMEMODE
persistentvolumeclaim/datadir-zk-pod-0   Bound    zkpv01   5Gi        RWX            nfs            31h   Filesystem
persistentvolumeclaim/datadir-zk-pod-1   Bound    zkpv02   5Gi        RWX            nfs            31h   Filesystem
persistentvolumeclaim/datadir-zk-pod-2   Bound    zkpv03   5Gi        RWX            nfs            31h   Filesystem
[root@k8s-master01 ~]# kubectl get pods -n zookeeper -o wide
NAME       READY   STATUS    RESTARTS   AGE   IP             NODE         NOMINATED NODE   READINESS GATES
zk-pod-0   1/1     Running   1          29h   10.244.1.56    k8s-node01   <none>           <none>
zk-pod-1   1/1     Running   1          31h   10.244.2.100   k8s-node02   <none>           <none>
zk-pod-2   1/1     Running   1          31h   10.244.1.58    k8s-node01   <none>           <none>
[root@k8s-master01 ~]# kubectl get svc -n zookeeper -o wide
NAME    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)             AGE   SELECTOR
zk-cs   NodePort    10.96.230.134   <none>        2181:32181/TCP      36h   app=zk
zk-hs   ClusterIP   None            <none>        2888/TCP,3888/TCP   36h   app=zk
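To confirm that clients can reach the ensemble from outside the cluster, ZooKeeper's four-letter commands can be sent to the NodePort; srvr is served by default in 3.4.x and reports, among other things, whether the server is a leader or follower. <node-ip> below stands for the IP of any cluster node (and nc must be available on the client host):

echo srvr | nc <node-ip> 32181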