Deploying ZooKeeper to k8s as a DaemonSet
Preface
Converting the ZooKeeper deployment on k8s
Convert the ZooKeeper deployment from a StatefulSet to a DaemonSet.
I. Preparation
First, create the data, log, and config directories on each node that will run a ZooKeeper container:
1. Create the directories
mkdir -p /tmp/data
mkdir -p /tmp/datalog
mkdir -p /tmp/dp
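Since the data directory is a hostPath, it can also help to pre-seed each node's myid file while preparing the directories. The official zookeeper image's entrypoint writes myid from ZOO_MY_ID only when the file does not already exist, so this is a belt-and-braces step; the path below matches the hostPath used later, and the id value is per-node.

```shell
# Pre-seed the myid file on each node (run with that node's own id: 1, 2 or 3).
# /tmp/data is the hostPath that the DaemonSet later mounts at /data in the container.
MYID=1                      # change to 2 / 3 on the other two nodes
mkdir -p /tmp/data
echo "$MYID" > /tmp/data/myid
cat /tmp/data/myid
```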
2. Create the configuration files
Node 1: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=0.0.0.0:2888:3888
server.2=zk2:2888:3888
server.3=zk3:2888:3888
Node 2: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=0.0.0.0:2888:3888
server.3=zk3:2888:3888
Node 3: vi /tmp/dp/zoo.cfg
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=0.0.0.0:2888:3888
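The three files differ only in which server line is 0.0.0.0, so they can be generated in one loop instead of edited by hand on each node. This is just a convenience sketch: it writes zoo.cfg.1/2/3 into a local /tmp/dp (the numbered file names are illustrative; copy zoo.cfg.N to /tmp/dp/zoo.cfg on node N).

```shell
# Generate the three per-node zoo.cfg variants in one pass.
# For node N, its own server.N line must listen on 0.0.0.0;
# the other entries keep the zk1/zk2/zk3 service names.
mkdir -p /tmp/dp
for id in 1 2 3; do
  {
    echo "#This file was autogenerated DO NOT EDIT"
    echo "clientPort=2181"
    echo "dataDir=/data"
    echo "dataLogDir=/datalog"
    echo "tickTime=2000"
    echo "initLimit=10"
    echo "syncLimit=5"
    echo "maxClientCnxns=2000"
    echo "minSessionTimeout=4000"
    echo "maxSessionTimeout=40000"
    echo "autopurge.snapRetainCount=3"
    echo "autopurge.purgeInterval=12"
    for n in 1 2 3; do
      if [ "$n" = "$id" ]; then
        echo "server.$n=0.0.0.0:2888:3888"
      else
        echo "server.$n=zk$n:2888:3888"
      fi
    done
  } > "/tmp/dp/zoo.cfg.$id"
done
grep '^server' /tmp/dp/zoo.cfg.3
```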
II. Create the DaemonSet YAML files
1. Configuration for the first node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk1
  name: zk1
  namespace: default
spec:
  # replicas: 1
  selector:
    matchLabels:
      app: zk1
  # strategy:
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  #   type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk1
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2master.fat2master
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '1'
        # - name: ZOO_SERVERS
        #   value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        name: zk1
        ports:
        - name: http
          containerPort: 2181
        - name: server
          containerPort: 2888
        - name: leader-election
          containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File
---
apiVersion: v1
kind: Service
metadata:
  name: zk1
  labels:
    app: zk1
spec:
  ports:
  - port: 2181
    name: client
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  selector:
    app: zk1
2. Configuration for the second node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk2
  name: zk2
  namespace: default
spec:
  # replicas: 1
  selector:
    matchLabels:
      app: zk2
  # strategy:
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  #   type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk2
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2slave1.fat2slave1
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '2'
        # - name: ZOO_SERVERS
        #   value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        name: zk2
        ports:
        - name: http
          containerPort: 2181
        - name: server
          containerPort: 2888
        - name: leader-election
          containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File
---
apiVersion: v1
kind: Service
metadata:
  name: zk2
  labels:
    app: zk2
spec:
  ports:
  - port: 2181
    name: client
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  selector:
    app: zk2
3. Configuration for the third node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels:
    app: zk3
  name: zk3
  namespace: default
spec:
  # replicas: 1
  selector:
    matchLabels:
      app: zk3
  # strategy:
  #   rollingUpdate:
  #     maxSurge: 1
  #     maxUnavailable: 0
  #   type: RollingUpdate
  template:
    metadata:
      labels:
        app: zk3
    spec:
      nodeSelector:
        kubernetes.io/hostname: fat2slave2.fat2slave2
      tolerations:
      - operator: Exists
      containers:
      - env:
        - name: ZOO_MY_ID
          value: '3'
        # - name: ZOO_SERVERS
        #   value: server.1=zk1:2888:3888;2181 server.2=zk2:2888:3888;2181 server.3=zk3:2888:3888;2181
        image: zookeeper:3.4.10
        imagePullPolicy: Always
        name: zk3
        ports:
        - name: http
          containerPort: 2181
        - name: server
          containerPort: 2888
        - name: leader-election
          containerPort: 3888
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /datalog
          name: log
        - mountPath: /conf/zoo.cfg
          name: conf
        resources:
          requests:
            cpu: "1000m"
            memory: "2048Mi"
          limits:
            cpu: "1000m"
            memory: "2048Mi"
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      volumes:
      - name: data
        hostPath:
          path: /tmp/data
      - name: log
        hostPath:
          path: /tmp/datalog
      - name: conf
        hostPath:
          path: /tmp/dp/zoo.cfg
          type: File
---
apiVersion: v1
kind: Service
metadata:
  name: zk3
  labels:
    app: zk3
spec:
  ports:
  - port: 2181
    name: client
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  selector:
    app: zk3
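With the three manifests saved to files (the names zk1.yaml/zk2.yaml/zk3.yaml below are assumptions, not from the original), the cluster can be applied and checked in the usual way; 3.4.x images put the ZooKeeper bin directory on PATH, so zkServer.sh is available inside the pod.

```shell
# Apply all three DaemonSets + Services, then check pod placement.
for f in zk1.yaml zk2.yaml zk3.yaml; do
  kubectl apply -f "$f"
done
kubectl get pods -o wide -l 'app in (zk1, zk2, zk3)'
# Verify quorum state from inside one pod (expect one leader, two followers overall):
kubectl exec "$(kubectl get pod -l app=zk1 -o jsonpath='{.items[0].metadata.name}')" -- zkServer.sh status
```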
III. Summary
Points to note:
1. Listen address: the pods do not carry the hostnames zk1/zk2/zk3, so each server's own entry in zoo.cfg must be changed to a wildcard listen address.
#This file was autogenerated DO NOT EDIT
clientPort=2181
dataDir=/data
dataLogDir=/datalog
tickTime=2000
initLimit=10
syncLimit=5
maxClientCnxns=2000
minSessionTimeout=4000
maxSessionTimeout=40000
autopurge.snapRetainCount=3
autopurge.purgeInterval=12
server.1=zk1:2888:3888
server.2=zk2:2888:3888
server.3=0.0.0.0:2888:3888
As the last line above shows, this config is deployed on the third node, so its own address must be written as 0.0.0.0; the other two nodes are handled the same way.
2. nodeSelector: each DaemonSet must be pinned to its intended machine, e.g.:
nodeSelector:
  kubernetes.io/hostname: fat2master.fat2master