Deploying a Stateful (StatefulSet) ZooKeeper/Kafka Cluster on Kubernetes
The setup uses five servers in total:

| Role | IP |
| --- | --- |
| node-1 | 192.168.10.201 |
| node-2 | 192.168.10.202 |
| node-3 | 192.168.10.203 |
| node-4 | 192.168.10.204 |
| NFS | 192.168.10.151 |
Part 1. Create the NFS shared storage

1. Install NFS on all five servers:

```bash
yum -y install rpcbind nfs-utils
systemctl enable rpcbind; systemctl enable nfs-server
systemctl restart rpcbind; systemctl restart nfs-server
```
2. On the NFS server (192.168.10.151), define the shared directories:

```bash
vim /etc/exports
```

```
/data/zk/data1    192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/data/zk/data2    192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/data/zk/data3    192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data1 192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data2 192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
/data/kafka/data3 192.168.0.0/16(rw,no_root_squash,no_all_squash,sync)
```
A separate directory is created for each node: three for ZooKeeper and three for Kafka.
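The directories have to exist on disk before NFS can serve them, and the edited exports need to be loaded; a minimal sketch on the NFS server, using standard nfs-utils commands:

```bash
# Create the six export directories.
mkdir -p /data/zk/data{1,2,3} /data/kafka/data{1,2,3}

# Re-read /etc/exports and verify the shares are visible.
exportfs -r
showmount -e localhost
```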
Part 2. Create the ZooKeeper cluster

1. Create the namespace:

```bash
vim namespaces.yaml
```

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: zk-kafka
  labels:
    name: zk-kafka
```
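Apply it and check that the namespace exists:

```bash
kubectl apply -f namespaces.yaml
kubectl get ns zk-kafka
```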
2. Create the ZooKeeper PVs:

```bash
vim zk_pv.yaml
```

```yaml
# PersistentVolumes are cluster-scoped; Kubernetes ignores the namespace field on them.
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: zk-data3
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/zk/data3
```
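Apply the PVs and confirm they are waiting to be claimed:

```bash
kubectl apply -f zk_pv.yaml
kubectl get pv   # zk-data1..3 should show STATUS=Available until the StatefulSet's PVCs bind them
```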
3. Create the ZooKeeper cluster:

```bash
vim zk.yaml
```
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: zk-cs
  labels:
    app: zk
spec:
  type: NodePort
  ports:
    - port: 2181
      targetPort: 2181
      name: client
      # Note: 2181 is outside the default NodePort range (30000-32767); the
      # apiserver's --service-node-port-range must be widened for this to be accepted.
      nodePort: 2181
  selector:
    app: zk
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  namespace: zk-kafka
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: zk-kafka
  name: zok
spec:
  serviceName: zk-hs
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: leolee32/kubernetes-library:kubernetes-zookeeper1.0-3.4.10
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
```
Apply zk.yaml and wait until zok-0 through zok-2 are all Running before moving on to the Kafka cluster.
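A quick way to verify the ensemble, assuming the standard ZooKeeper scripts are on the image's PATH (as in the upstream kubernetes-zookeeper image this one mirrors):

```bash
kubectl apply -f zk.yaml
kubectl get pods -n zk-kafka -w   # watch zok-0, zok-1, zok-2 come up

# Each member reports its role: expect one leader and two followers.
for i in 0 1 2; do
  kubectl exec -n zk-kafka zok-$i -- zkServer.sh status
done
```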
Part 3. Create the Kafka cluster

1. Create the Kafka PVs:

```bash
vim kafka_pv.yaml
```
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data1
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data2
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  namespace: zk-kafka
  name: kafka-data3
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.10.151
    path: /data/kafka/data3
```
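Apply the PVs. Once the StatefulSet below is created, its volumeClaimTemplates generate one PVC per pod (kafkadatadir-kafoka-0 through -2) that bind to these volumes by capacity and access mode:

```bash
kubectl apply -f kafka_pv.yaml
kubectl get pv
kubectl get pvc -n zk-kafka   # after the StatefulSet starts, each PVC should show STATUS=Bound
```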
2. Create the Kafka cluster:

```bash
vim kafka.yaml
```
```yaml
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: kafka-hs
  labels:
    app: kafka
spec:
  ports:
    - port: 1099
      name: jmx
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: v1
kind: Service
metadata:
  namespace: zk-kafka
  name: kafka-cs
  labels:
    app: kafka
spec:
  type: NodePort
  ports:
    - port: 9092
      targetPort: 9092
      name: client
      # Note: 9092 is also outside the default NodePort range (30000-32767).
      nodePort: 9092
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  namespace: zk-kafka
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  maxUnavailable: 1
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: zk-kafka
  name: kafoka
spec:
  serviceName: kafka-hs
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - kafka
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: k8skafka
          imagePullPolicy: Always
          image: leey18/k8skafka
          resources:
            requests:
              memory: "1Gi"
              cpu: "0.5"
          ports:
            - containerPort: 9092
              name: client
            - containerPort: 1099
              name: jmx
          command:
            - sh
            - -c
            - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
              --override listeners=PLAINTEXT://:9092 \
              --override zookeeper.connect=zok-0.zk-hs.zk-kafka.svc.cluster.local:2181,zok-1.zk-hs.zk-kafka.svc.cluster.local:2181,zok-2.zk-hs.zk-kafka.svc.cluster.local:2181 \
              --override log.dirs=/var/lib/kafka \
              --override auto.create.topics.enable=true \
              --override auto.leader.rebalance.enable=true \
              --override background.threads=10 \
              --override compression.type=producer \
              --override delete.topic.enable=false \
              --override leader.imbalance.check.interval.seconds=300 \
              --override leader.imbalance.per.broker.percentage=10 \
              --override log.flush.interval.messages=9223372036854775807 \
              --override log.flush.offset.checkpoint.interval.ms=60000 \
              --override log.flush.scheduler.interval.ms=9223372036854775807 \
              --override log.retention.bytes=-1 \
              --override log.retention.hours=168 \
              --override log.roll.hours=168 \
              --override log.roll.jitter.hours=0 \
              --override log.segment.bytes=1073741824 \
              --override log.segment.delete.delay.ms=60000 \
              --override message.max.bytes=1000012 \
              --override min.insync.replicas=1 \
              --override num.io.threads=8 \
              --override num.network.threads=3 \
              --override num.recovery.threads.per.data.dir=1 \
              --override num.replica.fetchers=1 \
              --override offset.metadata.max.bytes=4096 \
              --override offsets.commit.required.acks=-1 \
              --override offsets.commit.timeout.ms=5000 \
              --override offsets.load.buffer.size=5242880 \
              --override offsets.retention.check.interval.ms=600000 \
              --override offsets.retention.minutes=1440 \
              --override offsets.topic.compression.codec=0 \
              --override offsets.topic.num.partitions=50 \
              --override offsets.topic.replication.factor=3 \
              --override offsets.topic.segment.bytes=104857600 \
              --override queued.max.requests=500 \
              --override quota.consumer.default=9223372036854775807 \
              --override quota.producer.default=9223372036854775807 \
              --override replica.fetch.min.bytes=1 \
              --override replica.fetch.wait.max.ms=500 \
              --override replica.high.watermark.checkpoint.interval.ms=5000 \
              --override replica.lag.time.max.ms=10000 \
              --override replica.socket.receive.buffer.bytes=65536 \
              --override replica.socket.timeout.ms=30000 \
              --override request.timeout.ms=30000 \
              --override socket.receive.buffer.bytes=102400 \
              --override socket.request.max.bytes=104857600 \
              --override socket.send.buffer.bytes=102400 \
              --override unclean.leader.election.enable=true \
              --override zookeeper.session.timeout.ms=6000 \
              --override zookeeper.set.acl=false \
              --override broker.id.generation.enable=true \
              --override connections.max.idle.ms=600000 \
              --override controlled.shutdown.enable=true \
              --override controlled.shutdown.max.retries=3 \
              --override controlled.shutdown.retry.backoff.ms=5000 \
              --override controller.socket.timeout.ms=30000 \
              --override default.replication.factor=1 \
              --override fetch.purgatory.purge.interval.requests=1000 \
              --override group.max.session.timeout.ms=300000 \
              --override group.min.session.timeout.ms=6000 \
              --override inter.broker.protocol.version=0.10.2-IV0 \
              --override log.cleaner.backoff.ms=15000 \
              --override log.cleaner.dedupe.buffer.size=134217728 \
              --override log.cleaner.delete.retention.ms=86400000 \
              --override log.cleaner.enable=true \
              --override log.cleaner.io.buffer.load.factor=0.9 \
              --override log.cleaner.io.buffer.size=524288 \
              --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
              --override log.cleaner.min.cleanable.ratio=0.5 \
              --override log.cleaner.min.compaction.lag.ms=0 \
              --override log.cleaner.threads=1 \
              --override log.cleanup.policy=delete \
              --override log.index.interval.bytes=4096 \
              --override log.index.size.max.bytes=10485760 \
              --override log.message.timestamp.difference.max.ms=9223372036854775807 \
              --override log.message.timestamp.type=CreateTime \
              --override log.preallocate=false \
              --override log.retention.check.interval.ms=300000 \
              --override max.connections.per.ip=2147483647 \
              --override num.partitions=1 \
              --override producer.purgatory.purge.interval.requests=1000 \
              --override replica.fetch.backoff.ms=1000 \
              --override replica.fetch.max.bytes=1048576 \
              --override replica.fetch.response.max.bytes=10485760 \
              --override reserved.broker.max.id=1000"
          env:
            - name: KAFKA_HEAP_OPTS
              value: "-Xmx512M -Xms512M"
            - name: KAFKA_OPTS
              value: "-Dlogging.level=INFO"
          volumeMounts:
            - name: kafkadatadir
              mountPath: /var/lib/kafka
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9092"
  volumeClaimTemplates:
    - metadata:
        name: kafkadatadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 10Gi
```
Apply kafka.yaml and wait for the pods to start; once they are all up, the deployment is done.
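To smoke-test the cluster end to end, create a topic and push a message through it. A sketch assuming the Kafka 0.10.x console tools under /opt/kafka/bin (the same path the readiness probe uses) and a hypothetical topic named test:

```bash
kubectl apply -f kafka.yaml
kubectl get pods -n zk-kafka   # wait for kafoka-0..2 to become Ready

# Create a replicated topic (0.10.x tools register topics via ZooKeeper).
kubectl exec -n zk-kafka kafoka-0 -- /opt/kafka/bin/kafka-topics.sh \
  --zookeeper zok-0.zk-hs.zk-kafka.svc.cluster.local:2181 \
  --create --topic test --partitions 3 --replication-factor 3

# Produce one message on one broker, then consume it from another.
kubectl exec -n zk-kafka kafoka-0 -- sh -c \
  'echo hello | /opt/kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test'
kubectl exec -n zk-kafka kafoka-1 -- /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 --topic test --from-beginning --max-messages 1
```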