Kafka Data Recovery on Kubernetes: YS1000 Hands-on Series
How can you insure the data of a Kafka deployment running in k8s, strengthen its protection, and rescue it from hacker attacks, malicious tampering, and similar incidents? This article runs the test with the free edition of YS1000, a commercial Velero-based product from Jibu Tech.
1. Introduction
Kafka is a multi-partition, multi-replica distributed messaging system originally developed at LinkedIn in Scala. Deployed on a k8s platform and coordinated through ZooKeeper, it delivers high throughput, persistence, horizontal scalability, and stream-processing support, which has made it widely used in Internet big-data and financial scenarios. More and more open-source distributed processing systems, such as Cloudera, Storm, Spark, and Flink, integrate with Kafka.
Kafka's message persistence and multi-replica mechanisms greatly reduce the risk of data loss, so it can even serve as a long-term data store. How, then, do you insure the Kafka data deployed in k8s, strengthen its protection, and recover it after hacker attacks or malicious tampering? This article uses YS1000 (free edition), a commercial product based on Velero, to find out.
2. Test Environment
Kubernetes version:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
remote-master Ready master 84d v1.18.9
worker-2 Ready <none> 84d v1.18.9
Install the free edition of YS1000 with Helm; see
https://github.com/jibutech/helm-charts/blob/main/README.md
The user guide is available at https://github.com/jibutech/docs/blob/main/user_guide/%E9%93%B6%E6%95%B0%E5%A4%9A%E4%BA%91%E6%95%B0%E6%8D%AE%E7%AE%A1%E5%AE%B62.0%E7%89%88%E4%BD%BF%E7%94%A8%E8%AF%B4%E6%98%8E%E4%B9%A6.md
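For reference, a minimal install sketch follows. The chart name and target namespace match the helm list output below; the repo URL is an assumption, so follow the README above if it differs:
# Repo URL is an assumption -- check the jibutech/helm-charts README
helm repo add jibutech https://jibutech.github.io/helm-charts
helm repo update
helm install jibutech/qiming-operator --generate-name -n qiming-migration --create-namespace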
After the installation completes, check the YS1000 version:
helm list -A
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
qiming-operator-1637891889 qiming-migration 1 2021-11-26 09:58:10.840736168 +0800 CST deployed qiming-operator-2.1.0 2.1.0
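Optionally, confirm that the operator pods are all Running before continuing:
kubectl -n qiming-migration get pods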
3. Deploying ZooKeeper
In this article we use StatefulSets to create a 2-replica ZooKeeper ensemble and a 2-replica Kafka cluster in the kafka-test namespace.
Step 1: create the kafka-test namespace with kubectl.
kubectl create namespace kafka-test
Step 2: deploy ZooKeeper in kafka-test.
kubectl -n kafka-test apply -f ./zookeeper-deployment.yaml
The contents of zookeeper-deployment.yaml are shown below for reference; adjust parameters such as namespace, replicas, and storageClassName to your environment.
---
apiVersion: v1
kind: Service
metadata:
  name: zk-svc
  labels:
    app: zk-svc
spec:
  ports:
  - port: 2888
    name: server
  - port: 3888
    name: leader-election
  clusterIP: None
  selector:
    app: zk
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-cm
data:
  jvm.heap: "1G"
  tick: "2000"
  init: "10"
  sync: "5"
  client.cnxns: "60"
  snap.retain: "3"
  purge.interval: "0"
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  serviceName: zk-svc
  replicas: 2
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - zk
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: k8szk
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8szk:v3
        resources:
          requests:
            memory: "2Gi"
            cpu: "500m"
        ports:
        - containerPort: 2181
          name: client
        - containerPort: 2888
          name: server
        - containerPort: 3888
          name: leader-election
        env:
        - name: ZK_REPLICAS
          value: "2"
        - name: ZK_HEAP_SIZE
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: jvm.heap
        - name: ZK_TICK_TIME
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: tick
        - name: ZK_INIT_LIMIT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: init
        - name: ZK_MAX_CLIENT_CNXNS
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: client.cnxns
        - name: ZK_SNAP_RETAIN_COUNT
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: snap.retain
        - name: ZK_PURGE_INTERVAL
          valueFrom:
            configMapKeyRef:
              name: zk-cm
              key: purge.interval
        - name: ZK_CLIENT_PORT
          value: "2181"
        - name: ZK_SERVER_PORT
          value: "2888"
        - name: ZK_ELECTION_PORT
          value: "3888"
        command:
        - sh
        - -c
        - zkGenConfig.sh && zkServer.sh start-foreground
        readinessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 30
          timeoutSeconds: 10
        livenessProbe:
          exec:
            command:
            - "zkOk.sh"
          initialDelaySeconds: 30
          timeoutSeconds: 10
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/zookeeper
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
      storageClassName: managed-nfs-storage
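Before moving on, you can wait for both replicas to come up in one command. A minimal sketch; rollout status works here because apps/v1 StatefulSets default to the RollingUpdate update strategy:
kubectl -n kafka-test rollout status statefulset/zk --timeout=300s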
Step 3: verify the ZooKeeper ensemble and its DNS names.
for i in {0..1}; do kubectl -n kafka-test exec zk-$i -- hostname -f; done
zk-0.zk-svc.kafka-test.svc.cluster.local
zk-1.zk-svc.kafka-test.svc.cluster.local
Step 4: expose the ZooKeeper instances as external services.
for i in {0..1}; do kubectl -n kafka-test label pod zk-$i zkInst=$i; done
for i in {0..1}; do kubectl -n kafka-test expose po zk-$i --port=2181 --target-port=2181 --name=zk-$i --selector=zkInst=$i --type=NodePort; done
Step 5: once both zk pods are Ready, exec into a pod and check that ZooKeeper is serving normally.
kubectl -n kafka-test get pod
NAME READY STATUS RESTARTS AGE
zk-0 1/1 Running 0 18m
zk-1 1/1 Running 0 18m
kubectl -n kafka-test exec -it zk-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
zookeeper@zk-0:/$ echo stat|nc 127.0.0.1 2181
Zookeeper version: 3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
Clients:
/10.100.199.209:47010[1](queued=0,recved=33475,sent=33481)
/127.0.0.1:41870[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/64
Received: 46663
Sent: 46668
Connections: 2
Outstanding: 0
Zxid: 0x400000096
Mode: follower
Node count: 135
zookeeper@zk-0:/$
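The same stat probe can be run against both nodes without an interactive shell. A small sketch; nc is available inside this image, as the transcript above shows, and one node should report Mode: leader while the other reports Mode: follower:
# Print the role of each ZooKeeper node
for i in 0 1; do
  kubectl -n kafka-test exec zk-$i -- sh -c 'echo stat | nc 127.0.0.1 2181' | grep Mode
done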
4. Deploying Kafka
Once the ZooKeeper service is healthy, deploy Kafka.
Step 1: deploy Kafka in kafka-test.
kubectl -n kafka-test apply -f ./kafka-deployment.yaml
The contents of kafka-deployment.yaml are shown below for reference; adjust parameters such as namespace, replicas, and storageClassName to your environment.
---
apiVersion: v1
kind: Service
metadata:
  name: kafka-svc
  labels:
    app: kafka
spec:
  ports:
  - port: 9093
    name: server
  clusterIP: None
  selector:
    app: kafka
---
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kafka-pdb
spec:
  selector:
    matchLabels:
      app: kafka
  minAvailable: 2
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-svc
  replicas: 2
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: "app"
                operator: In
                values:
                - kafka
            topologyKey: "kubernetes.io/hostname"
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: "app"
                  operator: In
                  values:
                  - zk
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 300
      containers:
      - name: k8skafka
        imagePullPolicy: IfNotPresent
        image: registry.cn-hangzhou.aliyuncs.com/jaxzhai/k8skafka:v1
        resources:
          requests:
            memory: "1Gi"
            cpu: 500m
        ports:
        - containerPort: 9093
          name: server
        command:
        - sh
        - -c
        - "exec kafka-server-start.sh /opt/kafka/config/server.properties --override broker.id=${HOSTNAME##*-} \
          --override listeners=PLAINTEXT://:9093 \
          --override zookeeper.connect=zk-0.zk-svc.kafka-test.svc.cluster.local:2181,zk-1.zk-svc.kafka-test.svc.cluster.local:2181 \
          --override log.dirs=/var/lib/kafka \
          --override auto.create.topics.enable=true \
          --override auto.leader.rebalance.enable=true \
          --override background.threads=10 \
          --override compression.type=producer \
          --override delete.topic.enable=false \
          --override leader.imbalance.check.interval.seconds=300 \
          --override leader.imbalance.per.broker.percentage=10 \
          --override log.flush.interval.messages=10 \
          --override log.flush.interval.ms=100 \
          --override log.flush.offset.checkpoint.interval.ms=6000 \
          --override log.flush.scheduler.interval.ms=600 \
          --override log.retention.bytes=-1 \
          --override log.retention.hours=168 \
          --override log.roll.hours=168 \
          --override log.roll.jitter.hours=0 \
          --override log.segment.bytes=1073741824 \
          --override log.segment.delete.delay.ms=60000 \
          --override message.max.bytes=1000012 \
          --override min.insync.replicas=1 \
          --override num.io.threads=8 \
          --override num.network.threads=3 \
          --override num.recovery.threads.per.data.dir=1 \
          --override num.replica.fetchers=1 \
          --override offset.metadata.max.bytes=4096 \
          --override offsets.commit.required.acks=-1 \
          --override offsets.commit.timeout.ms=5000 \
          --override offsets.load.buffer.size=5242880 \
          --override offsets.retention.check.interval.ms=600000 \
          --override offsets.retention.minutes=1440 \
          --override offsets.topic.compression.codec=0 \
          --override offsets.topic.num.partitions=50 \
          --override offsets.topic.replication.factor=3 \
          --override offsets.topic.segment.bytes=104857600 \
          --override queued.max.requests=500 \
          --override quota.consumer.default=9223372036854775807 \
          --override quota.producer.default=9223372036854775807 \
          --override replica.fetch.min.bytes=1 \
          --override replica.fetch.wait.max.ms=500 \
          --override replica.high.watermark.checkpoint.interval.ms=5000 \
          --override replica.lag.time.max.ms=10000 \
          --override replica.socket.receive.buffer.bytes=65536 \
          --override replica.socket.timeout.ms=30000 \
          --override request.timeout.ms=30000 \
          --override socket.receive.buffer.bytes=102400 \
          --override socket.request.max.bytes=104857600 \
          --override socket.send.buffer.bytes=102400 \
          --override unclean.leader.election.enable=true \
          --override zookeeper.session.timeout.ms=6000 \
          --override zookeeper.set.acl=false \
          --override broker.id.generation.enable=true \
          --override connections.max.idle.ms=600000 \
          --override controlled.shutdown.enable=true \
          --override controlled.shutdown.max.retries=3 \
          --override controlled.shutdown.retry.backoff.ms=5000 \
          --override controller.socket.timeout.ms=30000 \
          --override default.replication.factor=1 \
          --override fetch.purgatory.purge.interval.requests=1000 \
          --override group.max.session.timeout.ms=300000 \
          --override group.min.session.timeout.ms=6000 \
          --override inter.broker.protocol.version=0.10.2-IV0 \
          --override log.cleaner.backoff.ms=15000 \
          --override log.cleaner.dedupe.buffer.size=134217728 \
          --override log.cleaner.delete.retention.ms=86400000 \
          --override log.cleaner.enable=true \
          --override log.cleaner.io.buffer.load.factor=0.9 \
          --override log.cleaner.io.buffer.size=524288 \
          --override log.cleaner.io.max.bytes.per.second=1.7976931348623157E308 \
          --override log.cleaner.min.cleanable.ratio=0.5 \
          --override log.cleaner.min.compaction.lag.ms=0 \
          --override log.cleaner.threads=1 \
          --override log.cleanup.policy=delete \
          --override log.index.interval.bytes=4096 \
          --override log.index.size.max.bytes=10485760 \
          --override log.message.timestamp.difference.max.ms=9223372036854775807 \
          --override log.message.timestamp.type=CreateTime \
          --override log.preallocate=false \
          --override log.retention.check.interval.ms=300000 \
          --override max.connections.per.ip=2147483647 \
          --override num.partitions=1 \
          --override producer.purgatory.purge.interval.requests=1000 \
          --override replica.fetch.backoff.ms=1000 \
          --override replica.fetch.max.bytes=1048576 \
          --override replica.fetch.response.max.bytes=10485760 \
          --override reserved.broker.max.id=1000 "
        env:
        - name: KAFKA_HEAP_OPTS
          value: "-Xmx512M -Xms512M"
        - name: KAFKA_OPTS
          value: "-Dlogging.level=INFO"
        volumeMounts:
        - name: datadir
          mountPath: /var/lib/kafka
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
        livenessProbe:
          initialDelaySeconds: 10
          timeoutSeconds: 5
          exec:
            command:
            - sh
            - -c
            - "/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server=localhost:9093"
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
  volumeClaimTemplates:
  - metadata:
      name: datadir
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 5Gi
      storageClassName: managed-nfs-storage
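Note that the volumeClaimTemplates above give every replica its own PVC, named <template>-<pod> (datadir-kafka-0, datadir-zk-0, and so on). These PVCs hold the actual Kafka log segments and ZooKeeper data, and they are what the backup in section 5 must capture:
kubectl -n kafka-test get pvc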
Step 2: expose the Kafka brokers as external services.
for i in {0..1}; do kubectl -n kafka-test label pod kafka-$i kafkaInst=$i; done
for i in {0..1}; do kubectl -n kafka-test expose po kafka-$i --port=9093 --target-port=9093 --name=kafka-$i --selector=kafkaInst=$i --type=NodePort; done
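This per-instance exposure trick (label each pod, then expose it with a selector matching only that label) yields one NodePort service per broker; the assigned node ports can be read back with:
kubectl -n kafka-test get svc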
Step 3: once both kafka pods are Ready, exec into each of them in turn and verify that producing and consuming messages works.
kubectl -n kafka-test get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kafka-0 1/1 Running 0 18s 10.100.199.224 remote-master <none> <none>
kafka-1 1/1 Running 0 10s 10.100.133.219 worker-2 <none> <none>
zk-0 1/1 Running 0 27m 10.100.199.230 remote-master <none> <none>
zk-1 1/1 Running 0 27m 10.100.133.254 worker-2 <none> <none>
kubectl -n kafka-test exec -it kafka-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
kafka@kafka-0:/$ kafka-console-producer.sh --topic test --broker-list localhost:9093
aaaa
bbb
cc
dd
eee
fffggg^H^H^H
hhh
iii
jjj
kkk
lll
mmm
nnn
^Ckafka@kafka-0:/var/lib/kafka$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
aaaa
bbb
cc
dd
eee
fffggg
hhh
iii
jjj
kkk
lll
mmm
nnn
^CProcessed a total of 13 messages
kubectl -n kafka-test exec -it kafka-1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
kafka@kafka-1:/$ kafka-console-producer.sh --topic test --broker-list localhost:9093
ooooo
ppppp
qqq
rrrr
sss
ttt
^Ckafka@kafka-1:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
aaaa
bbb
cc
dd
eee
fffggg
hhh
iii
jjj
kkk
lll
mmm
nnn
ooooo
ppppp
qqq
rrrr
sss
ttt
^CProcessed a total of 19 messages
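For scripted verification, the same produce/consume round trip can also be run non-interactively. A sketch, assuming (as the transcripts above confirm) that the Kafka CLI tools are on the PATH inside the image; --max-messages is set to the expected total (the 19 messages above plus the new one) so the consumer exits on its own:
kubectl -n kafka-test exec kafka-0 -- sh -c 'echo smoke-test | kafka-console-producer.sh --topic test --broker-list localhost:9093'
kubectl -n kafka-test exec kafka-1 -- sh -c 'kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning --max-messages 20'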
5. Backing Up and Restoring Kafka
Step 1: run the following commands to obtain the console URL and login token, then log in to YS1000 in a browser.
export NODE_PORT=$(kubectl get --namespace qiming-migration -o jsonpath="{.spec.ports[0].nodePort}" services ui-service-default )
export NODE_IP=$(kubectl get nodes --namespace qiming-migration -o jsonpath="{.items[0].status.addresses[0].address}")
echo http://$NODE_IP:$NODE_PORT
export SECRET=$(kubectl -n qiming-migration get secret | (grep qiming-operator |grep -v helm || echo "$_") | awk '{print $1}')
export TOKEN=$(kubectl -n qiming-migration describe secrets $SECRET |grep token: | awk '{print $2}')
echo $TOKEN
Step 2: click "Cluster Application Backup" in the left navigation, then "Create Application Backup", and in the wizard select the kafka-test namespace to create an on-demand backup plan.
Once the backup plan has been created, click "Backup" on its right to run a backup.
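Because YS1000 builds on Velero, the backup's progress can usually also be watched through the underlying Velero resources. This is an assumption about the free edition's internals; the console remains the supported interface:
kubectl get backups.velero.io -A
kubectl get podvolumebackups.velero.io -A   # per-volume progress of the PV data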
Step 3: after the backup job succeeds, delete the kafka-test namespace from the cluster.
kubectl delete ns kafka-test
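Namespace deletion is asynchronous, so make sure kafka-test (including its PVCs) is completely gone before starting the restore, for example:
while kubectl get ns kafka-test >/dev/null 2>&1; do sleep 5; done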
Step 4: in the YS1000 console, click "Cluster Application Restore" in the left navigation, then "Create Application Restore Task", and in the wizard select the backup created earlier.
Once the restore plan has been created, click "Activate" on its right to run a restore.
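As with the backup, the restore can be observed through the underlying Velero resources (again assuming YS1000's Velero internals):
kubectl get restores.velero.io -A
kubectl get podvolumerestores.velero.io -A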
Step 5: after the restore job succeeds, check that the pods in kafka-test are back, then exec into a kafka pod and verify the data. The handful of restarts visible below is expected, most likely because the brokers come up before the restored ZooKeeper ensemble is reachable.
kubectl -n kafka-test get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kafka-0 1/1 Running 3 5m5s 10.100.199.209 remote-master <none> <none>
kafka-1 1/1 Running 4 5m5s 10.100.133.243 worker-2 <none> <none>
zk-0 1/1 Running 0 5m5s 10.100.199.236 remote-master <none> <none>
zk-1 1/1 Running 0 5m4s 10.100.133.199 worker-2 <none> <none>
kubectl -n kafka-test exec -it kafka-1 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.
kafka@kafka-1:/$ kafka-console-consumer.sh --topic test --bootstrap-server localhost:9093 --from-beginning
aaaa
bbb
cc
dd
eee
fffggg
hhh
iii
jjj
kkk
lll
mmm
nnn
ooooo
ppppp
qqq
rrrr
sss
ttt
^CProcessed a total of 19 messages
6. Summary
As the cloud-native ecosystem matures, more and more enterprises are moving stateful workloads onto k8s clusters. Using the YS1000 multi-cloud data management platform, this article simplified the backup and restore of a multi-replica Kafka and ZooKeeper deployment running in containers and successfully recovered the data on Kafka's persistent volumes. This kind of tooling helps enterprises manage backup and restore for containerized applications efficiently, reduces the operational cost of the container platform, and supports cloud-native disaster recovery, migration, and DevOps goals, improving the resilience and reliability of cloud-native applications.