Deploying Kafka on Kubernetes with Helm
Author: 青蛙小白. Original post:
https://blog.frognew.com/2019/07/use-helm-install-kafka-on-k8s.html
1. Configure the Helm chart repo
The Kafka Helm chart is still in incubation, so the incubator repo must be added before using it:

helm repo add incubator http://storage.googleapis.com/kubernetes-charts-incubator
If you are inside mainland China, point the repos at the mirrors provided by Azure instead:
helm repo add stable http://mirror.azure.cn/kubernetes/charts
helm repo add incubator http://mirror.azure.cn/kubernetes/charts-incubator

helm repo list
NAME       URL
stable     http://mirror.azure.cn/kubernetes/charts
local      http://127.0.0.1:8879/charts
incubator  http://mirror.azure.cn/kubernetes/charts-incubator
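Before going further, it can be worth refreshing the repo index and confirming the kafka chart is visible (an optional check, not in the original post; helm search is the Helm 2 form, matching the helm install --name syntax used later):

helm repo update
helm search kafka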
2. Create Local PVs for Kafka and ZooKeeper
2.1 Create the Local PVs for Kafka
The deployment here targets a local test environment, so Local Persistent Volumes are chosen for storage. First, create a StorageClass for local storage on the k8s cluster, local-storage.yaml:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Retain
kubectl apply -f local-storage.yaml
storageclass.storage.k8s.io/local-storage created
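volumeBindingMode: WaitForFirstConsumer delays binding a PVC to a PV until a pod using the claim is scheduled, which is what lets the node affinity on the Local PVs below steer pod placement. As an optional sanity check (not in the original post), confirm the StorageClass exists:

kubectl get storageclass local-storage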
Three Kafka broker nodes will be deployed across the two k8s nodes node1 and node2, so first create the Local PVs for these three brokers. The PV names mirror the PVC names the chart's StatefulSet will generate (<volumeClaimTemplate>-<statefulset>-<ordinal>, e.g. datadir-kafka-0), which makes the bindings easy to follow. kafka-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: datadir-kafka-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/data-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f kafka-local-pv.yaml
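Optionally, confirm that the three PVs registered correctly and are in the Available state (this check is not in the original post):

kubectl get pv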
To match the Local PVs created above, create the directory /home/kafka/data-0 on node1, and the directories /home/kafka/data-1 and /home/kafka/data-2 on node2.
# node1
mkdir -p /home/kafka/data-0

# node2
mkdir -p /home/kafka/data-1
mkdir -p /home/kafka/data-2
2.2 Create the Local PVs for ZooKeeper
Three ZooKeeper nodes will likewise be deployed on node1 and node2, so first create the Local PVs for these three ZooKeeper nodes, zookeeper-local-pv.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-0
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-0
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-kafka-zookeeper-2
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /home/kafka/zkdata-2
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node2
kubectl apply -f zookeeper-local-pv.yaml
To match the Local PVs created above, create the directory /home/kafka/zkdata-0 on node1, and the directories /home/kafka/zkdata-1 and /home/kafka/zkdata-2 on node2.
# node1
mkdir -p /home/kafka/zkdata-0

# node2
mkdir -p /home/kafka/zkdata-1
mkdir -p /home/kafka/zkdata-2
3. Deploy Kafka
Write the values file for the kafka chart, kafka-values.yaml:
replicas: 3
tolerations:
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: NoSchedule
- key: node-role.kubernetes.io/master
  operator: Exists
  effect: PreferNoSchedule
persistence:
  storageClass: local-storage
  size: 5Gi
zookeeper:
  persistence:
    enabled: true
    storageClass: local-storage
    size: 5Gi
  replicaCount: 3
  image:
    repository: gcr.azk8s.cn/google_samples/k8szk
  tolerations:
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: PreferNoSchedule
- The installation needs Docker images such as gcr.io/google_samples/k8szk:v3, which are hard to pull from inside mainland China; switch to Azure's GCR Proxy Cache, gcr.azk8s.cn, which is why zookeeper.image.repository above points there.
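The tolerations in the values file let the Kafka and ZooKeeper pods schedule onto master nodes as well, which helps in a small two-node test cluster. To check whether a node actually carries the master taint (an optional check, not in the original post):

kubectl describe node node1 | grep Taints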
Now install the chart:

helm install --name kafka --namespace kafka -f kafka-values.yaml incubator/kafka
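Since this is Helm 2 (note the --name flag), the release can also be inspected afterwards with, for example:

helm status kafka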
Finally, confirm that all pods are in the Running state:
kubectl get pod -n kafka -o wide
NAME                READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES
kafka-0             1/1     Running   0          12m     10.244.0.61   node1   <none>           <none>
kafka-1             1/1     Running   0          6m3s    10.244.1.12   node2   <none>           <none>
kafka-2             1/1     Running   0          2m26s   10.244.1.13   node2   <none>           <none>
kafka-zookeeper-0   1/1     Running   0          12m     10.244.1.9    node2   <none>           <none>
kafka-zookeeper-1   1/1     Running   0          11m     10.244.1.10   node2   <none>           <none>
kafka-zookeeper-2   1/1     Running   0          11m     10.244.1.11   node2   <none>           <none>

kubectl get statefulset -n kafka
NAME              READY   AGE
kafka             3/3     22m
kafka-zookeeper   3/3     22m

kubectl get service -n kafka
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
kafka                      ClusterIP   10.102.8.192    <none>        9092/TCP                     31m
kafka-headless             ClusterIP   None            <none>        9092/TCP                     31m
kafka-zookeeper            ClusterIP   10.110.43.203   <none>        2181/TCP                     31m
kafka-zookeeper-headless   ClusterIP   None            <none>        2181/TCP,3888/TCP,2888/TCP   31m
As you can see, the current kafka Helm chart deploys Kafka and ZooKeeper as StatefulSets, and through the Local PVs, kafka-0 was scheduled onto node1 while kafka-1 and kafka-2 were scheduled onto node2.
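You can also check which Local PV each pod's PVC ended up bound to (an optional check, not in the original post):

kubectl get pvc -n kafka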
4. Post-install testing
Run the following client Pod inside the k8s cluster to test access to the kafka brokers, testclient.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: testclient
  namespace: kafka
spec:
  containers:
  - name: kafka
    image: confluentinc/cp-kafka:5.0.1
    command:
    - sh
    - -c
    - "exec tail -f /dev/null"
Create the pod and exec into the testclient container:
kubectl apply -f testclient.yaml
kubectl -n kafka exec testclient -it sh
List the Kafka-related commands:
ls /usr/bin/ | grep kafka
kafka-acls
kafka-broker-api-versions
kafka-configs
kafka-console-consumer
kafka-console-producer
kafka-consumer-groups
kafka-consumer-perf-test
kafka-delegation-tokens
kafka-delete-records
kafka-dump-log
kafka-log-dirs
kafka-mirror-maker
kafka-preferred-replica-election
kafka-producer-perf-test
kafka-reassign-partitions
kafka-replica-verification
kafka-run-class
kafka-server-start
kafka-server-stop
kafka-streams-application-reset
kafka-topics
kafka-verifiable-consumer
kafka-verifiable-producer
Create a topic named test1:
kafka-topics --zookeeper kafka-zookeeper:2181 --topic test1 --create --partitions 1 --replication-factor 1
List the topics:
kafka-topics --zookeeper kafka-zookeeper:2181 --list
test1
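Beyond what the original post shows, a minimal produce/consume round trip can be run from the same testclient pod, using the kafka service on port 9092 listed earlier:

# shell 1: consume messages from test1 (start this first)
kafka-console-consumer --bootstrap-server kafka:9092 --topic test1 --from-beginning

# shell 2: type lines and press Enter to send them to test1
kafka-console-producer --broker-list kafka:9092 --topic test1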
5. Summary
The chart incubator/kafka from the official Helm repositories currently deploys Kafka on k8s using the image confluentinc/cp-kafka:5.0.1, i.e. the Kafka distribution provided by Confluent. Confluent Platform Kafka (CP Kafka for short) offers some advanced features that Apache Kafka lacks, such as cross-datacenter replication, a Schema Registry, and cluster monitoring tools. CP Kafka currently comes in a free edition and an enterprise edition; besides the standard Apache Kafka components, the free edition also includes the Schema Registry and the REST Proxy.
Confluent Platform and Apache Kafka Compatibility gives the mapping between Confluent Kafka and Apache Kafka versions; it shows that the CP 5.0.1 installed here corresponds to Apache Kafka 2.0.x.
Exec into one of the broker containers and check:
ls /usr/share/java/kafka | grep kafka
kafka-clients-2.0.1-cp1.jar
kafka-log4j-appender-2.0.1-cp1.jar
kafka-streams-2.0.1-cp1.jar
kafka-streams-examples-2.0.1-cp1.jar
kafka-streams-scala_2.11-2.0.1-cp1.jar
kafka-streams-test-utils-2.0.1-cp1.jar
kafka-tools-2.0.1-cp1.jar
kafka.jar
kafka_2.11-2.0.1-cp1-javadoc.jar
kafka_2.11-2.0.1-cp1-scaladoc.jar
kafka_2.11-2.0.1-cp1-sources.jar
kafka_2.11-2.0.1-cp1-test-sources.jar
kafka_2.11-2.0.1-cp1-test.jar
kafka_2.11-2.0.1-cp1.jar
As you can see, the corresponding Apache Kafka version number is 2.11-2.0.1: the leading 2.11 is the Scala compiler version (Kafka's server-side code is written in Scala), and the trailing 2.0.1 is the Kafka version. That is, CP Kafka 5.0.1 is based on Apache Kafka 2.0.1.
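As a cross-check (not in the original post), the broker version can also be queried from the testclient pod with the kafka-broker-api-versions tool seen in the /usr/bin listing above:

kafka-broker-api-versions --bootstrap-server kafka:9092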
References
- Zookeeper Helm Chart
- Kafka Helm Chart
- GCR Proxy Cache Help
- Confluent Platform and Apache Kafka Compatibility