Installing Kafka on Kubernetes with the Helm 3 Package Manager
1. Install the Kubernetes package manager Helm 3
Go to GitHub (helm/helm: The Kubernetes Package Manager) and download helm v3.9.0. Extract helm-v3.9.0-linux-amd64.tar.gz to an install location of your choice, e.g. /usr/local/helm, then create a symlink:
ln -s /usr/local/helm/helm /usr/bin/helm
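The argument order of ln -s is easy to get backwards (target first, link name second). A minimal sketch of the step, run against throwaway temp directories standing in for /usr/local/helm and /usr/bin so it needs no root:

```shell
# Illustrates 'ln -s TARGET LINK_NAME' order; the temp dirs are stand-ins
# for /usr/local/helm and /usr/bin on a real install.
install_dir="$(mktemp -d)"   # stands in for /usr/local/helm
bin_dir="$(mktemp -d)"       # stands in for /usr/bin
printf '#!/bin/sh\necho helm-ok\n' > "$install_dir/helm"   # fake helm binary
chmod +x "$install_dir/helm"
ln -s "$install_dir/helm" "$bin_dir/helm"   # target first, link name second
"$bin_dir/helm"   # runs the binary through the link, prints helm-ok
```

After the real step, `helm` resolves from anywhere on the PATH.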
To install helm plugins, find the plugin directory with the command below, then extract each plugin into it:
helm env
Add the bitnami charts repository:
helm repo add bitnami https://charts.bitnami.com/bitnami
2. Install Kafka
Searching the official Artifact Hub repository for bitnami/kafka and following its instructions produced a download error, and even when the install went through, the pod stayed stuck in Pending. After some trial and error, the steps below deploy Kafka on a k8s cluster successfully.
Download kafka-17.2.6.tgz and extract it:
wget https://charts.bitnami.com/bitnami/kafka-17.2.6.tgz && tar -zxf kafka-17.2.6.tgz
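The fetch needs network access, but the extraction half can be sketched offline against a stand-in archive with the same top-level kafka/ layout (the Chart.yaml content below is a placeholder, not the real chart):

```shell
# Offline sketch of the extract step: build a stand-in kafka-17.2.6.tgz,
# then unpack it with the same tar flags as the real command.
work="$(mktemp -d)"
cd "$work"
mkdir kafka
echo 'name: kafka' > kafka/Chart.yaml   # placeholder, not the real chart
tar -czf kafka-17.2.6.tgz kafka        # stand-in archive
rm -r kafka
tar -zxf kafka-17.2.6.tgz              # same flags as the article's command
ls kafka/Chart.yaml                    # the chart lands in ./kafka/
```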
Edit values.yaml in the extracted directory:
podSecurityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001  # add this line so the pod runs as user 1001
Add two persistent volumes (PVs), using the following YAML files:
kafka-pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv-volume
  labels:
    type: local
spec:
  #storageClassName: manual
  claimRef:
    namespace: default
    # name of the PVC created by the kafka install; get it with kubectl get pvc -A
    name: data-mykafka-0
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/local/kafka/data"
kafka-zookeeper-pv-volume.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-zookeeper-pv-volume
  labels:
    type: local
spec:
  #storageClassName: manual
  claimRef:
    namespace: default
    # name of the PVC created by the kafka install; get it with kubectl get pvc -A
    name: data-mykafka-zookeeper-0
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/usr/local/kafka/zookeeper/data"
Create the persistent volumes:
kubectl create -f kafka-pv-volume.yaml
kubectl create -f kafka-zookeeper-pv-volume.yaml
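The claimRef fields above pre-bind each PV to the PVC the chart will create, so the result can be verified once the release is installed. A short checklist, kept in a shell variable and printed here since the kubectl calls themselves need a live cluster (the PVC names are the ones from this install and may differ on yours):

```shell
# Checklist for verifying the PV pre-binding; the kubectl lines are meant
# to be run against the cluster, they are only printed here.
checks='kubectl get pvc -A   # PVC names must match each claimRef.name
kubectl get pv        # both PVs should end up with STATUS Bound'
printf '%s\n' "$checks"
```

If a PV stays Available, the claimRef name or namespace does not match the PVC the chart created.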
Give ownership of the mapped host directories to user 1001:
chown -R 1001:1001 /usr/local/kafka/data/
chown -R 1001:1001 /usr/local/kafka/zookeeper/
Finally, run the install; Kafka now comes up on the k8s cluster:
[root@k8s-master charts]# helm install mykafka kafka
Release "mykafka" has been upgraded. Happy Helming!
NAME: mykafka
LAST DEPLOYED: Tue Jun 14 19:07:25 2022
NAMESPACE: default
STATUS: deployed
REVISION: 5
TEST SUITE: None
NOTES:
CHART NAME: kafka
CHART VERSION: 17.2.6
APP VERSION: 3.2.0

** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    mykafka.default.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    mykafka-0.mykafka-headless.default.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run mykafka-client --restart='Never' --image docker.io/bitnami/kafka:3.2.0-debian-11-r3 --namespace default --command -- sleep infinity
    kubectl exec --tty -i mykafka-client --namespace default -- bash

    PRODUCER:
        kafka-console-producer.sh \
            --broker-list mykafka-0.mykafka-headless.default.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            --bootstrap-server mykafka.default.svc.cluster.local:9092 \
            --topic test \
            --from-beginning
Summary: bitnami/kafka does not create persistent volumes, so you have to create the PVs Kafka needs up front and give ownership of the host directories they map to the matching user. When kubectl describe pods shows a status that never changes, run kubectl logs [pod_name] to find out what is actually keeping the pod stuck.
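The describe/logs loop from the summary can be wrapped in a tiny helper. This is a hypothetical convenience (the helper and the pod name are illustrations, not part of the chart), with a dry-run default so it just prints the commands when no cluster is available:

```shell
# Hypothetical triage helper for a stuck pod: prints the diagnostic commands
# by default; set RUN=1 to actually execute them against a live cluster.
triage() {
  for cmd in "kubectl describe pod $1" "kubectl logs $1"; do
    if [ "${RUN:-0}" = "1" ]; then
      $cmd            # run for real on a cluster
    else
      echo "$cmd"     # dry run: show what would be executed
    fi
  done
}
triage mykafka-0      # prints the two commands for the example pod
```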