1. Environment Preparation

For building the Zookeeper Docker image, see: https://blog.csdn.net/Happy_Sunshine_Boy/article/details/107249542
When building the Zookeeper image with the method above, first edit the Zookeeper tarball to enable the Prometheus metrics port:

vim apache-zookeeper-3.6.1-bin/conf/zoo_sample.cfg

metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
metricsProvider.httpPort=7000
metricsProvider.exportJvmInfo=true
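After rebuilding the image with this configuration, a quick sanity check is to request the metrics endpoint that the Prometheus provider serves on port 7000. A minimal check from a host that can reach the container (curl availability and the <container-ip> placeholder are assumptions):

# port 7000 is the metricsProvider.httpPort set above; expect Prometheus-format metric lines
curl -s http://<container-ip>:7000/metrics | head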


2. Deploying Zookeeper on k8s

Write zookeeper.yaml to deploy the Zookeeper component in the k8s cluster and expose the metrics port 7000:

---
apiVersion: v1
kind: Namespace
metadata:
  name: zookeeper
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper
  namespace: zookeeper
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zookeeper
  minReadySeconds: 1
  progressDeadlineSeconds: 60
  revisionHistoryLimit: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  template:
    metadata:
      name: zookeeper
      labels:
        app: zookeeper
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.6.1
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 7000       # expose the metrics port 7000
          resources:
            limits:
              cpu: 10m                  # CPU limit (maximum)
              memory: 500Mi             # memory limit (maximum)
            requests:
              cpu: 10m                  # CPU request (minimum guaranteed)
              memory: 400Mi             # memory request (minimum guaranteed)
          volumeMounts:                 # mount the host timezone file so the pod's clock matches the host
            - name: tz-config
              mountPath: /etc/localtime
      volumes:
        - name: tz-config
          hostPath:
            path: /usr/share/zoneinfo/Asia/Shanghai
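To roll this out, apply the manifest and confirm the pod starts (the file name zookeeper.yaml matches the step above):

# create the namespace and Deployment, then watch the pod come up
kubectl apply -f zookeeper.yaml
kubectl get pods -n zookeeper -w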


3. Deploying the Zookeeper Service on k8s

Write zookeeper-svc.yaml to deploy the Zookeeper Service through which the metrics are accessed:

---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
  labels:
    app: zookeeper
  namespace: zookeeper
spec:
  type: NodePort
  ports:
    - port: 2181
      name: "2181"
      nodePort: 30005
    - port: 7000
      name: "7000"
      nodePort: 30007
  selector:
    app: zookeeper
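Apply the Service and verify the NodePort mapping; with a reachable node IP, the metrics should now be available on NodePort 30007 (<node-ip> below is a placeholder):

kubectl apply -f zookeeper-svc.yaml
kubectl get svc -n zookeeper
# 30007 is the NodePort mapped to the metrics port 7000 above
curl -s http://<node-ip>:30007/metrics | head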


4. ServiceMonitor

  1. Write prometheus-ServiceMonitor-zookeeper.yaml.
  2. Through this file, Prometheus reads the metrics exposed by the Service created in the previous step.
  3. The selector here must match the Service created in the previous step.
  4. The endpoints port must be the named port defined in the Service; a numeric port cannot be used, only the string name.
  5. Make sure the Prometheus object's serviceMonitorSelector and serviceMonitorNamespaceSelector match the ServiceMonitor created in this step (a quick check is shown after the manifest below).
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: zookeeper
  labels:
    team: zookeeper
  namespace: monitoring
spec:
  endpoints:
    - port: "7000"
  namespaceSelector:
    matchNames:
      - zookeeper
  selector:
    matchLabels:
      app: zookeeper
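Apply the ServiceMonitor and confirm the Prometheus object's selectors pick it up; the Prometheus object name k8s below is the kube-prometheus default and may differ in your cluster:

kubectl apply -f prometheus-ServiceMonitor-zookeeper.yaml
kubectl get servicemonitor zookeeper -n monitoring
# inspect the selectors on the Prometheus object ("k8s" is the kube-prometheus default name)
kubectl get prometheus k8s -n monitoring -o yaml | grep -i -A 3 servicemonitor

If everything matches, the zookeeper target should show up on Prometheus's Status -> Targets page within a scrape interval or two.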


5. Adjusting Prometheus Permissions

Write clusterRolebinding.yaml to grant Prometheus permission to access metrics in other namespaces; otherwise the ServiceMonitor in the monitoring namespace cannot scrape the zookeeper service in the zookeeper namespace.

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-k8s
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: prometheus-k8s
    namespace: monitoring
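Binding cluster-admin works, but it grants far more than Prometheus needs for scraping; a narrower ClusterRole covering only the resources used by service discovery is a common alternative. A minimal sketch (the name prometheus-k8s-discovery is illustrative; bind it in place of cluster-admin above):

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus-k8s-discovery      # illustrative name
rules:
  - apiGroups: [""]
    resources: ["services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]

Whichever role is used, the binding's subject stays the prometheus-k8s ServiceAccount in the monitoring namespace.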


6. Results

6.1 Prometheus

(screenshot: the zookeeper scrape result shown in Prometheus)

6.2 Grafana

Use Grafana dashboard template 10465.
When importing this template, point it at your data source and fill in the missing monitoring items; otherwise it only shows demo data.
