Preface:

I had previously deployed Prometheus and its various exporters with standalone Docker, and recently I decided to try deploying Prometheus in a Kubernetes cluster. Having done it with Docker before, my rough plan is: create the Prometheus configuration file as a ConfigMap and mount it into the Pod (or use a hostPath), write the Prometheus service itself as a YAML manifest, and deploy node-exporter as a DaemonSet. Every worker node needs to be monitored, and a DaemonSet ensures that one Pod runs on each node in the cluster. That's the general idea; now let's put it into practice!

1. First, create the ConfigMap: we turn prometheus.yml into a ConfigMap. If you don't have a config file handy, you can pull the image, start a throwaway container, and copy the default config out of it; since I still had a config file from before, I create the ConfigMap from it directly.

[root@k8s-master01@15:21 ~]#☠️  ls
node-exporter.yaml  prometheus-service.yaml  prometheus.yml
[root@k8s-master01@15:21 ~]#☠️  kubectl create cm cm-1 --from-file=./prometheus.yml
[root@k8s-master01@15:22 ~]#☠️  kubectl get cm
NAME               DATA   AGE
cm-1               1      69m
kube-root-ca.crt   1      70d
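
If you want to double-check that the file made it into the ConfigMap intact, you can dump it back out (a quick sanity check, not strictly required):

kubectl describe cm cm-1
kubectl get cm cm-1 -o yaml    # prometheus.yml appears under .data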

[root@k8s-master01@15:22 ~]#☠️  cat prometheus.yml      # the initial Prometheus configuration file
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
#    - static_configs:
#        - targets:
#            - 192.168.1.129:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
#  - "/usr/local/*.yml"
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ["localhost:9090"]
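
Before baking a config into a ConfigMap, it's worth validating the syntax. One way to do this, assuming Docker is available on the host, is to run promtool from the same image (a sketch, not required for the steps below):

docker run --rm -v $(pwd)/prometheus.yml:/tmp/prometheus.yml \
  --entrypoint promtool prom/prometheus check config /tmp/prometheus.yml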


2. Next, we write the Prometheus YAML manifest.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: prometheus-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
        - name: prometheus
          image: prom/prometheus
          ports:
            - containerPort: 9090
          volumeMounts:
            - mountPath: /etc/prometheus/    # mount path inside the container; the image reads /etc/prometheus/prometheus.yml by default, so the path must be lowercase
              name: cm-1                     # must match the volume name below
      volumes:
        - name: cm-1      # arbitrary; any name works
          configMap:
            name: cm-1      # references the ConfigMap we just created

---
apiVersion: v1
kind: Service
metadata:
  name: prometheus-service
spec:
  type: NodePort
  selector:
    app: prometheus
  ports:
    - protocol: TCP
      port: 9090
      targetPort: 9090
      nodePort: 30000
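
Note that we are relying on the image's default startup flag --config.file=/etc/prometheus/prometheus.yml, which is why mounting the ConfigMap at /etc/prometheus/ is enough. If you prefer to be explicit, or want the HTTP reload endpoint used later in this post, a sketch of the extra container args (--web.enable-lifecycle is optional):

          args:
            - --config.file=/etc/prometheus/prometheus.yml
            - --web.enable-lifecycle    # enables POST /-/reload to reload the config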

Next we apply the manifest directly. As shown below, the Pod is already Running and the Service has been created as well.

To access the web UI, open <node-IP>:30000 in a browser.

[root@k8s-master01@15:24 ~]#☠️  kubectl apply -f prometheus-service.yaml
[root@k8s-master01@15:38 ~]#☠️  kubectl get pod -owide
NAME                                     READY   STATUS    RESTARTS   AGE   IP             NODE           NOMINATED NODE   READINESS GATES
prometheus-deployment-86b976cb98-695l9   1/1     Running   0          77m   10.244.79.69   k8s-worker01   <none>           <none>
[root@k8s-master01@15:38 ~]#☠️  kubectl get svc
NAME                 TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
kubernetes           ClusterIP   10.96.0.1     <none>        443/TCP          70d
prometheus-service   NodePort    10.96.86.93   <none>        9090:30000/TCP   77m
[root@k8s-master01@15:38 ~]#☠️
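
To confirm the server is actually up behind the NodePort, you can hit Prometheus's health endpoint from any machine that can reach a node (using a worker node IP such as 192.168.1.130 as an example):

curl http://192.168.1.130:30000/-/healthy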

3. With Prometheus deployed, it's time to deploy node-exporter, which collects host metrics. We have many worker nodes and all of them need monitoring, but when we deploy the node-exporter Pods we can't know in advance which node each one will land on, so here we use the Kubernetes DaemonSet resource.

[root@k8s-master01@15:39 ~]#☠️  cat node-exporter.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: kube-prometheus
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-prometheus
  labels:
    name: node-exporter
spec:
  selector:
    matchLabels:
      name: node-exporter
  template:
    metadata:
      labels:
        name: node-exporter
    spec:
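      # tolerate the master taint so node-exporter can also be scheduled onto control-plane nodes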
      tolerations:
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      hostPID: true
      hostIPC: true
      hostNetwork: true # with hostNetwork, hostIPC and hostPID all set to true, the containers in this Pod use the host's network directly, can perform IPC (inter-process communication) with the host, and can see every process running on the host
                        # hostNetwork: true also exposes port 9100 directly on the host, so each node listens on 9100 without needing a Service
      containers:
      - name: node-exporter
        image: prom/node-exporter:v1.3.0
        ports:
        - containerPort: 9100
        resources:
          requests:
            cpu: 0.15   # the container needs at least 0.15 CPU cores to run
        securityContext:
          privileged: true # run in privileged mode
        args:
        - --path.procfs=/host/proc    # point node-exporter at the host's proc, mounted below
        - --path.sysfs=/host/sys      # point node-exporter at the host's sys, mounted below
        - --path.rootfs=/host
        volumeMounts:                           # mount the host's /dev, /proc and /sys into the container; most node metrics are gathered by reading these filesystems
        - name: dev
          mountPath: /host/dev
          readOnly: true
        - name: proc
          mountPath: /host/proc
          readOnly: true
        - name: sys
          mountPath: /host/sys
          readOnly: true
        - name: rootfs
          mountPath: /host
          readOnly: true
      volumes:
      - name: proc
        hostPath:
          path: /proc
      - name: dev
        hostPath:
          path: /dev
      - name: sys
        hostPath:
          path: /sys
      - name: rootfs
        hostPath:
          path: /
[root@k8s-master01@15:45 ~]#☠️  kubectl apply -f node-exporter.yaml
[root@k8s-master01@15:49 ~]#☠️  kubectl get pod -n   kube-prometheus  -owide
NAME                  READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
node-exporter-cqzl4   1/1     Running   0          86m   192.168.1.131   k8s-worker02   <none>           <none>
node-exporter-kqnhp   1/1     Running   0          86m   192.168.1.130   k8s-worker01   <none>           <none>
node-exporter-trl4z   1/1     Running   0          86m   192.168.1.132   k8s-worker03   <none>           <none>


You can see that the node-exporter Pods are running on each of the worker nodes.
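Because hostNetwork: true exposes port 9100 on each node directly, you can verify an exporter from the master without any Service (taking k8s-worker01 as an example):

curl -s http://192.168.1.130:9100/metrics | head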

Next, all that's left is to update the Prometheus configuration file, adding the worker node IPs as scrape targets, for example with a job like the one below.
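
Based on the worker IPs shown above, appending a job like this under scrape_configs would do (the job name is just a label; pick whatever you like):

  - job_name: "node-exporter"
    static_configs:
      - targets:
          - 192.168.1.130:9100
          - 192.168.1.131:9100
          - 192.168.1.132:9100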

[root@k8s-master01@15:53 ~]#☠️  kubectl edit cm cm-1  # since the ConfigMap was created from the command line, editing it in place is the easiest way to add the new targets
[root@k8s-master01@15:54 ~]#☠️  kubectl apply -f prometheus-service.yaml   # note: re-applying an unchanged manifest is a no-op; the edited ConfigMap does propagate into the Pod after a short delay, but Prometheus only loads its config at startup, so restart or reload it as shown below
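
A sketch of two ways to make Prometheus actually pick up the edited ConfigMap (the reload endpoint assumes --web.enable-lifecycle was added to the container args as mentioned earlier):

kubectl rollout restart deployment prometheus-deployment
# or, without restarting the Pod:
curl -X POST http://192.168.1.130:30000/-/reload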

I'll stop here for now; follow-up posts will cover deploying the other exporters, as well as Grafana.
