In the previous post, Custom HPA metrics for Kubernetes via Prometheus (Part 1), we deployed Prometheus and k8s-prometheus-adapter and scaled pods on a custom Prometheus metric. By installing the components and the demo application podinfo, then writing an HPA YAML for podinfo and driving it with our load-testing tool, we got pods to scale automatically. Although the Kubernetes HPA controller behaved as expected, following a tutorial is not the same as understanding the process. Go back to the very beginning: I develop an application, define a Prometheus metric in it, maintain that metric's value in my code, and expose a port and path for Prometheus to scrape; the k8s-prometheus-adapter then acts as a bridge, handing the Prometheus data to the HPA controller for its scaling decisions. That is roughly what happens, but not exactly: the adapter renames some metrics and filters out part of the Prometheus data, for example series whose container_name label has the value POD (this is analyzed in Part 3).

To verify HPA driven by a specific metric, I created my own project. My metric is again named http_requests_total, though any other name would do; the application developer chooses it. On GitHub I found a Spring Boot example of Prometheus metric collection, gokayhuz/PrometheusDemo. Looking at its code, it defines several Prometheus metrics and uses an aspect to intercept each incoming request and update the metric values.

/**
 * Copyright 2003-2017 Monitise Group Limited. All Rights Reserved.
 * Save to the extent permitted by law, you may not use, copy, modify,
 * distribute or create derivative works of this material or any part
 * of it without the prior written consent of Monitise Group Limited.
 * Any reproduction of this material must contain this notice.
 */

package com.monitise.prometheus_demo.aspects;

import io.prometheus.client.Counter;
import io.prometheus.client.Summary;
import org.aspectj.lang.annotation.After;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Before;
import org.springframework.stereotype.Component;

import javax.servlet.http.HttpServletRequest;

@Aspect
@Component
@SuppressWarnings("unused")
public class PrometheusAspects {

    private static final Counter requestCounter = Counter.build()
            .help("Total number of requests")
            .labelNames("endpoint", "method")
            .name("http_requests_total").register();
            //.name("num_of_requests").register();

    private static final Summary totalTime = Summary.build()
            .name("mw_processing_time")
            .help("Stores the time the request spent in the MW")
            .register();

    private static final Summary backendTime = Summary.build()
            .name("backend_processing_time")
            .help("Stores the time the request spent in the backend")
            .register();

    // NOTE: these timer fields are shared across all requests, so they are
    // not thread-safe under concurrent load; acceptable for a demo only.
    private Summary.Timer mwTimer;
    private Summary.Timer backendTimer;

    @Before("@annotation(org.springframework.web.bind.annotation.RequestMapping) && "
            + "args(request,..) && "
            + "within(com.monitise.prometheus_demo..*)")
    private void beforeControllerMethod(HttpServletRequest request) {
        requestCounter.labels(request.getRequestURI(), request.getMethod()).inc();
        mwTimer = totalTime.startTimer();
    }

    @After("@annotation(org.springframework.web.bind.annotation.RequestMapping) && "
            + "args(request,..) && "
            + "within(com.monitise.prometheus_demo..*)")
    private void exitControllerMethod(HttpServletRequest request) {
        mwTimer.observeDuration();
    }

    @Before("execution(* com.monitise.prometheus_demo.service.*.*(..))")
    private void startBackendTimer() {
        backendTimer = backendTime.startTimer();
    }

    @After("within(com.monitise.prometheus_demo.service..*)")
    public void observeBackendTimer() {
        backendTimer.observeDuration();
    }
}

Here I changed the name of the requestCounter metric to http_requests_total. The aspect labels each request with its URI and HTTP method, so we get a per-endpoint, per-method count of requests.
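For reference, once a few requests have come in, a counter defined this way is rendered by the Java client on the metrics endpoint in the Prometheus text format, roughly like this (label values and counts here are illustrative, not taken from a real run):

```
# HELP http_requests_total Total number of requests
# TYPE http_requests_total counter
http_requests_total{endpoint="/test",method="GET",} 5.0
http_requests_total{endpoint="/dowork",method="GET",} 2.0
```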

Next, look at the application's controller:

package com.monitise.prometheus_demo;

import com.monitise.prometheus_demo.service.RemoteService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;
import java.util.Arrays;
import java.util.Random;

@RestController
public class DemoController {

    private static Random random = new Random();

    @Autowired
    private RemoteService remoteService;

    @RequestMapping(value = "/sort", method = RequestMethod.GET)
    @ResponseBody
    public String sortNumbers(HttpServletRequest request) {
        int upperLimit = (int)(Math.random() * 10_000_000);

        int[] array = new int[upperLimit];
        for (int i = 0; i < upperLimit; i++) {
            array[i] = (int)(10_000 * Math.random());
        }
        Arrays.sort(array);
        System.out.printf("sorted %d integers\n", upperLimit);
        return "sorted " + upperLimit + " integers";
    }

    @RequestMapping(value = "/test")
    public String nowork(HttpServletRequest request) {
        System.out.println("test");
        return "test";
    }

    @RequestMapping(value = "/dowork", method = {RequestMethod.GET})
    @ResponseBody
    public String work(HttpServletRequest request) throws InterruptedException {
        final int MAX_SLEEP_TIME = 1_000;
        int timeToSleep = random.nextInt(MAX_SLEEP_TIME);

        // simulate some processing
        Thread.sleep(timeToSleep);

        // make a remote call
        remoteService.someRemoteCall();

        // simulate some post processing after remote call
        Thread.sleep(timeToSleep);
        System.out.println("dowork");
        return "request";
    }
}

By hitting these URLs we drive the corresponding metric values.

application.properties was also adjusted accordingly (not strictly necessary in practice; here it mainly exercises Prometheus scraping in Kubernetes):

#management.port: 8081
#management.security.enabled: false

#endpoints.prometheus.path: prometheus-metrics

spring.application.name=mydemo
management.port: 8888
management.security.enabled: false

endpoints.prometheus.path: polarwu
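With these settings the metrics are served on port 8888 at /polarwu. Assuming the application is running locally, a quick sanity check before building the image might look like this sketch:

```shell
# Fetch the Prometheus endpoint exposed by the app; the path and port
# come from application.properties above.
curl -s http://localhost:8888/polarwu | grep http_requests_total
```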

Once the application is finished, compile it into prometheus_demo-0.0.1-SNAPSHOT.jar and package it into an image with the following Dockerfile:

FROM registry.cn-hangzhou.aliyuncs.com/java-jdk/openjdk:jdk8
ADD ./prometheus_demo-0.0.1-SNAPSHOT.jar /root
EXPOSE 8888
EXPOSE 8080
WORKDIR /root
CMD ["java","-jar","prometheus_demo-0.0.1-SNAPSHOT.jar"]

Build the image with docker build:

docker build -t promethues-demo:v0 .

Then write the Deployment and Service YAML:

apiVersion: extensions/v1beta1
kind: Deployment
#kind: DaemonSet
metadata:
  name: tomcat-deployment
spec:
  replicas: 2
  template:
    metadata:
      annotations:
        prometheus.io/scrape: 'true'
        prometheus.io/path: '/polarwu'
        prometheus.io/port: '8888'
      labels:
        app: tomcat
    spec:
      containers:
      - name: tomcat
        image: promethues-demo:v0
        ports:
        - containerPort: 8080
          name: tomcat-normal
        - containerPort: 8888
          name: tomcat-monitor
--- 
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tomcat
  name: tomcat
  namespace: default
spec:
  ports:
  - name: tomcat-normal
    port: 8080
    protocol: TCP
    targetPort: 8080
  - name: tomcat-monitor
    port: 8888
    protocol: TCP
    targetPort: 8888
  type: NodePort 
  selector:
    app: tomcat

Pay attention to the Deployment's template.metadata: its annotations tell Prometheus service discovery to find our application and scrape its data. My annotations are:

prometheus.io/scrape: 'true'
prometheus.io/path: '/polarwu'
prometheus.io/port: '8888'

With prometheus.io/scrape set to true, Prometheus discovers the application and starts scraping. I set the path to /polarwu (the default is /metrics) and the port to 8888, matching the application's settings in application.properties.
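These annotations only take effect if the Prometheus server's scrape configuration contains a pod-discovery job that relabels on them. A typical fragment, as found in the common community example configuration (details may differ in your deployment), looks like:

```yaml
- job_name: 'kubernetes-pods'
  kubernetes_sd_configs:
    - role: pod
  relabel_configs:
    # Only keep pods annotated with prometheus.io/scrape: "true"
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
      action: keep
      regex: true
    # Override the metrics path from prometheus.io/path
    - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
      action: replace
      target_label: __metrics_path__
      regex: (.+)
    # Override the scrape port from prometheus.io/port
    - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
      action: replace
      regex: ([^:]+)(?::\d+)?;(\d+)
      replacement: $1:$2
      target_label: __address__
```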
Next, create the HPA YAML file, as follows:

---
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: tomcat
spec:
  scaleTargetRef:
    apiVersion: extensions/v1beta1
    kind: Deployment
    name: tomcat-deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: http_requests
      targetAverageValue: 10

The metric type here is Pods and the metricName is http_requests. Why not http_requests_total, as defined in the code? That will be explained in Part 3.
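If the metric name the adapter actually serves is ever in doubt, it can be inspected directly through the custom metrics API; a sketch (the namespace and metric name are the ones used in this example):

```shell
# List every metric name the adapter currently exposes
kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1"

# Query the per-pod values the HPA controller will read
kubectl get --raw \
  "/apis/custom.metrics.k8s.io/v1beta1/namespaces/default/pods/*/http_requests"
```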
Then create the tomcat HPA:

[root@localhost tomcat]# kubectl create -f tomcat-hpa-custom.yaml 
horizontalpodautoscaler "tomcat" created

Watch our tomcat HPA:

[root@localhost tomcat]# kubectl get hpa
NAME      REFERENCE                      TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   <unknown> / 10   1         10        0          12s
[root@localhost tomcat]# 
[root@localhost tomcat]# kubectl get hpa
NAME      REFERENCE                      TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   0 / 10    1         10        1          1m

Right after creation the HPA's target reads <unknown>; this is a matter of timing between Prometheus scrapes and the HPA controller's periodic sync loop. Shortly afterwards the target reads 0, simply because no requests have been made yet.
Now drive load with the rakyll/hey tool introduced in Part 1, at 28 requests per second (4 workers at 7 QPS each):

root@0e1c525c6c48:/go/src/github.com/rakyll# hey -n 10000 -q 7 -c 4 http://10.99.204.196:8080/test

[root@localhost tomcat]# kubectl get hpa 
NAME      REFERENCE                      TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   11879m / 10   1         10        1          24m

[root@localhost podinfo]# kubectl get hpa
NAME      REFERENCE                      TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   10776m / 10   1         10        2          29m

[root@localhost podinfo]# 
[root@localhost podinfo]# kubectl get hpa
NAME      REFERENCE                      TARGETS       MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   11338m / 10   1         10        3          30m

[root@localhost podinfo]# kubectl get hpa
NAME      REFERENCE                      TARGETS      MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   8288m / 10   1         10        3          31m

As the load climbs, the HPA controller detects that the metric has exceeded the target we set and scales out, until the average requests per pod drops below 10 and stays there.
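The scale-out above follows the HPA controller's basic formula, desiredReplicas = ceil(currentReplicas × currentValue / targetValue). Plugging in the first observation (11879m against a target of 10, i.e. 11879 vs 10000 in milli-units, with 1 current replica) reproduces the jump from 1 to 2 replicas:

```shell
current=11879   # observed average, in milli-units (shown as 11879m)
target=10000    # target of 10, in milli-units
replicas=1
# Integer ceiling division: (a*b + c - 1) / c
desired=$(( (replicas * current + target - 1) / target ))
echo "$desired"   # prints 2
```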
After stopping the hey load test and watching the tomcat pod count again, we see it scale back down to 1:

[root@localhost podinfo]# kubectl get hpa
NAME      REFERENCE                      TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
tomcat    Deployment/tomcat-deployment   0 / 10    1         10        1          2h

The whole run matches our expectations: the custom-metric HPA works correctly.

Summary

This part looked, from the application developer's perspective, at what it takes to make a custom-metric HPA work. It comes down to the following steps:

  1. Pull the Prometheus client SDK into the application and embed the metric logic into the business code;
  2. Compile the code into an executable and package it into an image;
  3. Write the Deployment, Service, and related YAML files, setting the prometheus.io/scrape annotation to true, plus the path and port annotations when they are custom;
  4. Write the HPA YAML for the Deployment, and let the HPA controller carry out the scaling logic.
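Putting the four steps together, the workflow from this part can be replayed with a few commands (the image and HPA file names are the ones used above; the build command and deployment file name are assumed):

```shell
# Step 2: build the jar and the image
mvn package
docker build -t promethues-demo:v0 .

# Step 3: deploy the application and service (file name assumed)
kubectl create -f tomcat-deployment.yaml

# Step 4: create the HPA and watch it react to load
kubectl create -f tomcat-hpa-custom.yaml
kubectl get hpa -w
```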

Part 3 will analyze the k8s-prometheus-adapter source code and the REST client side of the HPA controller: how the metrics the adapter pulls from Prometheus are filtered and renamed, how the resource, pods, and object metric types in the HPA YAML are reflected inside the adapter, and, as a supplement on custom metrics to the earlier analysis of the HPA source code in kube-controller-manager, the HPA controller's REST client.
