k8s Elastic Scaling: Automatic Pod Scale-Out/Scale-In Based on HPA
Pod Auto Scaling: Introduction to HPA
Horizontal Pod Autoscaler (HPA) automatically adjusts the number of Pod replicas of a Deployment based on resource utilization or custom metrics, increasing the concurrency an application can handle. HPA does not apply to objects that cannot be scaled, such as a DaemonSet.
Pod Auto Scaling: How HPA Works
The Metrics Server in Kubernetes continuously collects metrics from all Pod replicas. The HPA controller fetches this data through the Metrics Server's API (the aggregated API), computes a target Pod replica count based on the user-defined scaling rules, and, whenever the target differs from the current replica count, issues a scale operation to the Pod's Deployment controller to adjust the number of replicas and complete the scale-out or scale-in.
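The target replica count follows the ratio formula documented for the HPA controller:

desiredReplicas = ceil( currentReplicas × currentMetricValue / desiredMetricValue )

For example, 2 replicas running at 200% of the desired CPU utilization scale to ceil(2 × 200% / 100%) = 4 replicas.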
Pod Auto Scaling: Prerequisites for Using HPA
To use HPA, make sure the following conditions are met:
• The Kubernetes API aggregation layer is enabled
• The corresponding APIs are registered:
• For resource metrics (e.g. CPU, memory), the metrics.k8s.io API is used, generally provided by metrics-server.
• For custom metrics (e.g. QPS), the custom.metrics.k8s.io API is used, provided by a corresponding adapter service.
List of known adapters:
https://github.com/kubernetes/metrics/blob/master/IMPLEMENTATIONS.md#custom-metrics-api
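Whether these APIs are actually registered can be checked by querying the apiserver directly; the second query only returns resources once an adapter has been installed:
# kubectl get --raw /apis/metrics.k8s.io/v1beta1
# kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1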
The Kubernetes API aggregation layer:
Kubernetes 1.7 introduced the aggregation layer, which allows third-party applications to register themselves with kube-apiserver and still have their new APIs accessed and operated on through the API Server's HTTP URLs. To support this mechanism, Kubernetes added an API Aggregation Layer to the kube-apiserver service, which forwards requests for extension APIs to the user's own service.
Enabling the aggregation layer:
If you deployed with kubeadm, it is enabled by default.
If you deployed with binaries, you need to add the following startup flags to kube-apiserver:
# vim /opt/kubernetes/cfg/kube-apiserver.conf
..
--requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem
--proxy-client-cert-file=/opt/kubernetes/ssl/server.pem
--proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem
--requestheader-allowed-names=kubernetes
--requestheader-extra-headers-prefix=X-Remote-Extra-
--requestheader-group-headers=X-Remote-Group
--requestheader-username-headers=X-Remote-User
--enable-aggregator-routing=true
...
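After editing the flags, restart kube-apiserver for them to take effect (the systemd unit name is assumed here; match it to your install):
# systemctl restart kube-apiserver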
Pod Auto Scaling: Based on Resource Metrics
Metrics Server is a data aggregator: it collects resource metrics from each kubelet and exposes them through the Metrics API on the Kubernetes apiserver for the HPA to consume.
Project address: https://github.com/kubernetes-sigs/metrics-server
Deploying Metrics Server:
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
[root@master-1 yaml]# vim components.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls    # do not verify the TLS certificate presented by the kubelet
        image: feixiangkeji974907/metrics-server:v0.4.1    # the official image cannot be pulled from inside China, so I re-pushed it to Docker Hub
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          periodSeconds: 10
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
[root@master-1 yaml]# kubectl apply -f components.yaml
[root@master-1 yaml]# kubectl get all -n kube-system
NAME                                  READY   STATUS    RESTARTS   AGE
pod/coredns-5ffbfd976d-txzpq          1/1     Running   5          7d3h
pod/kube-flannel-ds-amd64-5tnrj       1/1     Running   11         7d3h
pod/kube-flannel-ds-amd64-fx7vc       1/1     Running   11         7d3h
pod/kube-flannel-ds-amd64-kwfxn       1/1     Running   2          5d
pod/kube-flannel-ds-amd64-l4g4t       1/1     Running   7          7d3h
pod/metrics-server-664489867d-5zwpf   1/1     Running   20         7d3h

NAME                     TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
service/kube-dns         ClusterIP   10.0.0.2     <none>        53/UDP,53/TCP,9153/TCP   7d3h
service/metrics-server   ClusterIP   10.0.0.42    <none>        443/TCP                  7d3h

NAME                                   DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/kube-flannel-ds-amd64   4         4         4       4            4           <none>          7d3h

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns          1/1     1            1           7d3h
deployment.apps/metrics-server   1/1     1            1           7d3h

NAME                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5ffbfd976d          1         1         1       7d3h
replicaset.apps/metrics-server-664489867d   1         1         1       7d3h
Test whether metrics-server is available:
[root@master-1 yaml]# kubectl get apiservice | grep metric
v1beta1.metrics.k8s.io kube-system/metrics-server True 15s
[root@master-1 yaml]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
{"kind":"NodeMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/nodes"},"items":[{"metadata":{"name":"master-1","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/master-1","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:35Z","window":"30s","usage":{"cpu":"433676959n","memory":"1127920Ki"}},{"metadata":{"name":"node-1","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-1","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:38Z","window":"30s","usage":{"cpu":"211126782n","memory":"595564Ki"}},{"metadata":{"name":"node-2","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-2","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:35Z","window":"30s","usage":{"cpu":"170548141n","memory":"366576Ki"}},{"metadata":{"name":"node-3","selfLink":"/apis/metrics.k8s.io/v1beta1/nodes/node-3","creationTimestamp":"2021-01-03T16:03:35Z"},"timestamp":"2021-01-03T16:02:32Z","window":"30s","usage":{"cpu":"120488736n","memory":"425804Ki"}}]}
[root@master-1 yaml]# kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
{"kind":"PodMetricsList","apiVersion":"metrics.k8s.io/v1beta1","metadata":{"selfLink":"/apis/metrics.k8s.io/v1beta1/pods"},"items":[{"metadata":{"name":"web-test-5cdbd79b55-87pqt","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-87pqt","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:28Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"6420Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-p54nq","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-p54nq","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:35Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"5160Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-r9swh","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-r9swh","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:27Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"6040Ki"}}]},{"metadata":{"name":"web-test-5cdbd79b55-t8pcx","namespace":"default","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/web-test-5cdbd79b55-t8pcx","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:34Z","window":"30s","containers":[{"name":"nginx","usage":{"cpu":"0","memory":"5996Ki"}}]},{"metadata":{"name":"nginx-ingress-controller-766fb9f77-zzwmp","namespace":"ingress-nginx","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/ingress-nginx/pods/nginx-ingress-controller-766fb9f77-zzwmp","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"nginx-ingress-controller","usage":{"cpu":"21249411n","memory":"96340Ki"}}]},{"metadata":{"name":"coredns-5ffbfd976d-txzpq","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/coredns-5ffbfd976d-txzpq","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"coredns","usage":{"cpu":"7583457n","memory":"19704Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-5tnrj","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-5tnrj","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:39Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"4707209n","memory":"16228Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-fx7vc","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-fx7vc","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:34Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"4736628n","memory":"16096Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-kwfxn","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-kwfxn","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:30Z","window":"30s","containers":[{"name":"kube-flannel","usage":{"cpu":"5035329n","memory":"19940Ki"}}]},{"metadata":{"name":"kube-flannel-ds-amd64-l4g4t","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/kube-flannel-ds-amd64-l4g4t","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:33Z","window":"30s","containers":[{"name":"kube-flannel","usage":
{"cpu":"5185835n","memory":"15388Ki"}}]},{"metadata":{"name":"metrics-server-664489867d-5zwpf","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/metrics-server-664489867d-5zwpf","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:39Z","window":"30s","containers":[{"name":"metrics-server","usage":{"cpu":"7712269n","memory":"26064Ki"}}]},{"metadata":{"name":"prometheus-5fc97df657-w4qcm","namespace":"kube-system","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods/prometheus-5fc97df657-w4qcm","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:26Z","window":"30s","containers":[{"name":"prometheus-server-configmap-reload","usage":{"cpu":"0","memory":"2844Ki"}},{"name":"prometheus-server","usage":{"cpu":"15569447n","memory":"124600Ki"}}]},{"metadata":{"name":"dashboard-metrics-scraper-6b4884c9d5-g8dx8","namespace":"kubernetes-dashboard","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kubernetes-dashboard/pods/dashboard-metrics-scraper-6b4884c9d5-g8dx8","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:23Z","window":"30s","containers":[{"name":"dashboard-metrics-scraper","usage":{"cpu":"224169n","memory":"12228Ki"}}]},{"metadata":{"name":"kubernetes-dashboard-7f99b75bf4-ljhtn","namespace":"kubernetes-dashboard","selfLink":"/apis/metrics.k8s.io/v1beta1/namespaces/kubernetes-dashboard/pods/kubernetes-dashboard-7f99b75bf4-ljhtn","creationTimestamp":"2021-01-03T16:04:02Z"},"timestamp":"2021-01-03T16:03:33Z","window":"30s","containers":[{"name":"kubernetes-dashboard","usage":{"cpu":"1175979n","memory":"21788Ki"}}]}]}
You can also access the Metrics API with kubectl top:
# Check node resource usage
[root@master-1 yaml]# kubectl top node
NAME       CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master-1   584m         14%    1101Mi          19%
node-1     219m         5%     567Mi           9%
node-2     175m         4%     363Mi           6%
node-3     146m         3%     408Mi           2%
# Check Pod resource usage:
[root@master-1 yaml]# kubectl top pod
NAME                        CPU(cores)   MEMORY(bytes)
web-test-5cdbd79b55-x6snq   0m           2Mi
Deploy an nginx Pod to test with
# Create the Deployment
[root@master-1 yaml]# kubectl create deployment web --image=nginx --dry-run=client -o yaml > deployment.yaml
[root@master-1 yaml]# vim deployment.yaml
# Edit the YAML: set replicas to 2 and add resources.requests.cpu (full manifest sketch follows)
.....
spec:
  replicas: 2
.....
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: "250m"
........
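For reference, after these edits the full manifest would look roughly like this (a sketch; the app: web labels follow what kubectl create deployment generates):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: web
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - image: nginx
        name: nginx
        resources:
          requests:
            cpu: "250m"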
# Create the Service
[root@master-1 yaml]# kubectl expose deployment web --port=80 --target-port=80
[root@master-1 yaml]# kubectl get pods,service,deployment
NAME                       READY   STATUS    RESTARTS   AGE
pod/web-6fc98bbd69-dbbxx   1/1     Running   0          21s
pod/web-6fc98bbd69-xzrbm   1/1     Running   0          3m15s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP   7d4h
service/web          ClusterIP   10.0.0.205   <none>        80/TCP    7d3h

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   2/2     2            2           7m46s
Create the HPA and set the metric target
[root@master-1 yaml]# kubectl autoscale deployment web --min=2 --max=5 --cpu-percent=80
horizontalpodautoscaler.autoscaling/web autoscaled
[root@master-1 yaml]# kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   0%/80%    2         5         2          39s
Note: this creates an HPA object for the Deployment named web, with a target CPU utilization of 80% and a replica count allowed to range from 2 to 5.
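The same HPA can also be declared in YAML; the autoscaling/v1 manifest below is equivalent to the kubectl autoscale command above (newer API versions express the same target through a metrics list):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  targetCPUUtilizationPercentage: 80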
Load testing
Run the load test with ab (Apache Bench):
100,000 requests per run at a concurrency of 1000, looped from 1 to 100:
[root@master-1 yaml]# for i in {1..100}
> do
> ab -n 100000 -c 1000 http://10.0.0.205/index.html
> done
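If ab is not available, a comparable load can be generated from inside the cluster against the Service's ClusterIP (a sketch using a throwaway busybox Pod):
[root@master-1 yaml]# kubectl run load-generator --rm -i --tty --image=busybox --restart=Never -- /bin/sh -c 'while true; do wget -q -O- http://10.0.0.205/index.html; done'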
Watch the scale-out:
In another terminal, check the CPU utilization and the scaling status.
# CPU utilization has reached 191%, and the Pods have successfully been scaled out to 5
[root@master-1 ~]# kubectl get hpa
NAME   REFERENCE        TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   191%/80%   2         5         4          5m43s
[root@master-1 ~]# kubectl get pods -o wide
NAME                        READY   STATUS    RESTARTS   AGE     IP            NODE       NOMINATED NODE   READINESS GATES
web-6fc98bbd69-dbbxx        1/1     Running   0          9m8s    10.244.0.13   node-2     <none>           <none>
web-6fc98bbd69-lplz8        1/1     Running   0          2m51s   10.244.0.14   node-2     <none>           <none>
web-6fc98bbd69-q5pcs        1/1     Running   0          3m6s    10.244.1.30   node-1     <none>           <none>
web-6fc98bbd69-vncjr        1/1     Running   0          3m6s    10.244.3.15   node-3     <none>           <none>
web-6fc98bbd69-xzrbm        1/1     Running   0          12m     10.244.2.28   master-1   <none>           <none>
web-test-5cdbd79b55-x6snq   1/1     Running   0          22m     10.244.3.14   node-3     <none>           <none>
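To see why and when the HPA scaled, the controller's decisions are recorded as events on the HPA object, along with the exact metric readings it acted on:
[root@master-1 ~]# kubectl describe hpa web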
Pod Auto Scaling: Cooldown Period
In autoscaling, the cooldown period is a topic you cannot avoid. Because the metrics being evaluated are dynamic, the replica count could otherwise fluctuate constantly and drop traffic, so scale-out and scale-in should not be allowed to happen at arbitrary times.
To mitigate this, HPA applies some control by default:
- --horizontal-pod-autoscaler-downscale-delay: how long to wait after the last operation before the next scale-in may run; default 5 minutes
- --horizontal-pod-autoscaler-upscale-delay: how long to wait after the last operation before the next scale-out may run; default 3 minutes
Both can be adjusted via the kube-controller-manager startup flags (sketched below); the defaults are usually fine.
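On a binary install the flags would go into the kube-controller-manager config, in the same style as the kube-apiserver edits earlier (the path is assumed; also note that recent Kubernetes releases replace the downscale delay with --horizontal-pod-autoscaler-downscale-stabilization):
# vim /opt/kubernetes/cfg/kube-controller-manager.conf
...
--horizontal-pod-autoscaler-downscale-delay=5m0s
--horizontal-pod-autoscaler-upscale-delay=3m0s
...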
To make the test easy to observe, stop the ab load test first and note the time:
[root@master-1 yaml]# date
Mon Jan  4 00:44:16 CST 2021
Wait for the cooldown period (5 minutes by default):
[root@master-1 ~]# date
Mon Jan  4 00:49:30 CST 2021
[root@master-1 ~]# kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
web-6fc98bbd69-vncjr   1/1     Running   0          25m
web-6fc98bbd69-xzrbm   1/1     Running   0          34m
The Deployment is back down to its original 2 replicas.
The HPA readings are back to normal as well:
[root@master-1 ~]# kubectl get hpa
NAME   REFERENCE        TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
web    Deployment/web   0%/80%    2         5         2          31m
That concludes Kubernetes elastic scaling: automatic Pod scale-out/scale-in with HPA.