Kubernetes: Monitoring Cluster Resource Usage with the Metrics Server Aggregator
- View the cluster status:
[root@k8s-master ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.226.128:6443
KubeDNS is running at https://192.168.226.128:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
- Dump more detailed cluster information:
[root@k8s-master ~]# kubectl cluster-info dump
- View detailed information about a resource:
# kubectl describe <resource> <name>
[root@k8s-master ~]# kubectl describe pods nginx
Name:         nginx-f89759699-crt4d
Namespace:    default
Priority:     0
Node:         k8s-node1/192.168.226.129
Start Time:   Thu, 23 Jul 2020 09:23:05 +0800
Labels:       app=nginx
              pod-template-hash=f89759699
Annotations:  cni.projectcalico.org/podIP: 10.244.36.72/32
              cni.projectcalico.org/podIPs: 10.244.36.72/32
...........
- Check the status of the master components:
[root@k8s-master ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
- Check node status:
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE     VERSION
k8s-master   Ready    master   5d23h   v1.18.0
k8s-node1    Ready    <none>   5d23h   v1.18.0
k8s-node2    Ready    <none>   5d23h   v1.18.0
- List all resource types supported by the API server:
[root@k8s-master ~]# kubectl api-resources
How to monitor cluster resource utilization:
The approach officially recommended by Kubernetes is to monitor cluster resource consumption with Metrics Server plus cAdvisor.
Metrics Server is a cluster-wide aggregator of resource usage data, deployed in the cluster as an ordinary application. It collects metrics from the kubelet API on every node and registers itself with the master kube-apiserver through the Kubernetes API aggregation layer.
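Because Metrics Server registers through the aggregation layer, the same data that `kubectl top` reads is exposed under `/apis/metrics.k8s.io/v1beta1`. The sketch below parses a NodeMetricsList response in that shape; the payload is an illustrative sample hard-coded for demonstration, not captured from the cluster above:

```python
import json

# Sample NodeMetricsList in the shape served at
# /apis/metrics.k8s.io/v1beta1/nodes (illustrative values only).
sample = json.loads("""
{
  "kind": "NodeMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "items": [
    {"metadata": {"name": "k8s-node1"},
     "usage": {"cpu": "82m", "memory": "396288Ki"}}
  ]
}
""")

def node_usage(metrics_list):
    """Map node name -> (cpu, memory) usage quantity strings."""
    return {item["metadata"]["name"]: (item["usage"]["cpu"], item["usage"]["memory"])
            for item in metrics_list["items"]}

print(node_usage(sample))
```

In a real cluster the same JSON can be fetched with `kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes`.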
Deploying Metrics Server
- Download the metrics-server YAML manifest (detailed usage instructions are also in the Git repository):
[root@k8s-master ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
- Edit the YAML file. The default image in the manifest is pulled from a registry outside mainland China, which is slow; either use a proxy or switch to a domestic mirror (see the notes in the Git repository for details):
[root@k8s-master ~]# vim metrics-server.yaml
...........
        imagePullPolicy: IfNotPresent
        args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-insecure-tls
        - --kubelet-preferred-address-types=InternalIP
        ports:
...............
- The parameters shown above (the args block) are the ones you may need to adjust in the manifest.
- In the official documentation, also note the Horizontal Pod Autoscaler (scales the number of pod replicas out and in) and the Vertical Pod Autoscaler (scales the CPU/memory requests of individual pods up and down), both of which consume these metrics.
- Create the resources:
[root@k8s-master ~]# kubectl apply -f metrics-server.yaml
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.apps/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
- Check the status of metrics-server:
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-578894d4cd-9vg49   1/1     Running   3          6d1h
calico-node-ghxvg                          1/1     Running   2          5d15h
calico-node-ntssh                          1/1     Running   2          5d15h
calico-node-tj49h                          1/1     Running   2          5d15h
coredns-7ff77c879f-758pq                   1/1     Running   3          6d1h
coredns-7ff77c879f-7xnjk                   1/1     Running   3          6d1h
etcd-k8s-master                            1/1     Running   3          6d1h
kube-apiserver-k8s-master                  1/1     Running   3          6d1h
kube-controller-manager-k8s-master         1/1     Running   4          6d1h
kube-proxy-8rhh2                           1/1     Running   3          6d1h
kube-proxy-8w7nr                           1/1     Running   3          6d1h
kube-proxy-s5v76                           1/1     Running   3          6d1h
kube-scheduler-k8s-master                  1/1     Running   4          6d1h
metrics-server-7875f8bf59-tszbw            1/1     Running   0          11m
[root@k8s-master ~]# kubectl get apiservice
NAME                                   SERVICE                      AVAILABLE   AGE
v1.                                    Local                        True        6d1h
v1.admissionregistration.k8s.io        Local                        True        6d1h
v1.apiextensions.k8s.io                Local                        True        6d1h
v1.apps                                Local                        True        6d1h
v1.authentication.k8s.io               Local                        True        6d1h
v1.authorization.k8s.io                Local                        True        6d1h
v1.autoscaling                         Local                        True        6d1h
v1.batch                               Local                        True        6d1h
v1.coordination.k8s.io                 Local                        True        6d1h
v1.crd.projectcalico.org               Local                        True        5d4h
v1.networking.k8s.io                   Local                        True        6d1h
v1.rbac.authorization.k8s.io           Local                        True        6d1h
v1.scheduling.k8s.io                   Local                        True        6d1h
v1.storage.k8s.io                      Local                        True        6d1h
v1beta1.admissionregistration.k8s.io   Local                        True        6d1h
v1beta1.apiextensions.k8s.io           Local                        True        6d1h
v1beta1.authentication.k8s.io          Local                        True        6d1h
v1beta1.authorization.k8s.io           Local                        True        6d1h
v1beta1.batch                          Local                        True        6d1h
v1beta1.certificates.k8s.io            Local                        True        6d1h
v1beta1.coordination.k8s.io            Local                        True        6d1h
v1beta1.discovery.k8s.io               Local                        True        6d1h
v1beta1.events.k8s.io                  Local                        True        6d1h
v1beta1.extensions                     Local                        True        6d1h
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        12m
v1beta1.networking.k8s.io              Local                        True        6d1h
v1beta1.node.k8s.io                    Local                        True        6d1h
v1beta1.policy                         Local                        True        6d1h
v1beta1.rbac.authorization.k8s.io      Local                        True        6d1h
v1beta1.scheduling.k8s.io              Local                        True        6d1h
v1beta1.storage.k8s.io                 Local                        True        6d1h
v2beta1.autoscaling                    Local                        True        6d1h
v2beta2.autoscaling                    Local                        True        6d1h
- View node resource usage:
[root@k8s-master ~]# kubectl top node k8s-node1
NAME        CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
k8s-node1   82m          4%     387Mi           20%
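The CPU(cores) and MEMORY(bytes) columns use Kubernetes quantity notation: `82m` means 82 millicores (0.082 of a core) and `387Mi` means 387 mebibytes. A small Python helper (an illustrative sketch, not part of kubectl) to convert these strings into plain numbers:

```python
def cpu_to_cores(q: str) -> float:
    """Convert a Kubernetes CPU quantity ('82m' or '2') to cores."""
    if q.endswith("m"):          # millicores
        return int(q[:-1]) / 1000
    return float(q)

def mem_to_bytes(q: str) -> int:
    """Convert a memory quantity with a binary suffix ('387Mi', '7Ki') to bytes."""
    units = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}
    for suffix, factor in units.items():
        if q.endswith(suffix):
            return int(q[:-2]) * factor
    return int(q)                # plain byte count

print(cpu_to_cores("82m"))    # 0.082
print(mem_to_bytes("387Mi"))  # 405798912
```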
- View pod resource usage:
[root@k8s-master ~]# kubectl top pod nginx-f89759699-crt4d
NAME                    CPU(cores)   MEMORY(bytes)
nginx-f89759699-crt4d   0m           7Mi
[root@k8s-master ~]# kubectl top pods
NAME                          CPU(cores)   MEMORY(bytes)
aliang-666-74689c47f4-7bzrz   0m           3Mi
nginx-f89759699-crt4d         0m           7Mi
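When many pods are listed, the tabular output of `kubectl top pods` is easy to post-process. The sketch below works on hard-coded sample lines matching the output above (not a live cluster) and picks out the pod with the highest memory usage:

```python
# Sample lines as produced by `kubectl top pods` above.
output = """NAME                          CPU(cores)   MEMORY(bytes)
aliang-666-74689c47f4-7bzrz   0m           3Mi
nginx-f89759699-crt4d         0m           7Mi"""

def heaviest_pod(table: str):
    """Return (pod_name, memory_in_Mi) for the pod using the most memory."""
    rows = [line.split() for line in table.splitlines()[1:]]   # skip header
    name, _cpu, mem = max(rows, key=lambda r: int(r[2].rstrip("Mi")))
    return name, int(mem.rstrip("Mi"))

print(heaviest_pod(output))  # ('nginx-f89759699-crt4d', 7)
```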
- Done.