I. The Kubernetes dashboard UI

Although the dashboard can create, delete, and modify resources, it is usually used only as a monitoring front end for the Kubernetes cluster.
1. Search for "dashboard" on GitHub
The recommended YAML could be applied straight from its remote URL, but it is worth checking what the file contains and changing the Service type to NodePort, so download the file locally first.

[root@master jiankong]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
[root@master jiankong]# vim recommended.yaml
...
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
...

Apply the file, then check the NodePort exposed by the Service. Note that the dashboard is accessed over HTTPS.

[root@master jiankong]# kubectl apply -f recommended.yaml
[root@master jiankong]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-76679bc5b9-7xl96   1/1     Running   0          57s
kubernetes-dashboard-7f9fd5966c-7s7vb        1/1     Running   0          57s
[root@master jiankong]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.100.248.104   <none>        8000/TCP        60s
kubernetes-dashboard        NodePort    10.106.19.193    <none>        443:30843/TCP   60s
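
Before opening a browser, a minimal reachability check is to hit the NodePort with curl; -k is required because the dashboard serves a self-signed certificate (a quick sketch, assuming curl is available on the master and using the node IP and port from the output above).

[root@master jiankong]# curl -k -s -o /dev/null -w '%{http_code}\n' https://192.168.229.187:30843/   # should print 200 if the dashboard is serving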

2. Log in to the dashboard with a token
(1) Create an admin ServiceAccount for the dashboard

[root@master jiankong]# kubectl create serviceaccount dashboard-admin -n kube-system

(2) Bind the ServiceAccount to the cluster-admin role

[root@master jiankong]# kubectl create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
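
For reference, a declarative sketch equivalent to the two commands above (same ServiceAccount and ClusterRoleBinding, just expressed as YAML rather than an extra step):

[root@master jiankong]# kubectl apply -f - <<EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF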

(3) Get the token

[root@master jiankong]# kubectl get secrets -n kube-system | grep dashboard-admin
dashboard-admin-token-sj7fr                      kubernetes.io/service-account-token   3      106s
[root@master jiankong]# kubectl describe secrets -n kube-system dashboard-admin-token-sj7fr
Name:         dashboard-admin-token-sj7fr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: c729286e-6580-4ad1-8c91-ce0002bb203a

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc2o3ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYzcyOTI4NmUtNjU4MC00YWQxLThjOTEtY2UwMDAyYmIyMDNhIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.N6TmR7V7osgUs9gxC3kWwlPPJqEEdzDBsPJY71swx5DpTgSVj3MO4o3aMid2Q_ctCSLKFTxwkXobrqMoH3T4Q05ddWgkWnZ8yoiaBaNRnMOJ2p8FnEyETuUysybjt3MZ8HvWFHidPVk3zlQEIyiNH7nPcJZQsXMTrqoIQC44XOig7Apx9Mg8CK19GC8YeAwD8ePC9mqjYhPthV3LisRPEk3Qxu8SnnRUb1RlbX26JGqiaVZDKdTvUg8C1NUAFIUnQnRF5c6v1Tu-2Dz7TqUH5a2CYiU3tmLx9k3o0rYMQrBmjZPVDpoWxqGkUJ0ry-KVVzvbhX1daRACGJVSQi7dIw

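Instead of copying the token by hand, it can be extracted in one line (assuming the service-account token secret is auto-created, as shown in the output above):

[root@master jiankong]# kubectl -n kube-system get secret $(kubectl -n kube-system get sa dashboard-admin -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d
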
(4) Log in from the browser with the token

https://192.168.229.187:30843

Note: with older dashboard versions, logging in from Chrome may fail; switch to another browser such as Firefox.
3. Log in to the dashboard with a kubeconfig file
(1) Get the token

The token is the same one retrieved from the dashboard-admin-token-sj7fr secret in the token-based login above.

(2) Generate the kubeconfig file

Store the token in an environment variable.
[root@master jiankong]# DASH_TOKEN=$(kubectl get secrets -n kube-system dashboard-admin-token-sj7fr -o jsonpath={.data.token} | base64 -d)

Write the cluster connection information into the kubeconfig file.
[root@master jiankong]# kubectl config set-cluster kubernetes --server=https://192.168.229.187:6443 --kubeconfig=/root/.dashboard-admin.conf
[root@master jiankong]# kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/.dashboard-admin.conf
[root@master jiankong]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/.dashboard-admin.conf
[root@master jiankong]# kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/root/.dashboard-admin.conf
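
Before exporting the file, a minimal sanity check is to point kubectl at the new kubeconfig; since the token has cluster-admin rights, listing nodes should succeed:

[root@master jiankong]# kubectl get nodes --kubeconfig=/root/.dashboard-admin.conf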

(3) Export the generated /root/.dashboard-admin.conf file and save it locally.

zhangjiedeMacBook-Pro:~ zhangjie$ scp root@192.168.229.187:/root/.dashboard-admin.conf /Users/zhangjie/Desktop

(4) In the browser, choose the kubeconfig login method and import the file.

II. Weave Scope

1. Search for "scope" on GitHub
2. Install Scope

Apply the Scope manifest, then edit the weave-scope-app Service and change its type to NodePort.

[root@master jiankong]# kubectl apply -f "https://cloud.weave.works/k8s/scope.yaml?k8s-version=$(kubectl version | base64 | tr -d '\n')"
[root@master jiankong]# kubectl get namespaces
[root@master jiankong]# kubectl get svc -n weave
[root@master jiankong]# kubectl edit svc -n weave weave-scope-app
...
spec:
  clusterIP: 10.96.76.173
  ports:
  - name: app
    port: 80
    protocol: TCP
    targetPort: 4040
  selector:
    app: weave-scope
    name: weave-scope-app
    weave-cloud-component: scope
    weave-scope-component: app
  sessionAffinity: None
  type: NodePort
...
[root@master jiankong]# kubectl get svc -n weave
NAME              TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
weave-scope-app   NodePort   10.96.76.173   <none>        80:30026/TCP   7m37s
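
If you prefer not to edit the Service interactively, the same change can be made with a one-line patch (equivalent to the kubectl edit above):

[root@master jiankong]# kubectl -n weave patch svc weave-scope-app -p '{"spec":{"type":"NodePort"}}'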

3. Access in the browser

192.168.229.187:30026


III. Prometheus

1. Download the archive

[root@master jiankong]# wget https://github.com/prometheus-operator/kube-prometheus/archive/v0.3.0.tar.gz

The kube-prometheus stack bundles the following components:

MetricsServer: an aggregator of cluster resource-usage data, collected for consumption inside the cluster, e.g. by kubectl top, the HPA, and the scheduler.

Prometheus Operator: manages the monitoring stack on Kubernetes, deploying and configuring Prometheus and Alertmanager through custom resources.

Prometheus node-exporter: exports node-level (hardware and OS) metrics from every node in the cluster.

Prometheus: the monitoring system and time-series database; it scrapes metrics from the apiserver, scheduler, controller-manager, kubelet, and other components over HTTP.

Grafana: the visualization and dashboarding platform for the collected metrics.

2. Extract the archive

[root@master jiankong]# tar zxf v0.3.0.tar.gz
[root@master jiankong]# cd kube-prometheus-0.3.0/
[root@master kube-prometheus-0.3.0]# ls
build.sh            go.sum                 manifests
code-of-conduct.md  hack                   NOTICE
DCO                 jsonnet                OWNERS
docs                jsonnetfile.json       README.md
example.jsonnet     jsonnetfile.lock.json  scripts
examples            kustomization.yaml     sync-to-internal-registry.jsonnet
experimental        LICENSE                tests
go.mod              Makefile               test.sh
[root@master kube-prometheus-0.3.0]# cd manifests/
[root@master manifests]# ls
alertmanager-alertmanager.yaml
alertmanager-secret.yaml
alertmanager-serviceAccount.yaml
alertmanager-serviceMonitor.yaml
alertmanager-service.yaml
grafana-dashboardDatasources.yaml
grafana-dashboardDefinitions.yaml
grafana-dashboardSources.yaml
grafana-deployment.yaml
grafana-serviceAccount.yaml
grafana-serviceMonitor.yaml
grafana-service.yaml
kube-state-metrics-clusterRoleBinding.yaml
kube-state-metrics-clusterRole.yaml
kube-state-metrics-deployment.yaml
kube-state-metrics-roleBinding.yaml
kube-state-metrics-role.yaml
kube-state-metrics-serviceAccount.yaml
kube-state-metrics-serviceMonitor.yaml
kube-state-metrics-service.yaml
node-exporter-clusterRoleBinding.yaml
node-exporter-clusterRole.yaml
node-exporter-daemonset.yaml
node-exporter-serviceAccount.yaml
node-exporter-serviceMonitor.yaml
node-exporter-service.yaml
prometheus-adapter-apiService.yaml
prometheus-adapter-clusterRoleAggregatedMetricsReader.yaml
prometheus-adapter-clusterRoleBindingDelegator.yaml
prometheus-adapter-clusterRoleBinding.yaml
prometheus-adapter-clusterRoleServerResources.yaml
prometheus-adapter-clusterRole.yaml
prometheus-adapter-configMap.yaml
prometheus-adapter-deployment.yaml
prometheus-adapter-roleBindingAuthReader.yaml
prometheus-adapter-serviceAccount.yaml
prometheus-adapter-service.yaml
prometheus-clusterRoleBinding.yaml
prometheus-clusterRole.yaml
prometheus-operator-serviceMonitor.yaml
prometheus-prometheus.yaml
prometheus-roleBindingConfig.yaml
prometheus-roleBindingSpecificNamespaces.yaml
prometheus-roleConfig.yaml
prometheus-roleSpecificNamespaces.yaml
prometheus-rules.yaml
prometheus-serviceAccount.yaml
prometheus-serviceMonitorApiserver.yaml
prometheus-serviceMonitorCoreDNS.yaml
prometheus-serviceMonitorKubeControllerManager.yaml
prometheus-serviceMonitorKubelet.yaml
prometheus-serviceMonitorKubeScheduler.yaml
prometheus-serviceMonitor.yaml
prometheus-service.yaml
setup

3. Edit the Service YAML files to expose them via NodePort.

[root@master manifests]# vim grafana-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
  selector:
    app: grafana
[root@master manifests]# vim prometheus-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
[root@master manifests]# vim alertmanager-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP

4. Apply all of the YAML files in the setup directory first; they set up the base environment (the monitoring namespace, the CRDs, and the Prometheus Operator) required by the manifests in the manifests directory. Because the directory contains many files and the CRDs take a moment to register, a single apply may not create everything on the first pass, so run the command a few times until no errors remain.

[root@master manifests]# kubectl apply -f setup/
[root@master manifests]# cd ..
[root@master kube-prometheus-0.3.0]# kubectl apply -f manifests/
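
The stack takes a while to pull images and start; watching the monitoring namespace until every Pod is Running is an easy way to know when it is ready:

[root@master kube-prometheus-0.3.0]# kubectl get pod -n monitoring -w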

5. Once the stack is deployed successfully, you can check cluster resource usage with a single command (MetricsServer must have been deployed successfully).

[root@master kube-prometheus-0.3.0]# kubectl top node
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
master   505m         25%    1554Mi          42%
node1    564m         28%    1482Mi          40%
node2    432m         21%    1349Mi          36%

6. Check the exposed NodePorts

[root@master kube-prometheus-0.3.0]# kubectl get svc -n monitoring
NAME                    TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
alertmanager-main       NodePort    10.96.137.18    <none>        9093:30229/TCP               8m10s
alertmanager-operated   ClusterIP   None            <none>        9093/TCP,9094/TCP,9094/UDP   8m10s
grafana                 NodePort    10.111.12.62    <none>        3000:32270/TCP               8m10s
kube-state-metrics      ClusterIP   None            <none>        8443/TCP,9443/TCP            8m9s
node-exporter           ClusterIP   None            <none>        9100/TCP                     8m9s
prometheus-adapter      ClusterIP   10.100.13.144   <none>        443/TCP                      8m9s
prometheus-k8s          NodePort    10.104.140.58   <none>        9090:30894/TCP               8m7s
prometheus-operated     ClusterIP   None            <none>        9090/TCP                     8m8s
prometheus-operator     ClusterIP   None            <none>        8080/TCP                     10m
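
The Prometheus UI can be sanity-checked from the command line before opening a browser; Prometheus exposes a readiness endpoint (using the NodePort from the output above):

[root@master kube-prometheus-0.3.0]# curl -s http://192.168.229.187:30894/-/ready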

7. Access Grafana in the browser

192.168.229.187:32270

8. Search grafana.com for a monitoring dashboard template and import it
Select the downloaded template file and import it.

IV. HPA (Horizontal Pod Autoscaler)

HPA scales the number of replica Pods up or down dynamically based on current Pod resource utilization, such as CPU and memory.
Prerequisite: the cluster must be able to report current Pod resource usage, i.e. kubectl top pod works and returns data.
heapster: this component used to be bundled with Kubernetes but was removed after version 1.12; to get the same functionality, deploy MetricsServer, the aggregator of cluster resource-usage data, instead.
The test image used here is built on php-apache and contains code that runs CPU-intensive computations.
1. Run a Deployment and a Service, which the HPA will control later.

[root@master ~]# kubectl run php-apache --image=mirrorgooglecontainers/hpa-example --requests=cpu=200m --expose --port=80
[root@master ~]# kubectl get pod
NAME                          READY   STATUS    RESTARTS   AGE
php-apache-55447747db-hpd2f   1/1     Running   0          13s
[root@master ~]# kubectl top pod php-apache-55447747db-hpd2f
NAME                          CPU(cores)   MEMORY(bytes)
php-apache-55447747db-hpd2f   0m           17Mi

2. Create the HPA

[root@master ~]# kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
[root@master ~]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   0%/50%    1         10        1          97s
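
The kubectl autoscale command above is equivalent to applying an HPA manifest; a sketch of the declarative form looks like this:

[root@master ~]# kubectl apply -f - <<EOF
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF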

3. Create an application that continuously accesses the php-apache Service created above.

[root@master ~]# kubectl run -i --tty load-generator --image=busybox /bin/sh
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ #

4. Run this command inside the Pod to simulate load against the php-apache Service.

/ # while true; do wget -q -O- http://php-apache.default.svc.cluster.local ;done

With this load generator hammering the php-apache Service, the HPA's reported CPU utilization rises sharply. Watching the php-apache Deployment, you will see its Pod count keep increasing, up to the maximum configured on the HPA.

[root@master ~]# kubectl get hpa
NAME         REFERENCE               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
php-apache   Deployment/php-apache   97%/50%   1         10        2          92m
[root@master ~]# kubectl get deployments. -w
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
load-generator   1/1     1            1           87m
php-apache       2/2     2            2           104m

When the Pod count stops increasing, stop the load-generating loop and watch the Pod count of the php-apache Deployment. Even after its CPU usage drops back down, the HPA does not scale the Pods down immediately: it holds the higher replica count for a while in case the heavy traffic returns.
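
Watching the HPA and the Deployment makes the scale-down easy to observe:

[root@master ~]# kubectl get hpa php-apache -w
[root@master ~]# kubectl get deployment php-apache -w
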
Resource limits

requests: the resources a container requests at startup; the node must be able to supply at least this much for the Pod to be scheduled.

limits: the maximum amount of resources the container is allowed to use.

The unit m: CPU is measured in millicores (m). A node's total CPU capacity equals its core count multiplied by 1000;
for example, a node with two cores has 2000m of CPU in total.

Per-Pod limits
Kubernetes enforces resource limits through cgroups, a collection of kernel attributes that control how the kernel runs processes; there are cgroups for memory, CPU, and various devices.
By default, a Pod runs with no CPU or memory limits, which means any Pod can consume as much CPU and memory as exist on the node where it runs. Resource limits are usually applied to specific application Pods through the resources field and its requests and limits.

[root@master ~]# vim cgroup-pod.yaml
spec:
  containers:
  - name: xxxx
    imagePullPolicy: Always
    image: xxx
    ports:
    - protocol: TCP
      containerPort: 80
    resources:
      limits:
        cpu: 4
        memory: 2Gi
      requests:
        cpu: 260m
        memory: 260Mi

requests is the amount of resources to allocate and limits is the most that may be used; roughly, the initial value and the ceiling.
Per-namespace limits
1. Compute resource quota

[root@master ~]# vim compute-resources.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-resources
spec:
  hard:
    pods: 20
    requests.cpu: 20
    requests.memory: 100Gi
    limits.cpu: 40
    limits.memory: 200Gi
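
ResourceQuota objects are namespaced, so apply the file to the namespace you want to constrain and inspect the result (the namespace name "test" here is only an example):

[root@master ~]# kubectl create namespace test
[root@master ~]# kubectl apply -f compute-resources.yaml -n test
[root@master ~]# kubectl describe resourcequota compute-resources -n test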

2. Object count quota

[root@master ~]# vim object-counts.yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts
spec:
  hard:
    configmaps: 10
    persistentvolumeclaims: 4
    replicationcontrollers: 20
    secrets: 10
    services.loadbalancers: 2

3. Configure a LimitRange for CPU and memory

[root@master ~]# vim limitRange.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 50Gi
      cpu: 5
    defaultRequest:
      memory: 1Gi
      cpu: 1
    type: Container

default sets the container's default limit, and defaultRequest sets its default request for containers that do not specify their own.
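
To see the defaults being injected, apply the LimitRange to a namespace and run a Pod that declares no resources; its container should come back with the default requests and limits filled in (the "test" namespace and the nginx-demo Pod name are only examples for illustration):

[root@master ~]# kubectl apply -f limitRange.yaml -n test
[root@master ~]# kubectl run nginx-demo --image=nginx -n test
[root@master ~]# kubectl get pod nginx-demo -n test -o jsonpath='{.spec.containers[0].resources}'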
