K8s Environment

master   192.168.1.40
node01   192.168.1.41
node02   192.168.1.42

Dashboard

1. Download the dashboard YAML manifest

[root@master ~]# wget  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
--2020-11-20 20:43:00--  https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0/aio/deploy/recommended.yaml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.76.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.76.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 7552 (7.4K) [text/plain]
Saving to: 'recommended.yaml'

100%[=======================================================================================================>] 7,552       6.05KB/s   in 1.2s

2020-11-20 20:43:03 (6.05 KB/s) - 'recommended.yaml' saved [7552/7552]
[root@master ~]# ls
recommended.yaml 

2. Edit the manifest to expose the Service on a NodePort

[root@master ~]# vim recommended.yaml 
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort    # add this line
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30066  # add this line
  selector:
    k8s-app: kubernetes-dashboard
......
[root@master ~]# kubectl  apply  -f  recommended.yaml 
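
Optionally, confirm the dashboard pods come up before checking the Service (a quick check that is not part of the original steps; the namespace is the one defined in recommended.yaml):

kubectl get pods -n kubernetes-dashboard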

3. Check the NodePort exposed by the Service

Note: the dashboard is accessed over HTTPS, i.e. https://<NodeIP>:30066.

[root@master ~]# kubectl  get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.96.74.95    <none>        8000/TCP        73s
kubernetes-dashboard        NodePort    10.96.18.208   <none>        443:30066/TCP   73s
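
Since the dashboard serves HTTPS with a self-signed certificate, you can sanity-check the NodePort from any machine before opening the browser (a sketch; 192.168.1.40 is the master IP from the environment table, and -k skips certificate verification):

curl -k https://192.168.1.40:30066/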

Open https://<NodeIP>:30066 in a browser. On the login page you can see that there are two ways to sign in to the dashboard: a token or a kubeconfig file.

4. Log in with a token

4.1 Create an admin service account for the dashboard
[root@master ~]# kubectl create  serviceaccount dashboard-admin -n kube-system 
serviceaccount/dashboard-admin created
4.2 Bind the service account to the cluster-admin role
[root@master ~]# kubectl  create clusterrolebinding dashboard-cluster-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-cluster-admin created
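
The same account and binding can also be created declaratively; a minimal sketch equivalent to the two kubectl create commands above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF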
4.3 Get the token
[root@master ~]# kubectl  get secrets  -n kube-system  | grep dashboard-admin
dashboard-admin-token-scbsf                      kubernetes.io/service-account-token   3      16s
[root@master ~]# kubectl  describe  secrets  -n kube-system dashboard-admin-token-scbsf 
Name:         dashboard-admin-token-scbsf
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 35d3a0d0-3928-4005-b5a1-a779e864329f

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tc2Nic2YiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMzVkM2EwZDAtMzkyOC00MDA1LWI1YTEtYTc3OWU4NjQzMjlmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.VAWSBvU9sGdDV6l7ZEvXF4MRo7kY-YLTdrJWes-oy8haFji1ld35fZM7-qDrDntnZw-u1hOo4e1-BL3lyKXnuIoP8mTED8h6_IYHK6JZAixaTLMTkWBRlODaRdy8nH6j7gzcBU6-Kv9ubeeiutCXGJSjyfMY9qPcOnzzPHQTzSdCO1H67KMdkfGKJcFd0ZQoBMnNk49aUxxix9vBofpX7r3s00hJmw8Bmd_WBzGiHRVC1Q88l8uQ5sAeGBBoR7fS87NB9ZS-NM6XYMaQ_wTC_iK4KltSBdA5ZxyieWMh5SF1FScae-IBvwUbTE3jPW4N2LcvyJS3FYhlnX343sywvQ
ca.crt:     1025 bytes
namespace:  11 bytes
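
If you only need the raw token string (to paste into the login page), it can also be printed directly from the Secret, using the same jsonpath trick as step 5.2 below (the secret name is the one shown above):

kubectl get secrets -n kube-system dashboard-admin-token-scbsf -o jsonpath={.data.token} | base64 -d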
4.4 Log in

Note: with an older version of the dashboard, logging in from Google Chrome may fail; switch to another browser such as Firefox.


5. Log in with a kubeconfig file

5.1 Get the token (same as step 4.3)
[root@master ~]# kubectl  get secrets  -n kube-system  | grep dashboard-admin
dashboard-admin-token-scbsf                      kubernetes.io/service-account-token   3      16s
5.2 Generate the kubeconfig file
[root@master ~]# DASH_TOKEN=$(kubectl get secrets -n kube-system dashboard-admin-token-scbsf -o jsonpath={.data.token} | base64 -d)
[root@master ~]# kubectl  config set-cluster  kubernetes --server=192.168.1.40:6443 --kubeconfig=/root/.dashboard-admin.conf
Cluster "kubernetes" set.
[root@master ~]# kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/root/.dashboard-admin.conf
User "dashboard-admin" set.
[root@master ~]# kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/root/.dashboard-admin.conf
Context "dashboard-admin@kubernetes" created.
[root@master ~]# kubectl  config  use-context dashboard-admin@kubernetes --kubeconfig=/root/.dashboard-admin.conf 
Switched to context "dashboard-admin@kubernetes".
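
A quick sanity check of the generated file (not part of the original steps). Note that the cluster entry above was set without a CA certificate or an explicit https:// scheme, so direct API calls with this kubeconfig may fail TLS checks; the dashboard login only consumes the token embedded in it.

kubectl config view --kubeconfig=/root/.dashboard-admin.conf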
5.3 Export and keep the generated /root/.dashboard-admin.conf file.

Note: copy it to whatever location suits your environment.

[root@master ~]# scp -rp .dashboard-admin.conf  node01:
.dashboard-admin.conf                                              100% 1199   465.9KB/s   00:00
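
If the browser you will log in from runs on your own workstation rather than on a cluster node, copy the file there instead (a sketch; the user, host, and destination path are placeholders to adjust for your environment):

scp root@192.168.1.40:/root/.dashboard-admin.conf ~/dashboard-admin.conf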
5.4 In the browser, choose the kubeconfig login method and import the file.


Prometheus

1. Download the project

[root@master ~]#  wget https://github.com/prometheus-operator/kube-prometheus/archive/v0.3.0.tar.gz
[root@master ~]# tar zxf v0.3.0.tar.gz 

2. Change the Services to NodePort

[root@master ~]# cd kube-prometheus-0.3.0/manifests/
[root@master manifests]# 
[root@master manifests]# vim grafana-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: http
    port: 3000
    targetPort: http
    nodePort: 31001
  selector:
    app: grafana
[root@master manifests]# vim prometheus-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    prometheus: k8s
  name: prometheus-k8s
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9090
    targetPort: web
    nodePort: 31002
  selector:
    app: prometheus
    prometheus: k8s
  sessionAffinity: ClientIP
[root@master manifests]# vim alertmanager-service.yaml 
[root@master manifests]# cat alertmanager-service.yaml 
apiVersion: v1
kind: Service
metadata:
  labels:
    alertmanager: main
  name: alertmanager-main
  namespace: monitoring
spec:
  type: NodePort
  ports:
  - name: web
    port: 9093
    targetPort: web
    nodePort: 31003
  selector:
    alertmanager: main
    app: alertmanager
  sessionAffinity: ClientIP
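
A quick way to confirm all three Services were switched to NodePort before applying anything (a simple check, not part of the original steps):

grep -nE "type: NodePort|nodePort:" grafana-service.yaml prometheus-service.yaml alertmanager-service.yaml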

3. Set up the base environment (CRDs and the Prometheus Operator)

[root@master manifests]# pwd
/root/kube-prometheus-0.3.0/manifests
[root@master manifests]# kubectl  apply  -f  setup/
namespace/monitoring created
customresourcedefinition.apiextensions.k8s.io/alertmanagers.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/podmonitors.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheuses.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/prometheusrules.monitoring.coreos.com created
customresourcedefinition.apiextensions.k8s.io/servicemonitors.monitoring.coreos.com created
clusterrole.rbac.authorization.k8s.io/prometheus-operator created
clusterrolebinding.rbac.authorization.k8s.io/prometheus-operator created
deployment.apps/prometheus-operator created
service/prometheus-operator created
serviceaccount/prometheus-operator created
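
Before applying the main manifests, it can help to wait until the Operator deployment is available (a sketch; the deployment name matches the output above):

kubectl -n monitoring wait --for=condition=available --timeout=120s deployment/prometheus-operator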

4. Apply the main manifests

Note: because the directory contains a lot of YAML files, a single kubectl apply may not create everything at once; run it two or three times until no errors remain (a retry sketch follows the output below).

[root@master kube-prometheus-0.3.0]# pwd
/root/kube-prometheus-0.3.0
[root@master kube-prometheus-0.3.0]# kubectl  apply  -f  manifests/
[root@master kube-prometheus-0.3.0]# kubectl  get pod  -n monitoring 
NAME                                  READY   STATUS    RESTARTS   AGE
alertmanager-main-0                   2/2     Running   0          16m
alertmanager-main-1                   2/2     Running   0          16m
alertmanager-main-2                   2/2     Running   0          16m
grafana-77978cbbdc-6ddcc              1/1     Running   0          19m
kube-state-metrics-7f6d7b46b4-gts5w   3/3     Running   0          19m
node-exporter-h779k                   2/2     Running   0          19m
node-exporter-pk2dv                   2/2     Running   0          19m
node-exporter-vgqtk                   2/2     Running   0          19m
prometheus-adapter-68698bc948-qnsb9   1/1     Running   5          19m
prometheus-k8s-0                      3/3     Running   0          16m
prometheus-k8s-1                      3/3     Running   0          16m
prometheus-operator-6685db5c6-r5srv   1/1     Running   0          21m

[root@master kube-prometheus-0.3.0]# kubectl top node    # check resource usage
NAME     CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master   226m         11%    1512Mi          87%       
node01   89m          4%     1360Mi          79%       
node02   86m          4%     1327Mi          77%  
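
As mentioned in the note for this step, the first kubectl apply can fail because the custom resource types created in the setup step (ServiceMonitor, PrometheusRule, and so on) may not be registered yet. Instead of re-running the command by hand, a small retry loop works (a sketch):

until kubectl apply -f manifests/; do
    echo "some resources failed to apply, retrying in 10 seconds..."
    sleep 10
done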

5. Log in to Grafana

Username: admin   Password: admin


6. Import monitoring dashboard templates

(Screenshots: importing a monitoring dashboard template in Grafana.)
