Deploying k8s with juju+maas on Ubuntu 20.04 - 7 - Monitoring the k8s cluster with Graylog and Prometheus2 - 4 - deploying prometheus2
Monitoring a Kubernetes cluster with Graylog and Prometheus
Prometheus2 #22
Grafana #40
Telegraf #41
Multi-node OpenStack charms deployment guide 0.0.1-40: prometheus2
By llama-charmers Stable, Candidate
Supports: Xenial Bionic Focal
Description
Prometheus is a systems and service monitoring system. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts when some condition is observed to be true. Due to a major database change in release 2.0, this charm only supports Prometheus 2.0 and later.
Juju prometheus2 charm
This charm provides the Prometheus monitoring system from http://prometheus.io/.
It supports version 2.0 and later. If you want to deploy Prometheus 1.x, use the cs:prometheus charm instead.
Optionally, the charm can install the Prometheus registration daemon alongside Prometheus to help register targets.
The charm can be related to the following charms to extend functionality:
- grafana
- prometheus-alertmanager
- prometheus-pushgateway
- prometheus-snmp-exporter
- prometheus-blackbox-exporter
- telegraf
- mtail
Configuration
The charm is designed to work out of the box without setting any configuration options. See config.yaml for detailed help on the supported settings. The most common options are:
- daemon-args - additional CLI arguments, e.g. --storage.tsdb.retention=21d
- scrape-jobs - allows configuring custom scrape jobs
- snap_proxy - web proxy address to use when accessing the snap store
- external_url
- scrape-interval
- evaluation-interval
- remote-read/remote-write - configure reads from / writes to remote data storage endpoints
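These options are set with `juju config` after deployment. A minimal sketch, assuming a live Juju controller with the prometheus2 application deployed (the 21-day retention is only an illustrative value):

```shell
# Extend local TSDB retention to 21 days (example value only)
juju config prometheus2 daemon-args='--storage.tsdb.retention=21d'
# Read back the current setting
juju config prometheus2 daemon-args
```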
Juju storage support
The charm supports Juju storage (requires Juju 1.25 or later). For example, to deploy using the local filesystem:
juju deploy local:trusty/prometheus2 --storage metrics-filesystem=rootfs prometheus2
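Juju storage constraints also accept a size, so a specific allocation can be requested instead of the bare pool name (a sketch; the 10G figure is an arbitrary example, not from the original deployment):

```shell
# Request a 10 GiB filesystem from the rootfs pool for the metrics store
juju deploy local:trusty/prometheus2 --storage metrics-filesystem=rootfs,10G prometheus2
```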
Juju resources support
The charm supports Juju resources, which is convenient for offline deployments. Pre-fetch the snaps:
snap download --channel=stable core
snap download --channel=2/stable prometheus
Provide the downloaded snaps to the application as resources:
juju deploy cs:prometheus2 --resource core=core_6818.snap --resource prometheus=prometheus_20.snap
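After deploying this way, the resources actually attached to the application and their revisions can be checked (assumes a live controller):

```shell
# Show which snap resources the application is using
juju resources prometheus2
```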
The actual configuration used in this deployment follows.
MAAS configuration
As before, for the NAT mapping, configure the following on MAAS:
Node name | Tag | vCPUs | NICs | Memory (GB) | Disks | Disk size (GB) | IP | Port
---|---|---|---|---|---|---|---|---
prometheus2.maas | prometheus2 | 2 | 1 | 8 | 1 | 80 | 10.0.9.3 | 9090
grafana.maas | grafana | 2 | 1 | 8 | 1 | 60 | 10.0.9.28 | 3000
Deploying and configuring the prometheus2 model
# Deploy prometheus2, grafana and telegraf
juju add-model prometheus2
juju deploy cs:prometheus2-22 --constraints tags=prometheus2 --series focal --debug
juju deploy cs:grafana-40 --constraints tags=grafana --series focal --debug
juju deploy cs:telegraf --series focal --debug
# Add an offer
juju offer prometheus2:target
# Add the relations in this model and configure telegraf
juju relate prometheus2:grafana-source grafana:grafana-source
juju relate telegraf:dashboards grafana:dashboards
juju relate telegraf:prometheus-client prometheus2:target
juju relate prometheus2:juju-info telegraf:juju-info
juju relate grafana:juju-info telegraf:juju-info
Deployment and configuration in the k8s model
# Switch to the k8s model
juju switch k8s
# Deploy telegraf
juju deploy cs:telegraf --series focal --debug
# Deploy telegraf as a subordinate on kubernetes-master and kubernetes-worker
juju relate kubernetes-master:juju-info telegraf:juju-info
juju relate kubernetes-worker:juju-info telegraf:juju-info
# Consume the offer to relate across models
juju consume admin/prometheus2.prometheus2
juju relate telegraf:prometheus-client prometheus2:target
juju status --relations
Model Controller Cloud/Region Version SLA Timestamp
k8s maas-controller mymaas/default 2.8.10 unsupported 10:51:43+08:00
SAAS Status Store URL
primary-rsyslog active maas-controller admin/rsyslog.primary-rsyslog
prometheus2 active maas-controller admin/prometheus2.prometheus2
App Version Status Scale Charm Store Channel Rev OS Message
containerd go1.13.8 active 5 containerd charmstore 130 ubuntu Container runtime available
easyrsa 3.0.1 active 1 easyrsa local 0 ubuntu Certificate Authority connected.
etcd 3.4.5 active 3 etcd charmstore 594 ubuntu Healthy with 3 known peers
flannel 0.11.0 active 5 flannel charmstore 558 ubuntu Flannel subnet 10.1.47.1/24
kubeapi-load-balancer 1.18.0 active 1 kubeapi-load-balancer charmstore 798 ubuntu Loadbalancer ready.
kubernetes-master 1.21.1 active 2 kubernetes-master local 0 ubuntu Kubernetes master running.
kubernetes-worker 1.21.1 active 3 kubernetes-worker charmstore 768 ubuntu Kubernetes worker running.
rsyslog-forwarder-ha unknown 9 rsyslog-forwarder-ha charmstore 20 ubuntu
telegraf active 5 telegraf charmstore 41 ubuntu Monitoring kubernetes-master/1 (source version/commit dec0633)
Unit Workload Agent Machine Public address Ports Message
easyrsa/0* active idle 0 10.0.3.189 Certificate Authority connected.
rsyslog-forwarder-ha/0* unknown idle 10.0.3.189
etcd/0* active idle 1 10.0.3.200 2379/tcp Healthy with 3 known peers
rsyslog-forwarder-ha/3 unknown idle 10.0.3.200
etcd/1 active idle 2 10.0.3.201 2379/tcp Healthy with 3 known peers
rsyslog-forwarder-ha/2 unknown idle 10.0.3.201
etcd/2 active idle 3 10.0.3.204 2379/tcp Healthy with 3 known peers
rsyslog-forwarder-ha/1 unknown idle 10.0.3.204
kubeapi-load-balancer/0* active idle 4 10.0.3.208 443/tcp Loadbalancer ready.
kubernetes-master/0 active idle 5 10.0.3.202 6443/tcp Kubernetes master running.
containerd/4 active idle 10.0.3.202 Container runtime available
flannel/4 active idle 10.0.3.202 Flannel subnet 10.1.22.1/24
rsyslog-forwarder-ha/5 unknown idle 10.0.3.202
telegraf/1 active idle 10.0.3.202 9103/tcp Monitoring kubernetes-master/0 (source version/commit dec0633)
kubernetes-master/1* active idle 6 10.0.3.207 6443/tcp Kubernetes master running.
containerd/2 active idle 10.0.3.207 Container runtime available
flannel/2 active idle 10.0.3.207 Flannel subnet 10.1.18.1/24
rsyslog-forwarder-ha/4 unknown idle 10.0.3.207
telegraf/0* active idle 10.0.3.207 9103/tcp Monitoring kubernetes-master/1 (source version/commit dec0633)
kubernetes-worker/0* active idle 7 10.0.3.203 80/tcp,443/tcp Kubernetes worker running.
containerd/0* active idle 10.0.3.203 Container runtime available
flannel/0* active idle 10.0.3.203 Flannel subnet 10.1.47.1/24
rsyslog-forwarder-ha/6 unknown idle 10.0.3.203
telegraf/3 active idle 10.0.3.203 9103/tcp Monitoring kubernetes-worker/0 (source version/commit dec0633)
kubernetes-worker/1 active idle 8 10.0.3.206 80/tcp,443/tcp Kubernetes worker running.
containerd/3 active idle 10.0.3.206 Container runtime available
flannel/3 active idle 10.0.3.206 Flannel subnet 10.1.4.1/24
rsyslog-forwarder-ha/8 unknown idle 10.0.3.206
telegraf/4 active idle 10.0.3.206 9103/tcp Monitoring kubernetes-worker/1 (source version/commit dec0633)
kubernetes-worker/2 active idle 9 10.0.3.205 80/tcp,443/tcp Kubernetes worker running.
containerd/1 active idle 10.0.3.205 Container runtime available
flannel/1 active idle 10.0.3.205 Flannel subnet 10.1.69.1/24
rsyslog-forwarder-ha/7 unknown idle 10.0.3.205
telegraf/2 active idle 10.0.3.205 9103/tcp Monitoring kubernetes-worker/2 (source version/commit dec0633)
Machine State DNS Inst id Series AZ Message
0 started 10.0.3.189 busy-raptor focal default Deployed
1 started 10.0.3.200 crisp-swift focal default Deployed
2 started 10.0.3.201 vital-tick focal default Deployed
3 started 10.0.3.204 stable-dory focal default Deployed
4 started 10.0.3.208 upward-ibex focal default Deployed
5 started 10.0.3.202 ideal-oyster focal default Deployed
6 started 10.0.3.207 safe-goat focal default Deployed
7 started 10.0.3.203 glad-hen focal default Deployed
8 started 10.0.3.206 cool-aphid focal default Deployed
9 started 10.0.3.205 epic-moose focal default Deployed
Relation provider Requirer Interface Type Message
easyrsa:client etcd:certificates tls-certificates regular
easyrsa:client kubeapi-load-balancer:certificates tls-certificates regular
easyrsa:client kubernetes-master:certificates tls-certificates regular
easyrsa:client kubernetes-worker:certificates tls-certificates regular
easyrsa:juju-info rsyslog-forwarder-ha:juju-info juju-info subordinate
etcd:cluster etcd:cluster etcd peer
etcd:db flannel:etcd etcd regular
etcd:db kubernetes-master:etcd etcd regular
etcd:juju-info rsyslog-forwarder-ha:juju-info juju-info subordinate
kubeapi-load-balancer:loadbalancer kubernetes-master:loadbalancer public-address regular
kubeapi-load-balancer:website kubernetes-worker:kube-api-endpoint http regular
kubernetes-master:cni flannel:cni kubernetes-cni subordinate
kubernetes-master:container-runtime containerd:containerd container-runtime subordinate
kubernetes-master:coordinator kubernetes-master:coordinator coordinator peer
kubernetes-master:juju-info rsyslog-forwarder-ha:juju-info juju-info subordinate
kubernetes-master:juju-info telegraf:juju-info juju-info subordinate
kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver http regular
kubernetes-master:kube-control kubernetes-worker:kube-control kube-control regular
kubernetes-master:kube-masters kubernetes-master:kube-masters kube-masters peer
kubernetes-worker:cni flannel:cni kubernetes-cni subordinate
kubernetes-worker:container-runtime containerd:containerd container-runtime subordinate
kubernetes-worker:coordinator kubernetes-worker:coordinator coordinator peer
kubernetes-worker:juju-info rsyslog-forwarder-ha:juju-info juju-info subordinate
kubernetes-worker:juju-info telegraf:juju-info juju-info subordinate
primary-rsyslog:aggregator rsyslog-forwarder-ha:syslog syslog regular
telegraf:prometheus-client prometheus2:target http regular
Switch back to the prometheus2 model:
juju switch prometheus2
Edit scraper-yaml to monitor the nodes:
The following uses 10.0.3.119 and 10.0.9.3 as examples (10.0.9.3 is the address of vm-156-1; 10.0.3.119 is juju-0de0d7-2-lxd-1, i.e. container 2/lxd/1, focal):
cat scraper-yaml
- job_name: 'juju-0de0d7-2-lxd-1'
  scrape_interval: 30s
  scrape_timeout: 30s
  static_configs:
    - targets: ['10.0.3.119:9090']
- job_name: 'vm-156-1'
  scrape_interval: 30s
  scrape_timeout: 30s
  static_configs:
    - targets: ['10.0.9.3:9090']
Edit mydashboard.json:
cat mydashboard.json
{ "dashboard": { exported-json-dashboard }, "overwrite": true }
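The import-dashboard action used in the next section expects this JSON base64-encoded. A minimal sketch of preparing and sanity-checking the payload, assuming a hypothetical placeholder dashboard body (a real Grafana export replaces the inner object):

```shell
# Hypothetical minimal payload for illustration only
cat > mydashboard.json <<'EOF'
{ "dashboard": { "title": "k8s-telegraf" }, "overwrite": true }
EOF
# Encode for the action parameter (-w0 disables line wrapping, GNU coreutils)
payload="$(base64 -w0 mydashboard.json)"
# Round-trip check: decoding must reproduce the file content
echo "$payload" | base64 -d
```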
Prometheus data configuration
# Configure the prometheus2 application
juju config prometheus2 scrape-jobs="<scraper-yaml>"
juju run-action --wait grafana/0 import-dashboard \
dashboard="$(base64 <dashboard-json>)"
# Get the grafana admin password
juju run-action --wait grafana/0 get-admin-password
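With the files created in the previous section (scraper-yaml and mydashboard.json), the placeholder commands above become concrete. A sketch, assuming both files sit in the current directory and a live controller:

```shell
# Push the scrape jobs into the prometheus2 charm configuration
juju config prometheus2 scrape-jobs="$(cat scraper-yaml)"
# Import the dashboard JSON (base64-encoded) into Grafana
juju run-action --wait grafana/0 import-dashboard \
    dashboard="$(base64 -w0 mydashboard.json)"
# Retrieve the generated admin password for the Grafana web UI
juju run-action --wait grafana/0 get-admin-password
```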
Log in to Prometheus2
Open http://prometheus2-ip:9090 in a web browser.
Under Status, select Targets:
Log in to Grafana
Open http://grafana-ip:3000 in a web browser.
Configure the data source.
View the dashboards.