Installing Prometheus with docker-compose

Prometheus Monitoring

1. Overview

Main components:

  • Prometheus server: collects and stores time-series data
  • exporter: generates monitoring metrics on the client side
  • Alertmanager: handles alerts
  • Grafana: data visualization and dashboards
  • Pushgateway: accepts metrics pushed by short-lived jobs, which the Prometheus server then scrapes
Architecture diagram:


2. Environment Setup

2.1 Prerequisites

| Software | Version |
| --- | --- |
| OS | CentOS Linux release 7.8.2003 |
| docker | 20.10.17 |
| docker-compose | v2.6.0 |
| IP | 192.168.0.80 |

2.2 Edit the Prometheus configuration file

```shell
mkdir /etc/prometheus
vim /etc/prometheus/prometheus.yml
```

/etc/prometheus/prometheus.yml:
```yaml
# Global configuration
global:
  scrape_interval: 15s
  evaluation_interval: 15s
  # scrape_timeout is set to the global default (10s).

# Alerting configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['192.168.1.200:9093']

# Load the rules once and periodically evaluate them according to the
# global 'evaluation_interval'.
rule_files:
  - "/etc/prometheus/rules.yml"

# Controls which resources Prometheus monitors.
# The default configuration has one job named 'prometheus', which scrapes
# the time-series data exposed by the Prometheus server itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any data scraped
  # from this configuration.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
        labels:
          env: dev
          role: docker
```

2.3 Edit the alert rules file

/etc/prometheus/rules.yml:
```yaml
groups:
  - name: example
    rules:
      # Alert for any instance that is unreachable for more than 1 minute.
      - alert: InstanceDown
        expr: up == 0
        for: 1m
        labels:
          severity: page
        annotations:
          summary: "Instance {{ $labels.instance }} down"
          description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 1 minute."
```

2.4 Edit the Alertmanager configuration file

/etc/alertmanager/alertmanager.yml:
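Further rules can be appended to the same `groups` list. As an illustrative sketch only (the metric expression and the 10% threshold are assumptions, not part of the original setup), a rule alerting on low available memory reported by node_exporter might look like:

```yaml
  - name: node-resources          # hypothetical extra rule group
    rules:
      - alert: HighMemoryUsage
        # node_memory_MemAvailable_bytes and node_memory_MemTotal_bytes are
        # standard node_exporter metrics; 0.10 (10% free) is an example threshold.
        expr: node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes < 0.10
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Low available memory on {{ $labels.instance }}"
```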
```yaml
global:
  resolve_timeout: 5m
  smtp_smarthost: 'xxx@xxx:587'
  smtp_from: 'zhaoysz@xxx'
  smtp_auth_username: 'xxx@xxx'
  smtp_auth_password: 'xxxx'
  smtp_require_tls: true

route:
  group_by: ['alertname']
  group_wait: 10s
  group_interval: 10s
  repeat_interval: 1h
  receiver: 'test-mails'

receivers:
  - name: 'test-mails'
    email_configs:
      - to: 'scottcho@qq.com'
```

2.5 Edit the docker-compose file

/docker-compose/prometheus/docker-compose.yml:
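If SMTP is not available, Alertmanager supports other notification channels as well. A minimal sketch of an additional entry for the `receivers:` list above, using a webhook instead of email (the URL is a placeholder assumption, not from this article):

```yaml
  - name: 'test-webhook'          # hypothetical alternative receiver
    webhook_configs:
      - url: 'http://example.com/alert-hook'   # placeholder endpoint
        send_resolved: true
```

To use it, the `receiver:` field in the `route:` section would be pointed at 'test-webhook'.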
```yaml
services:
  prometheus:
    image: prom/prometheus
    volumes:
      - /etc/prometheus/:/etc/prometheus/
      - prometheus_data:/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.path=/prometheus'
      - '--web.console.libraries=/usr/share/prometheus/console_libraries'
      - '--web.console.templates=/usr/share/prometheus/consoles'
      - '--web.external-url=http://192.168.1.200:9090/'
      - '--web.enable-lifecycle'
      - '--storage.tsdb.retention=15d'
    ports:
      - 9090:9090
    links:
      - alertmanager:alertmanager
    restart: always
  alertmanager:
    image: prom/alertmanager
    ports:
      - 9093:9093
    volumes:
      - /etc/alertmanager/:/etc/alertmanager/
      - alertmanager_data:/alertmanager
    command:
      - '--config.file=/etc/alertmanager/alertmanager.yml'
      - '--storage.path=/alertmanager'
    restart: always
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    volumes:
      - /etc/grafana/:/etc/grafana/provisioning/
      - grafana_data:/var/lib/grafana
    environment:
      - GF_INSTALL_PLUGINS=camptocamp-prometheus-alertmanager-datasource
    links:
      - prometheus:prometheus
      - alertmanager:alertmanager
    restart: always
volumes:
  prometheus_data: {}
  grafana_data: {}
  alertmanager_data: {}
```

2.6 Start the compose stack

```shell
docker-compose up -d
```
2.7 Access the endpoints
  • http://localhost:9090 (Prometheus server home page)
  • http://localhost:9090/metrics (the Prometheus server's own metrics)
  • http://192.168.0.80:3000 (Grafana)
3. Adding a Monitored Host Job

3.1 Install node_exporter

node_exporter collects host metrics; it is essentially an HTTP API that exposes them.
On RedHat-family operating systems it can be installed with yum.

yum installation method: https://copr.fedorainfracloud.org/coprs/ibotty/prometheus-exporters/
```shell
curl -Lo /etc/yum.repos.d/_copr_ibotty-prometheus-exporters.repo https://copr.fedorainfracloud.org/coprs/ibotty/prometheus-exporters/repo/epel-7/ibotty-prometheus-exporters-epel-7.repo
yum -y install node_exporter
systemctl start node_exporter
systemctl enable node_exporter.service
```

Binary installation: download the archive from the official site (https://prometheus.io/download/)
```shell
tar -zxvf node_exporter-1.0.0-rc.1.linux-amd64.tar.gz
cd node_exporter-1.0.0-rc.1.linux-amd64
./node_exporter --web.listen-address=:9100
```

Then visit: http://localhost:9100/metrics
3.2 Add the job in Prometheus

```yaml
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
        labels:
          env: dev
          role: docker
```

3.3 Restart Prometheus

Because this is a static configuration, the Prometheus service must be restarted; later this can be replaced with automatic service discovery.
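One way to avoid restarts later is Prometheus's file-based service discovery: instead of listing targets inline, the job points at a target file that Prometheus re-reads on its own. A sketch, where the job name and file path are assumptions for illustration:

```yaml
  - job_name: 'node-file-sd'        # hypothetical job using file_sd
    file_sd_configs:
      - files:
          - /etc/prometheus/targets/nodes.yml   # assumed target file path
        refresh_interval: 1m
```

The target file itself then holds entries of the form `- targets: ['192.168.0.80:9100']` with optional `labels:`, and hosts can be added or removed by editing that file alone.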
```shell
docker compose restart
```
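Since the compose file above starts Prometheus with `--web.enable-lifecycle`, a full restart is not strictly required: the configuration can be reloaded over HTTP. A sketch, using the server address from this article; the actual curl call is left commented out so the snippet is safe to run anywhere:

```shell
# Reload the Prometheus config via the lifecycle endpoint instead of restarting.
PROM_URL="http://192.168.0.80:9090"    # address used elsewhere in this article
echo "POST ${PROM_URL}/-/reload"
# curl -X POST "${PROM_URL}/-/reload"  # uncomment to trigger the actual reload
```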
4. Configure Grafana

Visit http://192.168.0.80:3000/; the initial login account/password is admin/admin.
Create a Prometheus data source:

  1. Click the gear icon in the sidebar to open the Configuration menu.
  2. Click "Data Sources".
  3. Click "Add data source".
  4. Select "Prometheus" as the type.
  5. Set the appropriate Prometheus server URL (for example, http://192.168.0.80:9090/).
  6. Adjust other data source settings as needed (for example, choose the correct access method).
  7. Click "Save & Test" to save the new data source.
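Because the compose file mounts /etc/grafana/ onto Grafana's provisioning directory, the data source can also be declared in a file instead of clicking through the UI. A sketch, assuming the host path /etc/grafana/datasources/prometheus.yml (the filename is arbitrary):

```yaml
# Mounted into /etc/grafana/provisioning/datasources/ by the compose file above.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://192.168.0.80:9090
    isDefault: true
```

Grafana reads this directory at startup, so the data source appears without any manual setup.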
