Prometheus uses exporters to expose metrics from hosts and applications. Today we will use node_exporter to collect host-level metrics such as CPU, memory, and disk usage.
Install node_exporter
Download the installation package from the official Prometheus website; here we use the Linux package.
Download address: https://prometheus.io/download/
Installation package: node_exporter-0.18.1.linux-amd64.tar.gz
$ tar zxvf node_exporter-0.18.1.linux-amd64.tar.gz
$ cd node_exporter-0.18.1.linux-amd64/
$ ./node_exporter --version
node_exporter, version 0.18.1 (branch: HEAD, revision: 3db77732e925c08f675d7404a8c46466b2ece83e)
  build user:       root@b50852a1acba
  build date:       20190604-16:41:18
  go version:       go1.12.5
Run node_exporter
You can start the service by running the node_exporter binary directly. On startup it prints which collectors are currently enabled, as follows:
$ ./node_exporter
INFO[0000] Starting node_exporter (version=0.18.1, branch=HEAD, revision=3db77732e925c08f675d7404a8c46466b2ece83e) source="node_exporter.go:156"
INFO[0000] Build context (go=go1.12.5, user=root@b50852a1acba, date=20190604-16:41:18) source="node_exporter.go:157"
INFO[0000] Enabled collectors: source="node_exporter.go:97"
INFO[0000] - arp source="node_exporter.go:104"
INFO[0000] - bcache source="node_exporter.go:104"
INFO[0000] - bonding source="node_exporter.go:104"
INFO[0000] - conntrack source="node_exporter.go:104"
INFO[0000] - cpu source="node_exporter.go:104"
INFO[0000] - cpufreq source="node_exporter.go:104"
INFO[0000] - diskstats source="node_exporter.go:104"
INFO[0000] - edac source="node_exporter.go:104"
INFO[0000] - entropy source="node_exporter.go:104"
INFO[0000] - filefd source="node_exporter.go:104"
INFO[0000] - filesystem source="node_exporter.go:104"
INFO[0000] - hwmon source="node_exporter.go:104"
INFO[0000] - infiniband source="node_exporter.go:104"
INFO[0000] - ipvs source="node_exporter.go:104"
INFO[0000] - loadavg source="node_exporter.go:104"
INFO[0000] - mdadm source="node_exporter.go:104"
INFO[0000] - meminfo source="node_exporter.go:104"
INFO[0000] - netclass source="node_exporter.go:104"
INFO[0000] - netdev source="node_exporter.go:104"
INFO[0000] - netstat source="node_exporter.go:104"
INFO[0000] - nfs source="node_exporter.go:104"
INFO[0000] - nfsd source="node_exporter.go:104"
INFO[0000] - pressure source="node_exporter.go:104"
INFO[0000] - sockstat source="node_exporter.go:104"
INFO[0000] - stat source="node_exporter.go:104"
INFO[0000] - textfile source="node_exporter.go:104"
INFO[0000] - time source="node_exporter.go:104"
INFO[0000] - timex source="node_exporter.go:104"
INFO[0000] - uname source="node_exporter.go:104"
INFO[0000] - vmstat source="node_exporter.go:104"
INFO[0000] - xfs source="node_exporter.go:104"
INFO[0000] - zfs source="node_exporter.go:104"
INFO[0000] Listening on :9100 source="node_exporter.go:170"
After the service is started, you can visit http://localhost:9100/metrics in a browser to view the collected metrics.
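On a headless server you can do the same check from the command line; a quick sketch, assuming the exporter is listening on the default port 9100 of the local machine:

$ curl -s http://localhost:9100/metrics | grep '^node_cpu_seconds_total'

This prints the raw CPU time counters exposed by the cpu collector; any other collector's metrics can be grepped the same way.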
If we don't want to collect certain metrics, we can pass --no-collector.<name> flags when starting the service. For example, ./node_exporter --no-collector.zfs disables the zfs collector.
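The flag can be repeated to disable several collectors at once; for example (a sketch based on the collector list printed above, not a command from the original text):

$ ./node_exporter --no-collector.zfs --no-collector.xfs --no-collector.nfs --no-collector.nfsd

Conversely, --collector.<name> enables a collector that is off by default, and ./node_exporter --help lists all available collector flags.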
Configure Prometheus
After the node_exporter service is started, it must be added to the Prometheus configuration before Prometheus will scrape it. Modify the prometheus.yml file and add a new job under scrape_configs:
scrape_configs:
  ...
  - job_name: 'node'
    static_configs:
    - targets: ['localhost:9100']
After modification, the contents of the complete prometheus.yml file are as follows:
global:
  scrape_interval: 15s
  evaluation_interval: 15s

alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
    - targets: ['localhost:9100']
By default, node_exporter exposes a large number of metrics. We can also restrict a job to the collectors we actually need by adding params to its scrape config, for example:
...
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
    - targets: ['localhost:9090']

  - job_name: 'node'
    static_configs:
    - targets: ['localhost:9100']
    params:
      collect[]:
        - cpu
        - meminfo
        - loadavg
        - netstat
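Before restarting Prometheus, you can test the same filter directly against node_exporter, which accepts collect[] parameters on its /metrics endpoint (assuming the exporter is still running on localhost:9100):

$ curl -s 'http://localhost:9100/metrics?collect[]=cpu&collect[]=meminfo'

Only the metrics produced by the requested collectors (plus the exporter's own process metrics) should come back.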
Start Prometheus
After modifying the configuration file, restart the Prometheus service. Once it is running again, you can view the monitoring data by visiting http://localhost:9090 in a browser.
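It is worth validating the edited file before restarting; a minimal sketch, assuming promtool (shipped alongside the Prometheus binary) is in the current directory and Prometheus runs as a process named prometheus:

$ ./promtool check config prometheus.yml
$ kill -HUP $(pidof prometheus)

promtool check config reports syntax errors in prometheus.yml, and sending SIGHUP to a running Prometheus process makes it reload its configuration without a full restart.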
At this point, we can filter the display down to the newly added target by entering {instance="localhost:9100", job="node"} in the expression browser.
For example, entering node_cpu_seconds_total{instance="localhost:9100", job="node"} shows the node's CPU metrics.
Element                                                                              Value
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="idle"}     3653653.37
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="iowait"}   5653.09
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="irq"}      0
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="nice"}     5.95
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="softirq"}  155.15
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="steal"}    0
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="system"}   14571.01
node_cpu_seconds_total{cpu="0",instance="localhost:9100",job="node",mode="user"}     16084.06
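node_cpu_seconds_total is a counter, so in practice it is usually wrapped in rate() rather than read raw. For example, one common way to approximate overall CPU utilisation per instance (a sketch, not an expression from the original text) is:

100 - (avg by (instance) (rate(node_cpu_seconds_total{job="node", mode="idle"}[5m])) * 100)

This takes the per-second growth of the idle counter over the last five minutes, averages it across CPU cores, and subtracts it from 100 to get the percentage of time the host was busy.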