# Influxdb Metrics for ZFS Pools
The _zpool_influxdb_ program produces
[influxdb](https://github.com/influxdata/influxdb) line protocol
compatible metrics from zpools. In the UNIX tradition, _zpool_influxdb_
does one thing: read statistics from a pool and print them to
stdout. In many ways, this is a metrics-friendly output of
statistics normally observed via the `zpool` command.
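To give a sense of the output, here is a hedged sketch of a single sample. `tank` is just an example pool name, and the measurement, tag, and field names below are illustrative placeholders with invented values rather than the tool's exact schema:
```
$ zpool_influxdb tank
zpool_stats,name=tank,state=ONLINE alloc=1234567890u,free=9876543210u 1590000000000000000
```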
## Usage
If no poolname is specified, then all pools are sampled.
#### Histogram Bucket Values
The histogram data collected by ZFS is stored as independent bucket values.
This works well out-of-the-box with an influxdb data source and grafana's
heatmap visualization. The influxdb query for a grafana heatmap
visualization looks like:
```
field(disk_read) last() non_negative_derivative(1s)
```
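That query-builder form corresponds roughly to the raw InfluxQL sketched below; the `zpool_io_size` measurement name is borrowed from the tags section later in this document, and the `le` grouping is an assumption for the heatmap's bucket axis, so adjust both for your setup:
```
SELECT non_negative_derivative(last("disk_read"), 1s)
FROM "zpool_io_size"
WHERE $timeFilter
GROUP BY time($__interval), "le"
```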

The ZFS I/O (ZIO) scheduler uses five queues to schedule I/Os to each vdev.
These queues are further divided into active and pending states.
An I/O is pending prior to being issued to the vdev. An active
I/O has been issued to the vdev. The scheduler and its tunable
parameters are described in the
[ZFS documentation for the ZIO Scheduler](https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZIO%20Scheduler.html).
The ZIO scheduler reports the queue depths as gauges where the value
represents an instantaneous snapshot of the queue depth at
the sample time. Therefore, it is not unusual to see all zeroes
for an idle pool.
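Because queue depths are gauges rather than monotonic counters, dashboards should plot the sampled value itself (for example with `last()` or `mean()`) instead of a derivative. A minimal InfluxQL sketch follows; the measurement and field names (`zpool_vdev_queue`, `sync_r_active`) are assumptions for illustration, not the tool's documented schema:
```
SELECT mean("sync_r_active")
FROM "zpool_vdev_queue"
WHERE $timeFilter
GROUP BY time($__interval), "vdev"
```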

The histogram fields show cumulative values from lowest to highest.
The largest bucket is tagged "le=+Inf", representing the total count
of I/Os by type and vdev.

Note: trim I/Os can be larger than 16MiB, but the larger sizes are
accounted in the 16MiB bucket.
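To make the cumulative semantics concrete, here is a hypothetical set of points for one vdev (tag set abbreviated, values invented): each bucket counts all I/Os at or below its size, so values never decrease as `le` grows, and the `le=+Inf` point carries the grand total.
```
zpool_io_size,name=tank,vdev=root,le=4096 disk_read=100u
zpool_io_size,name=tank,vdev=root,le=8192 disk_read=180u
zpool_io_size,name=tank,vdev=root,le=+Inf disk_read=250u
```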
#### zpool_io_size Histogram Tags

| trim_write_agg | blocks | aggregated trim (aka unmap) writes |

#### About unsigned integers
Telegraf v1.6.2 and later support unsigned 64-bit integers, which more
closely match the uint64_t values used by ZFS. By default, zpool_influxdb
uses ZFS' uint64_t values and the influxdb line protocol unsigned integer type.
If you are using an old telegraf or influxdb where unsigned integers are not
available, use the `--signed-int` option.
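For illustration, here is the same hypothetical field rendered both ways (the measurement, tag, and field names are invented for this example). Unsigned integer fields carry a trailing `u` in line protocol, while `--signed-int` switches to the trailing `i` that older telegraf and influxdb releases expect:
```
# default: unsigned integer fields
zpool_stats,name=tank alloc=1234567890u
# with --signed-int: signed integer fields
zpool_stats,name=tank alloc=1234567890i
```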

## Using _zpool_influxdb_

The simplest method is to use the execd input agent in telegraf. For older
versions of telegraf which lack execd, the exec input agent can be used.
For convenience, one of the sample config files below can be placed in the
telegraf config-directory (often /etc/telegraf/telegraf.d). Telegraf can
be restarted to read the config-directory files.
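As a rough sketch of the execd approach (the sample config files mentioned above, shipped with the source, are the authoritative versions), a stanza along these lines goes in the config-directory. The binary path and the execd-mode flag shown here are assumptions to verify against your installation and `zpool_influxdb --help`:
```
[[inputs.execd]]
  ## path to the collector binary; adjust for your distribution (assumed path)
  command = ["/usr/libexec/zfs/zpool_influxdb", "--execd"]
  ## telegraf signals a new sample request on stdin each interval
  signal = "STDIN"
  restart_delay = "10s"
  data_format = "influx"
```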

## Caveat Emptor
* Like the _zpool_ command, _zpool_influxdb_ takes a reader
  lock on spa_config for each imported pool. If this lock blocks,
  then the command will also block indefinitely and might be
  unkillable. This is not a normal condition, but can occur if
  there are bugs in the kernel modules.
  For this reason, care should be taken:
  * avoid spawning many of these commands hoping that one might
    finish
  * avoid frequent updates or short sample time
    intervals, because the locks can interfere with the performance
    of other instances of _zpool_ or _zpool_influxdb_
    (see the interval sketch after this list)
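One simple guard against overly frequent sampling is to set the collection interval explicitly in telegraf; the value below is purely illustrative, not a recommendation from the OpenZFS project:
```
[agent]
  ## global collection interval; per-input overrides are also possible
  interval = "30s"
```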

## Other collectors
There are a few other collectors for zpool statistics roaming around
the Internet. Many attempt to screen-scrape `zpool` output in various
ways. The screen-scrape method works poorly for `zpool` output because
of its human-friendly nature. Also, they suffer from the same caveats
as this implementation. This implementation is optimized for directly
collecting the metrics and is much more efficient than the screen-scrapers.

## Feedback Encouraged
Pull requests and issues are greatly appreciated at
https://github.com/openzfs/zfs