Commit 6791c0c

rebase prep

Signed-off-by: Richard Elling <[email protected]>
1 parent: f3b1d31
2 files changed (+16, −20 lines)


cmd/zpool_influxdb/README.md

Lines changed: 16 additions & 16 deletions
```diff
@@ -1,9 +1,9 @@
 # Influxdb Metrics for ZFS Pools
-The _zpool_influxdb_ program produces 
+The _zpool_influxdb_ program produces
 [influxdb](https://github.com/influxdata/influxdb) line protocol
 compatible metrics from zpools. In the UNIX tradition, _zpool_influxdb_
 does one thing: read statistics from a pool and print them to
-stdout. In many ways, this is a metrics-friendly output of 
+stdout. In many ways, this is a metrics-friendly output of
 statistics normally observed via the `zpool` command.

 ## Usage
```
````diff
@@ -26,7 +26,7 @@ If no poolname is specified, then all pools are sampled.
 #### Histogram Bucket Values
 The histogram data collected by ZFS is stored as independent bucket values.
 This works well out-of-the-box with an influxdb data source and grafana's
-heatmap visualization. The influxdb query for a grafana heatmap 
+heatmap visualization. The influxdb query for a grafana heatmap
 visualization looks like:
 ```
 field(disk_read) last() non_negative_derivative(1s)
````
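
The query above reads the `disk_read` field from the latency histogram points. For orientation, a sketch of what one such point might look like in line protocol (the measurement name, tag set, and values here are illustrative assumptions, not verbatim zpool_influxdb output):

```
zpool_latency,le=0.000131072,name=tank,path=/dev/sda1,vdev=disk/0 disk_read=1234u,disk_write=5678u 1605278606000000000
```

Grafana's heatmap groups these points by the `le` tag to form the buckets.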
```diff
@@ -116,11 +116,11 @@ The ZFS I/O (ZIO) scheduler uses five queues to schedule I/Os to each vdev.
 These queues are further divided into active and pending states.
 An I/O is pending prior to being issued to the vdev. An active
 I/O has been issued to the vdev. The scheduler and its tunable
-parameters are described at the 
+parameters are described at the
 [ZFS documentation for ZIO Scheduler]
 (https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/ZIO%20Scheduler.html)
-The ZIO scheduler reports the queue depths as gauges where the value 
-represents an instantaneous snapshot of the queue depth at 
+The ZIO scheduler reports the queue depths as gauges where the value
+represents an instantaneous snapshot of the queue depth at
 the sample time. Therefore, it is not unusual to see all zeroes
 for an idle pool.

```
```diff
@@ -190,7 +190,7 @@ The histogram fields show cumulative values from lowest to highest.
 The largest bucket is tagged "le=+Inf", representing the total count
 of I/Os by type and vdev.

-Note: trim I/Os can be larger than 16MiB, but the larger sizes are 
+Note: trim I/Os can be larger than 16MiB, but the larger sizes are
 accounted in the 16MiB bucket.

 #### zpool_io_size Histogram Tags
```
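
A small worked example of the cumulative convention this hunk describes (bucket boundaries, field name, and counts invented for illustration): if a vdev served 10 writes of 4 KiB or less, 5 more up to 8 KiB, and 27 larger ones, the cumulative fields report:

```
le=4096   write=10    (10 writes of 4 KiB or smaller)
le=8192   write=15    (cumulative: 10 + 5)
le=+Inf   write=42    (total write count for this vdev)
```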
@@ -218,16 +218,16 @@ accounted in the 16MiB bucket.
218218
| trim_write_agg | blocks | aggregated trim (aka unmap) writes |
219219

220220
#### About unsigned integers
221-
Telegraf v1.6.2 and later support unsigned 64-bit integers which more
221+
Telegraf v1.6.2 and later support unsigned 64-bit integers which more
222222
closely matches the uint64_t values used by ZFS. By default, zpool_influxdb
223223
uses ZFS' uint64_t values and influxdb line protocol unsigned integer type.
224224
If you are using old telegraf or influxdb where unsigned integers are not
225225
available, use the `--signed-int` option.
226226

227227
## Using _zpool_influxdb_
228228

229-
The simplest method is to use the execd input agent in telegraf. For older
230-
versions of telegraf which lack execd, the exec input agent can be used.
229+
The simplest method is to use the execd input agent in telegraf. For older
230+
versions of telegraf which lack execd, the exec input agent can be used.
231231
For convenience, one of the sample config files below can be placed in the
232232
telegraf config-directory (often /etc/telegraf/telegraf.d). Telegraf can
233233
be restarted to read the config-directory files.
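
To make the integer-type distinction in this hunk concrete: in influxdb line protocol a `u` suffix marks an unsigned integer field and `i` a signed one. The same point emitted both ways (measurement, field names, and values assumed for illustration):

```
zpool_stats,name=tank reads=123u,writes=456u
zpool_stats,name=tank reads=123i,writes=456i
```

The first line is the default unsigned form; the second is what `--signed-int` produces.

For the execd method, a minimal telegraf stanza might look like the sketch below, assuming the binary is installed at /usr/local/libexec/zfs/zpool_influxdb and supports an execd-compatible mode; the sample config files mentioned above are authoritative for the exact path and flags:

```
[[inputs.execd]]
  ## assumed install path and flag; verify against the shipped sample configs
  command = ["/usr/local/libexec/zfs/zpool_influxdb", "--execd"]
  signal = "STDIN"
  restart_delay = "10s"
  data_format = "influx"
```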
````diff
@@ -269,26 +269,26 @@ be restarted to read the config-directory files.
 ```

 ## Caveat Emptor
-* Like the _zpool_ command, _zpool_influxdb_ takes a reader 
+* Like the _zpool_ command, _zpool_influxdb_ takes a reader
 lock on spa_config for each imported pool. If this lock blocks,
 then the command will also block indefinitely and might be
-unkillable. This is not a normal condition, but can occur if 
-there are bugs in the kernel modules. 
+unkillable. This is not a normal condition, but can occur if
+there are bugs in the kernel modules.
 For this reason, care should be taken:
-* avoid spawning many of these commands hoping that one might 
+* avoid spawning many of these commands hoping that one might
 finish
 * avoid frequent updates or short sample time
 intervals, because the locks can interfere with the performance
 of other instances of _zpool_ or _zpool_influxdb_

 ## Other collectors
 There are a few other collectors for zpool statistics roaming around
-the Internet. Many attempt to screen-scrape `zpool` output in various 
+the Internet. Many attempt to screen-scrape `zpool` output in various
 ways. The screen-scrape method works poorly for `zpool` output because
 of its human-friendly nature. Also, they suffer from the same caveats
 as this implementation. This implementation is optimized for directly
 collecting the metrics and is much more efficient than the screen-scrapers.

 ## Feedback Encouraged
-Pull requests and issues are greatly appreciated at 
+Pull requests and issues are greatly appreciated at
 https://github.com/openzfs/zfs
````

tests/runfiles/common.run

Lines changed: 0 additions & 4 deletions
```diff
@@ -898,7 +898,3 @@ tests = ['log_spacemap_import_logs']
 pre =
 post =
 tags = ['functional', 'log_spacemap']
-
-[tests/functional/zpool_influxdb]
-tests = 'zpool_influxdb'
-tags = ['functional', 'metrics']
```
