Commit eb70ef7
Add 'zpool status -e' flag to see unhealthy vdevs
When very large pools are present, it can be laborious to find the reason a pool is degraded and/or where an unhealthy vdev is. This option filters out vdevs that are ONLINE and have no errors, making it easier to see where the issues are. The root and parents of unhealthy vdevs are always printed.
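
To make the filtering rule concrete, here is a minimal standalone C sketch of the behavior described above. It is not the actual cmd/zpool code: every struct, field, and helper name below is hypothetical, and the real implementation walks the pool's config nvlist rather than an in-memory tree. A row is printed only when its own subtree contains an unhealthy vdev, and the pool root is printed unconditionally, which is why parents of unhealthy vdevs always appear.

    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    /* Hypothetical in-memory vdev node (names are illustrative only). */
    typedef struct vdev_node {
        const char        *name;
        const char        *state;   /* "ONLINE", "DEGRADED", "FAULTED", ... */
        unsigned long      rerr, werr, cerr;
        struct vdev_node **child;
        int                nchild;
    } vdev_node_t;

    /* Unhealthy: any state other than ONLINE, or any nonzero error counter. */
    static bool vdev_unhealthy(const vdev_node_t *vd)
    {
        return strcmp(vd->state, "ONLINE") != 0 ||
            vd->rerr || vd->werr || vd->cerr;
    }

    /* True if vd or any of its descendants is unhealthy. */
    static bool subtree_unhealthy(const vdev_node_t *vd)
    {
        if (vdev_unhealthy(vd))
            return true;
        for (int i = 0; i < vd->nchild; i++)
            if (subtree_unhealthy(vd->child[i]))
                return true;
        return false;
    }

    /* Skip rows whose whole subtree is healthy; keep the root regardless. */
    static void print_filtered(const vdev_node_t *vd, int depth, bool is_root)
    {
        if (!is_root && !subtree_unhealthy(vd))
            return;
        printf("%*s%-12s %-9s %5lu %5lu %5lu\n", depth, "",
            vd->name, vd->state, vd->rerr, vd->werr, vd->cerr);
        for (int i = 0; i < vd->nchild; i++)
            print_filtered(vd->child[i], depth + 2, false);
    }

    int main(void)
    {
        /* Tree modeled on the 'Vdev faulted' sample below; raidz2-0 and L1
         * stand in for the many healthy vdevs that '-e' hides. */
        vdev_node_t l1   = { "L1",  "ONLINE",  0, 0, 0, NULL, 0 };
        vdev_node_t *hk[] = { &l1 };
        vdev_node_t rz0  = { "raidz2-0", "ONLINE", 0, 0, 0, hk, 1 };
        vdev_node_t l67  = { "L67", "FAULTED", 0, 0, 0, NULL, 0 };
        vdev_node_t *fk[] = { &l67 };
        vdev_node_t rz6  = { "raidz2-6", "DEGRADED", 0, 0, 0, fk, 1 };
        vdev_node_t *tops[] = { &rz0, &rz6 };
        vdev_node_t root = { "iron5", "DEGRADED", 0, 0, 0, tops, 2 };

        print_filtered(&root, 0, true);  /* prints iron5, raidz2-6, L67 only */
        return 0;
    }

Fed this tree, the sketch prints the iron5 root, raidz2-6, and L67 and skips the healthy raidz2-0 group entirely, matching the shape of the filtered listings below.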
Sample vdev listings with the '-e' option:
- Single large pool
[root@iron5:~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
iron5   582T  89.7M   582T        -         -     0%     0%  1.00x    ONLINE  -
[root@iron5:~]# zpool status | wc -l
97
- All vdevs healthy
[root@iron5:zfs2.2]# ./zpool status -e
  pool: iron5
 state: ONLINE
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Thu Jan 11 14:40:45 2024
config:

        NAME          STATE     READ WRITE CKSUM
        iron5         ONLINE       0     0     0

errors: No known data errors
- ZFS errors
[root@iron5:zfs2.2]# ./zpool status -e
  pool: iron5
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 24K in 00:00:01 with 1 errors on Thu Jan 11 14:41:27 2024
config:

        NAME          STATE     READ WRITE CKSUM
        iron5         ONLINE       0     0     0
          raidz2-5    ONLINE       1     0     0
            L23       ONLINE       1     0     0
            L24       ONLINE       1     0     0
            L37       ONLINE       1     0     0

errors: 1 data errors, use '-v' for a list
- Vdev faulted
[root@iron5:zfs2.2]# ./zpool status -e
  pool: iron5
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 0B in 00:00:00 with 0 errors on Thu Jan 11 14:39:42 2024
config:

        NAME          STATE     READ WRITE CKSUM
        iron5         DEGRADED     0     0     0
          raidz2-6    DEGRADED     0     0     0
            L67       FAULTED      0     0     0  too many errors

errors: No known data errors
- Vdev faults and data errors
[root@iron5:zfs2.2]# ./zpool status -e
  pool: iron5
 state: DEGRADED
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 24K in 00:00:01 with 1 errors on Tue Jan 16 14:06:23 2024
config:

        NAME          STATE     READ WRITE CKSUM
        iron5         DEGRADED     0     0     0
          raidz2-1    DEGRADED     0     0     0
            L2        FAULTED      0     0     0  too many errors
          raidz2-5    ONLINE       1     0     0
            L23       ONLINE       1     0     0
            L24       ONLINE       1     0     0
            L37       ONLINE       1     0     0
          raidz2-6    DEGRADED     0     0     0
            L67       FAULTED      0     0     0  too many errors

errors: 1 data errors, use '-v' for a list
- Vdev missing
[root@iron5:zfs2.2]# ./zpool status -e
  pool: iron5
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid. Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
  scan: scrub repaired 0B in 00:00:01 with 0 errors on Thu Jan 11 14:42:30 2024
config:

        NAME          STATE     READ WRITE CKSUM
        iron5         DEGRADED     0     0     0
          raidz2-6    DEGRADED     0     0     0
            L67       UNAVAIL      3     1     0
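
The samples show the two ways a vdev counts as unhealthy: a non-ONLINE state (FAULTED, UNAVAIL) or an ONLINE state with nonzero READ/WRITE/CKSUM counters. As a hedged sketch, that predicate could be written against libzfs's standard per-vdev stats roughly as below; this is an assumption about one way to express the check, not the code this patch actually adds, though the nvlist lookup and vdev_stat_t fields used are the standard libzfs ones.

    #include <libzfs.h>  /* nvlist_t, vdev_stat_t, VDEV_STATE_HEALTHY */

    /*
     * Sketch: decide from a vdev's config nvlist whether it is unhealthy.
     * The real '-e' implementation in cmd/zpool may structure this
     * differently.
     */
    static boolean_t
    vdev_is_unhealthy(nvlist_t *nv)
    {
        vdev_stat_t *vs;
        uint_t c;

        if (nvlist_lookup_uint64_array(nv, ZPOOL_CONFIG_VDEV_STATS,
            (uint64_t **)&vs, &c) != 0)
            return (B_TRUE);  /* no stats at all: show the row */

        /* VDEV_STATE_HEALTHY is what 'zpool status' displays as ONLINE. */
        return (vs->vs_state != VDEV_STATE_HEALTHY ||
            vs->vs_read_errors != 0 || vs->vs_write_errors != 0 ||
            vs->vs_checksum_errors != 0);
    }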
Signed-off-by: Cameron Harr <[email protected]>
1 parent f0bf7a2
File tree: 2 files changed, +262 -189 lines
- cmd/zpool
- man/man8