### System information
Type | Version/Name
---|---
Distribution Name | Ubuntu
Distribution Version | 20.04 (focal)
Kernel Version | 5.13.0-41-generic
Architecture | amd64
OpenZFS Version | zfs-2.1.4-0york0~20.04, zfs-kmod-2.0.6-1ubuntu2.1
### Describe the problem you're observing
`zpool remove` fails with an "out of space" error even though the documented requirements appear to be fulfilled.
### Describe how to reproduce the problem
use my pool…
ZFS provides no way to get more detail: no log output and no helpful error message. For example, ZFS could tell me how many more bytes I need, right?
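For comparison, `zpool remove` does have a dry-run flag, but per zpool-remove(8) it only prints the estimated memory the removal mapping table will use, not the space that is missing:

```
# Dry run (-n) with parsable output (-p): reports only the estimated
# mapping-table memory, nothing about the missing free space.
zpool remove -np zro b7f9667a-343c-4fec-aeb1-2ed5fd9f7319
```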
### Include any warning/errors/backtraces from the system logs
I am trying to migrate a pool from a whole disk to a partition.
- With USB, disks may come back under a new /dev/sd name, so if the vdevs are not partitions, a reboot is required (see the re-import sketch below).
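A workaround sketch, assuming the pool can be briefly exported: re-import it through stable device links so a USB re-enumeration does not invalidate the vdev paths.

```
# Sketch: /dev/disk/by-id links are stable across USB re-enumeration,
# unlike the /dev/sd* names.
zpool export zro
zpool import -d /dev/disk/by-id zro
```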
To remove a top-level mirror or simple vdev (I tried both), the ashift must match, the pool must have no raidz vdevs, and the encryption keys must be loaded.
- The error message when an encryption key is not loaded is still the unhelpful "permission denied".
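For reference, these requirements can be checked roughly like this (a sketch; `zro` is my pool):

```
# ashift of each top-level vdev (also visible under THE FACTS below)
zdb -C zro | grep ashift

# key status per dataset: 'available' means the key is loaded
zfs get -r keystatus zro

# load any missing keys before attempting the removal
zfs load-key -a
```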
THE ERROR
```
$ date --rfc-3339=second && zpool remove zro b7f9667a-343c-4fec-aeb1-2ed5fd9f7319
2022-06-13 00:01:00-07:00
cannot remove b7f9667a-343c-4fec-aeb1-2ed5fd9f7319: out of space
```
THE FACTS
```
$ zdb -C zro | less
    path: '/dev/disk/by-partuuid/b7f9667a-343c-4fec-aeb1-2ed5fd9f7319'
    ashift: 16
    asize: 2000384688128
    path: '/dev/disk/by-partuuid/202d6b64-987c-406d-b64e-81e6357a9721'
    ashift: 16
    asize: 2000393076736
```
NOTE: 202d… is apparently larger than b7f9…. The data used to be stored on b7f9… alone, and now it does not fit onto a larger vdev?
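For reference, the asize difference from the `zdb` output above:

```
# asize(202d…) - asize(b7f9…): the remaining vdev is 8 MiB larger
# than the one being removed.
echo $(( 2000393076736 - 2000384688128 ))   # 8388608
```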
```
$ zpool list -poalloc zro
ALLOC
1828179083264
```
HERE'S THE SIZE OF 202d… AS A POOL:
```
$ zpool list -posize zrw220612
SIZE
1992864825344
```
APPARENT SPACE AVAILABLE:
1992864825344 - 1828179083264 = 164685742080 bytes ≈ 153 GiB
PERCENTAGE SPACE AVAILABLE:
164685742080 / 1992864825344 ≈ 8.3%
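The same numbers can be pulled in one parsable command (a sketch showing both pools side by side; free = size − alloc):

```
# -p prints exact byte counts instead of human-readable sizes
zpool list -po name,size,alloc,free zro zrw220612
```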
#11409 lists some apparently undocumented requirements; #11356 has some more.
Because the second device is actually larger, this is probably some other bug hiding behind the "out of space" error.
I noticed that you lose about 16 GiB by going from a full disk to a partition.
`zdb -C` also lists 3 indirect devices that should be removable using `zfs remap`, but that command is no longer available.
SUGGESTION
Give `zpool remove` a `-f` flag that ignores these highly inaccurate predictions: if it fails, it fails. The removal is already aborted if an I/O error is encountered, so all it costs to try is the wait for a failure.
Make the error message actionable: report how many bytes are needed and which values were determined.
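A hypothetical mock-up of what an actionable message could look like (the wording and the `needed` figure are invented for illustration):

```
cannot remove b7f9667a-343c-4fec-aeb1-2ed5fd9f7319: out of space
  needed:    <N> bytes on the remaining top-level vdevs
  available: 164685742080 bytes (size 1992864825344 - alloc 1828179083264)
```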