
Commit 95df010

amotin authored and tonyhutter committed
ZTS: Remove ashift setting from dedup_quota test (#17250)
The test writes 1M of 1KB blocks, which may produce up to 1GB of dirty data. On top of that, ashift=12 likely produces an additional 4GB of ZIO buffers during the sync process. On top of that we likely need some page cache, since the pool resides on files. And finally we need to cache the DDT. It is not surprising that the test regularly ends up in OOMs, possibly depending on TXG size variations.

Also replace fio, with its pretty strange parameter set, with a set of dd writes and TXG commits, which is all we need here.

While here, remove compression. It has nothing to do here and only wastes CI CPU time.

Signed-off-by: Alexander Motin <[email protected]>
Sponsored by: iXsystems, Inc.
Reviewed-by: Paul Dagnelie <[email protected]>
Reviewed-by: Tony Hutter <[email protected]>
(cherry picked from commit 1d8f625)
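The memory estimate in the message can be sanity-checked with quick shell arithmetic. This is only a back-of-the-envelope sketch; it assumes the 4x inflation comes from each 1 KiB logical block being padded to a 4 KiB physical allocation at ashift=12.

```shell
#!/bin/sh
# Hypothetical check of the dirty-data estimate from the commit message.
blocks=$((1024 * 1024))    # ~1M blocks written by the test
block_size=1024            # 1 KiB recordsize
sector=4096                # ashift=12 => 4 KiB physical sectors (assumption)

dirty=$((blocks * block_size))   # worst-case dirty data
padded=$((blocks * sector))      # ZIO buffers padded to sector size
echo "dirty: $((dirty >> 30)) GiB, padded: $((padded >> 30)) GiB"
```

With these assumptions the numbers land exactly on the 1GB of dirty data and 4GB of ZIO buffers cited above.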
1 parent 243a46f commit 95df010

File tree: 1 file changed, +10 −11 lines


tests/zfs-tests/tests/functional/dedup/dedup_quota.ksh

Lines changed: 10 additions & 11 deletions
@@ -76,7 +76,7 @@ function do_setup
 {
 	log_must truncate -s 5G $VDEV_GENERAL
 	# Use 'xattr=sa' to prevent selinux xattrs influencing our accounting
-	log_must zpool create -o ashift=12 -f -O xattr=sa -m $MOUNTDIR $POOL $VDEV_GENERAL
+	log_must zpool create -f -O xattr=sa -m $MOUNTDIR $POOL $VDEV_GENERAL
 	log_must zfs set compression=off dedup=on $POOL
 }

@@ -186,31 +186,30 @@ function ddt_dedup_vdev_limit
 	# add a dedicated dedup/special VDEV and enable an automatic quota
 	if (( RANDOM % 2 == 0 )) ; then
 		class="special"
+		size="200M"
 	else
 		class="dedup"
+		size="100M"
 	fi
-	log_must truncate -s 200M $VDEV_DEDUP
+	log_must truncate -s $size $VDEV_DEDUP
 	log_must zpool add $POOL $class $VDEV_DEDUP
 	log_must zpool set dedup_table_quota=auto $POOL

 	log_must zfs set recordsize=1K $POOL
-	log_must zfs set compression=zstd $POOL

 	# Generate a working set to fill up the dedup/special allocation class
-	log_must fio --directory=$MOUNTDIR --name=dedup-filler-1 \
-		--rw=read --bs=1m --numjobs=2 --iodepth=8 \
-		--size=512M --end_fsync=1 --ioengine=posixaio --runtime=1 \
-		--group_reporting --fallocate=none --output-format=terse \
-		--dedupe_percentage=0
-	log_must sync_pool $POOL
+	for i in {0..63}; do
+		log_must dd if=/dev/urandom of=$MOUNTDIR/file${i} bs=1M count=16
+		log_must sync_pool $POOL
+	done

 	zpool status -D $POOL
 	zpool list -v $POOL
 	echo DDT size $(dedup_table_size), with $(ddt_entries) entries

 	#
-	# With no DDT quota in place, the above workload will produce over
-	# 800,000 entries by using space in the normal class. With a quota, it
+	# With no DDT quota in place, the above workload will produce up to
+	# 1M of entries by using space in the normal class. With a quota, it
 	# should be well under 500,000. However, logged entries are hard to
 	# account for because they can appear on both logs, and can also
 	# represent an eventual removal. This isn't easily visible from
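The replacement write pattern above can be exercised outside ZTS as the sketch below: fixed-size dd writes of unique (urandom) data, each followed by a sync so every TXG carries a bounded amount of dirty data. This is a standalone illustration against a scratch directory, not the test itself; `log_must` and `sync_pool` are ZTS library helpers, so plain `sync` stands in for `sync_pool`, and the loop and sizes are shrunk from the test's `{0..63}` x 16M.

```shell
#!/bin/sh
# Sketch of the bounded-write pattern from the new test body, run against
# a temporary directory instead of a ZFS mount.
workdir=$(mktemp -d)
for i in $(seq 0 3); do                       # the real test loops 0..63
	dd if=/dev/urandom of="$workdir/file${i}" bs=1M count=2 2>/dev/null
	sync                                  # in ZTS: log_must sync_pool $POOL
done
ls "$workdir" | wc -l                         # number of files written
rm -rf "$workdir"
```

Because each iteration is capped at a fixed write size and ends with a commit, dirty data per TXG stays bounded, which is the point of dropping the fio invocation.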
