### System information
Type | Version/Name
--- | ---
Distribution Name | Debian (Proxmox)
Distribution Version | 10
Linux Kernel | 5.3.18-3-pve
Architecture | x64
ZFS Version | 0.8.3
SPL Version | 0.8.3
### Describe the problem you're observing
We have an issue where the slab on all of our servers grows until it fills the entire RAM. The servers seem to keep working fine even when our hypervisor reports over 95% RAM usage, but it means there is no way for us to accurately monitor available RAM and prevent the OOM killer from kicking in and killing processes. The only workaround is to reboot the server so that RAM usage drops back down, but over the next few weeks it gradually climbs again until we hit the same issue.
Unfortunately I don't have an exact way of reproducing the issue yet. I will try to spin up a test server in the next couple of days, but I was hoping these logs could help shed some light on it in the meantime.
Since this is a hypervisor, there is a lot of reading and writing of small files (the LXC containers' filesystems).
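To watch the growth between reboots, here is a minimal monitoring sketch (assuming ZFS on Linux, which exposes the ARC counters under /proc/spl/kstat/zfs/arcstats):

```sh
#!/bin/sh
# Log the slab counters and the ARC size every 5 minutes so the growth of
# Slab/SUnreclaim can be correlated with ARC activity between reboots.
while true; do
    date '+%F %T'
    awk '/^(MemAvailable|Slab|SReclaimable|SUnreclaim|Percpu):/' /proc/meminfo
    awk '$1 == "size" { printf "ARC size: %.1f GiB\n", $3 / 1073741824 }' \
        /proc/spl/kstat/zfs/arcstats
    sleep 300
done
```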
### Include any warning/errors/backtraces from the system logs
The server has 72 GB of RAM, with only the following parameters set:
cat /etc/modprobe.d/zfs.conf
options zfs zfs_arc_min=9119213814 # 12% of total server memory
options zfs zfs_arc_max=22798034534 # 30% of total server memory
options zfs zfs_deadman_failmode=continue
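To rule out a mismatch between the config file and what the module actually loaded, the live values can be read back from sysfs (standard paths for ZFS on Linux 0.8):

```sh
cat /sys/module/zfs/parameters/zfs_arc_min   # should print 9119213814
cat /sys/module/zfs/parameters/zfs_arc_max   # should print 22798034534
```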
Current RAM usage on the hypervisor is 60 GB out of 72 GB. I've added up the RAM used by all containers as reported by the hypervisor, and it comes to around 15 GB. The ARC is currently at 7 GB according to arc_summary. That accounts for roughly 22 GB, leaving about 38 GB of RAM unaccounted for, which is roughly half of the server's total memory.
cat /proc/meminfo
MemTotal: 70.7707 GB
MemFree: 9.44844 GB
MemAvailable: 10.9473 GB
Buffers: 0 GB
Cached: 1.63555 GB
SwapCached: 0 GB
Active: 15.7017 GB
Inactive: 0.616753 GB
Active(anon): 14.8272 GB
Inactive(anon): 0.251537 GB
Active(file): 0.874466 GB
Inactive(file): 0.365215 GB
Unevictable: 0.153202 GB
Mlocked: 0.153202 GB
SwapTotal: 0 GB
SwapFree: 0 GB
Dirty: 0.00235748 GB
Writeback: 4.19617e-05 GB
AnonPages: 14.8362 GB
Mapped: 0.867214 GB
Shmem: 0.468136 GB
KReclaimable: 1.10764 GB
Slab: 18.0766 GB
SReclaimable: 1.10764 GB
SUnreclaim: 16.9689 GB
KernelStack: 0.0267487 GB
PageTables: 0.148586 GB
NFS_Unstable: 0 GB
Bounce: 0 GB
WritebackTmp: 0 GB
CommitLimit: 35.3854 GB
Committed_AS: 28.013 GB
VmallocTotal: 32768 GB
VmallocUsed: 1.36515 GB
VmallocChunk: 0 GB
Percpu: 19.3424 GB
HardwareCorrupted: 0 GB
AnonHugePages: 0.1875 GB
ShmemHugePages: 0 GB
ShmemPmdMapped: 0 GB
CmaTotal: 0 GB
CmaFree: 0 GB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 0.00195312 GB
Hugetlb: 0 GB
DirectMap4k: 52.8087 GB
DirectMap2M: 19.1816 GB
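For what it's worth, the gap can be estimated straight from /proc/meminfo. A rough accounting sketch (the kernel counters overlap a little, so treat the result as approximate):

```sh
# Subtract the usual suspects from MemTotal; whatever is left is memory the
# standard counters do not explain. All /proc/meminfo values are in kB.
awk '
    /^MemTotal:/ { total = $2 }
    /^(MemFree|Buffers|Cached|AnonPages|Slab|Percpu|PageTables):/ { known += $2 }
    END { printf "unaccounted: %.1f GiB\n", (total - known) / 1048576 }
' /proc/meminfo
```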
Could it be that something is not releasing memory correctly, given the size of SUnreclaim? I tried the following commands, which freed some memory, but a big chunk remained:
echo 2 > /proc/sys/vm/drop_caches
echo 3 > /proc/sys/vm/drop_caches
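As far as I understand, drop_caches only targets reclaimable memory (page cache plus the SReclaimable slab), so the unreclaimable part surviving it is expected. To rank caches by footprint straight from /proc/slabinfo (root required; the top entries should line up with the slabtop output below):

```sh
# Total bytes per cache = num_objs ($3) * objsize ($4); skip the two header lines.
awk 'NR > 2 { printf "%-28s %10.1f MiB\n", $1, $3 * $4 / 1048576 }' /proc/slabinfo \
    | sort -k2 -rn | head -15
```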
Here is the slabtop output:
slabtop -o -s c
Active / Total Objects (% used) : 26729095 / 31011545 (86.2%)
Active / Total Slabs (% used) : 970158 / 970158 (100.0%)
Active / Total Caches (% used) : 160 / 219 (73.1%)
Active / Total Size (% used) : 18084118.78K / 18892850.91K (95.7%)
Minimum / Average / Maximum Object : 0.01K / 0.61K / 16.81K
OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME
3325664 3325161 99% 4.00K 415708 8 13302656K kmalloc-4k
1329580 1081626 81% 0.57K 47485 28 759760K radix_tree_node
38890 34173 87% 16.00K 19445 2 622240K zio_buf_16384
573230 474478 82% 0.89K 16378 35 524096K dnode_t
3594990 3522065 97% 0.13K 119833 30 479332K kernfs_node_cache
42596 34498 80% 8.00K 10649 4 340768K kmalloc-8k
636000 534857 84% 0.50K 19875 32 318000K kmalloc-512
3314490 3314490 100% 0.08K 64990 51 259960K Acpi-State
791742 386509 48% 0.32K 16158 49 258528K arc_buf_hdr_t_full
3086208 2191404 71% 0.06K 48222 64 192888K kmalloc-64
597324 488188 81% 0.30K 11487 52 183792K dmu_buf_impl_t
157800 141969 89% 1.05K 5260 30 168320K zfs_znode_cache
4957056 4675761 94% 0.03K 38727 128 154908K kmalloc-32
599256 525559 87% 0.19K 14268 42 114144K dentry
162288 156776 96% 0.65K 3312 49 105984K proc_inode_cache
462345 453514 98% 0.20K 11855 39 94840K vm_area_struct
154825 146127 94% 0.58K 2815 55 90080K inode_cache
10255 9218 89% 5.44K 2051 5 65632K task_struct
76791 75097 97% 0.81K 1969 39 63008K sock_inode_cache
213312 204110 95% 0.25K 6666 32 53328K filp
44224 41747 94% 1.00K 1382 32 44224K kmalloc-1k
22096 21050 95% 2.00K 1381 16 44192K kmalloc-2k
162723 146913 90% 0.24K 4931 33 39448K sa_cache
619520 607070 97% 0.06K 9680 64 38720K anon_vma_chain
381654 81746 21% 0.09K 9087 42 36348K kmalloc-96
16395 16040 97% 2.06K 1093 15 34976K sighand_cache
32670 31900 97% 1.06K 1089 30 34848K signal_cache
396566 389535 98% 0.09K 8621 46 34484K anon_vma
32448 32448 100% 1.00K 1014 32 32448K UNIX
29640 29351 99% 1.06K 988 30 31616K mm_struct
151746 149201 98% 0.19K 3613 42 28904K cred_jar
40710 40220 98% 0.69K 885 46 28320K files_cache
385056 269281 69% 0.07K 6876 56 27504K range_seg_cache
692580 257817 37% 0.04K 6790 102 27160K abd_t
33902 31861 93% 0.69K 737 46 23584K shmem_inode_cache
1449984 1449760 99% 0.02K 5664 256 22656K kmalloc-16
7406 7160 96% 2.19K 529 14 16928K TCP
54656 53879 98% 0.25K 1708 32 13664K skbuff_head_cache
65352 55747 85% 0.19K 1556 42 12448K kmalloc-192
9100 9100 100% 1.25K 364 25 11648K UDPv6
44960 36951 82% 0.25K 1405 32 11240K kmalloc-256
4407 4319 98% 2.31K 339 13 10848K TCPv6
1259008 325117 25% 0.01K 2459 512 9836K kmalloc-8
129664 125874 97% 0.06K 2026 64 8104K kmalloc-rcl-64
58976 58524 99% 0.12K 1843 32 7372K pid
70644 70021 99% 0.09K 1682 42 6728K kmalloc-rcl-96
6720 5031 74% 1.00K 210 32 6720K zio_buf_1024
12224 4230 34% 0.50K 382 32 6112K zio_data_buf_512
71553 39497 55% 0.08K 1403 51 5612K arc_buf_t
11136 7648 68% 0.50K 348 32 5568K zio_buf_512
68493 67655 98% 0.08K 1343 51 5372K task_delay_info
42208 41952 99% 0.12K 1319 32 5276K kmalloc-rcl-128
61544 61264 99% 0.07K 1099 56 4396K eventpoll_pwq
4288 2771 64% 1.00K 134 32 4288K zio_data_buf_1024
3296 3296 100% 1.00K 103 32 3296K RAW
7200 5696 79% 0.44K 200 36 3200K kmem_cache
23968 18981 79% 0.12K 749 32 2996K kmalloc-128
3204 3060 95% 0.88K 89 36 2848K nfs_read_data
3393 3106 91% 0.81K 87 39 2784K fuse_inode
21856 21856 100% 0.12K 683 32 2732K scsi_sense_cache
11458 11288 98% 0.23K 337 34 2696K tw_sock_TCP
68034 68034 100% 0.04K 667 102 2668K pde_opener
4399 4399 100% 0.60K 83 53 2656K hugetlbfs_inode_cache
3526 3440 97% 0.74K 82 43 2624K cifs_inode_cache
15600 15168 97% 0.16K 325 48 2600K sio_cache_2
27140 20863 76% 0.09K 590 46 2360K vmap_area
14999 14575 97% 0.15K 283 53 2264K sio_cache_1
1104 1040 94% 2.00K 69 16 2208K zio_data_buf_2048
1664 1380 82% 1.21K 64 26 2048K zio_cache
6324 6135 97% 0.31K 124 51 1984K mnt_cache
19782 18283 92% 0.09K 471 42 1884K arc_buf_hdr_t_l2only
1482 1482 100% 1.19K 57 26 1824K RAWv6
1197 1197 100% 1.50K 57 21 1824K zio_buf_1536
110 110 100% 12.00K 55 2 1760K zio_data_buf_12288
1134 1029 90% 1.50K 54 21 1728K zio_data_buf_1536
486 405 83% 3.50K 54 9 1728K zio_data_buf_3584
636 540 84% 2.50K 53 12 1696K zio_data_buf_2560
510 510 100% 3.00K 51 10 1632K zio_data_buf_3072
204 176 86% 8.00K 51 4 1632K zio_data_buf_7168
588 588 100% 2.50K 49 12 1568K zio_buf_2560
98 95 96% 12.00K 49 2 1568K zio_data_buf_10240
98 84 85% 16.00K 49 2 1568K zio_data_buf_14336
49 47 95% 16.25K 49 1 1568K cifs_request
384 363 94% 4.00K 48 8 1536K zio_buf_4096
4128 4042 97% 0.37K 96 43 1536K zil_lwb_cache
752 704 93% 2.00K 47 16 1504K zio_buf_2048
736 642 87% 2.00K 46 16 1472K biovec-128
414 351 84% 3.50K 46 9 1472K zio_buf_3584
450 370 82% 3.00K 45 10 1440K zio_buf_3072
344 312 90% 4.00K 43 8 1376K names_cache
172 148 86% 8.00K 43 4 1376K zio_buf_7168
86 70 81% 16.00K 43 2 1376K zio_buf_14336
84 76 90% 16.00K 42 2 1344K zio_data_buf_16384
164 140 85% 8.00K 41 4 1312K zio_buf_5120
160 128 80% 8.00K 40 4 1280K zio_data_buf_8192
156 136 87% 8.00K 39 4 1248K zio_data_buf_5120
156 144 92% 8.00K 39 4 1248K zio_data_buf_6144
78 64 82% 12.00K 39 2 1248K zio_buf_10240
152 152 100% 8.00K 38 4 1216K zio_buf_8192
76 62 81% 12.00K 38 2 1216K zio_buf_12288
296 296 100% 4.00K 37 8 1184K biovec-max
296 247 83% 4.00K 37 8 1184K zio_data_buf_4096
148 144 97% 8.00K 37 4 1184K zio_buf_6144
992 992 100% 1.00K 31 32 992K biovec-64
39780 32672 82% 0.02K 234 170 936K lsm_file_cache
4536 4326 95% 0.19K 108 42 864K kmalloc-rcl-192
15549 15477 99% 0.05K 213 73 852K nsproxy
6240 6000 96% 0.13K 208 30 832K sio_cache_0
1275 1275 100% 0.62K 25 51 800K task_group
12480 8408 67% 0.06K 195 64 780K kmem_cache_node
1472 1344 91% 0.50K 46 32 736K skbuff_fclone_cache
690 690 100% 1.05K 23 30 736K nfs_inode_cache
3738 3738 100% 0.19K 89 42 712K proc_dir_entry
17646 17036 96% 0.04K 173 102 692K Acpi-Namespace
5216 3934 75% 0.12K 163 32 652K skbuff_ext_cache
2528 2528 100% 0.25K 79 32 632K pool_workqueue
2464 2368 96% 0.25K 77 32 616K kmalloc-rcl-256
12325 11645 94% 0.05K 145 85 580K zio_link_cache
2856 2856 100% 0.19K 68 42 544K dmaengine-unmap-16
624 624 100% 0.81K 16 39 512K bdev_cache
544 544 100% 0.94K 16 34 512K PING
416 416 100% 1.19K 16 26 512K PINGv6
32 32 100% 16.00K 16 2 512K lz4_cache
1972 1972 100% 0.23K 58 34 464K tw_sock_TCPv6
294 294 100% 1.50K 14 21 448K SCTPv6
78 78 100% 4.75K 13 6 416K net_namespace
442 442 100% 0.94K 13 34 416K mqueue_inode_cache
552 552 100% 0.69K 12 46 384K xfrm_state
6120 6120 100% 0.05K 72 85 288K ftrace_event_field
954 954 100% 0.30K 18 53 288K request_sock_TCP
612 612 100% 0.44K 17 36 272K cifs_small_rq
752 752 100% 0.34K 16 47 256K taskstats
656 656 100% 0.38K 16 41 256K fuse_request
848 848 100% 0.30K 16 53 256K request_sock_TCPv6
408 408 100% 0.62K 8 51 256K rpc_inode_cache
2418 2418 100% 0.10K 62 39 248K Acpi-ParseExt
1147 1147 100% 0.21K 31 37 248K file_lock_cache
481 481 100% 0.43K 13 37 208K uts_namespace
90 90 100% 2.04K 6 15 192K request_queue
4352 4352 100% 0.03K 34 128 136K fsnotify_mark_connector
1848 1848 100% 0.07K 33 56 132K Acpi-Operand
544 544 100% 0.23K 16 34 128K posix_timers_cache
848 848 100% 0.15K 16 53 128K zil_zcw_cache
507 507 100% 0.20K 13 39 104K pid_namespace
512 512 100% 0.12K 16 32 64K spl_vn_cache
512 512 100% 0.12K 16 32 64K spl_vn_file_cache
736 736 100% 0.09K 16 46 64K zfs_znode_hold_cache
14 14 100% 4.06K 2 7 64K x86_fpu
46 46 100% 1.38K 2 23 64K SCTP
30 30 100% 1.06K 1 30 32K dmaengine-unmap-128
15 15 100% 2.06K 1 15 32K dmaengine-unmap-256
42 42 100% 0.75K 1 42 32K dax_cache
1 1 100% 16.81K 1 1 32K kvm_vcpu
195 195 100% 0.10K 5 39 20K buffer_head
62 62 100% 0.26K 2 31 16K numa_policy
30 30 100% 0.52K 1 30 16K user_namespace
84 84 100% 0.09K 2 42 8K configfs_dir_cache
40 40 100% 0.20K 1 40 8K ip4-frags
64 64 100% 0.06K 1 64 4K dmaengine-unmap-2
170 170 100% 0.02K 1 170 4K mod_hash_entries
0 0 0% 0.01K 0 512 0K kmalloc-rcl-8
0 0 0% 0.02K 0 256 0K kmalloc-rcl-16
0 0 0% 0.03K 0 128 0K kmalloc-rcl-32
0 0 0% 0.50K 0 32 0K kmalloc-rcl-512
0 0 0% 1.00K 0 32 0K kmalloc-rcl-1k
0 0 0% 2.00K 0 16 0K kmalloc-rcl-2k
0 0 0% 4.00K 0 8 0K kmalloc-rcl-4k
0 0 0% 8.00K 0 4 0K kmalloc-rcl-8k
0 0 0% 0.09K 0 42 0K dma-kmalloc-96
0 0 0% 0.19K 0 42 0K dma-kmalloc-192
0 0 0% 0.01K 0 512 0K dma-kmalloc-8
0 0 0% 0.02K 0 256 0K dma-kmalloc-16
0 0 0% 0.03K 0 128 0K dma-kmalloc-32
0 0 0% 0.06K 0 64 0K dma-kmalloc-64
0 0 0% 0.12K 0 32 0K dma-kmalloc-128
0 0 0% 0.25K 0 32 0K dma-kmalloc-256
0 0 0% 0.50K 0 32 0K dma-kmalloc-512
0 0 0% 1.00K 0 32 0K dma-kmalloc-1k
0 0 0% 2.00K 0 16 0K dma-kmalloc-2k
0 0 0% 4.00K 0 8 0K dma-kmalloc-4k
0 0 0% 8.00K 0 4 0K dma-kmalloc-8k
0 0 0% 0.12K 0 34 0K iint_cache
0 0 0% 0.25K 0 32 0K dquot
0 0 0% 0.03K 0 128 0K dnotify_struct
0 0 0% 0.19K 0 42 0K userfaultfd_ctx_cache
0 0 0% 0.05K 0 85 0K fscrypt_ctx
0 0 0% 0.06K 0 64 0K fscrypt_info
0 0 0% 0.05K 0 73 0K mbcache
0 0 0% 0.04K 0 102 0K ext4_extent_status
0 0 0% 0.03K 0 128 0K ext4_pending_reservation
0 0 0% 0.12K 0 32 0K ext4_allocation_context
0 0 0% 1.05K 0 30 0K ext4_inode_cache
0 0 0% 0.02K 0 256 0K jbd2_revoke_table_s
0 0 0% 0.12K 0 34 0K jbd2_journal_head
0 0 0% 0.69K 0 46 0K squashfs_inode_cache
0 0 0% 0.04K 0 102 0K fat_cache
0 0 0% 0.71K 0 45 0K fat_inode_cache
0 0 0% 0.81K 0 39 0K ecryptfs_auth_tok_list_item
0 0 0% 0.02K 0 256 0K ecryptfs_file_cache
0 0 0% 0.94K 0 34 0K ecryptfs_inode_cache
0 0 0% 0.56K 0 28 0K ecryptfs_key_record_cache
0 0 0% 2.57K 0 12 0K dm_uevent
0 0 0% 3.23K 0 9 0K kcopyd_job
0 0 0% 0.18K 0 44 0K ip6-frags
0 0 0% 1.12K 0 28 0K btrfs_inode
0 0 0% 0.11K 0 36 0K btrfs_path
0 0 0% 12.00K 0 2 0K btrfs_free_space_bitmap
0 0 0% 0.14K 0 28 0K btrfs_extent_map
0 0 0% 0.41K 0 39 0K btrfs_ordered_extent
0 0 0% 0.30K 0 52 0K btrfs_delayed_node
0 0 0% 0.19K 0 42 0K kcf_sreq_cache
0 0 0% 0.50K 0 32 0K kcf_areq_cache
0 0 0% 0.19K 0 42 0K kcf_context_cache
0 0 0% 0.44K 0 36 0K ddt_entry_cache
0 0 0% 0.38K 0 41 0K arc_buf_hdr_t_full_crypt
0 0 0% 0.31K 0 51 0K nf_conntrack
0 0 0% 0.17K 0 46 0K kvm_mmu_page_header
0 0 0% 0.13K 0 30 0K kvm_async_pf
0 0 0% 0.35K 0 45 0K nfs_direct_cache
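The SPL also keeps its own slab accounting, which may help separate the ZFS-internal caches from the generic kernel ones (this proc file should be present with the SPL shipped in 0.8):

```sh
cat /proc/spl/kmem/slab
```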
Here are my ARC stats:
------------------------------------------------------------------------
ZFS Subsystem Report Thu May 07 15:35:32 2020
Linux 5.3.13-1-pve 0.8.2-pve2
Machine: prd-proxmox-5 (x86_64) 0.8.2-pve2
ARC status: HEALTHY
Memory throttle count: 0
ARC size (current): 34.2 % 7.3 GiB
Target size (adaptive): 40.0 % 8.5 GiB
Min size (hard limit): 40.0 % 8.5 GiB
Max size (high water): 2:1 21.2 GiB
Most Frequently Used (MFU) cache size: 76.6 % 4.9 GiB
Most Recently Used (MRU) cache size: 23.4 % 1.5 GiB
Metadata cache size (hard limit): 75.0 % 15.9 GiB
Metadata cache size (current): 12.3 % 2.0 GiB
Dnode cache size (hard limit): 10.0 % 1.6 GiB
Dnode cache size (current): 25.3 % 413.1 MiB
ARC hash breakdown:
Elements max: 1.0M
Elements current: 38.1 % 400.3k
Collisions: 64.1M
Chain max: 5
Chains: 4.8k
ARC misc:
Deleted: 9.0M
Mutex misses: 2.7k
Eviction skips: 1.4M
ARC total accesses (hits + misses): 55.6G
Cache hit ratio: 99.8 % 55.5G
Cache miss ratio: 0.2 % 112.2M
Actual hit ratio (MFU + MRU hits): 99.2 % 55.2G
Data demand efficiency: 100.0 % 45.2G
Data prefetch efficiency: 79.5 % 305.6M
Cache hits by cache type:
Most frequently used (MFU): 96.1 % 53.4G
Most recently used (MRU): 3.3 % 1.8G
Most frequently used (MFU) ghost: < 0.1 % 2.8M
Most recently used (MRU) ghost: 0.1 % 35.3M
Anonymously used: 0.5 % 297.4M
Cache hits by data type:
Demand data: 81.4 % 45.2G
Demand prefetch data: 0.4 % 243.1M
Demand metadata: 17.9 % 9.9G
Demand prefetch metadata: 0.3 % 141.3M
Cache misses by data type:
Demand data: 15.4 % 17.3M
Demand prefetch data: 55.7 % 62.5M
Demand metadata: 7.4 % 8.3M
Demand prefetch metadata: 21.4 % 24.0M
DMU prefetch efficiency: 16.0G
Hit ratio: 0.7 % 116.0M
Miss ratio: 99.3 % 15.9G
L2ARC status: HEALTHY
Low memory aborts: 16.1k
Free on write: 9.1M
R/W clashes: 82
Bad checksums: 0
I/O errors: 0
L2ARC size (adaptive): 8.7 GiB
Compressed: 36.9 % 3.2 GiB
Header size: < 0.1 % 1.3 MiB
L2ARC breakdown: 112.2M
Hit ratio: 10.4 % 11.7M
Miss ratio: 89.6 % 100.5M
Feeds: 10.1M
L2ARC writes:
Writes sent: 100 % 6.6 MiB
L2ARC evicts:
Lock retries: 140
Upon reading: 2
Solaris Porting Layer (SPL):
spl_hostid 0
spl_hostid_path /etc/hostid
spl_kmem_alloc_max 1048576
spl_kmem_alloc_warn 65536
spl_kmem_cache_expire 2
spl_kmem_cache_kmem_limit 2048
spl_kmem_cache_kmem_threads 4
spl_kmem_cache_magazine_size 0
spl_kmem_cache_max_size 32
spl_kmem_cache_obj_per_slab 8
spl_kmem_cache_obj_per_slab_min 1
spl_kmem_cache_reclaim 0
spl_kmem_cache_slab_limit 16384
spl_max_show_tasks 512
spl_panic_halt 0
spl_taskq_kick 0
spl_taskq_thread_bind 0
spl_taskq_thread_dynamic 1
spl_taskq_thread_priority 1
spl_taskq_thread_sequential 4
Tunables:
dbuf_cache_hiwater_pct 10
dbuf_cache_lowater_pct 10
dbuf_cache_max_bytes 712438579
dbuf_cache_shift 5
dbuf_metadata_cache_max_bytes 356219289
dbuf_metadata_cache_shift 6
dmu_object_alloc_chunk_shift 7
dmu_prefetch_max 134217728
ignore_hole_birth 1
l2arc_feed_again 1
l2arc_feed_min_ms 200
l2arc_feed_secs 1
l2arc_headroom 2
l2arc_headroom_boost 200
l2arc_noprefetch 1
l2arc_norw 0
l2arc_write_boost 8388608
l2arc_write_max 8388608
metaslab_aliquot 524288
metaslab_bias_enabled 1
metaslab_debug_load 0
metaslab_debug_unload 0
metaslab_force_ganging 16777217
metaslab_fragmentation_factor_enabled 1
metaslab_lba_weighting_enabled 1
metaslab_preload_enabled 1
send_holes_without_birth_time 1
spa_asize_inflation 24
spa_config_path /etc/zfs/zpool.cache
spa_load_print_vdev_tree 0
spa_load_verify_data 1
spa_load_verify_maxinflight 10000
spa_load_verify_metadata 1
spa_slop_shift 5
vdev_removal_max_span 32768
vdev_validate_skip 0
zap_iterate_prefetch 1
zfetch_array_rd_sz 1048576
zfetch_max_distance 8388608
zfetch_max_streams 8
zfetch_min_sec_reap 2
zfs_abd_scatter_enabled 1
zfs_abd_scatter_max_order 10
zfs_abd_scatter_min_size 1536
zfs_admin_snapshot 0
zfs_arc_average_blocksize 8192
zfs_arc_dnode_limit 0
zfs_arc_dnode_limit_percent 10
zfs_arc_dnode_reduce_percent 10
zfs_arc_grow_retry 0
zfs_arc_lotsfree_percent 10
zfs_arc_max 22798034534
zfs_arc_meta_adjust_restarts 4096
zfs_arc_meta_limit 0
zfs_arc_meta_limit_percent 75
zfs_arc_meta_min 0
zfs_arc_meta_prune 10000
zfs_arc_meta_strategy 1
zfs_arc_min 9119213814
zfs_arc_min_prefetch_ms 0
zfs_arc_min_prescient_prefetch_ms 0
zfs_arc_p_dampener_disable 1
zfs_arc_p_min_shift 0
zfs_arc_pc_percent 0
zfs_arc_shrink_shift 0
zfs_arc_sys_free 0
zfs_async_block_max_blocks 100000
zfs_autoimport_disable 1
zfs_checksum_events_per_second 20
zfs_commit_timeout_pct 5
zfs_compressed_arc_enabled 1
zfs_condense_indirect_commit_entry_delay_ms 0
zfs_condense_indirect_vdevs_enable 1
zfs_condense_max_obsolete_bytes 1073741824
zfs_condense_min_mapping_bytes 131072
zfs_dbgmsg_enable 1
zfs_dbgmsg_maxsize 4194304
zfs_dbuf_state_index 0
zfs_ddt_data_is_special 1
zfs_deadman_checktime_ms 60000
zfs_deadman_enabled 1
zfs_deadman_failmode continue
zfs_deadman_synctime_ms 600000
zfs_deadman_ziotime_ms 300000
zfs_dedup_prefetch 0
zfs_delay_min_dirty_percent 60
zfs_delay_scale 500000
zfs_delete_blocks 20480
zfs_dirty_data_max 4294967296
zfs_dirty_data_max_max 4294967296
zfs_dirty_data_max_max_percent 25
zfs_dirty_data_max_percent 10
zfs_dirty_data_sync_percent 20
zfs_disable_ivset_guid_check 0
zfs_dmu_offset_next_sync 0
zfs_expire_snapshot 300
zfs_flags 0
zfs_free_bpobj_enabled 1
zfs_free_leak_on_eio 0
zfs_free_min_time_ms 1000
zfs_immediate_write_sz 32768
zfs_initialize_value 16045690984833335022
zfs_key_max_salt_uses 400000000
zfs_lua_max_instrlimit 100000000
zfs_lua_max_memlimit 104857600
zfs_max_missing_tvds 0
zfs_max_recordsize 1048576
zfs_metaslab_fragmentation_threshold 70
zfs_metaslab_segment_weight_enabled 1
zfs_metaslab_switch_threshold 2
zfs_mg_fragmentation_threshold 95
zfs_mg_noalloc_threshold 0
zfs_multihost_fail_intervals 10
zfs_multihost_history 0
zfs_multihost_import_intervals 20
zfs_multihost_interval 1000
zfs_multilist_num_sublists 0
zfs_no_scrub_io 0
zfs_no_scrub_prefetch 0
zfs_nocacheflush 0
zfs_nopwrite_enabled 1
zfs_object_mutex_size 64
zfs_obsolete_min_time_ms 500
zfs_override_estimate_recordsize 0
zfs_pd_bytes_max 52428800
zfs_per_txg_dirty_frees_percent 5
zfs_prefetch_disable 0
zfs_read_chunk_size 1048576
zfs_read_history 0
zfs_read_history_hits 0
zfs_reconstruct_indirect_combinations_max 4096
zfs_recover 0
zfs_recv_queue_length 16777216
zfs_removal_ignore_errors 0
zfs_removal_suspend_progress 0
zfs_remove_max_segment 16777216
zfs_resilver_disable_defer 0
zfs_resilver_min_time_ms 3000
zfs_scan_checkpoint_intval 7200
zfs_scan_fill_weight 3
zfs_scan_ignore_errors 0
zfs_scan_issue_strategy 0
zfs_scan_legacy 0
zfs_scan_max_ext_gap 2097152
zfs_scan_mem_lim_fact 20
zfs_scan_mem_lim_soft_fact 20
zfs_scan_strict_mem_lim 0
zfs_scan_suspend_progress 0
zfs_scan_vdev_limit 4194304
zfs_scrub_min_time_ms 1000
zfs_send_corrupt_data 0
zfs_send_queue_length 16777216
zfs_send_unmodified_spill_blocks 1
zfs_slow_io_events_per_second 20
zfs_spa_discard_memory_limit 16777216
zfs_special_class_metadata_reserve_pct 25
zfs_sync_pass_deferred_free 2
zfs_sync_pass_dont_compress 8
zfs_sync_pass_rewrite 2
zfs_sync_taskq_batch_pct 75
zfs_trim_extent_bytes_max 134217728
zfs_trim_extent_bytes_min 32768
zfs_trim_metaslab_skip 0
zfs_trim_queue_limit 10
zfs_trim_txg_batch 32
zfs_txg_history 100
zfs_txg_timeout 5
zfs_unlink_suspend_progress 0
zfs_user_indirect_is_special 1
zfs_vdev_aggregate_trim 0
zfs_vdev_aggregation_limit 1048576
zfs_vdev_aggregation_limit_non_rotating 131072
zfs_vdev_async_read_max_active 3
zfs_vdev_async_read_min_active 1
zfs_vdev_async_write_active_max_dirty_percent 60
zfs_vdev_async_write_active_min_dirty_percent 30
zfs_vdev_async_write_max_active 10
zfs_vdev_async_write_min_active 2
zfs_vdev_cache_bshift 16
zfs_vdev_cache_max 16384
zfs_vdev_cache_size 0
zfs_vdev_default_ms_count 200
zfs_vdev_initializing_max_active 1
zfs_vdev_initializing_min_active 1
zfs_vdev_max_active 1000
zfs_vdev_min_ms_count 16
zfs_vdev_mirror_non_rotating_inc 0
zfs_vdev_mirror_non_rotating_seek_inc 1
zfs_vdev_mirror_rotating_inc 0
zfs_vdev_mirror_rotating_seek_inc 5
zfs_vdev_mirror_rotating_seek_offset 1048576
zfs_vdev_ms_count_limit 131072
zfs_vdev_queue_depth_pct 1000
zfs_vdev_raidz_impl cycle [fastest] original scalar sse2 ssse3
zfs_vdev_read_gap_limit 32768
zfs_vdev_removal_max_active 2
zfs_vdev_removal_min_active 1
zfs_vdev_scheduler noop
zfs_vdev_scrub_max_active 2
zfs_vdev_scrub_min_active 1
zfs_vdev_sync_read_max_active 10
zfs_vdev_sync_read_min_active 10
zfs_vdev_sync_write_max_active 10
zfs_vdev_sync_write_min_active 10
zfs_vdev_trim_max_active 2
zfs_vdev_trim_min_active 1
zfs_vdev_write_gap_limit 4096
zfs_zevent_cols 80
zfs_zevent_console 0
zfs_zevent_len_max 512
zfs_zil_clean_taskq_maxalloc 1048576
zfs_zil_clean_taskq_minalloc 1024
zfs_zil_clean_taskq_nthr_pct 100
zil_nocacheflush 0
zil_replay_disable 0
zil_slog_bulk 786432
zio_deadman_log_all 0
zio_dva_throttle_enabled 1
zio_requeue_io_start_cut_in_line 1
zio_slow_io_ms 30000
zio_taskq_batch_pct 75
zvol_inhibit_dev 0
zvol_major 230
zvol_max_discard_blocks 16384
zvol_prefetch_bytes 131072
zvol_request_sync 0
zvol_threads 32
zvol_volmode 1
VDEV cache disabled, skipping section
ZIL committed transactions: 160.2M
Commit requests: 22.3M
Flushes to stable storage: 22.3M
Transactions to SLOG storage pool: 2.6 TiB 38.6M
Transactions to non-SLOG storage pool: 0 Bytes 0
And here is the pool (I know there is a SLOG and an L2ARC, but the issue also appears on other servers without them):
pool: rpool
state: ONLINE
scan: scrub repaired 0B in 0 days 00:37:05 with 0 errors on Sun Apr 12 01:01:06 2020
config:
NAME STATE READ WRITE CKSUM
rpool ONLINE 0 0 0
mirror-0 ONLINE 0 0 0
wwn-0x5000cca05935b164-part2 ONLINE 0 0 0
wwn-0x5000cca05934a01c-part2 ONLINE 0 0 0
mirror-1 ONLINE 0 0 0
wwn-0x5000cca05936d108-part2 ONLINE 0 0 0
wwn-0x5000cca059346164-part2 ONLINE 0 0 0
logs
mirror-2 ONLINE 0 0 0
wwn-0x5002538050002f4a-part2 ONLINE 0 0 0
wwn-0x5002538050002e22-part2 ONLINE 0 0 0
cache
wwn-0x5002538050002f4a-part3 ONLINE 0 0 0
wwn-0x5002538050002e22-part3 ONLINE 0 0 0
errors: No known data errors
Any ideas?