### Description

### System information
Type | Version/Name
--- | ---
Distribution Name | Debian
Distribution Version | 10
Linux Kernel | 5.4.14
Architecture | x64
ZFS Version | 0.8.4
SPL Version | 0.8.4-pve1
### Describe the problem you're observing
I'm trying to tune L2ARC for maximum performance, but I'm having trouble understanding exactly how it operates in ZFS. The device I want to use as an L2ARC cache vdev is an Intel P4608 enterprise SSD, an x8 PCIe 3.0 device. It presents two separate 3.2TB drives, each with x4 lanes, and I want to stripe both of them together for the combined bandwidth and IOPS (which would use all x8 PCIe lanes). You can view additional stats about this drive here to give you an idea of the performance it is capable of.
RAM on this system is 512GB in total. I want to stripe both halves of this device as cache vdevs, so the total L2ARC would be ~6.4TB.
Random and small-block reads will surely be much faster from this SSD than from the pool of disks, but I'm confused about sequential reads, and the docs do not explain this properly. For now I do want sequential cached reads to be served from these drives as well, because I can't see the hard-drive pool outperforming this SSD when striped. I'm planning to use `zpool add poolname cache ssd1 ssd2` as the command; this will stripe the SSDs together instead of creating a JBOD pool, right?
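For reference, a minimal sketch of what that looks like in practice. The pool name `tank` and the device paths are placeholders; in a real setup you'd want the stable `/dev/disk/by-id/` names rather than `nvmeXn1`:

```shell
# Add both halves of the P4608 as cache vdevs in one command.
# Device names here are placeholders -- prefer /dev/disk/by-id/ paths.
zpool add tank cache /dev/nvme0n1 /dev/nvme1n1

# Both devices should now appear under the "cache" section:
zpool status tank

# Per-device L2ARC read/write activity, refreshed every 5 seconds:
zpool iostat -v tank 5
```

Note that cache devices are always independent top-level devices (they cannot be mirrored or put in a raidz); ZFS spreads L2ARC writes across all of them, so in effect the capacity and feed bandwidth of both devices are used.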
Additionally, I'm seeing information that you need to set the `l2arc_noprefetch` tunable to 0 in order to properly allow sequential reads, but is that how it actually works? Does L2ARC not cache sequential reads unless you set it to 0 (the default is 1), or am I not understanding it correctly?
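In case it helps others, this is how I understand the tunable is set (runtime via sysfs, persistent via modprobe options). With the default of 1, buffers that came into the ARC via prefetch, which is typically sequential I/O, are skipped when feeding the L2ARC:

```shell
# Check the current value (1 = don't feed prefetched/sequential
# buffers into L2ARC, which is the default).
cat /sys/module/zfs/parameters/l2arc_noprefetch

# Allow prefetched (typically sequential) buffers into L2ARC at runtime:
echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch

# Make the change persistent across reboots / module reloads:
echo "options zfs l2arc_noprefetch=0" >> /etc/modprobe.d/zfs.conf
```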
I'm also wondering what other performance tuning I should do to get the most out of L2ARC on modern hardware, and I hope you guys can give me some pointers.
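For what it's worth, the knobs I've found so far are the L2ARC feed-rate parameters, which default to values sized for old SATA SSDs. The numbers below are only examples I'm experimenting with, not recommendations:

```shell
# Example values only -- tune to your workload and watch arcstat/zpool iostat.
cat >> /etc/modprobe.d/zfs.conf <<'EOF'
# Raise the L2ARC feed rate from the conservative 8 MiB/s default;
# a PCIe NVMe device can absorb far more (here: 256 MiB/s).
options zfs l2arc_write_max=268435456
# Extra feed rate allowed while the ARC is still warming up after boot.
options zfs l2arc_write_boost=268435456
EOF
```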