ZFS How to dimension log device and cache device of a zpool

Well, that's interesting. That implies that under your standard workload you probably see a lot of migration from MFU to MFU-ghost to L2ARC and then back to MRU/MFU. Interesting. Almost like "I'm watching the same movie or playing the same music all day": blocks get read/prefetched, then fall off into L2ARC until the loop starts over and the data comes from L2ARC instead of the disk.

That's a good example of "performance tuning": you need to base it on your specific workload, don't go by any Internet generalizations, and most importantly: test the changes, one at a time.

Thanks for sharing, Argentum.
 
I had a spare SSD lying around in my PC, so I put a ZIL (SLOG) and an L2ARC on it.
But it's a good idea to analyse the statistics to evaluate the effectiveness...
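On FreeBSD the raw ARC/L2ARC counters live under the `kstat.zfs.misc.arcstats` sysctl tree, and `zpool iostat` shows the cache device itself. A minimal sketch for checking whether the L2ARC is actually being hit (counter names as exposed by OpenZFS; they may differ slightly between versions):

```shell
# How full is the L2ARC, and is it serving reads?
sysctl kstat.zfs.misc.arcstats.l2_size \
       kstat.zfs.misc.arcstats.l2_hits \
       kstat.zfs.misc.arcstats.l2_misses

# Per-device view, refreshed every 5 seconds: the cache vdev
# is listed in its own section of the output.
zpool iostat -v 5
```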
 
A properly tuned L2ARC will increase read performance, but it comes at the price of decreased write performance. The pool essentially magnifies writes by writing them to the pool as well as the L2ARC device. Another interesting effect that's been observed is a falloff in L2ARC performance when doing a streaming read from L2ARC while simultaneously doing a heavy write workload.
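The write magnification described above can be bounded: OpenZFS throttles how fast the L2ARC is fed via writable tunables. A sketch for FreeBSD, assuming the OpenZFS sysctl names (they moved from `vfs.zfs.l2arc_write_max` to `vfs.zfs.l2arc.write_max` between releases, so check which names your system exposes; the example value is an assumption, not a recommendation):

```shell
# Show the current L2ARC feed limits (bytes written to the cache
# device per feed interval); try the newer names first.
sysctl vfs.zfs.l2arc.write_max vfs.zfs.l2arc.write_boost 2>/dev/null \
  || sysctl vfs.zfs.l2arc_write_max vfs.zfs.l2arc_write_boost

# Lower the feed rate to reduce write amplification on the pool
# (example: 4 MiB per interval; tune against your own workload).
sysctl vfs.zfs.l2arc.write_max=4194304
```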

A properly tuned L2ARC will increase read performance, but it comes at the price of decreased write performance. The pool essentially magnifies writes by writing them to the pool as well as the L2ARC device. Another interesting effect that's been observed is a falloff in L2ARC performance when doing a streaming read from L2ARC while simultaneously doing a heavy write workload.
Agreed that tuning a ZFS system is heavily dependent on the application. Even desktop loads differ. But in general, IMHO, on a desktop users care about responsiveness, and on servers about throughput.
Personally, I have done several experiments and found that 64 GB of fast L2ARC is enough on a desktop, and you can actually feel it. But the feeling probably depends on how one uses the system, which DM is in use, etc.
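A rough way to judge whether an L2ARC of a given size pays off is the hit rate computed from the hit/miss counters. A minimal sketch with hypothetical counter values (on a live system, substitute the output of `sysctl -n kstat.zfs.misc.arcstats.l2_hits` and the matching `l2_misses`):

```shell
# Hypothetical counters; replace with sysctl output on a real system.
l2_hits=900000
l2_misses=100000

# Integer percentage of L2ARC lookups served from the cache device.
hit_rate=$(( 100 * l2_hits / (l2_hits + l2_misses) ))
echo "L2ARC hit rate: ${hit_rate}%"
```

A persistently low hit rate suggests the cache device is oversized for the workload, or that the working set never revisits the same blocks.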
Another thing: although L2ARC doubles the disk write operations, there is also a separate SATA channel for the L2ARC device. That shows up in practical experiments.
I have a test system with only one main SSD for the desktop system and another SSD for L2ARC and ZIL. I have tried connecting and disconnecting the second drive, and my impression is that even in this simple configuration the L2ARC and ZIL make the desktop system more responsive. My hypothesis is that this is due to the extra SATA channel and how the load is distributed between the channels.
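A setup like this, one extra SSD carrying both roles, can be built by partitioning the device and giving each slice its own job. A sketch with hypothetical device and pool names (`ada1`, `tank`, and GPT labels are all assumptions): the SLOG only needs a few GB; the rest can go to cache:

```shell
# Partition the spare SSD: a small SLOG slice, remainder as L2ARC.
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l slog -s 8G ada1
gpart add -t freebsd-zfs -l l2arc ada1

# Attach both to the pool. Unlike normal vdevs, log and cache
# devices can be detached again at any time with `zpool remove`.
zpool add tank log gpt/slog
zpool add tank cache gpt/l2arc
```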
 
I have tried connecting and disconnecting the second drive, and my impression is that even in this simple configuration the L2ARC and ZIL make the desktop system more responsive. My hypothesis is that this is due to the extra SATA channel and how the load is distributed between the channels.
I couldn't agree more. There is a machine here with 5 HDDs (not mirrored) that crawls slower than a snail. Increasing its RAM did make a difference.

But using an SSD in the SSD-caching slot on the MoBo for L2ARC, and then topping it with a log partition, made a huge difference. Imagine dropping a Mercedes-Benz engine into a Toyota; a short spin, boot-up time, says it all, among other things.

But the L2ARC with log makes little difference on a server doing daily Poudriere builds, big data and much more. The RAM there is fairly up to the task, and on top of that, it's ECC.
 