
ZFS using a spindle drive to cache metadata

chrcol

Well-Known Member

Thanks: 13
Messages: 376

#1
Just for the curious: this is surprisingly effective, for some reason.

I am no expert on how L2ARC is coded and technically designed, but two days ago I read about someone trying to fix poor performance when reading directories with large numbers of files (a well-known ZFS weakness). Typically this can be temporarily improved by increasing the metadata limit inside the ARC, but that is very RAM-hungry and the effect doesn't stick well.
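For reference, on FreeBSD that ARC metadata limit is exposed as a sysctl. A minimal sketch of raising it, where the 4 GiB value is only an example (and depending on the FreeBSD version it may only be settable as a loader tunable rather than at runtime):

```shell
# Show the current ARC metadata limit (in bytes)
sysctl vfs.zfs.arc_meta_limit

# Raise it to 4 GiB on the running system (example value only)
sysctl vfs.zfs.arc_meta_limit=4294967296

# Or persist it as a loader tunable in /boot/loader.conf:
#   vfs.zfs.arc_meta_limit="4294967296"
```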

The person reported that by setting their L2ARC device to cache only metadata they saw an improvement, as the metadata was no longer being flushed out by other data on their L2ARC device.

I have no spare SSDs, and since my target server for this performance improvement is an ESXi guest (which has no SSDs), I decided to test whether spindle-backed storage would work as a metadata-only caching device. The results were not bad.

On a directory with 602k files in it: in a cold-cache situation on the VM it takes 17 seconds to read the directory.
Rereading immediately afterwards, with the metadata in the primary ARC in RAM, is very quick: under a second.
Rereading 20 minutes later it is back to over 10 seconds, as the ARC is not big enough to keep the metadata cached for long. It seems to be partially intact in the ARC, though, since it is not the full 17 seconds.

I then added a virtual HDD to the VM; it is hosted on the same spindle drive as the ZFS storage, so it has contention for I/O and would be no faster even if uncontended.
It was configured as an L2ARC device for the pool and set to cache only metadata. (Caching all data made it perform poorly, as one would expect.)
The same initial read from cold is around 17 seconds.
A reread is under one second.
However, waiting 20 minutes and then rereading is now much better than before: about 1-2 seconds.
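For anyone wanting to reproduce this, the setup boils down to two commands. The pool name `tank` and the device `/dev/da2` are hypothetical placeholders:

```shell
# Add the (spindle-backed) virtual disk as an L2ARC cache device
zpool add tank cache /dev/da2

# Restrict the L2ARC to caching metadata only for this pool
zfs set secondarycache=metadata tank

# Verify the cache device and the property
zpool status tank
zfs get secondarycache tank
```

The `secondarycache` property accepts `all`, `none`, or `metadata`; setting it to `metadata` is what stops file data from flushing the cached metadata out of the L2ARC.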

Any thoughts?

The link to the mailing list post is here; this guy used an SSD for his L2ARC.

https://lists.freebsd.org/pipermail/freebsd-fs/2013-February/016492.html

In case you get a 503 backend failure (something is up with those mailing list servers right now), the Google cache link is here.

https://webcache.googleusercontent....ebruary/016492.html+&cd=5&hl=en&ct=clnk&gl=uk
 

sko

Well-Known Member

Thanks: 158
Messages: 350

#2
The person reported that by setting their L2ARC device to cache only metadata they saw an improvement, as the metadata was no longer being flushed out by other data on their L2ARC device.
Rereading 20 minutes later it is back to over 10 seconds, as the ARC is not big enough to keep the metadata cached for long. It seems to be partially intact in the ARC, though, since it is not the full 17 seconds.
Both of these indicate that the system is under constant memory pressure. Data is only flushed to L2ARC when the ARC is full and new data has to be cached. If the metadata is flushed out of the ARC and the L2ARC within only 20 minutes, the system (or ZFS) simply has far too little memory for the load it is supposed to support.

This guy is running a huge pool (96x 3TB drives, that's 288TB raw storage) with only 64GB of RAM:

We recently upgraded, and are now at 96 3TB drives in the pool.
System Memory:
[...]
Real Installed: 64.00 GiB
However, the ARC and L2ARC hit ratios are pretty good, so I suspect most of the data on the pool is "cold" data and most of the data they work on fits into the ARC. The lower hit ratio on the L2ARC usually indicates that most of the "hot" data is in the ARC and most of what spills over to L2ARC isn't accessed again within a reasonable time frame.
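Those hit ratios come from summary tools like zfs-stats; on FreeBSD the underlying counters can also be read directly and the ARC hit ratio recomputed. A rough sketch (counters are cumulative since boot, not a rolling window):

```shell
# Read the raw ARC counters from the FreeBSD kstat sysctls
hits=$(sysctl -n kstat.zfs.misc.arcstats.hits)
misses=$(sysctl -n kstat.zfs.misc.arcstats.misses)

# ARC hit ratio as a percentage
echo "scale=2; 100 * $hits / ($hits + $misses)" | bc
```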
But:
L2 ARC Summary: (HEALTHY)
[...]
Low Memory Aborts: 20
This indicates the system is under memory pressure from time to time. IIRC it means that space in RAM for the L2ARC header table had to be abandoned. Every 1 GB of L2ARC requires roughly 25 MB of RAM, so for a 250 GB L2ARC the system should provide 6.25 GB of (additional) RAM. IIRC this is lent from the ARC, so the max ARC size should be increased accordingly. Adding an L2ARC to a system with low memory is usually counterproductive.
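The header overhead is simple arithmetic; a sketch, assuming the rough 25 MB-per-GB figure quoted above (the exact per-header cost varies with ZFS release and record size):

```shell
l2arc_gb=250        # L2ARC device size in GB
per_gb_mb=25        # assumed ARC header cost per GB of L2ARC
overhead_mb=$((l2arc_gb * per_gb_mb))
echo "${l2arc_gb} GB L2ARC -> ~${overhead_mb} MB of RAM for headers"
```

For the 250 GB device in question this comes out to 6250 MB, i.e. about 6.25 GB of ARC given over to L2ARC bookkeeping.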

I've used and abused ZFS for only ~3 years now, but almost all of the performance issues I've encountered in this short time were caused by RAM shortage.


BTW: Users putting over half a million files in a single directory deserve to be hit with a blunt object repeatedly....
 