Just for the curious: this is surprisingly effective, for reasons I don't fully understand.
I am no expert on how the L2ARC is coded and technically designed, but a couple of days ago I read about someone trying to fix poor performance when reading directories with large numbers of files (a well-known ZFS weakness). Typically this can be temporarily improved by increasing the metadata limit inside the ARC, but that is very RAM hungry and the effect doesn't stick well.
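For reference, the knob I mean on FreeBSD is vfs.zfs.arc_meta_limit. A rough sketch, with a made-up 2 GiB value (size it to your own RAM):

# check the current limit (bytes)
sysctl vfs.zfs.arc_meta_limit

# raise it at boot via /boot/loader.conf (example value only)
vfs.zfs.arc_meta_limit="2147483648"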
The person reported that by setting their L2ARC device to cache only metadata they saw an improvement, as the metadata was no longer being flushed out by other data on the L2ARC device.
I have no spare SSDs, and since my target server for this performance improvement is an ESXi guest (which has no SSDs), I decided to test whether spindle-backed storage would work as a metadata-only caching device. The results were not bad.
The test directory has 602k files in it. In a cold-cache situation on the VM it takes 17 seconds to read the directory.
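By "read the directory" I mean something along these lines (the path is just a placeholder, not my real one):

# list the ~602k entries without sorting, timed
time ls -f /tank/bigdir > /dev/null

# or simply count them
time find /tank/bigdir -maxdepth 1 | wc -l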
Rereading immediately afterwards, with the metadata in the primary (RAM-based) ARC, is very quick: under a second.
Rereading 20 minutes later it is back to over 10 seconds, as the ARC is not big enough to keep the metadata cached for long. However, some of it does seem to survive in the ARC, as it is not the full 17 seconds.
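If you want to see why it decays, the ARC stats show how much metadata is held versus the cap it gets evicted against (FreeBSD sysctl names, from memory):

# bytes of metadata currently in the ARC vs the metadata limit
sysctl kstat.zfs.misc.arcstats.arc_meta_used
sysctl kstat.zfs.misc.arcstats.arc_meta_limit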
I then added a virtual HDD to the VM; it is hosted on the same spindle drive as the ZFS storage, so it contends with the pool for I/O and would be no faster than the pool disks even if uncontended.
It was configured as an L2ARC device for the pool and set to cache only metadata. Caching all data made it perform poorly, as one would expect.
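Roughly the commands involved, with example pool/device names rather than my real ones:

# add the virtual disk as an L2ARC (cache) device
zpool add tank cache /dev/da2

# keep only metadata on the L2ARC for this pool
zfs set secondarycache=metadata tank

# confirm
zpool status tank
zfs get secondarycache tank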
The same initial read from cold is around 17 seconds.
Reread is under one second.
However, waiting 20 minutes and then rereading is now much better than before: about 1-2 seconds.
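If anyone wants to check that the hits really are coming off the spindle L2ARC rather than lingering in the ARC, the counters are visible too (again, FreeBSD sysctl names from memory):

# L2ARC hits, misses and current size
sysctl kstat.zfs.misc.arcstats.l2_hits
sysctl kstat.zfs.misc.arcstats.l2_misses
sysctl kstat.zfs.misc.arcstats.l2_size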
Any thoughts?
Link to the mailing list post is here; this guy used an SSD for his L2ARC.
https://lists.freebsd.org/pipermail/freebsd-fs/2013-February/016492.html
In case you get a 503 backend failure (something is up with those mailing list servers right now), the Google cache link is here.
https://webcache.googleusercontent....ebruary/016492.html+&cd=5&hl=en&ct=clnk&gl=uk