ZFS Mirrored NVMe workstation: what to do with extra SATA slot

It's a tinkering machine, so I don't mind breakage, and I have backups elsewhere. I currently have an SSD in there holding the ZIL and L2ARC, just to learn how that works. From what I understand, the L2ARC at least is a poor use of it, as the drive is slower than the pool drives. I read somewhere that the L2ARC can be set to hold only metadata; perhaps I can try testing the performance of that.
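For anyone curious, the metadata-only L2ARC experiment is a one-line dataset property; a sketch assuming the pool is named `zroot`:

```shell
# Restrict the L2ARC to caching metadata only (the pool name "zroot" is
# an assumption; substitute your own). Inherited by child datasets.
zfs set secondarycache=metadata zroot

# Verify the setting took effect.
zfs get secondarycache zroot

# Revert to caching both data and metadata (the default).
zfs set secondarycache=all zroot
```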

The NVMes are 2TB each, the SATA SSD is 500GB, and the machine has a Xeon and 64GB of RAM. Besides playing with the OS, I do some data science work on it, miscellaneous development, and web browsing. I also compile the desktop from ports and use the linuxulator for some graphical apps. I haven't yet learnt how to use bhyve, but I'm thinking of spinning up a Windows VM in it someday.

All that said, any other ideas for the drive? I'm willing to recreate the pool if needed. These are the ideas I had in mind:

- Dedup table on the SSD. I'm aware of the caveats of dedup, but like I said, it's a tinkering machine. Besides, there is a lot of duplication among packages in, say, Python virtual environments. It'd be interesting to see how ZFS handles that.

- Special VDEVs on the SSD. I don't know much about these, but I'd be interested in learning about the types of metadata etc. that can be offloaded. I'm aware that many of these options introduce a single point of failure at the SSD.

- Putting certain directories on the SSD to reduce fragmentation of the main pool or reduce I/O competition (likely very insignificant for my use cases, and a bit uninteresting). I currently have /tmp and my browser caches in RAM.

- Dedicating it to virtual machines. I could create a new ZFS pool on it or use UFS. Is there any good reason to do so?

Appreciate any input, thanks
 
A separate ZIL should always have lower latency than the pool it serves, and have equivalent (or better) redundancy. It never needs to be larger than main memory.

You will benefit from a separate ZIL provided it meets these criteria, and only if you have synchronous I/O on the pool.
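A quick way to gauge whether synchronous I/O matters for a given workload is the `sync` property; a rough diagnostic sketch, assuming the pool is named `zroot`:

```shell
# Check the sync policy ("standard", the default, honors applications'
# fsync/O_SYNC requests; the pool name "zroot" is an assumption).
zfs get sync zroot

# Temporarily disable sync writes and rerun the workload. If it speeds
# up noticeably, a SLOG could help; if not, there is little synchronous
# I/O to accelerate. Only do this on a tinkering machine -- a crash can
# lose the last few seconds of acknowledged writes.
zfs set sync=disabled zroot
# ... run the workload ...
zfs set sync=standard zroot
```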

A special vdev offloads all the metadata, and, optionally, smaller files. It should have equivalent (or better) redundancy than (and superior performance to) the pool it serves.
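For reference, adding a special vdev looks like this; a sketch in which the device names `ada1` and `ada2` and the 32K threshold are assumptions:

```shell
# Add a mirrored special vdev to hold the pool's metadata. It should be
# mirrored, since losing it loses the pool.
zpool add zroot special mirror ada1 ada2

# Optionally also send small file blocks to the special vdev: blocks at
# or below this size are stored there instead of the main vdevs.
zfs set special_small_blocks=32K zroot

# Per-vdev capacity breakdown, to see how much has landed on it.
zpool list -v zroot
```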

As for the rest, we need to know more about your starting point. You have two NVMe SSDs. How exactly are they configured and used?
 
Thanks for the reply. It seems that both ZIL and special vdev are indeed poor choices for this configuration as well. I'm less concerned about redundancy, as this isn't a production machine, but I don't want to do something that actively harms performance. I had been looking into whether an Intel Optane can use a SATA slot, but concluded a while ago that there's no option for that; maybe it's worth seeing if there's a workaround.

Sorry, I forgot to specify the NVMe configuration. They hold the root filesystem in a mirrored configuration, set by the FreeBSD installer. No software encryption as I’m using the hardware encryption of the devices.
 
You have a mis-fit in terms of performance and redundancy.

There's no harm in playing. e.g. sizing special VDEVs is a bit of a black art.
 
Indeed. Learning lots of new things though. Just learnt about drives (aside from Optane) that are actually some form of RAM disk... ZeusRAM and the like. The HGST s840Z is in fact purpose-built for SLOG use and could have been an option, as it's a 2.5" form factor, but alas it's SAS and I have SATA. Then I found out that power-loss protection (PLP) is actually an important feature for SLOG performance, as it allows a drive to report data as "written" as soon as it hits the DRAM cache. Looking into the Intel S3700 as a popular option in that regard; then I can test whether it can actually improve sync I/O with an NVMe pool.
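When a candidate drive arrives, FreeBSD's own `diskinfo` can measure synchronous write latency directly, which is the figure that matters for a SLOG; a sketch in which `/dev/ada1` is an assumption:

```shell
# -S runs the synchronous write latency test; -w permits the destructive
# write test, so only run this on a device whose contents are expendable.
diskinfo -wS /dev/ada1
```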
 