NVMe SSDs as caching for ZFS?

Hi all,

I am planning a new storage box which will be running things like VM datastores, Minecraft servers and other game servers (plus a central MySQL instance). I've come up with an idea which may or may not work, so here it is.

I'm planning to run 6 (will expand to 12 later if necessary) 3.5" 2TB Ultrastar drives in a raidz2 (will raidz actually be okay?) and accelerate them with 4 NVMe SSDs (Samsung 950 PRO 256GB). I am planning to split each NVMe into 2 partitions of 100GB (the rest left for overprovisioning), where the first partitions form a RAID 10 for the ZIL (SLOG) and the second partitions a RAID 0 for the L2ARC.
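For concreteness, a rough sketch of what that layout would look like as zpool commands (device names like /dev/sda and /dev/nvme0n1p1 are placeholders, and the pool name "tank" is arbitrary):

```shell
# 6x 2TB Ultrastar drives in a single raidz2 vdev
# (hypothetical device names -- adjust to your system):
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# SLOG as two mirrored pairs; ZFS stripes writes across multiple
# log vdevs automatically, giving the "RAID 10" effect:
zpool add tank log \
    mirror /dev/nvme0n1p1 /dev/nvme1n1p1 \
    mirror /dev/nvme2n1p1 /dev/nvme3n1p1

# L2ARC: cache devices are always striped; ZFS does not support
# (or need) redundancy for cache vdevs, since a failed cache
# device only costs you cached data, never pool integrity:
zpool add tank cache /dev/nvme0n1p2 /dev/nvme1n1p2 \
                     /dev/nvme2n1p2 /dev/nvme3n1p2
```

Note that this also partly answers the RAID5-for-L2ARC question below: zpool will not accept mirror or raidz layouts for cache vdevs in the first place.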

Would this setup be ideal, or am I wrong about this? Would RAID5 maybe be better for the L2ARC, since an NVMe drive is not really hot-swappable? I do know that consumer SSDs are not suited for datacenter use, hence the RAID 10 on the ZIL, but a recent report from Google suggests there really isn't a difference between consumer and DC SSDs in terms of endurance.

All of this is going to be exported through FC and NFS; does anyone see possible issues? Also, I am currently presenting FC targets with Ubuntu (ZFS on Ubuntu); however, would FreeBSD be a better solution for this?

I would recommend against RAID-Z2 in favour of more spindles and mirrored vdevs (aka RAID 1+0). I can't tell you whether your money is better spent on NVMe SSDs or more RAM for your use case; if in doubt, I would prefer more RAM. Keep in mind that the L2ARC also requires wired-down kernel memory for its metadata. I suspect that you're better off with a single dedicated mirrored SLOG on high-endurance, low-jitter, low-latency flash storage, plus lots of RAM and a mirrored pool. While RAID 1+0 offers less redundancy, it also has smaller failure domains and faster recovery. Keep in mind that ZFS also supports 3-disk mirrors if you need n-2 redundancy and/or maximum read performance.
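For comparison, the mirrored-vdev pool described above might be created like this (again, device names and the pool name "tank" are placeholders):

```shell
# Three mirrored pairs striped together (RAID 1+0 equivalent);
# each mirror is its own small failure domain and resilvers
# independently of the others:
zpool create tank \
    mirror /dev/sda /dev/sdb \
    mirror /dev/sdc /dev/sdd \
    mirror /dev/sde /dev/sdf

# A single dedicated mirrored SLOG on two flash devices:
zpool add tank log mirror /dev/nvme0n1 /dev/nvme1n1

# If n-2 redundancy is required, 3-way mirrors are also possible:
# zpool create tank \
#     mirror /dev/sda /dev/sdb /dev/sdc \
#     mirror /dev/sdd /dev/sde /dev/sdf
```

Losing one drive in any pair leaves the pool intact, and resilvering only has to copy the contents of that one mirror rather than rebuild parity across the whole pool.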