Solved: ZFS configuration and its cost-effectiveness on PCs

Edit: I realized I knew too little about ZFS when I wrote this. Sorry guys.


I don't have ECC memory, so I don't want ZFS to use RAM for the ARC. I have a 256 GB M.2 PCIe SSD and a 1 TB HDD, and I will buy a 32 GB Optane module. I also have a 256 GB SATA SSD.

How much ZIL is needed for heavy use, for example when copying a file from a 500 MB/s USB stick to the PC?

Or would it be better to use the M.2 entirely as ARC, the Optane for the ZIL, and the secondary SSD I already have for L2ARC?

And something about swap: should I configure part of the Optane as swap, or put swap on the M.2?

Currently I think:

Config 1:
16 GB of Optane - ARC
8 GB of Optane - SLOG
8 GB of Optane - swap
All of the M.2 - L2ARC

Config 2:
16 GB of Optane - swap
16 GB of Optane - SLOG
All of the M.2 - ARC

And for the ARC and L2ARC in config 2, or the L2ARC in config 1, I will cache sequential reads too.

Can you help me with the configuration? Thank you.

(Edited after learning more things)
I don't have a ECC memory so I don't want ZFS to use memory as ARC.

The use of L2ARC will not make ZFS stop using RAM as ARC. L2ARC will just become an extension of it.

For 256 GB of L2ARC you would (IIRC) need at least 64 GB of RAM.
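For what it's worth, an L2ARC device is just attached to an existing pool as a cache vdev. A minimal sketch, assuming a pool named `tank` and the SATA SSD showing up as `ada1` (both names hypothetical):

```shell
# Attach the spare SATA SSD as an L2ARC cache device;
# the in-RAM ARC keeps working exactly as before.
zpool add tank cache ada1

# Verify: the device now appears under a "cache" section.
zpool status tank

# Remove it later if the RAM overhead for L2ARC headers
# turns out not to be worth it.
zpool remove tank ada1
```

Cache devices can be added and removed at any time without risking the pool's data.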

Unless you need utterly fast transfers, IMO, spend your money on more HDD space and/or ECC memory.
So can I make the ARC use Optane instead?

Edit: Never mind, I just realized the performance is not on par with DRAM even though it is marketed that way. I will go for unbuffered ECC.

As far as I know, it is not possible to "move" the ARC from memory to elsewhere.

In regards to the ZIL, you would probably be better off using a small SLC SSD/M.2. The ZIL is write-intensive, and an MLC drive would not last long.
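Attaching a dedicated SLOG works the same way as a cache device; a sketch assuming a pool named `tank` and the SLC device showing up as `nvd1` (names hypothetical):

```shell
# Add a separate log (SLOG) device for the ZIL; only synchronous
# writes land on it, so a small, high-endurance device is enough.
zpool add tank log nvd1

# A mirrored SLOG is safer, if you can spare two devices:
#   zpool add tank log mirror nvd1 nvd2
```

Note the SLOG only helps synchronous workloads (databases, NFS); ordinary file copies rarely touch it.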

EDIT: so, if you just need 1 TB, a 1 TB SSD + 4 GB of ECC RAM should be more than enough, assuming you do not need more memory for other tasks.
Dear lebarondemerde,

Actually I am a hobbyist who is going to try FreeBSD on a high-end laptop. (It is ironic that on my broken laptop I ran Fedora 26 because of the Optimus problem on FreeBSD and the lack of CUDA, and now I am on a Debian machine because I can't play videos with the Intel N3150 on FreeBSD, but that is another problem. This laptop has no Optimus, and I will try to run CUDA through Linux emulation.) It is for daily use. I checked, and unfortunately ECC DRAM is not an option. Yes, I heard SLC is better for the ZIL than Optane, so I am not considering Optane anymore: even though it has low latency and high endurance, SLC can provide more performance overall.

I may buy a 64 GB M.2 NVMe SLC drive and split it into 16 GB of ZIL, 16 GB of swap, and 32 GB of L2ARC for the 1 TB 7200 RPM HDD. So now I am considering using the 256 GB MLC M.2 NVMe SSD for regular use, and mounting the HDD pool at certain points, like Wine/Steam games and databases. Though I only have 8 GB of DRAM, I can upgrade to 16 GB when I buy the SLC.

The problem is I have only 3 M.2 slots (SATA3 or NVMe) and only one SATA3 port for a 2.5" HDD or SSD, so full mirroring is not an option: I can mirror either the SLC or the MLC, but not the HDD. With that and the lack of ECC, I don't think I will be able to use any of the self-healing features. So, should I consider UFS instead? What benefits can I get from ZFS other than self-healing? Yes, I can mount any pool at any directory I want, but I can also just choose wisely where to install things, and I won't need as much DRAM with UFS. Will ZFS give me a performance gain or a loss compared to UFS, given that I will be running on an SSD?

I want to use ZFS since I enjoy using more advanced tech, but I can't find any significant benefit in it for my case.

What do you think about this, may I ask?

P.S.: Thanks for your helpful posts by the way.
nosferatu I am also just a hobbyist. :)

IMO, unless you are really trying to test ZFS features (and a laptop is not the best place for that), I think you are overcomplicating things.

L2ARC/ZIL are usually used to work around a bottleneck, which you do not have, since you can use SSD/M.2 almost everywhere.

In your situation I would:
  1. grab an M.2 NVMe stick and install the system on it (+ swap partition), nothing huge. Let's call it the zroot pool.
  2. get a large SSD (of the size you need) and create another pool for $HOME. Let's call it the zdata pool.
  3. More memory is always good.
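Steps 1 and 2 could be sketched roughly like this (device names are hypothetical, and in practice the FreeBSD installer creates zroot and the swap partition for you):

```shell
# 1. System pool on the NVMe stick (normally done by the installer,
#    which also sets up a swap partition on the same device).
zpool create zroot /dev/nvd0p3

# 2. Data pool for $HOME on the large SSD (here assumed to be ada0).
zpool create zdata /dev/ada0
zfs create -o mountpoint=/home zdata/home
```

Keeping the system and $HOME in separate pools means you can wipe and reinstall zroot without touching your data.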
Alternatively, since you have 3 M.2 slots and if you are willing to spend the money, instead of a SSD you could use two mirrored M.2 sticks for $HOME and safety. You can easily install the system again.

In any of those scenarios either L2ARC or ZIL would make little to no difference; it would be almost nonsense. :D

Still, there is a 2.5" slot remaining, which could simply not be used (saving battery); be used for something you like that needs more space (video, music) and would not really be affected by its slowness (compared with NVMe); or simply be used for backup.

So, if the 2.5" drive will be used, you would just need to create another pool on it [zdata, or zmedia, zbackup, etc.] and mount it wherever you like. For instance: $HOME/Music, $HOME/Videos, etc.

Or, for backup only, simply leave it in there and use the zfs send | zfs recv feature.
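A minimal sketch of that, assuming pools named zdata and zbackup (snapshot names are just examples):

```shell
# Snapshot the data pool recursively and replicate it to the
# backup pool on the 2.5" drive.
zfs snapshot -r zdata@2018-01-01
zfs send -R zdata@2018-01-01 | zfs recv -Fdu zbackup

# Later, send only the changes since the previous snapshot
# (incremental replication).
zfs snapshot -r zdata@2018-01-02
zfs send -R -i zdata@2018-01-01 zdata@2018-01-02 | zfs recv -Fdu zbackup
```

The `-u` on recv keeps the backup datasets unmounted so they never shadow the live ones.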

Cheers! :beer:

EDIT: another alternative would be to use 3 identical M.2 sticks in RAIDZ1, one pool, and leave the 2.5" for anything you desire. It is not good practice to use different drives in RAID/mirror.

Just remember: if it is for backup, the drive should be larger than all the data it will back up. :)

You could even use sysutils/zap to automate that using snapshots.
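If you prefer to script it by hand instead of using the port, a minimal periodic snapshot job could look like this (pool name hypothetical; I am not reproducing zap's own syntax here):

```shell
# Minimal hand-rolled alternative to sysutils/zap: take a daily
# recursive snapshot named by date, suitable for cron or periodic(8).
zfs snapshot -r "zdata@$(date +%Y-%m-%d)"

# List the snapshots to confirm they were created.
zfs list -t snapshot -r zdata
```

Pruning old snapshots (which zap also handles) would need a second script that destroys snapshots past a retention window.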

Just one thing: look for NVMe drives with good write endurance; those Samsungs usually have 3K write cycles, which is very little. A good consumer one should be around 10K, I think.

Using them in RAIDZ will make them wear out faster than normal.

I usually like Plextor, but I am not up to date on their current drives.
Do not let the lack of ECC RAM dissuade you from using ZFS.
If your non-ECC memory corrupts data, it doesn't matter whether you are using ZFS, UFS, or any other FS; your data will be corrupt. ZFS is no more damaging to use than any other.

I use ZFS on my ThinkPad T440, and I reap the benefits of snapshots, cloned datasets, simple replication across the network, boot environments, etc.
I’ve not tested the same hardware with UFS to compare performance, but my laptop is performant enough for my (also hobbyist) needs.
Thanks for your posts; you have been so helpful. Now I have the option of 2 × SATA3 M.2 + a SATA3 2.5" high-endurance SSD in RAIDZ plus 1 NVMe, versus 3 NVMe in RAIDZ plus 1 HDD. Option 3 would be 2 × NVMe in a mirror plus an M.2 SATA3 and a 2.5" SATA3. I will go for option 3.

Thanks again. And cheers :beer::).
nosferatu, just do not use different drives/sticks in RAID; they should all be identical, or the vdev will always default to the smallest/slowest option available.

For instance, if you mirror a 128GB NVMe stick with a 1TB 5400 RPM, the result will be "two" 128GB 5400 RPM drives, mirrored.
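The smallest-device rule is easy to see with throwaway file-backed vdevs, which are safe to experiment with since no real disks are involved (pool and file names are just examples):

```shell
# Create two sparse backing files of very different sizes.
truncate -s 128m /tmp/small.img
truncate -s 1g   /tmp/big.img

# Mirror them: the pool's usable size follows the 128 MB device,
# not the 1 GB one.
zpool create testpool mirror /tmp/small.img /tmp/big.img
zpool list testpool

# Clean up the experiment.
zpool destroy testpool
rm /tmp/small.img /tmp/big.img
```

The same applies to speed: a mirror's reads can be spread across devices, but writes must complete on the slowest member.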
nosferatu just do not use different drives/sticks in RAID, they should all be identical or will always be default to the minor/slowest option available.
While your advice is correct, it is too strong for a small system. Let me describe two hypothetical scenarios:

One: A professional builds a new storage server and buys 100 disks at $300 each. 99 of the disks are 10 TB disks that spin at 7200 RPM; one is a 4 TB disk that spins at 5400 RPM. The performance and capacity of his 100-disk RAID array may be limited by the single slowest and smallest disk (multiplied by 100), so he is wasting roughly 1/2 of his capacity and 1/3 of his performance (meaning he wasted roughly half of $30,000, which is real money). Obviously this is a somewhat hypothetical scenario, as most consumer-grade RAID systems do not support building a single array out of 100 disks.

Two: An amateur has a few disks sitting around which are already bought and paid for. They use an old 3 TB disk and a 4 TB disk to build a RAID1 file system (simple mirroring), which has only 3 TB capacity. Certainly some of the capacity of the larger disk is wasted: 1 TB will go "unused" in the RAIDed file system, but could be used for another non-redundant file system (perhaps for scratch space, or unimportant data). Or they have a used 1 TB SSD and a 3 TB spinning disk, and partition the system into a 1 TB file system that uses mirroring (and is reliable) and a 2 TB file system for scratch space. In this case the performance of the SSD is wasted. But given that the disks were free, it is still better to use *some* redundancy than no RAID at all.

Where your advice is absolutely correct: when buying new hardware to be used in RAID, try to buy matched sets of disks with the same performance and capacity, to get the most reliability and performance out of the money spent.
OK, I will keep that in mind. I guess M.2 SATA3 SSDs and regular SATA3 SSDs have similar performance. I hope so, at least, because I already have a regular one.