[Solved] What should I consider when building a FreeBSD NAS?

Hello everyone,

I have been thinking about challenging myself and building my own NAS based on FreeBSD instead of using something like TrueNAS CORE.
There are already a number of good resources on the web about setting up Samba and NFS shares, creating automatic snapshots, ZFS replication and so on.
But what are some common pitfalls I should watch out for, and do you have any recommendations when it comes to performance, stability, and security?

Edit

What I am mainly interested in is what settings I should use for software and services (e.g. Samba, NFS, ZFS) running on the NAS and things like FreeBSD sysctl tunables. Choosing the right hardware is something I have much less trouble with.

Regards :)
 
Get a nice LSI-based HBA. Keep in mind these are typically PCIe x4 or x8 cards, so you need an appropriate slot for them. You may also want to add a faster Ethernet card (10 Gbps, for example) at some point; those are usually PCIe x4 too. 'Desktop' consumer boards rarely have PCIe x4 slots, but you can use a PCIe x16 slot instead. Just keep in mind that the second or third PCIe x16 slot very often only has one lane wired (PCIe x1).
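Once the cards are installed, you can check what link width each of them actually negotiated with pciconf(8):

# list PCI devices with their capabilities; the PCI-Express capability
# reports the negotiated and maximum link width and speed per device
pciconf -lvc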
 
I already have an LSI SAS2008 HBA (PCIe 2.0 x8) and an Intel X520/X710 (PCIe 2.0/3.0 x8) NIC lying around, and I plan to use them with a Supermicro X10SLH-F (2x PCIe 3.0 x8 + 1x PCIe 2.0 x4), so I guess I should be set hardware-wise. RAM will be 32 GiB DDR3 ECC and the CPU a Xeon E3-1231 v3. For storage there are two 18 TB Toshiba HDDs and two Hitachi 200 GB SAS SSDs (for my jails).

I think FreeBSD should run happily on that. :)
 
I'm assuming that you are going to mirror the media.

Consider your sensitivity to poor disk performance if you ever have to resilver one of those 18 TB disks. I expect it would take days rather than hours, during which time your application I/O will run like a dog.
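As a rough back-of-the-envelope figure, assuming ~200 MB/s sustained: 18 TB / 200 MB/s = 90,000 s, i.e. about 25 hours for a resilver with no other load at all; once real application I/O competes for the heads, several days is entirely plausible.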

For some, that's expected to be a rare event, and the pain is acceptable because 18 TB disks provide cheap storage.

However, if you have a requirement to provide consistent service levels, you may wish to consider more, smaller disks.
 
I'd also consider buying at least a SAS3008-based HBA. Those 2008s are *really* old now, and especially with SSDs they are definitely a bottleneck.

To at least somewhat speed up that huge spinning-rust mirror, you could use another pair of SSDs as a mirrored 'special' device for metadata (and smaller files, see 'special_small_blocks' in zfsprops(7)). Especially for operations such as handling (lots of) snapshots this will give a huge performance boost; otherwise, running zfs list -rt snapshot on spinning disks can easily take several minutes, especially if only 2 large disks have to handle all that random I/O.
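A minimal sketch of what that could look like (pool and device names are just examples):

# add a mirrored special vdev to an existing pool
zpool add tank special mirror da4 da5
# optionally also store small records (here: everything up to 16K) on it
zfs set special_small_blocks=16K tank

Just keep in mind that the special vdev is pool-critical, so it needs the same redundancy as the data vdevs.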
 
I expect it would take days rather than hours, during which time your application I/O will run like a dog.
I am aware of that. The reason I went with two 18 TB drives in a mirror instead of, e.g., four 10 TB drives in a raidz2 is lower power consumption and easy expandability: I can just add two more 18 TB drives as another mirror vdev in the future (see the sketch at the end of this post). Performance isn't a huge factor.
I'd also consider buying at least a SAS3008-based HBA. Those 2008s are *really* old now, and especially with SSDs they are definitely a bottleneck.
Here I have the same reason: SAS2008-based HBAs draw less power than SAS2308 or SAS3008 HBAs, and the performance is enough for me (I mostly use gigabit Ethernet anyway).
To at least somewhat speed up that huge spinning-rust mirror, you could use another pair of SSDs as a mirrored 'special' device for metadata (and smaller files, see 'special_small_blocks' in zfsprops(7)).
I have also thought of that: either I use the two 200 GB SAS SSDs (MLC, by the way) as a special vdev and put my jails on the big pool, or I add two more of the same SSDs and create a pool just for the jails. :)
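For reference, the expansion I have in mind is just this (device names made up):

# initial pool: one mirror of the two 18 TB drives
zpool create tank mirror da0 da1
# later: grow the pool by adding a second mirror vdev
zpool add tank mirror da2 da3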
 
I have also thought of that: either I use the two 200 GB SAS SSDs (MLC, by the way) as a special vdev and put my jails on the big pool, or I add two more of the same SSDs and create a pool just for the jails. :)
Maybe also have a look at M.2 carriers like the Supermicro AOC-SLG3-2M2 (~35-40 EUR via Supermicro dealers; don't fall for the overpriced scalpers on Amazon or eBay). NVMe SSDs have even lower power consumption, are faster by orders of magnitude, and are a lot cheaper than SAS SSDs (e.g. the Micron 7450 Pro 1.92 TB has just passed the 200 EUR mark and keeps falling).
 
About the Hitachi/HGST SSDs: they do draw more power than NVMe SSDs, but I got them for a tenth of the price of the Micron SSD and they all still have several petabytes of write endurance left (and I already have them lying around).

If I ever have too much money, I'll retire the SAS SSDs and SAS HBA and switch to NVMe SSDs :)
 
Maybe also have a look at M.2 carriers like the Supermicro AOC-SLG3-2M2 (~35-40 EUR via Supermicro dealers; don't fall for the overpriced scalpers on Amazon or eBay). NVMe SSDs have even lower power consumption, are faster by orders of magnitude, and are a lot cheaper than SAS SSDs (e.g. the Micron 7450 Pro 1.92 TB has just passed the 200 EUR mark and keeps falling).
Scanning through the manual, it most likely needs a motherboard with PCIe bifurcation to use this card with two NVMe drives. As it can also accommodate 22110-size (22 x 110 mm) drives, for example the Micron 7450 PRO 1.92 TB M.2 22110 and the Micron 7400 PRO 3.84 TB M.2 22110, it generally looks like a great card option.
 
Scanning through the manual, it most likely needs a motherboard with PCIe bifurcation to use this card with two NVMe drives.
That's usually only an issue with desktop/gaming hardware.
Supermicro boards have supported bifurcation since roughly the X9 generation (some may require a BIOS update), so any X10 board should be fine.
 
In 2020 I asked Supermicro whether AOC-SLG3-2M2 would work in my X10SLL-F.

They explained that they hadn't tested that combination before.

But they actually went and tried it for me in their lab, and said that they couldn't get both NVMe drives to show up; only one showed up, no matter what.

They recommended that I go with an X10SRM-F/TF board, since they'd already tested that and knew it worked.

Not being "Mr. Money Bags," I wound up keeping the X10SLL-F, booting from a SATADOM and using a single NVMe drive (for my fast storage) on a generic NVMe to PCI-Express adapter board, instead.
 
I have also thought of that: either I use the two 200 GB SAS SSDs (MLC, by the way) as a special vdev and put my jails on the big pool, or I add two more of the same SSDs and create a pool just for the jails. :)
Given that your SSDs are cheap, I'd consider creating a separate SSD mirror for boot, swap, zroot, and the special vdev.
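A possible layout for each of the two SSDs (sizes and labels are purely illustrative):

gpart create -s gpt da4
gpart add -t efi -s 260M -l efi0 da4
gpart add -t freebsd-swap -s 8G -l swap0 da4
gpart add -t freebsd-zfs -s 64G -l zroot0 da4
gpart add -t freebsd-zfs -l special0 da4   # remainder for the special vdev

Repeat for the second SSD, mirror zroot across the two zroot partitions, and add the two special partitions to the data pool as a "special mirror".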

Sizing and usage (allowing small files in addition to metadata) of the special vdev will require some ongoing investigation. You may have to resize your special vdev as the tank grows.
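On a running pool you can keep an eye on how full the special vdev is relative to the data vdevs ('tank' being an example pool name), and resize or replace it before it gets tight:

# per-vdev capacity, including the special vdev
zpool list -v tank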

Don't allocate all your storage at the outset. Mirrors are easy to quadruplicate, break, and thus relocate (optionally changing the size as you do it). You may use either spare devices or spare partitions to do this. Plan ahead to be able to move and re-size things. Consider acquiring spare media for emergency replacements (why not if your SSDs are cheap). These might also be deployed occasionally for re-organisations (if you have the space, power, and data cables).
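For example, relocating or growing a mirror without downtime (made-up device names again):

# let the vdev grow automatically once all members are larger
zpool set autoexpand=on tank
# attach the new devices as extra mirror members
zpool attach tank da4p4 da6p4
zpool attach tank da4p4 da7p4
# wait for the resilver to finish (zpool status tank), then drop the old members
zpool detach tank da4p4
zpool detach tank da5p4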

What do you have planned for backups? I have a hot-swap SATA cage, and that's where I use my very large disks.

These issues all have a bearing on your choice of case.

I split my VMs between SSD and spinning disks, depending on usage.
 
I got introduced to FreeBSD by building a XigmaNAS box, which is FreeBSD-based.
Their latest release runs FreeBSD 13.1, which is one release behind.

Mine is built on Xeon/Supermicro hardware with ECC memory.
Six SATA drives, and it boots from an SLC thumb drive.
Samba et al. are built in and integrated.
Works great.
 
What do you have planned for backups?
I have a second (now TrueNAS CORE) server, which is woken up every weekend via an IPMI script and shut down again by another script after the pools have been synced via ZFS replication (see the sketch below).

For offsite backups, I have a single encrypted 14 TB drive (the pool holds only about 9 TB of files right now), and I am planning to get a second one.
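Stripped down, the weekend job looks roughly like this (host names, snapshot names and credentials are placeholders):

#!/bin/sh
# wake the backup server via its BMC
ipmitool -I lanplus -H backup-bmc -U admin -f /root/.ipmi_pass chassis power on
sleep 300   # give it time to boot

SNAP="weekly-$(date +%Y-%m-%d)"
zfs snapshot -r tank@"$SNAP"
# recursive, incremental replication from the previous weekly snapshot
zfs send -R -I tank@weekly-prev tank@"$SNAP" | ssh backup zfs receive -F backup/tank

# and power it down again
ssh backup shutdown -p now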
 
In 2020 I asked Supermicro whether AOC-SLG3-2M2 would work in my X10SLL-F.

They explained that they hadn't tested that combination before.

But they actually went and tried it for me in their lab, and said that they couldn't get both NVMe drives to show up; only one showed up, no matter what.

They recommended that I go with an X10SRM-F/TF board, since they'd already tested that and knew it worked.

Not being "Mr. Money Bags," I wound up keeping the X10SLL-F, booting from a SATADOM and using a single NVMe drive (for my fast storage) on a generic NVMe to PCI-Express adapter board, instead.

Supermicro is *very* (overly) thorough with their "working/tested" lists. They really don't assume anything but actually test everything in their labs before they will list it - be it controllers, adapters, RAM, SSDs, etc. Even hardware that's pretty 'generic'.
Those M.2 carriers are nothing more than re-routed PCIe lanes with some power distribution/voltage stabilization added. As long as the BIOS supports bifurcation (most X10 boards should - I haven't seen one yet that didn't), you are fine.

Booting from M.2/PCIe, however, is a different story - this is a relatively recent addition to Supermicro BIOSes (some X9 and X10 boards explicitly have support, most X11 AFAIK). If it is supported, you have to make sure to set the boot ROM options for all involved risers and PCIe slots to UEFI (https://www.supermicro.com/support/faqs/faq.cfm?faq=21166) for EFI entries to show up on PCIe/NVMe devices. Also, there is never any "PCIe" or "NVMe" boot option - depending on the BIOS version, the EFI boot entries on those drives show up as "hard disk - <OS name>" or simply under the EFI boot options (later versions).

If booting from PCIe/NVMe isn't officially supported:
Supermicro uses pretty standard AMI BIOS images with standard drivers and no obfuscation/"encryption" or other crap on top, so those images can easily be unpacked and have standard NVMe drivers added. There is an old, long thread on the servethehome forums about this, which linked to this guide on the winraid forum (the link in the STH forum is dead since they changed to Discourse): https://winraid.level1techs.com/t/h...t-for-all-systems-with-an-ami-uefi-bios/30901
I'm running several X10-based servers (and X9, but the last one was decommissioned about half a year ago) that boot from NVMe pools with BIOS images that have been modified that way. IIRC UEFITool even works via Wine, so no Windows is required, and the BIOS update can be done via IPMI if the BMC already runs the HTML5-based web interface (Maintenance -> BIOS Update).
 