Other Scalable HA SAN

Hi!

I am looking to build a scalable HA SAN to serve static files, so I am evaluating various alternatives. I would prefer to do this with FreeBSD, with Linux as the next choice if there is no potential with FreeBSD.

The simplest option would probably be FreeBSD NFS with HAST & CARP. But I think that would limit me to just two machines... I haven't done this before, so my knowledge is purely theoretical. I want to use low-cost hardware, so two machines may not scale for me in the medium term... but then again, in the medium term many new options might just emerge (just countering myself) :) If I do go this route, should I use NFSv3, NFSv4, or the NFS sharing available via ZFS (sharenfs)? I have noticed that NFSv4.2 doesn't play well with Ubuntu: my Ubuntu 20.04 LTS client is simply unable to write to an NFSv4.2 server running on FreeBSD 12.2. Perhaps Kerberizing will fix it, but that is a bit of a pain.
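Just to make my mental model concrete, the two-node setup I have in mind would look roughly like this; hostnames, device names, IPs, and the CARP password are all placeholders, and this is only a sketch from the docs, not a tested config:

```shell
# /etc/hast.conf on both nodes -- one replicated GEOM provider:
# resource shared0 {
#     on nodeA {
#         local /dev/da0
#         remote 192.168.1.2
#     }
#     on nodeB {
#         local /dev/da0
#         remote 192.168.1.1
#     }
# }

# CARP shared IP in /etc/rc.conf, so NFS clients follow the active node:
sysrc ifconfig_em0_alias0="inet vhid 1 pass secret123 alias 192.168.1.50/32"

# Start HAST and assign roles:
service hastd start
hastctl role primary shared0     # on the active node
hastctl role secondary shared0   # on the standby node
```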

The next option is to use a DFS like MooseFS, GlusterFS, or Ceph. I have zero practical experience with these but would love to hear from those who have used them. Are they secure, stable, and fast? Those are three very important criteria for me. I have read good and bad things about all of them, so I am not sure... nor do I have the luxury of trying out each in turn, given time and resource constraints. I guess these work better with Linux by default, although I have read that MooseFS plays well with FreeBSD.

The last option is to build my own DFS for FreeBSD... unfortunately, that is difficult as my hands are already full.

Finally, my last question: what kind of disks do you use in production in a data center? NAS HDDs like WD Reds, SCSI/SAS drives, plain SATA HDDs, or do you prefer SSDs?

I am looking for advice from the experienced folks out here.

Please share your thoughts.

Thanks in advance!

- Nitin
 
Oh ok :) so it's just the exports file that is located in a different directory.
No, it means that, at least on FreeBSD and illumos, the exports are handled (or can be handled) by the ZFS property "sharenfs", and /etc/exports can be omitted.
It is good practice to add a note to an otherwise empty /etc/exports, e.g. "# NFS exports are handled by ZFS". Mixing ZFS and "classic" exports becomes ugly and error-prone very quickly and can behave unexpectedly when exports are defined via both mechanisms, especially when different options are defined.
The zfs manpage is a little short on details, but essentially you put all export options into that property just as you would in /etc/exports; e.g. "-network 192.168.0.0/24" for an old-style network-wide share.
Of course the NFS server still needs to be configured and running; the "sharenfs" property only substitutes for the exports file!
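A rough sketch of what that looks like in practice; the pool/dataset name "tank/static" is just an example:

```shell
# The NFS server itself still has to be enabled -- sharenfs only
# replaces the exports file, not the daemons:
sysrc nfs_server_enable="YES"
service nfsd start

# Share a dataset read-only to a local network, using the same option
# syntax you would put in /etc/exports:
zfs set sharenfs="-ro -network 192.168.0.0/24" tank/static

# Verify the property and the resulting export:
zfs get sharenfs tank/static
showmount -e localhost

# Turn sharing off again:
zfs set sharenfs=off tank/static
```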

And just in case you stumble upon the property "sharesmb": it has no effect on FreeBSD, as there is no native SMB server (unlike, e.g., on illumos) and Samba is not ZFS-aware.

As for disks: don't use SATA in anything that should perform halfway decently. Always use SAS, and for HA you'd want dual-port drives and a backplane that supports multipath connections, so you can hook two nodes to the same disk array/backplane for failover.

There are a lot of resources online about HA/failover with TrueNAS/FreeNAS and HAST/CARP that go into the details and essentially also apply to FreeBSD (minus all the clicky-coloury-stuff in the TrueNAS GUI). Just make sure to use some relatively recent sources when it comes to the in-depth details.

With a HAST/CARP solution you won't be limited to NFS, as HAST only manages the GEOM part. But for file-based access you can only use one of the connected hosts; with block storage (iSCSI) you could use multipathing over both hosts in parallel (or for failover). The gmultipath(8) manpage gives an overview of the multipath architecture.
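For a taste of what that looks like, here is a minimal gmultipath(8) sketch. The device names da0/da1 are assumptions: they would be the two paths to the same dual-ported disk, one per HBA:

```shell
# Load the GEOM multipath class:
kldload geom_multipath

# Create a multipath device "disk01" over both paths to the same disk:
gmultipath label disk01 /dev/da0 /dev/da1

# The combined device then appears as /dev/multipath/disk01; check its state:
gmultipath status disk01

# Optionally switch it to active/active so both paths carry I/O:
gmultipath configure -A disk01
```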
 
The 'sharenfs' feature is a bit of a hack on FreeBSD. Setting the property triggers a script that converts the options and saves them in /etc/zfs/exports; mountd(8) then simply reads both /etc/exports and /etc/zfs/exports. On Solaris/illumos, ZFS uses system calls to add the exports dynamically.
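You can watch the hack at work; again, "tank/static" is just an example dataset name:

```shell
# Set the property on any dataset...
zfs set sharenfs="-maproot=root" tank/static

# ...and the generated classic-style export line shows up here:
cat /etc/zfs/exports

# mountd picks up both exports files; tell it to re-read them:
service mountd reload
```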

If you can coax Samba into reading a second configuration file besides smb4.conf (/usr/local/etc/zfs/smb4.conf, for example), then a similar hack could be done for the 'sharesmb' feature.
 
Always use SAS and for HA you'd want dual port drives and a backplane that supports multipath connections, so you can hook 2 nodes to the same disk array/backplane for failover.
Thank you sko for the tips and guidance👍
With a HAST/CARP solution you won't be limited to NFS, as HAST only manages the GEOM part. But for file-based access you can only use one of the connected hosts; with block storage (iSCSI) you could use multipathing over both hosts in parallel (or for failover). The gmultipath(8) manpage gives an overview of the multipath architecture.
Ok, this is something I wasn't aware of... I'll have to do some reading to understand the multipath architecture... thank you so much!
 