


Active Member

Thanks: 27
Messages: 161

A little bird told me that net/glusterfs is capable of creating a file system on top of ZFS that spans multiple computers/pools. However, I have not been able to find any decent howtos or best-practices guides on how exactly one would go about implementing it. I would be eternally grateful for any input anyone could offer.


Well-Known Member

Thanks: 88
Messages: 283

Last I read, GlusterFS on ZFS on FreeBSD is still experimental, which probably explains the lack of "official" tutorials for end users.

You can probably follow the comments in Thread 46923 and some entries in this bug report to get started.


Aspiring Daemon

Thanks: 309
Messages: 737

(Disclaimer: I've never actually used or installed GlusterFS, but am familiar with other cluster file systems.)

A: Other than some details like how to configure firewalls and how to set up and start services (systemd vs. init), the instructions for Fedora or RHEL should work just fine on FreeBSD; the gluster commands should all be the same. I read the gluster documentation a few months ago, and I remember seeing a howto guide and a simple install guide, and the commands manual was unusually clear.
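To give a rough idea, the volume-creation steps from the upstream guides look like the sketch below; the hostnames (node1, node2), volume name, and brick paths are placeholders, and the client mount command on FreeBSD may be mount_glusterfs rather than the Linux form shown:

```shell
# From one node, join the other node(s) into the trusted storage pool.
gluster peer probe node2

# Create a two-way replicated volume, one brick per node.
# /data/brick1 is a placeholder (e.g. a ZFS dataset mountpoint).
gluster volume create myvol replica 2 \
    node1:/data/brick1 node2:/data/brick1

# Start the volume and verify it.
gluster volume start myvol
gluster volume info myvol

# On a client (Linux syntax; FreeBSD's port ships mount_glusterfs):
mount -t glusterfs node1:/myvol /mnt/gluster
```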

B: But cluster file systems tend to be complex and powerful beasts. When using high-end networking hardware and storage backends, they have obvious advantages (in particular for high performance), but the price you pay is extra work configuring and maintaining. Unless you are doing this just for fun to learn something, you might want to consider simpler solutions, like one machine with the disks in it (suitably RAIDed) acting as a file server, or an active/standby pair of nodes doing the same.
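For comparison, the simpler single-machine alternative can be as small as a RAIDed ZFS pool exported over NFS. Disk names, pool/dataset names, and the raidz level below are examples, not a recommendation:

```shell
# One machine with suitably RAIDed disks acting as a file server:
# a raidz2 pool with one exported dataset.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zfs create tank/export

# Share it over NFS via the ZFS property (the NFS server must
# also be enabled in rc.conf for this to take effect).
zfs set sharenfs=on tank/export
service nfsd onestart
```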


Active Member

Thanks: 95
Messages: 175

I made a test two-node cluster with net/pacemaker, net/corosync, net-mgmt/crmsh and net/glusterfs in a VM environment on ESXi. It took a while to set it up right but it was a fun learning experience. I used ufs file systems for it and didn't try zfs.
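For anyone wanting to reproduce that test setup, the ports mentioned can be installed by origin; package and rc-script names may vary between versions, so verify with pkg search and service -l:

```shell
# Install the cluster stack used above, by ports origin.
pkg install net/pacemaker net/corosync net-mgmt/crmsh net/glusterfs

# Enable the cluster services at boot (rc script names assumed
# from the ports; check with `service -l`).
sysrc corosync_enable=YES pacemaker_enable=YES
```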

Regardless of the file system, I could not mount the cluster volume locally on the host nodes until the server was completely up; even the "late" mount option would not work. However, that's not a requirement for a true clustered host. Clients could mount it without any problems, and failover worked great.

Be sure to put fuse_load="YES" in /boot/loader.conf
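That is, something along these lines; the glusterd rc-script name is assumed from the net/glusterfs port and may differ by version:

```shell
# /boot/loader.conf -- load the FUSE kernel module at boot
fuse_load="YES"

# /etc/rc.conf -- start the gluster daemon at boot
# (rc script name from the port; verify with `service -l`)
glusterd_enable="YES"
```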