ZFS + GlusterFS (+ CBSD): "not supported on brick"

Alright, so this may be pretty niche, but I'm curious whether anyone's run into this particular issue. The background is that I'm using sysutils/cbsd for bhyve orchestration, and I'm getting around to learning how I can leverage cbsd to build out a little cluster to make VM migration easier. One of cbsd's requirements is that the nodes be on a distributed filesystem, and I've chosen to look at GlusterFS because it seemed like one of the more straightforward ways of sharing disk space without requiring some auxiliary server to facilitate file distribution.

Since I'm new to GlusterFS I decided to spin up--you guessed it--a few bhyves on an existing cbsd hypervisor so I could try this thing out. So, to break it down, I have one host, and on that host I have three VMs (testnode1, testnode2, and testnode3 because I don't have very creative node-naming abilities). Each VM is running FreeBSD 14.0-RELEASE and has a ZFS filesystem, and I've installed net/glusterfs. I've also enabled fuse at boot time, and I believe that I have all nodes peering with each other.
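
For completeness, the boot-time setup on each node looked roughly like this (a sketch from memory, not a verified recipe; kld_list is just one way to load fuse at boot):

Code:
# enable the daemon and fuse at boot, then start everything up
root@testnode1:~ # sysrc glusterd_enable=YES
root@testnode1:~ # sysrc kld_list+="fusefs"
root@testnode1:~ # kldload fusefs
root@testnode1:~ # service glusterd start
# form the trusted pool from one node
root@testnode1:~ # gluster peer probe testnode2
root@testnode1:~ # gluster peer probe testnode3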

Code:
root@testnode1:~ # gluster peer status
Number of Peers: 2

Hostname: testnode2
Uuid: ba88386a-6175-459c-af11-c3edb36d3e42
State: Peer in Cluster (Connected)

Hostname: testnode3
Uuid: 3e3a59c5-1841-4fda-a4ea-8f431a3eabaa
State: Peer in Cluster (Connected)

The other VMs yield similar output. On each VM, I've executed zfs create -o mountpoint=/bricks zroot/bricks and created a few directories:

Code:
root@testnode1:~ # tree /bricks
/bricks
├── brick1
├── brick2
└── brick3
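
Concretely, the brick prep on each node was just:

Code:
# one ZFS dataset for all bricks, plus a directory per future volume
root@testnode1:~ # zfs create -o mountpoint=/bricks zroot/bricks
root@testnode1:~ # mkdir /bricks/brick1 /bricks/brick2 /bricks/brick3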

I also made sure that I ran service glusterd start (and it's enabled at boot anyhow). Unfortunately, when I go to create the volume, I end up with an error that's considerably less verbose than I'd like:

Code:
root@testnode1:~ # gluster volume create jails-data replica 2 arbiter 1 transport tcp testnode1:/bricks/brick1 testnode2:/bricks/brick1 testnode3:/bricks/brick1
volume create: jails-data: failed: Glusterfs is not supported on brick: testnode1:/bricks/brick1.
Setting extended attributes failed, reason: Invalid argument.

I guess I should note that brick2 and brick3 will end up being other volumes if I get this thing figured out. Anyhow, the logs in /var/log/glusterfs/glusterd.log and /var/log/glusterfs/cli.log are just about as inconclusive; they mostly parrot the above (albeit slightly reworded).
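
For anyone trying to reproduce this, running the daemon with debug logging should make glusterd.log chattier (I believe glusterd accepts the usual log-level flag, so this is from memory), though in my case it didn't surface anything new:

Code:
root@testnode1:~ # service glusterd stop
# restart with debug-level logging
root@testnode1:~ # glusterd --log-level=DEBUG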

Ok, so has anyone seen this sort of thing before? I'm trying to figure out if it's just because I'm running this in a bhyve instead of on bare metal, or if there's some sort of misconfiguration happening. I kind of doubt that the issue is related to bhyves at all, but I figured I'd note it just in case. Thanks in advance for any insight! I'm fully expecting this to just be PEBCAK.
 
Ok, some more info on this: this definitely appears to be a FreeBSD 14 problem, because I tried on a fresh set of FreeBSD 13.2 nodes and I'm not getting this error there. I still would appreciate any insight into what might have changed in FreeBSD 14 to cause this sort of issue, but at least I'm not at a roadblock anymore.
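
For reference, on the 13.2 nodes the same gluster volume create succeeds, and from there it's just a matter of starting the volume and FUSE-mounting it somewhere (the mountpoint below is arbitrary; any node in the pool can serve the volfile):

Code:
root@testnode1:~ # gluster volume start jails-data
# FUSE-mount the volume on a client
root@testnode1:~ # glusterfs --volfile-server=testnode1 --volfile-id=jails-data /mnt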
 
It's probably an unpopular post here: I have looked into GlusterFS, but then switched all of my clients' clusters to Linux. If you seriously want to work with GlusterFS or any other high-availability storage solution like Ceph, DRBD, OCFS2, etc., just use Linux.
 
It's probably an unpopular post here: I have looked into GlusterFS, but then switched all of my clients' clusters to Linux. If you seriously want to work with GlusterFS or any other high-availability storage solution like Ceph, DRBD, OCFS2, etc., just use Linux.

I quite like Ceph and sometimes use it for other projects (though I'd love to see it get more FreeBSD support). But for this use case I really enjoy cbsd and bhyves, and the stability FreeBSD generally affords. No, I'd much rather stick with FreeBSD and try to nail down the root cause of the 14.0/GlusterFS incompatibility.
 
On the cbsd site (bsdstore.ru) they have the following in the cbsd "features" list:

  • lack of binding to ZFS: CBSD works transparently on UFS, HammerFS or any other FS; some people use jail and bhyve on cluster filesystems such as NFS, GlusterFS and Ceph, which is typical for DC and failover setups
Not sure why this is billed as a feature, but it's probably best to ask them for help. [Perhaps they mean ZFS is not *required* to use cbsd.]
 
On the cbsd site (bsdstore.ru) they have the following in the cbsd "features" list:

  • lack of binding to ZFS: CBSD works transparently on UFS, HammerFS or any other FS; some people use jail and bhyve on cluster filesystems such as NFS, GlusterFS and Ceph, which is typical for DC and failover setups
Not sure why this is billed as a feature, but it's probably best to ask them for help. [Perhaps they mean ZFS is not *required* to use cbsd.]

Yeah, I suspect that it's actually not CBSD-related at all, now that I've gotten it working on 13.2-RELEASE (it's still broken on 14.0-RELEASE). They recommend disabling certain ZFS features when using a DFS for bhyve migration; I'm not really sure why they require a DFS instead of just taking advantage of zfs send/receive.
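
(For contrast, the zfs send/receive approach I'm thinking of is just this sort of thing, with zroot/jails/vm1 as a made-up dataset name:)

Code:
# snapshot the VM dataset and replicate it to the target node
root@testnode1:~ # zfs snapshot zroot/jails/vm1@migrate
root@testnode1:~ # zfs send zroot/jails/vm1@migrate | ssh testnode2 zfs receive zroot/jails/vm1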
 
I'm also trying to use GlusterFS from packages on native FreeBSD 14 (using 14.1-RELEASE) and hitting the same issue with a ZFS backend when trying to create the volume:

Code:
volume create: globalvol: failed: Glusterfs is not supported on brick: <IP>:/gluster/brick0.
Setting extended attributes failed, reason: Invalid argument.

However, I had this working well on FreeBSD 13.2, so it appears something has changed, either in FreeBSD or in the Gluster package.

It also looks like the Gluster package is a bit behind (v8.4 vs. v10). Not sure if we can get the port to a newer release, but it appears there are FreeBSD-specific changes in v9 at least that might be related, given the error message/issue we are seeing here.

 
However, I had this working well on FreeBSD 13.2, so it appears something has changed, either in FreeBSD or in the Gluster package.

I asked Reddit about this earlier and got a response:
Newer versions of ZFS forbid the use of some name prefixes in xattr. One of them is 'trusted.', which is used by GlusterFS.

I haven't validated the response to see if it's correct. To date, I've not found a patch for this, and haven't gotten around to reaching out to the port maintainer for net/glusterfs to see if a patch can be developed (or perhaps the upgrade you mentioned).
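
That said, if the Reddit answer is right, the failure should be reproducible without Gluster at all, just by setting a "trusted."-prefixed extended attribute on the ZFS-backed brick. The attribute names below are made up; on an affected 14.x system I'd expect the first setextattr to fail with "Invalid argument" and the second to succeed:

Code:
# reserved prefix: expected to fail on FreeBSD 14 + ZFS if the explanation holds
root@testnode1:~ # setextattr user trusted.test 1 /bricks/brick1
# non-reserved name: expected to succeed
root@testnode1:~ # setextattr user plain.test 1 /bricks/brick1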
 
It also looks like the Gluster package is a bit behind (v8.4 vs. v10). Not sure if we can get the port to a newer release, but it appears there are FreeBSD-specific changes in v9 at least that might be related, given the error message/issue we are seeing here.

I'm the GlusterFS port maintainer, and it has some issues with regard to performance and memory leaks. GlusterFS also needs to be ported to use kqueue/kevent; it currently uses regular poll, which isn't tested upstream, and the code is filled with memory leaks. devel/libepoll-shim won't work because of how GlusterFS forks at startup: file descriptors are not carried over. The GlusterFS recovery and upgrade tooling is also currently broken. The part that makes me hesitate to use GlusterFS for anything mission-critical (even on Linux) is that performance will always be second class no matter where you use it (I say that from experience), since it uses FUSE to mount the filesystem.

I plan on updating it soon once I can find a few days of downtime. A few people have emailed me some updated port skeletons that I haven't had a chance to dig into/test yet. However, none of this fixes the underlying issues above.

I could use some help porting the epoll code to kqueue/kevent. I know enough C to do troubleshooting and patching, and I understand the concepts behind kqueue. However, I haven't done anything with epoll, nor have I written anything significant around event multiplexing or TCP/IP servers. So it's a bit over my head, and the upstream GlusterFS devs have no interest in providing support/resources.
 
There's a version 11 of GlusterFS now; it may be a good idea to check the latest version and fix the issues there directly.

GlusterFS is critical for my infrastructure and I'm really hoping that it will be working soon on FreeBSD.
 
GlusterFS is critical for my infrastructure and I'm really hoping that it will be working soon on FreeBSD.

I would not hold my breath for this: serious effort is necessary, and it's probably not feasible for developers doing it in their spare time. I do not see this happening until some major FreeBSD player like Klara Systems, Netflix, or the FreeBSD Foundation steps in to sponsor this development and further updates. If it is really critical, move to a Linux distribution of your choice and be done with it.
 