Alright, so this may be pretty niche, but I'm curious to know if anyone's run into this particular issue. The background is that I'm using sysutils/cbsd for bhyve orchestration, and I'm getting around to learning how I can leverage cbsd to build out a little cluster to make VM migration easier. One of cbsd's requirements is that the nodes be on a distributed filesystem, and I've chosen to look at GlusterFS because it seemed to be one of the more straightforward ways of sharing disk space without requiring some auxiliary server to facilitate file distribution or whatever.
Since I'm new to GlusterFS I decided to spin up--you guessed it--a few bhyves on an existing cbsd hypervisor so I could try this thing out. So, to break it down, I have one host, and on that host I have three VMs (testnode1, testnode2, and testnode3 because I don't have very creative node-naming abilities). Each VM is running FreeBSD 14.0-RELEASE and has a ZFS filesystem, and I've installed net/glusterfs. I've also enabled fuse at boot time, and I believe that I have all nodes peering with each other.
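For what it's worth, the prep on each node went roughly like this (reconstructed from memory, so treat it as a sketch rather than a verbatim history; the hostnames resolve via /etc/hosts entries):
Code:
root@testnode1:~ # sysrc glusterd_enable="YES"
root@testnode1:~ # echo 'fusefs_load="YES"' >> /boot/loader.conf
root@testnode1:~ # kldload fusefs
root@testnode1:~ # gluster peer probe testnode2
root@testnode1:~ # gluster peer probe testnode3
At any rate, the peers do show up as connected: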
Code:
root@testnode1:~ # gluster peer status
Number of Peers: 2
Hostname: testnode2
Uuid: ba88386a-6175-459c-af11-c3edb36d3e42
State: Peer in Cluster (Connected)
Hostname: testnode3
Uuid: 3e3a59c5-1841-4fda-a4ea-8f431a3eabaa
State: Peer in Cluster (Connected)
The other VMs yield similar outputs. On each VM, I've executed
zfs create -o mountpoint=/bricks zroot/bricks
and have created several folders:
Code:
root@testnode1:~ # tree /bricks
/bricks
├── brick1
├── brick2
└── brick3
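Those are plain directories on a stock dataset; for completeness (and in case a default property turns out to matter here), it went something like:
Code:
root@testnode1:~ # mkdir /bricks/brick1 /bricks/brick2 /bricks/brick3
root@testnode1:~ # zfs get xattr,acltype zroot/bricks
I haven't touched any dataset properties, so xattr and acltype should still be whatever FreeBSD defaults to.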
I also made sure that I ran
service glusterd start
(and it's enabled at boot anyhow). Unfortunately, when I go to create the volume, I end up with an error that's rather less verbose than I'd like:
Code:
root@testnode1:~ # gluster volume create jails-data replica 2 arbiter 1 transport tcp testnode1:/bricks/brick1 testnode2:/bricks/brick1 testnode3:/bricks/brick1
volume create: jails-data: failed: Glusterfs is not supported on brick: testnode1:/bricks/brick1.
Setting extended attributes failed, reason: Invalid argument.
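Since the error mentions extended attributes, one sanity check that comes to mind is setting an attribute on the brick by hand; something like the below (the attribute name and value are throwaways, and I'm honestly not sure which extattr namespace gluster wants on FreeBSD, hence trying both):
Code:
root@testnode1:~ # setextattr user testattr testvalue /bricks/brick1
root@testnode1:~ # getextattr user testattr /bricks/brick1
root@testnode1:~ # setextattr system testattr testvalue /bricks/brick1
root@testnode1:~ # getextattr system testattr /bricks/brick1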
I guess I should note that brick2 and brick3 will end up being bricks for other volumes if I get this thing figured out. Anyhow, the logs in /var/log/glusterfs/glusterd.log and /var/log/glusterfs/cli.log are just about as inconclusive; they mostly parrot the above (albeit slightly reworded).
Ok, so has anyone seen this sort of thing before? I'm trying to figure out if it's just because I'm running this in a bhyve instead of on bare metal, or if there's some sort of misconfiguration happening. I kind of doubt that the issue is related to bhyves at all, but I figured I'd note it just in case. Thanks in advance for any insight! I'm fully expecting this to just be PEBCAK.