Are There Any Distributed Block Storage Filesystems That Support FreeBSD?

I know that MinIO is available for FreeBSD, so there is at least a distributed object store available, but I have not been able to find something like Ceph or Gluster that currently supports FreeBSD. Does anyone know of any?

With the work being done on OCI containers and Kubernetes in the Enterprise Working Group, we are close to being able to deploy cloud-native solutions on FreeBSD, but without storage we will not get far.
 
Thanks for the quick reply, but HAST is not really distributed storage; it is a two-node primary/secondary system. Ceph, Rook, Gluster, etc. let you increase both the available storage and the overall throughput simply by adding nodes.
 
You can scale up with HAST/CARP/LAGG instead of scaling out with something like Gluster/Ceph. The former is much simpler and requires less legwork. If you're not Google/Amazon, you probably don't need Gluster/Ceph.
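For reference, a two-node setup along those lines is only a few lines of configuration. This is just a rough sketch; the hostnames, addresses, and disk below are made up:

/etc/hast.conf (identical on both nodes):

resource shared0 {
        on nodeA {
                local /dev/da1
                remote 10.0.0.2
        }
        on nodeB {
                local /dev/da1
                remote 10.0.0.1
        }
}

/etc/rc.conf (plus carp_load="YES" in /boot/loader.conf if carp.ko is not already loaded):

hastd_enable="YES"
ifconfig_em0_alias0="inet vhid 1 pass secret alias 10.0.0.50/32"

The node you promote with "hastctl role primary shared0" gets /dev/hast/shared0 to put a filesystem on, CARP moves the 10.0.0.50 service address on failover, and LAGG only aggregates NICs for throughput. None of this adds capacity beyond what the two nodes hold.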
 
The latest versions of OpenZFS support live RAID-Z expansion, so in theory you could create a zpool out of iSCSI disks, grow it forever, and export zvols from it over iSCSI.
It will most likely suck badly :)
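For the curious, the frankenstein would look roughly like this. Target names, portal addresses, and da device numbers are invented and will differ on a real system:

# attach the remote iSCSI disks on the aggregating node
iscsictl -A -p 10.0.0.11 -t iqn.2024-01.org.example:disk0
iscsictl -A -p 10.0.0.12 -t iqn.2024-01.org.example:disk1
iscsictl -A -p 10.0.0.13 -t iqn.2024-01.org.example:disk2

# build a pool on them and carve out a zvol
zpool create bigpool raidz da1 da2 da3
zfs create -V 500G bigpool/vol0

# later, after attaching another iSCSI disk, grow the raidz vdev (OpenZFS 2.3+ RAID-Z expansion)
zpool attach bigpool raidz1-0 da4

# /etc/ctl.conf: re-export the zvol over iSCSI with ctld(8)
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 0.0.0.0
}
target iqn.2024-01.org.example:vol0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/bigpool/vol0
        }
}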
 
I think the iSCSI stuff works well now. It's in the kernel and fully integrated into the CAM Target Layer. I haven't tried it, though. I'd just use NFS to keep it simple.
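The simple NFS setup is only a handful of lines; the path and network below are placeholders:

# server /etc/rc.conf
rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"

# server /etc/exports
/data -maproot=root -network 10.0.0.0 -mask 255.255.255.0

# client
mount -t nfs 10.0.0.1:/data /mnt

Of course this is a single server exporting its own storage, not distributed storage.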
 
It's not that good over WAN (at least in 13.x). Sometimes the disk becomes unresponsive on the initiator and you have to reboot to reconnect it.
I actually had a zpool locally where the disk was "VPS block storage". I had this setup because the VPS did not have enough memory to run ZFS there; the purpose was offsite backup, and zfs send always failed with memory problems. The iSCSI/zpool frankenstein worked, but with the above problems.
 
You can scale up with HAST/CARP/LAGG instead of scaling out with something like Gluster/Ceph. The former is much simpler and requires less legwork. If you're not Google/Amazon, you probably don't need Gluster/Ceph.
Can you explain how one would use HAST with CARP and LAGG to scale both the amount of storage available and its performance?
 
I know that MinIO is available for FreeBSD, so there is at least a distributed object store available, but I have not been able to find something like Ceph or Gluster that currently supports FreeBSD. Does anyone know of any? With the work being done on OCI containers and Kubernetes in the Enterprise Working Group, we are close to being able to deploy cloud-native solutions on FreeBSD, but without storage we will not get far.

Hi! Have a look at glusterfs (net/glusterfs)
Regards
 
* Good support for FreeBSD, but the CE only supports one master, which means a SPOF: https://github.com/moosefs/moosefs
* Abandoned: https://github.com/leo-project/leofs
* MinIO (it's not block storage, but worth a mention): https://blog.min.io/filesystem-on-object-store-is-a-bad-idea/
* Ported and looks promising, but see: https://github.com/seaweedfs/seaweedfs/issues/6645
* Not tested yet: https://man.freebsd.org/cgi/man.cgi?query=pnfs
* GlusterFS: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=280148

There are others worth mentioning, but their use cases are completely different from clustering or similar:

* Works in FreeBSD but not ported: https://github.com/tahoe-lafs/tahoe-lafs
 
Hi! Have a look at glusterfs (net/glusterfs)
Regards
Unfortunately:

net/glusterfs 8.4_3 (GlusterFS distributed file system)
BROKEN: Fails to build, ld: error: version script assignment of 'global' to symbol 'client_dump' failed: symbol not defined
EXPIRATION DATE: 2025-01-31
IGNORE: is marked as broken: Fails to build, ld: error: version script assignment of 'global' to symbol 'client_dump' failed: symbol not defined
 
* Good support for FreeBSD, but the CE only supports one master, which means a SPOF: https://github.com/moosefs/moosefs

Looks really interesting.

* MinIO (it's not block storage, but worth a mention): https://blog.min.io/filesystem-on-object-store-is-a-bad-idea/

I mentioned MinIO, as I have used it. Very nice, but as you said, it is an Object Store, great for Object Store things, but not an FS. :-)


Does look interesting. Will try it out in a few weeks and let you know my thoughts.


I have not tried it, but I will test it as well.


No longer works in FreeBSD. :-(
 
I have looked at MooseFS some more, and even sent them a request for a price, but I have not received any response. I am a bit concerned that they are not really developing it anymore. Anyone have any experience with it?
 
I have looked at MooseFS some more, and even sent them a request for a price, but I have not received any response. I am a bit concerned that they are not really developing it anymore. Anyone have any experience with it?
The same day I am answering this post, an interesting and wonderful thing has happened: we can now use the mount subcommand on FreeBSD. A bug in go-fuse has been fixed. However, both issues are not closed yet:

* https://github.com/seaweedfs/seaweedfs/issues/6645#issuecomment-2885323156
* https://github.com/hanwen/go-fuse/issues/570#issue-3064133503

I am testing it among all my systems and it works.
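If anyone wants to try it, the FreeBSD side is just the FUSE module plus the mount subcommand. The filer address and mount point below are placeholders, and a filer must already be running:

kldload fusefs
weed mount -filer=localhost:8888 -dir=/mnt/seaweedfs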
 
Will it support a POSIX file system and ACLs?
It is currently supported thanks to the filer.
How fast does it seem to be?
See the wiki if you are interested in benchmarks, but my impression is that it is fast, at least for my use cases. I have set up replication between filers in two datacenters (actually just two servers) in the same country but in different states, and it is very fast.
Does it scale well?
Yes, I think this is the brilliant thing about SeaweedFS. You can scale master servers, volumes, filers, etc., and all of those parts work very well; SeaweedFS uses Raft to accomplish this (a rough startup sketch follows below).
Does it have k8s integration?
Reading the documentation, yes, but as I don't use Kubernetes, I have no opinion about that. Try it and let me know.
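To make the scaling answer concrete, a minimal setup is three processes, and you grow by running more of each. Ports and directories below are defaults or placeholders:

# one master; add more masters with -peers=host1:9333,host2:9333,... for Raft-based HA
weed master -mdir=/data/seaweed/master -port=9333

# any number of volume servers, each registering with the master(s)
weed volume -dir=/data/seaweed/vol -max=100 -mserver=localhost:9333 -port=8080

# one or more filers, which provide the namespace that weed mount uses
weed filer -master=localhost:9333 -port=8888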
 
It is currently supported thanks to the filer.

I am waiting for some new machines, but then I will start testing it.

See the wiki if you are interested in benchmarks, but my impression is that it is fast, at least for my use cases. I have set up replication between filers in two datacenters (actually just two servers) in the same country but in different states, and it is very fast.

I need two clusters in two different locations, so that is great.

Yes, I think this is the brilliant thing about SeaweedFS. You can scale master servers, volumes, filers, etc., and all of those parts work very well; SeaweedFS uses Raft to accomplish this.

Interesting. I will play a bit and see how well it works.

Reading the documentation, yes, but as I don't use Kubernetes, I have no opinion about that. Try it and let me know.

Will do.
 
The same day I am answering this post, an interesting and wonderful thing has happened: we can now use the mount subcommand on FreeBSD. A bug in go-fuse has been fixed. However, both issues are not closed yet:

* https://github.com/seaweedfs/seaweedfs/issues/6645#issuecomment-2885323156
* https://github.com/hanwen/go-fuse/issues/570#issue-3064133503

I am testing it among all my systems and it works.
Everything is upstreamed now, so anyone using SeaweedFS on FreeBSD should have a fully working out-of-the-box experience. It doesn't cover the distributed block storage use case, but it does cover the file system use case.

Making or adapting a distributed block device is on my list of things to do, now that I fixed the FUSE mount. Time and money are the enemies. LOL!
 