FreeBSD 12.1 GlusterFS setfacl: Operation not supported

Dear all

1. I have GlusterFS (ver. 3.11.1) installed on FreeBSD 12.1. It is compiled from source using the net/glusterfs port.

2. I have enabled ACLs on FreeBSD UFS2 using tunefs:
mount
/dev/ada0p3 on / (ufs, local, journaled soft-updates, acls)
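
For completeness, I enabled them roughly like this (tunefs only works on an unmounted or read-only filesystem, hence single-user mode; device name as in the mount output above):
tunefs -a enable /dev/ada0p3
mount -u -o rw /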

3. I mount GlusterFS on /mnt/gluster/ using /etc/fstab as follows:
host:/GFSVol /mnt/gluster/ fusefs rw,acl,transport=tcp,backup-volfile-servers=host2:host3,mountprog=/usr/local/sbin/mount_glusterfs,late,failok,log-level=WARNING 0 0
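
For a one-off test outside of fstab, the same mount should be expressible like this (untested; it's just the fstab entry rewritten as a command):
mount -t fusefs -o rw,acl,transport=tcp,backup-volfile-servers=host2:host3,mountprog=/usr/local/sbin/mount_glusterfs host:/GFSVol /mnt/gluster/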

4. ACLs (setfacl and getfacl) work well on FreeBSD UFS.
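
For example (user and path invented for illustration), a session like this works fine on the UFS side:
setfacl -m user:bob:rwx /usr/home/file.txt
getfacl /usr/home/file.txt
# file: /usr/home/file.txt
# owner: root
# group: wheel
user::rw-
user:bob:rwx
group::r--
mask::rwx
other::r--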

5. But setfacl does not work on /mnt/gluster/.
setfacl -m user:bob:rwx /mnt/gluster/file.txt
setfacl: /mnt/gluster/file.txt: acl_get_file() failed: Operation not supported
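
In case it helps with diagnosis: my assumption (not verified) is that the __acl_get_file(2) syscall itself fails with EOPNOTSUPP, i.e. the fusefs kernel driver rejects ACL operations before GlusterFS is ever consulted. Tracing the command should show it:
truss setfacl -m user:bob:rwx /mnt/gluster/file.txt 2>&1 | grep -i acl
If that assumption is right, the call should appear as ERR#45 'Operation not supported' in the truss output.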

I'd appreciate any help to fix this.
 
Start with the documentation: Does Gluster support ACLs? If it does in general, does the FreeBSD version of Gluster support them?
 
1. @ralphbsz, your answer shows that you have zero experience using GlusterFS. I'm not a student trying to learn either GlusterFS or FreeBSD. At one company, I ran GlusterFS ver. 3.10 on Ubuntu Linux 14.04 (LTS) across an 8-brick cluster with over 20 TB (a whopping amount back in the Ubuntu 14.04 days). I managed the entire permission scheme using ACLs.

2. At another company, which uses FreeBSD on the backend, I'm building another GlusterFS cluster. It's the latest FreeBSD (12.1) and the latest GlusterFS available on FreeBSD (ver. 3.11.1).

3. Do you know what the current version of GlusterFS is? It's version 7.2. On FreeBSD, it's still version 3.11.1!

4. With GlusterFS ver. 3.10 on Ubuntu Linux 14.04, ACLs worked without any failure! Are you saying that with GlusterFS ver. 3.11.1 on FreeBSD 12.1, ACLs do not work? What a shame for FreeBSD if so!

5. Do you know there is NO distributed file system on FreeBSD anywhere near the Lustre file system (https://en.wikipedia.org/wiki/Lustre_(file_system))? Do you know why?

6. Have you seen this? https://news.ycombinator.com/item?id=15711418

7. Why don't you just declare that FreeBSD is for child's play and not for anything serious? Then I can show that to this company and switch to Ubuntu Linux 18.04 LTS instead of struggling like hell to get a minor thing working.
 
Looks like it's not GlusterFS; maybe it's FreeBSD itself that is not fit for the job!

I'm referring to this:
https://www.freebsdfoundation.org/project/fuse-userspace-file-system-update/

- Project Status: Complete

- "FreeBSD’s fuse(4) driver is buggy and out-of-date. It’s essentially unusable
for any networked filesystem like CephFS, MooseFS, or Tahoe-LAFS."

- "Fuse(4)’s kernel API (the communication protocol between the kernel
and the file system daemon) is about 11 years behind the standard.
That means we can’t support some features relating to cache
invalidation, ioctl(2), poll(2), chflags(2), file locking, utimes(2),
posix_fallocate(2), and ACLs."

My questions in this regard:
1. Is there a project page for the above link? The news was posted without any further reference!

2. Has this completed "FUSE Userspace File System Update" project been included in the FreeBSD 12.1 release?
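
A related check I tried myself: assuming the rewritten driver is named fusefs.ko while the old one was fuse.ko (that naming is my assumption, not something I verified), something like this should hint at which driver a given release ships:
ls /boot/kernel/ | grep -i fuse
kldstat -v | grep -i fuse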
 
1. @ralphbsz, your answer shows that you have zero experience using GlusterFS.
That is exactly correct. I have never used GlusterFS, nor do I have a need to start using it. My suggestion was that you start by helping yourself; I was trying to lead you down the path to answering your own question, since I cannot answer it for you.

5. Do you know there is NO distributed file system on FreeBSD anywhere near the Lustre file system (https://en.wikipedia.org/wiki/Lustre_(file_system))?
I'm quite familiar with Lustre, but I have never used it (didn't have a need to), in particular not on FreeBSD. Correction: I used to be quite familiar with Lustre around the 2003-2005 time frame, when I was involved in an effort by my employer back then to acquire Peter Braam's company (he ended up selling to Sun instead).

Yes, I know that neither Lustre nor the other major cluster or distributed file systems run on FreeBSD. I could speculate on why, but that speculation is pointless. It is sufficient to point out that 100% of the TOP500 list runs on Linux.

7. Why don't you just declare that FreeBSD is for child's play and not for anything serious?
I don't think at all that FreeBSD is only for child's play. And my friends in places like NetApp, Juniper, and Netflix don't think that either.

And by the way: ACL support in UFS and ZFS on FreeBSD works great. I use it occasionally, and have had no problems.
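
One caveat from memory: UFS uses POSIX.1e ACLs, while ZFS uses NFSv4-style ACLs, so the setfacl syntax differs between the two. On a ZFS dataset, an entry looks roughly like this (user and path invented):
setfacl -m u:bob:modify_set:allow /tank/file.txt
getfacl /tank/file.txt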
 
ralphbsz
1. Sorry for my rant. Your original reply was suitable for a kid :)

2. It's very good that "ACL support in UFS and ZFS on FreeBSD works great". But UFS and ZFS are single-machine file systems. They cannot guarantee that data remains available if the machine itself fails. Critical systems are backed by file systems that span a cluster of machines, so that the failure of one or two machines can be tolerated.

3. I'm building a critical system, and I've found it's taking way too long compared to Linux. That's why it's important to share knowledge and experience.

4. Hope you may have seen my previous post that I'm referring to this: https://www.freebsdfoundation.org/project/fuse-userspace-file-system-update/

I think this is the root cause of why ACLs do not work with distributed file systems on FreeBSD: it's FUSE. Distributed file systems require rock-solid FUSE support from the OS, and I don't think that work has been merged into a FreeBSD release yet.
 
UFS and ZFS are single-machine file systems. They cannot guarantee that data remains available if the machine itself fails. Critical systems are backed by file systems that span a cluster of machines, so that the failure of one or two machines can be tolerated.
I disagree. It is possible to build storage systems with highly durable data using single-node file systems. It does require moving the physical disks out of the failure domain of the host itself. One example would be putting the disks in one or more external disk enclosures (probably with multiple power feeds from separate power distribution systems), dual-porting the enclosures, and having two computers connected to the enclosures in an active/standby configuration.

For better load balancing one can even partition this system, for example in the following way: two computers, two disk enclosures, and everything cross-connected. All data is fully mirrored between the two enclosures. In normal operation, each computer serves one set of file systems. If a disk enclosure or an individual disk fails (perhaps due to failure of half the power, if the systems don't have redundant power supplies), both file systems continue functioning, alas in degraded mode. If one computer fails, the other takes over serving or using those file systems.

(Side remark: there is a slightly tricky problem that needs to be solved in this setup, namely making sure that exactly one computer mounts each file system, not zero or two. This is a problem that has been well understood for decades, and there are known group-consistency solutions for it, including some that don't need a third computer as a tie-breaker or witness. The literature has examples.)

However, I agree that today, for high-availability, high-durability systems, cluster file systems are the more common solution. I don't want to speculate on whether that's a good or a bad thing, because that's a multi-faceted question. When one gets into HPC or cloud-computing size and speed requirements, there is just no other solution; but for moderate workloads (dozens or hundreds of disks, throughput requirements that can still be counted in GByte/s using fingers and toes), using a single computer (or a pair) is just much less hassle. What is clear is that cluster systems are significantly more complex and harder to set up and administer than single-node systems. About 20 years ago I was working with CERN (the European particle physics research center), and I coined the following joke: "How many storage administrators does CERN have? About 10, and they all have PhDs."

I'm building a critical system, and I've found it's taking way too long compared to Linux.
That's regrettable. But given the state of the world, namely the extremely high market share of Linux in cluster and distributed computing environments, it's also not surprising.

I think this is the root cause of why ACLs do not work with distributed file systems on FreeBSD: it's FUSE. Distributed file systems require rock-solid FUSE support from the OS, and I don't think that work has been merged into a FreeBSD release yet.
The following is my personal opinion, and does not reflect anything my friends, colleagues, or employers think. And furthermore, I say it at the risk of getting my dear colleague Erez upset. I personally detest FUSE, and I think it is an inappropriate technology to implement production file systems. It is useful for toys, experimentation (in particular academic research), and systems that have no reliability requirements. The problems of putting data and metadata flow and low-level memory management outside the kernel are just too hard, and lead to flaky software. We have appropriate technology for abstracting file system interfaces in VFS. Note that I'm not saying that all parts of a file system implementation have to be in the kernel, only that the way FUSE splits it is very risky.

But to get to your specific situation: I think you are saying that FreeBSD's FUSE implementation is incomplete or buggy in its ACL support. While that's sad, the only way to fix it is either money or elbow grease.
 
ZFS over HAST seems to be how FreeBSD operators handle clusters.
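
For anyone curious, the skeleton is roughly the following (hostnames, the disk device, and the resource name are placeholders; this is a sketch of the handbook procedure, not a tested config):
# /etc/hast.conf, identical on both nodes
resource shared0 {
        on nodeA {
                local /dev/ada1
                remote nodeB
        }
        on nodeB {
                local /dev/ada1
                remote nodeA
        }
}
# on both nodes:
hastctl create shared0
service hastd onestart
# on the primary only:
hastctl role primary shared0
zpool create tank /dev/hast/shared0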

If someone wants the latest GlusterFS, talk to the port maintainer.

I assume the OP has seen this:

 