UFS and ZFS are single-machine file systems. They cannot guarantee that data remains available if the machine itself fails. Critical systems are backed by file systems that span a cluster of machines, so that the failure of one or two machines can be tolerated.
I disagree. It is possible to build storage systems with highly durable data using single-node file systems. It does require moving the physical disks out of the failure domain of the host itself. One example would be putting the disks in one or more external disk enclosures (probably with multiple power feeds from separate power distribution systems), dual-porting the enclosures, and having two computers connected to the enclosures in an active/standby configuration.

For better load balancing one can even partition this system, for example in the following way: two computers, two disk enclosures, and everything cross-connected. All data is fully mirrored between the two enclosures. In normal operation, each computer serves one set of file systems. If a disk enclosure or an individual disk fails (perhaps due to failure of half the power if the systems don't have redundant power supplies), both file systems continue functioning, alas in degraded mode. If one computer fails, the other takes over serving or using those file systems.

(Side remark: There is a slightly tricky problem that needs to be solved in this setup, namely making sure that exactly 1 computer mounts each file system, not 0 or 2. This is a problem that has been well understood for decades, and there are known group consistency solutions for it, including some that don't need a third computer as a tie-breaker or witness. The literature has examples.)
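To make that side remark a bit more concrete, here is a deliberately simplified sketch of one well-known family of solutions, a disk-based lease: each node records its ownership claim in a reserved sector of the shared storage and must renew it before it expires. Everything specific here (the device path /dev/da0p1, the record layout, the 30-second lease window) is made up for illustration, and a real deployment would use a hardened mechanism such as SCSI-3 persistent reservations or a proper cluster manager rather than hand-rolled code like this.

```c
/*
 * Toy sketch of a disk-lease scheme for ensuring that at most one node
 * "owns" (and therefore mounts) a shared file system. The device path,
 * record layout, and timing constants are invented for illustration.
 * The lease sector is assumed to have been zeroed once at setup time.
 */
#include <fcntl.h>
#include <stdalign.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define LEASE_SECONDS 30          /* owner must renew within this window */

struct lease {
    char    owner[32];            /* node name of the current owner      */
    int64_t expires;              /* UNIX time at which the lease lapses */
    char    pad[512 - 40];        /* pad the record to one 512-byte sector
                                     so raw-device I/O stays aligned     */
};

/* Try to take (or renew) the lease; returns 1 on success, 0 otherwise. */
static int try_acquire(int fd, const char *me)
{
    static alignas(512) struct lease l;   /* sector-aligned I/O buffer */
    time_t now = time(NULL);

    if (pread(fd, &l, sizeof l, 0) != (ssize_t)sizeof l)
        return 0;

    /* Refuse if another node still holds an unexpired lease. */
    if (l.expires > now && strncmp(l.owner, me, sizeof l.owner) != 0)
        return 0;

    /*
     * Claim or renew. Note the remaining race: two nodes can both see an
     * expired lease and both write. Real protocols close that hole with a
     * delay-and-reread verification step or a hardware reservation.
     */
    memset(&l, 0, sizeof l);
    strncpy(l.owner, me, sizeof l.owner - 1);
    l.expires = (int64_t)now + LEASE_SECONDS;
    if (pwrite(fd, &l, sizeof l, 0) != (ssize_t)sizeof l)
        return 0;
    return fsync(fd) == 0;
}

int main(void)
{
    /* "/dev/da0p1" is just a placeholder for the shared lease device. */
    int fd = open("/dev/da0p1", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    if (try_acquire(fd, "node-a"))
        puts("lease held: safe to mount the file system");
    else
        puts("peer holds the lease: do not mount");

    close(fd);
    return 0;
}
```

Even this toy glosses over the hard parts (the race between two nodes that both see an expired lease, clock drift, fencing of a node that may still have writes in flight), which is exactly why the decades-old solutions from the literature are worth reusing instead of reinventing.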
However, I agree that today, for high-availability, high-durability systems, cluster file systems are the more common solution. I don't want to speculate on whether that's a good or a bad thing, because that's a multi-faceted question. When one gets into HPC or cloud-computing size and speed requirements, there is just no other solution, but for moderate workloads (dozens or hundreds of disks, throughput requirements that can still be measured in GByte/s using fingers and toes) using a single computer (or a pair) is just much less hassle. What is clear is that cluster systems are significantly more complex and harder to set up and administer than single-node systems. About 20 years ago I was working with CERN (the European particle physics research center), and I coined the following joke: "How many storage administrators does CERN have? About 10, and they all have PhDs."
I'm building a critical system, and found it's taking way too long compared to Linux.
That's regrettable. But given the state of the world, namely the extremely high market share of Linux in cluster and distributed computing environments, it's also not surprising.
I think this is the root cause of why ACLs do not work with distributed file systems on FreeBSD. It's FUSE. They require rock-solid FUSE support from the OS, and I don't think that's been merged into a FreeBSD release yet.
The following is my personal opinion, and does not reflect anything my friends, colleagues, or employers think. And furthermore, I say it at the risk of getting my dear colleague Erez upset. I personally detest FUSE, and I think it is an inappropriate technology to implement production file systems. It is useful for toys, experimentation (in particular academic research), and systems that have no reliability requirements. The problems of putting data and metadata flow and low-level memory management outside the kernel are just too hard, and lead to flaky software. We have appropriate technology for abstracting file system interfaces in VFS. Note that I'm not saying that all parts of a file system implementation have to be in the kernel, only that the way FUSE splits it is very risky.
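To illustrate what that split looks like in practice, here is a minimal read-only file system in the style of libfuse's standard "hello" example, written against the libfuse 3 high-level API (the file name and message text are invented; this is not code from any real project). The thing to notice is that every getattr, readdir, and read request leaves the kernel, is handed to this ordinary userspace process, and has to make its way back, which is precisely the data and metadata path I'm uneasy about trusting for production storage.

```c
/* Build with: cc hello_fuse.c -o hello_fuse $(pkg-config fuse3 --cflags --libs) */
#define FUSE_USE_VERSION 31
#include <fuse.h>
#include <errno.h>
#include <string.h>
#include <sys/stat.h>

static const char *msg = "hello from userspace\n";

/* Metadata lookups: the kernel forwards them to this userspace callback. */
static int hello_getattr(const char *path, struct stat *st,
                         struct fuse_file_info *fi)
{
    (void)fi;
    memset(st, 0, sizeof *st);
    if (strcmp(path, "/") == 0) {
        st->st_mode = S_IFDIR | 0755;
        st->st_nlink = 2;
        return 0;
    }
    if (strcmp(path, "/hello") == 0) {
        st->st_mode = S_IFREG | 0444;
        st->st_nlink = 1;
        st->st_size = strlen(msg);
        return 0;
    }
    return -ENOENT;
}

/* Directory listing is likewise produced entirely in userspace. */
static int hello_readdir(const char *path, void *buf, fuse_fill_dir_t fill,
                         off_t off, struct fuse_file_info *fi,
                         enum fuse_readdir_flags flags)
{
    (void)off; (void)fi; (void)flags;
    if (strcmp(path, "/") != 0)
        return -ENOENT;
    fill(buf, ".", NULL, 0, 0);
    fill(buf, "..", NULL, 0, 0);
    fill(buf, "hello", NULL, 0, 0);
    return 0;
}

/* Every read(2) on the mounted file is serviced by this function. */
static int hello_read(const char *path, char *buf, size_t size, off_t off,
                      struct fuse_file_info *fi)
{
    (void)fi;
    if (strcmp(path, "/hello") != 0)
        return -ENOENT;
    size_t len = strlen(msg);
    if ((size_t)off >= len)
        return 0;
    if (off + size > len)
        size = len - off;
    memcpy(buf, msg + off, size);
    return (int)size;
}

static const struct fuse_operations hello_ops = {
    .getattr = hello_getattr,
    .readdir = hello_readdir,
    .read    = hello_read,
};

int main(int argc, char *argv[])
{
    return fuse_main(argc, argv, &hello_ops, NULL);
}
```

Mount it with something like ./hello_fuse /tmp/mnt and then cat /tmp/mnt/hello: the cat's read(2) is serviced by hello_read in this process, not by kernel code. For a toy that is charming; for a file system holding critical data it means a crash or memory-management problem in that userspace process takes the data path down with it, which is the flakiness I'm complaining about.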
But to get to your specific situation: I think you are saying that FreeBSD's FUSE implementation is incomplete or buggy in its ACL support. While that's sad, the only way to fix it is either money or elbow grease.