Distributed file systems are fascinating, but they don't make any sense unless they run across a very fast network, preferably a dedicated one. My professor (I may be an old fart, but I'm back in school) was experimenting with a distributed file system on Linux (I don't remember which FS), but he saw no speedup when adding more disks or hosts. But then, he was running over gigabit Ethernet.
I used to design chips at Compaq's Enterprise Storage Division in Colorado Springs (formerly DEC's facility) until Compaq was absorbed by HP (2002). We did rack-sized (and larger) SAN over fibre channel. A SAN differs from a distributed file system in that a SAN is stand-alone virtualization of storage. From the SAN admin UI you set how many virtual disks a particular client sees, and their sizes. The client (typically a server) then formats those disks. It would be insane if the client used ZFS to format its blocks. Our SAN (and other competitive brands at the time) used redundant controllers, redundant networks, and redundant disk drives (even the disk drives had redundant interfaces), and offered many of the features now found in ZFS such as snapshots, copy-on-write, and checksums. A SAN done this way costs about as much as a house.
A distributed file system seems to want to do a similar thing, but in a completely different way. Each participant is both client and server: "I'll let you use some of my disk if you let me use some of your disk." And the overlying FS software is responsible for some level of redundancy, so that if a member goes down the other members don't lose data (rough sketch below). It might make sense to put ZFS under that (at the member-server level), maybe. It depends on how deeply the distributed FS software wants to get its roots intertwingled into each client/server.
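To make that concrete, here's a toy Python sketch of the kind of redundancy I mean. The member names, the hashing scheme, and the replica count are all made up for illustration; this isn't how any particular distributed FS actually places data.

```python
# Toy sketch: spread each block across two different members so losing
# any single member loses no data.  Names and scheme are hypothetical.
import hashlib

MEMBERS = ["hostA", "hostB", "hostC"]   # imaginary peers in the pool
REPLICAS = 2                            # redundancy chosen by the FS layer

def placement(block_id):
    """Pick REPLICAS distinct members to hold this block."""
    h = int(hashlib.sha256(block_id.encode()).hexdigest(), 16)
    start = h % len(MEMBERS)
    return [MEMBERS[(start + i) % len(MEMBERS)] for i in range(REPLICAS)]

for blk in ("file1:block0", "file1:block1", "file2:block0"):
    print(blk, "->", placement(blk))
```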
A quick peek at BeaST gives me the impression that it's a method to use multiple HBAs and disk drives to achieve improved fail-safe operation on a single machine using ZFS.
Thinking out loud here:
Disk drives fail, but then so do other components: fans, power supplies, network links (including the ports at either end), memory glitches (or fails outright), and very rarely other bits of hardware too. It doesn't make sense to have redundant disks on a single server unless they are hot-swap (or a repair time of minutes is acceptable), but hot-swap means another layer of connectors, and connectors wear out and fail. All this redundancy adds cost. Yes, it's a whole lot more convenient to the clients if a server never seems to vanish or lose data, even in the middle of a transaction. But to achieve that you need a machine from Tandem (now owned by HP) or something from NEC's FT series, which makes a "fully redundant" server from Dell or Supermicro look downright cheap. NEC's FT ran (back when I looked at them) redundant motherboards with each core of the CPU lock-stepped with a core from its twin on the other mobo. That's crazy! (but in a fun way) Of course everything else, except the case and passive mid-plane, was redundant as well.
There is no ideal solution. Anything we do has weaknesses. For example, on my LAN I want to try duplicating the whole server. It'd be insanely great if, when a file is modified (create/change/delete), the file is immediately modified on the other server. Asynchronous means the change is queued but the server's client is told the deed's done (the change should happen within a few seconds). Synchronous means the file must be changed on both servers before the client's told the deed's done. Synchronous is great, and pretty much a requirement for an active/active pair. However, it must fall back to asynchronous when one of the servers goes down. For this application a file-copy scheme that runs periodically (e.g. once per hour) is wholly inadequate.
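Roughly what I'm picturing, as a Python sketch. The twin connection and the queue are stand-ins I made up, not any existing replication tool:

```python
# Sketch of synchronous replication that degrades to asynchronous when
# the twin is unreachable.  send_to_twin() is a stand-in for a real network call.
import queue, time

pending = queue.Queue()      # changes queued while the twin is down

def send_to_twin(change):
    """Try to push the change to the other server; True means it confirmed."""
    return False             # pretend the twin is down, to show the fallback

def apply_change(change):
    print("applied locally:", change)         # the local FS write happens first
    if send_to_twin(change):
        print("twin confirmed:", change)      # synchronous: both copies updated
    else:
        pending.put((time.time(), change))    # asynchronous: deliver when it's back
        print("queued for twin:", change)
    # either way, the client is told the deed's done at this point

apply_change({"op": "create", "path": "/data/report.txt"})
```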
Active/active pairs are a problem in themselves. How do the servers' clients pick which machine to use at any given moment? Okay, a load-balancing router, but what if that fails? Active/standby is a heck of a lot easier to use. But on bootup, who is active: the first server to come online? And if the active server dies, how does the standby promote itself? And if the active server glitches and the standby promotes itself to active (and takes the former active server's network identity), then what? When the former active server comes back from its momentary brain fart it must play nice once it discovers its twin is now active. And of course, how does each server reliably determine whether its twin is alive or dead, or even active or standby? And when a machine comes back from the dead it must reconnect with its twin, become standby, and then catch up on all those queued asynchronous file mods. I'd love to find existing software for FreeBSD that does this.
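Just to pin down the standby's side of it, a hand-wavy Python sketch. The heartbeat check and the thresholds are invented for illustration, and real HA software would handle the network-identity takeover properly; this only shows the promotion logic:

```python
# Sketch of a standby that watches its twin and promotes itself only after
# several missed heartbeats.  twin_is_alive() is a placeholder for a real check.
import time

HEARTBEAT_INTERVAL = 1.0     # seconds between checks
MISSED_LIMIT = 3             # misses tolerated before declaring the twin dead

def twin_is_alive():
    """Stand-in for a real heartbeat (ping, TCP probe, shared token...)."""
    return False             # pretend the twin never answers, to show promotion

def standby_loop():
    missed = 0
    while True:
        if twin_is_alive():
            missed = 0                     # twin is fine, stay standby
        else:
            missed += 1
            if missed >= MISSED_LIMIT:
                print("twin presumed dead; promoting myself to active")
                # here we'd take over the shared network identity, start
                # serving, and later replay queued changes back to the twin
                return
        time.sleep(HEARTBEAT_INTERVAL)

standby_loop()
```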
For my personal situation it'd be easier to go with mirrored disks under ZFS, and then keep an extra (identical) machine so I have spare parts. If I'm around at the time of failure I can fix it within a few minutes, and keeping the standby machine turned off will save electricity. But what fun is that?
Conclusion:
Of course bullet-proof hardware is great, but it does nothing for human error (or maliciousness). Still need backups.