mrmarcel said:
Yes. If you are using a file system (such as ZFS or any other single-node file system) that is not built for SAN access, you have to be 100% sure that at most one server is accessing the disks at any given moment. Absolutely, positively sure. No ifs or buts. In every possible corner case. Even if IP connectivity or the Ethernet fails.
This can be done (all cluster and SAN file systems are capable of it). The biggest single ingredient is a group services package, which makes sure the two servers always know whether the other guy is alive or dead, and whether the other guy knows that I am alive or dead. In some cases this uses hardware assists: independent management networks (so congestion on the normal IP network doesn't cause spurious failover), memory-to-memory bridges (PCIe is quite suitable for this), and remote power control for STONITH functionality (Shoot The Other Node In The Head), which is the surest way to guarantee only one node is up.
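The takeover rule described above can be sketched in a few lines. This is a toy model, not any real cluster stack's API: the class name, the heartbeat constants, and the `fence_peer` callback are all illustrative assumptions. The one thing it tries to get right is the ordering: silence on the heartbeat channel is only a presumption of death, so the survivor must fence the peer (confirm power-off over an independent channel) before it ever touches the shared disks.

```python
import time

HEARTBEAT_INTERVAL = 1.0   # seconds between peer heartbeats (illustrative)
MISSED_LIMIT = 3           # heartbeats missed before we presume the peer dead

class FailoverController:
    """Toy model of the takeover rule: never mount the shared disk
    until the peer has been fenced (powered off) via an independent
    channel, e.g. a remote power switch on a management network."""

    def __init__(self, fence_peer):
        self.fence_peer = fence_peer          # callable; True = confirmed power-off
        self.last_heartbeat = time.monotonic()
        self.mounted = False

    def on_heartbeat(self):
        # Called whenever a heartbeat arrives from the other node.
        self.last_heartbeat = time.monotonic()

    def peer_presumed_dead(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_heartbeat > MISSED_LIMIT * HEARTBEAT_INTERVAL

    def try_takeover(self, now=None):
        # Crucial ordering: silence alone is not proof of death (the
        # network may just be down), so fence first; only a confirmed
        # power-off makes it safe to mount the shared disks.
        if self.peer_presumed_dead(now) and self.fence_peer():
            self.mounted = True
        return self.mounted
```

Note that if `fence_peer()` cannot confirm the power-off, the node stays down and does not mount; losing availability is the correct trade against corrupting a single-node file system with two writers.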
Also, failover requires the surviving node to mount a file system that was not cleanly unmounted. Make sure whatever file system you use is really good about fsck, and uses some technology (logs, journals, non-overwrite, transactions …) that makes data loss on crash/restart impossible.
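The journaling idea can be shown with a minimal write-ahead-log sketch. Everything here (the `ToyJournal` class, the JSON-lines record format) is an assumption for illustration, not how any real file system lays out its log; the point is only the discipline: append the record and force it to disk *before* applying the change, so that after a crash the surviving node can rebuild consistent state by replaying the log.

```python
import json
import os

class ToyJournal:
    """Minimal write-ahead-log sketch (illustrative only): every update
    is appended to a log and flushed to stable storage before the
    in-memory state changes, so a crash between the two steps loses
    nothing that was acknowledged -- recovery just replays the log."""

    def __init__(self, path):
        self.path = path
        self.state = {}
        self._replay()            # crash recovery == replaying the journal

    def _replay(self):
        if not os.path.exists(self.path):
            return
        with open(self.path) as f:
            for line in f:
                rec = json.loads(line)
                self.state[rec["key"]] = rec["value"]

    def put(self, key, value):
        with open(self.path, "a") as f:
            f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())  # force the record onto the disk first...
        self.state[key] = value   # ...and only then apply the update
```

Constructing a new `ToyJournal` on the same path simulates the unclean-dismount case: the fresh instance recovers the full state from the log with no fsck-style scan of the data itself.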
This is not for the faint of heart. There is a good reason software companies make good money selling such solutions.