ZFS + HAST

Hi all!
I am about to set up a fail-over cluster for two machines, and I have been searching the web for solutions and different options (as I am new to FreeBSD). I need ZFS (for a number of reasons), and the most logical solution seems to be HAST. I saw this thread: https://forums.freebsd.org/threads/21474/ and after reading the man page for hast.conf(5) I got a bit confused. I hope you can clear this up.

The post says that HAST should be at the "bottom" and ZFS "on top" of HAST. But in the man page for hast.conf(5), the local <path> option says "This can be either GEOM provider or regular file".
To me this sounds as if I could create a zpool first and then point local in hast.conf at a path inside it, then create the same zpool on the slave server and do the same in that server's hast.conf.
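For reference, here is a minimal hast.conf sketch of the layout the earlier thread describes (HAST on raw disks, ZFS on top). The hostnames, addresses, and disk devices are placeholders, not taken from any real setup:

```
# /etc/hast.conf -- hypothetical two-node example.
# "local" points at a raw GEOM provider (a disk), not a path
# inside a zpool; the pool is created later on /dev/hast/<name>.
resource disk0 {
        on nodeA {
                local /dev/ada1        # backing disk on nodeA
                remote 192.168.0.2     # nodeB's address
        }
        on nodeB {
                local /dev/ada1        # backing disk on nodeB
                remote 192.168.0.1     # nodeA's address
        }
}
```

The same file is installed on both nodes; hastd picks the matching "on <hostname>" section itself.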

I just wanted to ask before I start configuring the servers the wrong way and have to start all over again.

Many thanks in advance!
I would not use HAST with ZFS. The reason is that HAST exposes its GEOM nodes under /dev/hast/*. What this means is that, since HAST is basically a mirror across two machines, ZFS will have no idea whether a resource on the active machine has a problem or not.
OK, then HAST does not seem to be a good option.
But what fail-over file cluster solution is recommended together with ZFS in production?
It really depends on what you want to replicate. If it is just data, then I would suggest a good RAID controller with HAST and CARP failover. If you need to use ZFS, then you could use snapshots, again with CARP.

What exactly are you trying to replicate?
I am supposed to replicate data and files, and yes, I need ZFS (at least for the near term).
I will look into CARP, but I have also looked a bit at Gluster. (You know, I am coming from Linux.)
What I want is a file server fail-over cluster:
two machines with duplicated data, where if the master fails, the slave takes over, with no significant data loss.
Usually you would have a couple of servers with disks in them (say two disks each, to keep it simple). You create two HAST devices on each server, each using one local and one remote disk, effectively creating two network-level mirrors. You then create the ZFS pool on top of the HAST devices, so in this case the pool would be made of two HAST devices on the master.

The idea is that ZFS has redundancy between the two HAST devices, and all writes should also be going to both the local and remote disks. If the server fails, you can import the pool on the slave host.
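As a rough sketch of the steps above, assuming two resources named disk0 and disk1 are already defined in /etc/hast.conf on both nodes (all names and the pool name "tank" are placeholders):

```shell
# On the master: initialize HAST metadata, start hastd,
# and take the primary role for both resources.
hastctl create disk0
hastctl create disk1
service hastd onestart
hastctl role primary disk0
hastctl role primary disk1

# Create the pool as a ZFS mirror of the two HAST devices,
# so ZFS itself has redundancy between them.
zpool create tank mirror /dev/hast/disk0 /dev/hast/disk1

# On failover, on the slave (after the master is confirmed down):
# hastctl role primary disk0
# hastctl role primary disk1
# zpool import -f tank
```

Note the slave must never be primary while the master still is; split-brain here means both sides have diverging copies of the disks.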

Personally, anything like this worries me as there are a lot of awkward edge cases, especially if you want to automate it. Unless you need enterprise-grade high availability (in which case you may be better off with something commercial like TrueNAS), I would suggest just using zfs send/recv every few minutes and having a manual standby node.
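A minimal sketch of that send/recv approach, run periodically from cron on the master. The dataset name, standby hostname, and state file are all assumptions for illustration, and the first run would need a full (non-incremental) send:

```shell
#!/bin/sh
# Hypothetical incremental replication of tank/data to a standby host.
NOW=$(date +%Y%m%d%H%M)
SNAP="tank/data@repl-${NOW}"
STATE=/var/db/last-repl-snap     # holds the name of the last sent snapshot

zfs snapshot -r "${SNAP}"

# Incremental stream from the previous snapshot to the new one,
# received on the standby (-F rolls the standby back if needed).
PREV=$(cat "${STATE}")
zfs send -R -i "${PREV}" "${SNAP}" | ssh standby zfs receive -F tank/data

echo "${SNAP}" > "${STATE}"
```

On master failure you lose at most one replication interval of data, and failover is a manual, well-understood step on the standby instead of automated HAST role switching.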