The main issue with HAMMER would be that it doesn't do any volume management, so you'd have to use a RAID controller. Ok, but the downside of a hardware RAID controller on FreeBSD is always how well supported it is, and whether you can actually get its monitoring tools or management software to work properly. The beauty of ZFS is that it's well supported and entirely software based, so everything is easily managed from the FreeBSD command line. If necessary you can move the whole array to another machine without worrying about matching RAID hardware and just pick the pool up again.
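Moving a pool is essentially an export on the old box and an import on the new one. A rough sketch, assuming a pool called "tank" (the name is just for illustration):

    # On the old machine: cleanly detach the pool from the system
    zpool export tank

    # On the new machine: scan the attached disks and import the pool by name
    zpool import tank

    # Check that all vdevs came back healthy
    zpool status tank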
Yes, ECC is great if you want to guarantee data integrity, but a hell of a lot of people are running ZFS without it, especially at home, including me. Maybe I have a few incorrect bits in some files somewhere, but I don't think it's going to be the end of the world. Every file system writes data that originated in RAM, so just suggesting a different file system isn't exactly an instant fix. What would be interesting is actual research on whether ZFS is more susceptible to errors from non-ECC RAM than other file systems are. Statements like "Good luck running ZFS on non-ECC RAM", as if the pool is going to fall apart, followed by suggestions of other file systems, as if ECC suddenly isn't an issue for them, are a bit futile.
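If you're worried about silent corruption on non-ECC hardware, a periodic scrub will at least tell you whether anything on disk is failing its checksums. Again a sketch, with "tank" as a made-up pool name:

    # Walk every block in the pool and verify checksums,
    # repairing from redundancy where possible
    zpool scrub tank

    # Review the results once the scrub finishes
    zpool status tank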
Expanders seem like a great idea and are fairly common in enterprise storage, but in the enterprise it's usually all from one manufacturer and has been heavily tested. Unfortunately a lot of these devices seem to have weird quirks, which are probably handled fine by the tested controllers, but we see a lot of people with strange problems on the forums. Because the controller -> expander -> disks setup adds a lot of variables, many combinations of which have had little use or testing among community members, it's very difficult to support. On the forums you tend to just see advice like "try a firmware upgrade", which is basically clutching at straws in the hope you've hit a bug that's already been fixed.
I would just go for a well supported HBA and as mentioned use RAID-Z2 if possible, if only for peace of mind.
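For what it's worth, creating a RAID-Z2 pool on plain disks behind an HBA is a one-liner; the pool and device names below (tank, da0 through da5) are made up for the example:

    # Six-disk RAID-Z2 vdev: any two disks can fail without data loss
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5

    # Confirm the layout
    zpool status tank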
You still have the failure, you still resilver, and during the resilver another drive can still fail (thus keeping the cycle going).
It's not so much the ability to lose two disks that's important; it's the window of time after a single disk failure. If you lose a disk in RAID-Z1, you need to get a replacement. You may have a spare on hand, or you may need to order one. Once you have the disk you then need to resilver, which could take a couple of days with a pool that size, so your total time in a degraded state could be several days. If you get any errors on another disk during that time (it doesn't have to be a full failure, just a checksum error or bad block), the data that was being read is lost. zpool status will start warning that errors have been detected and applications may be affected, and ZFS will build a list of files that are corrupt, as it has no redundancy left to recreate them from. Not the end of the world, but not very nice either. On pools larger than roughly 10 TB, the chance of hitting an error during those degraded days is high enough that RAID5/RAID-Z1 is generally discouraged. With RAID-Z2, if you get any read or checksum errors while that one disk is missing or rebuilding, ZFS will just recover the data automatically as normal.
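For reference, replacing the failed disk and checking what (if anything) was damaged looks something like this, with hypothetical device names:

    # Swap the failed disk (da3) for the new one (da8); this starts the resilver
    zpool replace tank da3 da8

    # Watch resilver progress; -v also lists any files ZFS has flagged
    # as having permanent (unrecoverable) errors
    zpool status -v tank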
I'm not saying you outright can't use RAID-Z1 if it makes sense for your budget and number of disks, just be aware that it puts you in a more dangerous position when you are replacing a failed disk.
Obviously you'll still want to backup any data that can't be recreated easily.
As with all RAID, redundancy is not a backup.
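A common way to handle that backup with ZFS itself is snapshots plus send/receive to a second pool or machine. A minimal sketch, with the dataset, snapshot and host names all made up:

    # Take a point-in-time snapshot of the dataset
    zfs snapshot tank/data@monday

    # Stream it to a pool on another machine over SSH
    zfs send tank/data@monday | ssh backuphost zfs receive backuppool/data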