1. You need more than one or two disks, for redundancy and for creating an array to handle unforeseen situations.
If you want your disk subsystem to be fault-tolerant to one whole disk failing, you need at least two disks. If you want it to be tolerant to double faults (either one whole disk plus one unreadable sector on the second disk, or two whole disks failing near simultaneously), you need at least three disks. And so on. This is simply an application of information theory: counting bits.
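To make the counting-bits argument concrete, here is a minimal Python sketch of plain XOR parity. This is a simplification (RAID-Z uses its own variable-width stripe layout), but it shows that one extra disk's worth of parity is exactly enough information to rebuild any single failed disk:

```python
import os
from functools import reduce

def xor_blocks(blocks):
    # Byte-wise XOR of a list of equally sized blocks.
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

# Three "data disks" and one "parity disk", each holding one block.
data = [os.urandom(16) for _ in range(3)]
parity = xor_blocks(data)

# Simulate losing one data disk entirely.
lost = 1
survivors = [d for i, d in enumerate(data) if i != lost] + [parity]

# XOR of everything that survived reconstructs the lost block.
assert xor_blocks(survivors) == data[lost]
print("single-disk failure reconstructed from parity")
```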
However, even without redundancy, ZFS gives you a lot of other stuff. The single biggest one: checksums. With those, you have a very good chance of catching data corruption that the disk's ECC doesn't catch, for example from user error. You can't fix it once caught, but at least you know about it. There is also scrubbing, which makes your disks effectively more reliable by catching problems early.
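For illustration, here is a rough Python sketch of that principle. ZFS actually keeps a fletcher4 or SHA-256 checksum in the parent block pointer; the write_block/read_block helpers below are hypothetical and only exist to keep the example self-contained:

```python
import hashlib

def write_block(data: bytes):
    # Hypothetical helper: ZFS stores the checksum in the parent block
    # pointer, separate from the data; here we simply return both.
    return data, hashlib.sha256(data).digest()

def read_block(data: bytes, checksum: bytes) -> bytes:
    # On every read, the stored checksum is compared against the data.
    if hashlib.sha256(data).digest() != checksum:
        raise IOError("checksum mismatch: silent corruption detected")
    return data

block, cksum = write_block(b"important data" * 1000)

# Simulate a flipped bit that the drive's own ECC did not catch.
corrupted = bytearray(block)
corrupted[123] ^= 0x01

read_block(block, cksum)        # clean data reads back fine
try:
    read_block(bytes(corrupted), cksum)
except IOError as err:
    print(err)                  # the corruption is detected (though not repaired)
```

Without redundancy ZFS can only report the error; with a mirror or RAID-Z it would also repair it from a good copy.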
In theory, you could even use the "copies=2" feature of ZFS to create redundancy on a single disk. Most people won't do that, because it cuts write speed in half.
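The halving is just back-of-the-envelope arithmetic, assuming the disk rather than the CPU is the bottleneck:

```python
# With copies=N, every logical byte is written N times, so effective
# write throughput is roughly raw_throughput / N.
raw_write_mb_s = 180   # hypothetical sequential write speed of one disk
for copies in (1, 2, 3):
    print(f"copies={copies}: ~{raw_write_mb_s / copies:.0f} MB/s effective writes")
```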
2. The disks should be able to work fast
ZFS does not create significantly more disk IO than UFS, so the effect of disk speed is not particularly more significant for ZFS. It is true that the IO pattern of ZFS and UFS are different (in a nutshell, ZFS writes tend to be more sequential, which is both good and bad), so for some workloads ZFS may be faster, and for others it might be slower. The difference between them is not huge.
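As a toy illustration of the copy-on-write effect (not a simulation of the real allocator): random logical overwrites stay random on disk with overwrite-in-place, but become largely sequential when each batch of dirty blocks is appended to freshly allocated space, which is also why later reads of that data can end up more fragmented.

```python
import random

# 1000 random logical block updates.
updates = [random.randrange(100_000) for _ in range(1000)]

# Overwrite-in-place (UFS-style): physical offset == logical block number,
# so random logical updates stay random on the platter.
inplace_offsets = updates

# Copy-on-write (ZFS-style, very simplified): dirty blocks are appended
# to freshly allocated space, so the physical writes come out sequential.
next_free = 0
cow_offsets = []
for _ in updates:
    cow_offsets.append(next_free)
    next_free += 1

def seq_fraction(offsets):
    # Fraction of writes that land immediately after the previous one.
    return sum(b == a + 1 for a, b in zip(offsets, offsets[1:])) / (len(offsets) - 1)

print(f"overwrite-in-place: {seq_fraction(inplace_offsets):.0%} sequential writes")
print(f"copy-on-write:      {seq_fraction(cow_offsets):.0%} sequential writes")
```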
... and should support fast encryption (if anyone uses encryption).
SEDs, or self-encrypting disks, can encrypt at wire rate, so this is not a performance factor. Whether disk encryption has value is a difficult question; how hardware encryption in the disk compares to software encryption in the OS is another difficult question. Either way, it is quite independent of ZFS.
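If you want a data point for the software side on your own machine, a micro-benchmark like the one below will do. It needs the third-party cryptography package; the buffer size and the AES-256-CTR choice are arbitrary, and on a CPU with AES-NI the throughput is typically well above what a single spinning disk can deliver:

```python
# pip install cryptography
import os
import time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, nonce = os.urandom(32), os.urandom(16)
encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()

buf = os.urandom(64 * 1024 * 1024)   # 64 MiB of pseudo-random data
start = time.perf_counter()
encryptor.update(buf)
elapsed = time.perf_counter() - start

print(f"software AES-256-CTR: ~{len(buf) / elapsed / 1e6:.0f} MB/s on this CPU")
```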
The disks should not be from the "for home use" series.
In general, if you value your data, you should not use consumer grade disks. But ZFS versus UFS makes little or no difference here. On the contrary, ZFS handles unreliable disks better than most other systems, see above: built-in checksums, scrub, redundancy ...
3. A lot of RAM. If you run virtual machines, then 32 GB or more. Very high quality!!!
That canard keeps getting brought up over and over, and it is still false. My home server runs ZFS with 4 disks; for many years it had only 3 GB of RAM, and today it has 4 GB. Sadly, it does not have ECC memory (I would like that, but it is hard to get in the form factor I'm after). It runs perfectly fine. The idea that you "need" 32 GB of very-high-quality memory to run ZFS is and remains nonsense.
4. Certified and high-quality power supply.
5. High-quality cooling of all this.
6. How much electricity will such a tower consume... do you have extra money?
ZFS is not particularly more vulnerable to crashes due to power outages. In the old days one could have argued that its log-based design is actually better for crashes; today, with both journaling and soft updates in UFS, both are excellent and very unlikely to lose data on a power outage. UFS may take a little longer to come back up because of fsck, but in practice that's probably not a big deal. While I very much want every user of a stationary computer to have a good UPS, the reason is not ZFS; it is to keep your sanity, so you don't have to feel anxious every time the lights flicker.
And if you want to build a high-performance, high-reliability storage server, you will need multiple disks, a good power supply, and good cooling. Whether you run UFS, ZFS, ext4+RAID, or any other system. If you want industrial-grade storage quality, you will spend industrial-level money and power.
Otherwise: you are simply using ZFS like a "hamster" (a casual home user), at your own risk.
But if you run another file system on the same hardware, your risk will not be lower; it might be higher.
I see the tradeoff differently. Yes, ZFS uses more CPU power to do IO. Its cache management algorithm may be more reluctant to release RAM, which may lower effective machine performance if other RAM-hungry and inflexible tasks (such as a GUI or VMs) are running on the same machine. It is harder to learn and administer for Unix old-timers, since you have to learn new concepts (such as pools and their parameters) and new commands (not just mkfs and mount). The benefit you get for the slightly higher CPU usage and perhaps more memory contention is significantly better reliability, even with a single disk. I think that on a server machine, ZFS will de facto always win a reasonable tradeoff, assuming the admin can be taught how to use it. On a pure GUI machine (a laptop), if the on-disk data is considered transient anyway and the user is willing to reinstall the OS regularly (perhaps because they are distro hopping in the first place), ZFS may not be worth the effort.
My personal opinion: if it weren't for ZFS, I would not be running FreeBSD. And no, I've never been a ZFS developer, so I don't have any parental feelings toward it; as a matter of fact, I've worked on several storage software stacks that compete with ZFS.