I thought ZFS was rock-stable. It's not. It doesn't even have an fsck to fix problems, the way UFS does.
Do you have ECC memory? Do you watch the BIOS logs for corrected single-bit errors?
Do you buy enterprise-grade disks? And connect them with enterprise-grade HBAs and cables? Do you track firmware versions (including the disks') and upgrade when needed? And configure redundant disks and systems?
If the answer to any of those questions is "no", then please don't throw eggs at ZFS.
To reinforce what Cracauer and fcorbelli said above: Consumer-grade disks are optimized for the lowest possible cost (which is what consumers really want) and for looking good on dumb benchmarks (Phoronix or Tom's Hardware come to mind). Enterprise systems (not just disks!) are optimized to keep the customer happy long-term, and the single biggest source of unhappiness is data loss. Part of that optimization is buying high-quality components. Part of it is actively tracking firmware versions and making sure the optimal one is used at all times (anecdote 1 below). Part of it is preventing unplanned power failures as much as possible (there is a reason data centers have both batteries and diesel generators). Part of it is performing destructive testing to make sure the (disk...) vendors' promises are actually correct (anecdote 2 below). Another part is enabling hardware protection mechanisms where available, for example hardware encryption and checksums (anecdote 3 below). And a final part is always keeping multiple copies of the data, typically in multiple locations (anecdote 4 below).
Anecdote 1: At a former employer, sales wanted to ship a system to a customer using a new model of disk drive that engineering and quality control had not yet studied, so we didn't know what firmware version should be used. Another engineer and I vetoed shipping the system. Our veto was overridden by an executive, because (a) the customer urgently needed the system, (b) we needed the revenue, and (c) there were no other disks available. About a week after the system shipped, the disks started dying like flies. We ended up replacing several thousand disk drives in the field and giving the customer a retroactive 100% discount on the system. All because some VP didn't want to wait for firmware testing.
Anecdote 2: A friend of mine worked for a different storage systems company. His job for a while was to write data to hundreds of SSDs in a lab system and then cut power in the middle of the writes. This was about 15 years ago, when SSDs were the new and hot thing. He found that a surprisingly large fraction of the SSDs (a) didn't actually write to media even though they had told the host they did, (b) reordered writes, (c) returned corrupted data, or (d) even died completely and refused to come back after a power outage during a write. If you remember the beginning of the SSD era, there used to be a lot of small vendors that assembled SSDs from NAND chips and generic OEM controllers; most of those small vendors went under because they couldn't get their quality under control.
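Failure mode (a) is easy to model: a drive that acknowledges a write before the data is actually durable. A toy sketch in Python (the class and names are invented for illustration; real power-fail testing needs real hardware):

```python
# Toy model of a drive with a volatile write cache. It acknowledges
# writes immediately, but only flushed data survives a power cut.
class VolatileDrive:
    def __init__(self):
        self.cache = []   # acknowledged to the host, not yet on media
        self.media = []   # actually durable

    def write(self, block):
        self.cache.append(block)   # drive says "done" right away

    def flush(self):
        self.media.extend(self.cache)
        self.cache.clear()

    def power_cut(self):
        self.cache.clear()         # everything unflushed is simply gone

d = VolatileDrive()
d.write("A")
d.flush()
d.write("B")      # acknowledged to the host...
d.power_cut()
assert d.media == ["A"]   # ...but lost: exactly the lie the lab test caught
```

A real harness does the same thing with physical power relays and then checks, after reboot, whether every acknowledged write is actually present and in order.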
Anecdote 3: Several decades ago, the T10 committee (which defines the SCSI standard and really leads the industry in how disk interfaces work) proposed letting applications calculate checksums, pass them all the way through the stack (OS, HBA, hardware), write them to disk, and have them returned to the application. This cost a lot of money and was really complex. Why did the disk vendors push for it? Because they were sick and tired of being wrongly accused of corrupting data: any time a database was broken, software companies like Oracle or IBM claimed "it was the disk drive that corrupted the data", and Seagate and Hitachi wanted to be able to say: "You calculated this checksum, you wrote it, and we're returning the data to you unmolested".
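The end-to-end idea can be sketched in a few lines of Python (a simplified illustration of the principle only; the function names are invented, and real T10 protection information is a per-block field verified by the hardware, not a file-level CRC):

```python
import zlib

def write_with_checksum(path, data):
    """The application computes its own checksum before the data leaves it."""
    crc = zlib.crc32(data)
    with open(path, "wb") as f:
        f.write(data)
    return crc  # travels alongside the data through the whole stack

def read_and_verify(path, expected_crc):
    """On read-back, the same checksum proves the stack returned the data unmolested."""
    with open(path, "rb") as f:
        data = f.read()
    if zlib.crc32(data) != expected_crc:
        raise IOError("corruption somewhere between application and media")
    return data

crc = write_with_checksum("/tmp/block.bin", b"database page 42")
assert read_and_verify("/tmp/block.bin", crc) == b"database page 42"
```

The point of doing it at the application layer is exactly the disk vendors' argument: if verification fails, the checksum pins down *where* in the stack the data was molested, instead of everyone blaming the drive.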
Anecdote 4: A big corporate IT customer of my previous employer had two complete data centers, with complete storage systems in both and fast network links between them, in two separate buildings. That way, even a complete disaster affecting one building could be recovered from. Unfortunately, the second data center was in the other tower of the World Trade Center.