I prefer a drive to just die and shut up rather than limp on and cause various failure modes... SATA drives are, as always, by far the worst here - you get anything from happily accepting all writes but returning only garbage (ZFS throws those out pretty quickly and the system is fine); to stalling for minutes just to fake being alive again for a few seconds before going catatonic again - dragging the whole system to a crawl because all IO stalls and has to time out; up to completely locking up the whole controller/expander they are connected to, even preventing the system from booting... Almost always the useless "SMART health status" still blatantly lies to your face and returns "OK".
SAS drives are usually much better at handling failures - they accept that they are failing, return errors and allow the system to keep operating. (However, I haven't had a SAS SSD fail yet.)
NVMe drives usually just go dark - they might still be recognized as PCIe devices, sometimes even the controller responds, but they are dead and don't interfere with the rest of the system.
That being said, I have had *far* fewer drive failures with any kind of flash than with spinning rust over the years. Especially when it comes to ageing drives: where HDDs tended to fail due to mechanical wear, SSDs are usually completely fine as long as they are still well below their rated TBW. We have some Intel S4500 and S3510 drives in less important systems which are near or even over 70k Power_On_Hours and they are perfectly fine, not showing a single reallocated sector or other signs of imminent failure (they are between 10% and 50% media wearout). Most of the SATA SSDs in our servers are Kingston DC series; not a single failure yet.
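For what it's worth, the checks behind statements like that boil down to a few smartctl attributes rather than the overall health verdict. A minimal Python sketch (not my actual tooling), assuming smartmontools is installed and Intel-style attribute names - other vendors report slightly different names, so adjust the watch list:

```python
#!/usr/bin/env python3
"""Sketch: read the few SMART attributes worth watching via smartctl,
instead of trusting the overall "SMART health status". The attribute names
(Power_On_Hours, Reallocated_Sector_Ct, Media_Wearout_Indicator) are
vendor-specific; adjust as needed."""
import subprocess
import sys

WATCHED = {"Power_On_Hours", "Reallocated_Sector_Ct", "Media_Wearout_Indicator"}

def smart_attributes(device):
    """Return {name: (normalized_value, raw_value)} parsed from `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows start with the numeric attribute ID, e.g.:
        #   9 Power_On_Hours 0x0032 100 100 000 Old_age Always - 70123
        if len(fields) >= 10 and fields[0].isdigit() and fields[1] in WATCHED:
            attrs[fields[1]] = (int(fields[3]), int(fields[9].split("h")[0]))
    return attrs

if __name__ == "__main__":
    dev = sys.argv[1] if len(sys.argv) > 1 else "/dev/sda"
    for name, (value, raw) in sorted(smart_attributes(dev).items()):
        print(f"{name:26s} normalized={value:3d} raw={raw}")
```

A drive that keeps incrementing its reallocated sector count is on its way out regardless of what the overall health status claims.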
With NVMe I can only remember 2 failures, both Micron 7400s in very short succession: they just went silent - still showed up as PCIe devices, but the controller wasn't responding any more (I still suspect a firmware issue; the replacements had a different firmware version and are still running fine). In both cases the failing drive caused a panic and system reboot; afterwards the drive was just silent. Otherwise I have had no failures with "server rated" NVMe (M.2) yet.
The ~20 1.92TB (HPE-branded) Toshiba and Sandisk SAS SSDs, which we have been running for ~5 years now, have had zero failures.
We run ZFS mirrors everywhere. Those old Intels are running in less important systems and usually have at least one newer drive in their mirror(s) (some are 3-way mirrors); I deliberately keep some of those old things running just to see how far you can push them.
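What makes keeping such mixed-age mirrors viable is that ZFS flags a misbehaving side immediately. A rough sketch of the kind of health poll I mean (not our actual monitoring, just the standard zpool CLI wrapped in Python):

```python
#!/usr/bin/env python3
"""Sketch: poll ZFS pool health so a limping old mirror member gets replaced
before its partner gives up too. Uses only the standard zpool CLI."""
import subprocess

def pool_health():
    """Return {pool_name: health} via `zpool list -H -o name,health`."""
    out = subprocess.run(["zpool", "list", "-H", "-o", "name,health"],
                         capture_output=True, text=True, check=True).stdout
    return dict(line.split("\t") for line in out.splitlines() if line)

if __name__ == "__main__":
    bad = {pool: h for pool, h in pool_health().items() if h != "ONLINE"}
    if bad:
        # DEGRADED means a mirror is running without full redundancy - replace the drive now.
        for pool, health in sorted(bad.items()):
            print(f"pool {pool} is {health}")
        raise SystemExit(1)
    print("all pools ONLINE")
```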
When it comes to consumer/desktop drives, we used Samsung 850 Pro SATA drives for a long time with only 1 or 2 failures, which were due to wearout over many years and simply showed up as failing sectors (of course, running Windows/NTFS this led to massive data corruption and hard crashes); but after those SATA drives Samsung has been the worst for us - their 9xx NVMe drives were especially terrible, with several failures within the first few months. Given that their TBW ratings have gone down while all other vendors' ratings are usually going up, and the fact that Samsungs are by far the most power-hungry space heaters, we/I avoid them like the plague nowadays... We now mostly deploy WD Blues in all of our client systems (NUCs) as well as in low-power/low-load appliances (e.g. branch VPN routers) and to date have had zero issues with them, the oldest SN520s being ~5 years old now. They have been so uneventful over the years that I even use them as boot drives in some (non-critical) servers and network appliances.
Again: *far* fewer (near-zero) failures with SSDs than back with spinning rust. Usually they only get replaced because they have become too small...
Over the last 15 years I have encountered 4 catastrophic storage failures leading to outages and/or data loss. One was thanks to a hardware bug in a RAID controller (i.e. the vendor cheaping out and using a chipset with weird 12-bit registers that overflowed, returning wrong addresses and overwriting existing data); one was thanks to Seagate HDDs which started dying like flies almost exactly after 3 years, in very short succession - 3 out of 6 drives failed within 2 weeks, and 2 more followed in the weeks after...
The other 2 failures were caused by dying SAS HBAs (both SAS2008) and involved ZFS pools that got their metadata corrupted.
So except for those Seagates (I haven't bought a single drive from them since...), basically none of the "catastrophic failures"/outages caused by storage were related to dying disks. However, my sample set isn't terribly big - currently ~10 running servers, ~20 smaller appliances and ~50 clients - so YMMV.