I find it very hard to believe that Micron SSDs are fundamentally defective and frequently corrupt data. If that were true, their big customers would be killing them. Look at it this way: you have probably had a handful or a few dozen "improper shutdowns", meaning cutting power or crashing the OS. You claim to see consistent damage, meaning damage measurable in a statistically meaningful way across dozens of experiments. That means the probability of something going wrong per improper shutdown must be on the order of 10% (perhaps 5%, perhaps 20%). Now extrapolate that to a large customer with millions of computers using those Micron SSDs. They have unplanned power outages too (although much less often than you do), they would be seeing thousands of such cases, and they would be beating Micron up over it. Can you imagine what Jeff Bezos would do if this endangered his web business? There would be no building left standing within a 10 mile radius of Micron's headquarters in Boise, Idaho.
Have you contacted Micron? Installed the newest firmware on the drive? Tried an alternate drive? But really, I don't think the problem is the drive at all.
Next suspect: UFS. Again, same argument. Millions of computers use UFS, including in large server farms (think NetApp, Netflix, Juniper). Many of those machines crash. Some of them run on blazing-fast storage, and have for many years (storage as fast as NVMe SSDs has existed for at least two decades, in the form of RAM-disk hardware or large RAM caches on disk arrays). It is incredibly hard to imagine that UFS has bugs of this sort and that you are the first to experience them.
Here's my theory. You are pointing at two very complex pieces of software, which probably use many files and do lots of file IO. I'm going to bet that these two applications were written without paying attention to the fact that a file system can (validly!) lose data when the system crashes, and lose it in an inconsistent fashion, if the data was written recently or was still being written. For example, the applications might first write to file A, then write to file B, and assume when they start up that anything that made it into B will also be present in A, because A was written before B. Well, that assumption is wrong after a crash: it is quite possible that the updates to A never made it to disk while B's did. And given sufficiently strange write patterns (for example, writing just a little bit, less than 512 bytes, to A and keeping the file open, while writing a heck of a lot of stuff to B and closing that file), that outcome becomes quite likely.
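To make that concrete, here is a minimal sketch in C of the pattern I'm describing; the file names a.dat and b.dat are hypothetical stand-ins for whatever your applications actually write. Nothing in this code forces the kernel to push A's bytes to the platter/flash before B's, so after a power cut it is entirely possible to find B complete on disk while A is empty or truncated:

    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int a = open("a.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        int b = open("b.dat", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (a == -1 || b == -1)
            return 1;

        /* Step 1: a tiny update to A, well under one 512-byte sector. */
        const char flag[] = "committed\n";
        write(a, flag, sizeof flag - 1);
        /* Without an fsync(a) here, this update may still sit only in the
         * buffer cache when the power goes out. */

        /* Step 2: a large amount of data written to B, then B is closed. */
        char block[8192];
        memset(block, 'x', sizeof block);
        for (int i = 0; i < 1024; i++)      /* roughly 8 MB */
            write(b, block, sizeof block);
        close(b);

        /* The application's assumption: "A was written before B, so if B is
         * on disk after a crash, A must be too."  The file system makes no
         * such ordering promise unless the program forces it, for instance
         * with an fsync(a) before the writes to B. */
        close(a);
        return 0;
    }

The application-level cure is to force the ordering yourself: fsync A (or open it with O_SYNC, see below) before you start writing B.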
On traditional spinning-disk systems, the scenario that kills your applications may be very rare, simply because everything happens much more slowly, and in particular far fewer writes are in flight at the same time.
My suggestion: go into the source code of these applications (and any other programs that modify the same data) and add O_SYNC to all the open(2) calls. That might be a little difficult, as the calls may be hidden behind language-specific runtime libraries (for example fopen in C, open in Python, ...). Given that your storage is blazingly fast, the performance hit is probably small and irrelevant. My educated guess is that a lot of these problems will go away.
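For illustration, here is a minimal sketch of the change, assuming a C application; the file name state.db is made up for the example:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Before: open("state.db", O_WRONLY | O_CREAT, 0644)
         * After:  the same call with O_SYNC added, so each write(2) returns
         *         only once the data has reached stable storage. */
        int fd = open("state.db", O_WRONLY | O_CREAT | O_SYNC, 0644);
        if (fd == -1) {
            perror("open");
            return 1;
        }

        const char rec[] = "record\n";
        if (write(fd, rec, sizeof rec - 1) == -1)
            perror("write");

        /* Code written against stdio can keep its FILE* interface by opening
         * the descriptor synchronously and then wrapping it:
         *     FILE *fp = fdopen(fd, "w");
         */
        close(fd);
        return 0;
    }

The point is that you are not changing what the applications write, only when the writes are allowed to be considered done.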
An alternative would be to contact the authors of the applications. I don't think that will go well, but you can try.