Sadly, agree. SSDs are a bizarre marketplace. The thing to remember is that SSDs are internally incredibly complicated beasts, with millions of lines of source code in their firmware. They typically contain a miniature file system, redundant storage layers, extensive error checking and health management, and wear leveling and tracking. You can completely change their behavior (latency, throughput, which IO patterns they favor, durability, reliability) just by adjusting the firmware. If you go to academic/research conferences on storage, you'll always hear plenty of talks about FTLs (flash translation layers, the firmware inside SSDs). Personally, I always fall asleep in these talks.
The versions sold to consumers (purchasers of individual units) usually have firmware tuned to maximize customer satisfaction in the most common use case; for most people that means gaming PCs running Windows. Now take an FTL that's carefully tuned for that IO pattern, put it behind a RAID layer and a modern complex file system ... and things go sideways. Performance drops, because the SSD gets a write workload completely unlike the one it was optimized for.
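To make the FTL/workload interaction concrete, here's a toy page-mapped FTL simulation. Everything in it (the geometry constants, the greedy garbage-collection policy, the single open block) is my own illustrative assumption, not any vendor's design; real firmware adds wear leveling, caching, multiple write streams, and much more. The point it demonstrates: sequential overwrites let garbage collection reclaim blocks almost for free, while small random overwrites force the FTL to copy live pages around, multiplying the physical writes behind each host write.

```python
import itertools
import random

# Toy geometry (assumed numbers, vastly smaller than a real drive):
PAGES_PER_BLOCK = 64
NUM_BLOCKS = 128                 # physical blocks
SPARE_BLOCKS = 8                 # over-provisioning held back from the host
LOGICAL_PAGES = (NUM_BLOCKS - SPARE_BLOCKS) * PAGES_PER_BLOCK

class ToyFTL:
    """A page-mapped FTL with greedy garbage collection (a toy sketch)."""

    def __init__(self):
        # blocks[b] is the list of programmed slots; None marks an invalid page
        self.blocks = [[] for _ in range(NUM_BLOCKS)]
        self.free = list(range(1, NUM_BLOCKS))   # blocks with erased pages left
        self.open_block = 0
        self.mapping = {}                        # logical page -> (block, slot)
        self.host_writes = 0
        self.flash_writes = 0

    def write(self, lpn):
        self.host_writes += 1
        if lpn in self.mapping:                  # overwrite: invalidate old copy
            b, s = self.mapping[lpn]
            self.blocks[b][s] = None
        if len(self.blocks[self.open_block]) == PAGES_PER_BLOCK:
            if not self.free:
                self._gc()                       # guaranteed to free a block
            self.open_block = self.free.pop()
        blk = self.blocks[self.open_block]
        blk.append(lpn)
        self.mapping[lpn] = (self.open_block, len(blk) - 1)
        self.flash_writes += 1

    def _gc(self):
        # Greedy victim selection: the closed block with the fewest valid pages.
        closed = [b for b in range(NUM_BLOCKS)
                  if b != self.open_block
                  and len(self.blocks[b]) == PAGES_PER_BLOCK]
        victim = min(closed, key=lambda b: sum(p is not None
                                               for p in self.blocks[b]))
        survivors = [p for p in self.blocks[victim] if p is not None]
        # Toy simplification: stage survivors in RAM, erase, write them back.
        self.blocks[victim] = []
        for lpn in survivors:                    # relocations cost flash writes
            self.blocks[victim].append(lpn)
            self.mapping[lpn] = (victim, len(self.blocks[victim]) - 1)
            self.flash_writes += 1
        self.free.append(victim)                 # leftover pages are reusable

def write_amplification(lpn_stream, n_ops):
    """Physical-to-host write ratio, measured after filling the drive once."""
    ftl = ToyFTL()
    for lpn in range(LOGICAL_PAGES):             # precondition: fill the drive
        ftl.write(lpn)
    flash0, host0 = ftl.flash_writes, ftl.host_writes
    for lpn in itertools.islice(lpn_stream, n_ops):
        ftl.write(lpn)
    return (ftl.flash_writes - flash0) / (ftl.host_writes - host0)

random.seed(0)
wa_seq = write_amplification(itertools.cycle(range(LOGICAL_PAGES)), 20000)
wa_rand = write_amplification(
    iter(lambda: random.randrange(LOGICAL_PAGES), None), 20000)
print(f"sequential overwrite write amplification: {wa_seq:.2f}")
print(f"random overwrite write amplification:     {wa_rand:.2f}")
```

On this toy geometry the sequential stream settles at a write amplification of essentially 1.0, while the random stream comes out several times higher. Real drives behave far less cleanly, but the direction of the effect is the same, and it's exactly the kind of trade-off the vendor's FTL tuning bakes in.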
What's the fix? At the consumer level, I don't know. I use SSDs as a boot disk in my server (where the workload is super light; I'd be surprised if it reaches 100KB/s for more than a few seconds per day), and in laptops and small desktops (all macOS in my household), where performance is irrelevant. Another member of our household uses NVMe SSDs in their ... Windows gaming computer, and they work great, as long as you install all the required heatsinks. For a consumer storage server, I don't know what to do. In the big computer industry, the answers include: (a) build your own SSDs: buy flash chips from Micron, Toshiba, or Samsung, and do all the rest yourself; (b) buy raw SSDs, but write all the firmware yourself; (c) work with the SSD vendor to carefully tune the FTL to your workload. None of this is viable for small systems.