A: Flash drives can draw a surprising amount of power, and therefore generate a lot of heat. I was just looking at some NVMe drive specs, and they can dissipate 17 W in a 2.5" SFF form factor. Good cooling (airflow) is mandatory at these power levels; otherwise the drive will thermally throttle itself, and performance will fluctuate.
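If you want to see whether the drive is in fact throttling, you can watch its temperature while a benchmark runs. Here is a minimal sketch, assuming a Linux kernel recent enough to expose NVMe temperatures through hwmon; the sysfs layout below is an assumption about your setup, not guaranteed everywhere:

    import glob
    import os
    import time

    def nvme_temps():
        # Scan hwmon entries and keep the ones the kernel labels "nvme".
        temps = {}
        for name_file in glob.glob("/sys/class/hwmon/hwmon*/name"):
            with open(name_file) as f:
                if f.read().strip() != "nvme":
                    continue
            hwmon_dir = os.path.dirname(name_file)
            # hwmon reports temperatures in millidegrees Celsius.
            with open(os.path.join(hwmon_dir, "temp1_input")) as f:
                temps[hwmon_dir] = int(f.read()) / 1000.0
        return temps

    if __name__ == "__main__":
        # Poll once a second while the benchmark runs in another terminal.
        while True:
            print(nvme_temps())
            time.sleep(1)

If the temperature climbs to the drive's rated limit and your throughput drops at the same moment, you are looking at thermal throttling rather than a real performance ceiling.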
B: To measure the maximum performance of a flash drive, you need to drive a workload with a high queue depth. I have no idea what diskinfo does. Here is my proposal: find a good disk benchmarking tool and set it to a queue depth of 32 or 64, or run that many copies of dd in parallel. In Python, it's actually pretty easy to write a script that creates a very large file (a few GB) and then does random reads and writes on it using read() and write() calls, and then run a few dozen copies of that program in parallel (a sketch follows below). The advantage of doing this in Python is that you can run the same test on different OSes.
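A minimal sketch of that idea follows. The file name, sizes, durations, and worker count are my own illustrative choices, not anything from this thread. It pre-creates a large file of incompressible data, then launches many processes each doing random 4 KiB reads and writes, which approximates a high effective queue depth. One caveat: without platform-specific flags like O_DIRECT, the OS page cache absorbs part of the traffic, so the file should be well above the machine's RAM size.

    import multiprocessing
    import os
    import random
    import sys
    import time

    # Illustrative parameters; adjust for your machine.
    PATH = "testfile.bin"     # hypothetical test file in the current directory
    FILE_SIZE = 4 * 1024**3   # 4 GiB; keep this well above installed RAM
    BLOCK = 4096              # 4 KiB random I/O, a common benchmark block size
    DURATION = 30             # seconds each worker runs

    def fill(path, size):
        # Write real (incompressible) data so reads actually hit the device.
        chunk = os.urandom(1 << 20)
        with open(path, "wb") as f:
            for _ in range(size // len(chunk)):
                f.write(chunk)

    def worker(path, seconds):
        # Unbuffered binary I/O; each loop issues one 4 KiB read or write
        # at a random block-aligned offset.
        ops = 0
        buf = os.urandom(BLOCK)
        deadline = time.time() + seconds
        with open(path, "r+b", buffering=0) as f:
            while time.time() < deadline:
                f.seek(random.randrange(FILE_SIZE // BLOCK) * BLOCK)
                if random.random() < 0.5:
                    f.read(BLOCK)
                else:
                    f.write(buf)
                ops += 1
        print(f"pid {os.getpid()}: {ops / seconds:.0f} IOPS")

    if __name__ == "__main__":
        workers = int(sys.argv[1]) if len(sys.argv) > 1 else 32
        if not os.path.exists(PATH):
            fill(PATH, FILE_SIZE)
        procs = [multiprocessing.Process(target=worker, args=(PATH, DURATION))
                 for _ in range(workers)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Run it with the worker count as an argument (e.g. "python bench.py 64", with bench.py being whatever you name the script); each process prints its own rate, and the sum approximates the drive's random IOPS at that effective queue depth.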
C: In the end, the only thing that matters is performance for your workload, and your personal cost/benefit analysis. Is this thing cheap and fast enough for what you want to accomplish?