There is a giant piece missing from Backblaze's calculation: in enterprise applications, data is stored redundantly, and the amount of redundancy can be adjusted. The classic example of this is RAID: you take your data, split it across 5 disks, and write parity to a 6th disk. Given fixed capacity, the disk space overhead is 20%, but you are protected against the failure of any one disk. It turns out that with modern disks (which are extremely large), that's no longer sufficient, as there is a non-negligible probability of a second fault while the first disk is being rebuilt onto a spare. So you go to 5+2 disks, which is pretty reliable, but has a 40% overhead. For home users and amateurs, that's pretty much the end of the road: it's impractical to have many more disks. Clearly, RAID also has some performance overhead, but with a variety of modern techniques, we've learned how to mitigate that.
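To make that concrete, here's a crude back-of-envelope in Python (my own toy numbers, not Backblaze's): the chance that a second disk in a 5+1 group dies while the first one is still rebuilding. It assumes independent failures and a constant failure rate, and ignores unrecoverable read errors, so treat it as an illustration of the shape of the problem, nothing more.

```python
# Toy model: chance a RAID-5 (5 data + 1 parity) loses data because a
# second disk dies while the first is still rebuilding onto the spare.
# AFR and rebuild time are assumptions I picked for illustration.

AFR = 0.015            # assumed annualized failure rate per disk (1.5%)
REBUILD_HOURS = 24.0   # assumed time to rebuild onto the hot spare

# Chance that one given disk dies inside the rebuild window.
p_window = AFR * REBUILD_HOURS / (365 * 24)

# After the first failure, the 5 surviving disks must all hold on.
survivors = 5
p_second_failure = 1 - (1 - p_window) ** survivors

print(f"per-disk failure prob during rebuild:     {p_window:.2e}")
print(f"prob of a second failure during rebuild:  {p_second_failure:.2e}")
```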
But for an enterprise application, there are ways to do this much more efficiently. It turns out that if you use much larger groups of disks, you can get pretty good reliability at much lower overhead. For example, if you split your data across 100 disks, you will definitely need more than 1 redundancy disk, but you probably won't need 20 of them (which would be a 20% overhead, similar to the amateur's 5+1), and definitely not 40. For fun, let's say you get your desired redundancy with a dozen redundant disks, so you are using 100+12 with a 12% overhead. That's because the probability of more than 12 disks failing (out of 112 total) is about as small as the probability of more than 2 disks failing (out of 7). Great.
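If you want to play with these numbers yourself, a plain binomial model is enough to see the effect. The per-disk failure probability below is a number I made up, and failures are assumed independent, which real systems are not:

```python
# A group of d data + k parity disks loses data only when MORE than k of
# the d+k disks fail within the same repair window. Simple binomial tail,
# independent failures assumed.
from math import comb

def p_data_loss(d, k, p):
    """P(more than k of the d + k disks fail), failures independent."""
    n = d + k
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

p = 0.05  # assumed per-disk failure probability within one repair window
for d, k in [(5, 1), (5, 2), (100, 12)]:
    print(f"{d}+{k}: overhead {k/d:.0%}, P(data loss) ~ {p_data_loss(d, k, p):.1e}")
```

With these toy numbers, the 5+2 and 100+12 groups land within a factor of two of each other, while 100+12 pays far less overhead; the exact figures swing a lot with the assumed per-disk probability, so use it to explore the model, not as a verdict.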
The question that Backblaze did not answer (and which is heinously complicated): what is the tradeoff between reliability and cost once you are using RAID? Say, for example, a company that uses lots of disks (millions of them, all arranged in those crazy 100+12 redundancy groups I used as an example above) gets a deal: the next million disks will be 10% cheaper, but they will also be 5% less reliable. Or perhaps they will be 15% less reliable. The 10% discount means that for the same cost, they can arrange the RAID groups as 100+23, with much more redundancy for data protection. But will that be more or less reliable than the previous setup (more expensive disks, but each individually more reliable)? Tough question. Lots of graduate students have written their PhD theses on related topics, and lots of engineers and researchers study this topic every day (well, not today, it's the weekend). For the average home user, it's not relevant, since they (a) won't be using 100+ disks, and (b) don't have the means to calculate how reliable they need their system to be.
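The naive version of that question is computable; here's the same kind of binomial tail, this time via scipy, comparing the old 100+12 groups with the cheaper 100+23 ones (again, every probability here is an assumption of mine). Under an independent-failure model the extra parity wins easily; what makes the real question tough is everything this model leaves out: correlated failures, rebuild traffic, repair windows that stretch when many disks die at once, and the operational cost of all those extra slots.

```python
# Compare the old layout (100+12, baseline disks) against what the
# discount buys (100+23, disks that are 5% or 15% less reliable).
# Independent failures assumed; per-window probabilities are made up.
from scipy.stats import binom

def p_data_loss(d, k, p):
    # binom.sf(k, n, p) is P(more than k failures among n independent disks)
    return binom.sf(k, d + k, p)

p_base = 0.05  # assumed baseline per-disk failure probability per repair window
print(f"old disks,  100+12: {p_data_loss(100, 12, p_base):.1e}")
for worse in (1.05, 1.15):
    print(f"{worse - 1:.0%} worse, 100+23: {p_data_loss(100, 23, p_base * worse):.1e}")
```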
What is feasible for the home user is to get very, very cheap disks (for example 5-year-old used ones, somewhat similar to the "DOM 2017" ones you found), and then compensate for their lousy life expectancy by being extremely redundant (for example, run 4-way mirroring, which is easily possible using ZFS). My intuition is that this is nearly always a bad idea: while theoretically it could work, what people ignore are the side effects of failures (lots of hassle, and the chance of operator errors and bugs) and the risk of correlated failures.
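A toy version of that intuition, with numbers I invented: the independent-failure math for a 4-way mirror looks fantastic, but even a small probability of an event that takes out all copies at once (same tired batch, shared power supply, an operator mistake) dominates the result.

```python
# Crude sketch: 4-way mirror of cheap disks, with and without a small
# correlated-failure term. Both probabilities are assumptions, and the
# model ignores mid-year replacement, which would only shrink the
# independent term further and make the correlated one matter more.

p_cheap = 0.20    # assumed chance a 5-year-old used disk dies this year
q_corr  = 0.01    # assumed chance of an event that kills all copies at once

p_independent = p_cheap ** 4                       # all 4 mirrors die on their own
p_with_corr   = q_corr + (1 - q_corr) * p_independent

print(f"independent-failure model only: {p_independent:.1e}")
print(f"with a 1% correlated event:     {p_with_corr:.1e}")
```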