I think you either missed the link I posted earlier, or perhaps you disagree with it and aren't saying so. It turns out the Backblaze data is not interpreted correctly in statistical terms, and is neither useful nor correct. See here:
https://www.theregister.co.uk/2014/02/17/backblaze_how_not_to_evaluate_disk_reliability/
It's complicated.
On one hand: Backblaze is (to my knowledge) still the only source of disk reliability statistics that's publicly available without vendor/model information having been removed. Backblaze's raw data seems trustworthy, since it would make no sense for them to forge it. But in their blog posts, Backblaze people may reach conclusions that over-interpret that raw data, going outside the limits of good taste in statistics. I have no opinion on whether they do that or not; I look at the raw data only, and I'm capable of doing my own statistics.
On the other hand, Henry Newman's rebuttal of Backblaze's data is mostly just incorrect.

To begin with, he complains that the bulk of Seagate failures in the old Backblaze data was caused by a small number of disk models, which even Seagate admits have a hardware problem, and that they should therefore be ignored. But that doesn't change the (undisputed) fact that customers bought those disks, paid for them, and didn't get their money or their data back after Seagate admitted the hardware problem; and if you calculate the average reliability of all Seagate drives, you need to include *all* Seagate drives, not exclude some that Seagate *after the fact* declared to be faulty.

Then Henry Newman complains that some of these drives are over 5 years old, and he claims that "disk drives last about 5 years" (direct quote from his writing). Sorry, but that statement is nonsense; the disk manufacturers specify MTBFs of ~1 million hours (an AFR below 1%), which works out to about 114 years. If, as Henry is implying, all disks fail within 5 years, or perhaps at exactly 5 years of age, they would violate that spec by a huge margin (their MTBF would be about 45K hours, not 1M hours). But Henry's ludicrous statement contains a grain of truth: given the progress of disk performance/capacity, the economic lifetime of many disk drives is about 5 years; after 5 years it becomes economically advantageous to take large disk subsystems out of production and move the data to newer (higher capacity, lower energy/space consumption) subsystems.

Then Henry talks about the bit error rate of the drive, and claims that if you use a disk long enough you will get an uncorrectable error; here he fails to distinguish between a drive failing outright and a drive returning a single uncorrectable error.

Finally, Henry didn't read the Backblaze statistics carefully enough, and his complaint about 120% of drives failing is pointless, since Backblaze explicitly tells us how their numbers are collected and calculated.
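To make the arithmetic explicit, here is a small Python sketch (using the round numbers above; the exact figures are only illustrative) of how an MTBF spec translates into years and into an annualized failure rate, and of the MTBF that an "all disks die at 5 years" claim would imply:

```python
HOURS_PER_YEAR = 24 * 365  # 8760

def mtbf_to_years(mtbf_hours):
    """Convert a manufacturer MTBF spec (hours) to mean time between failures in years."""
    return mtbf_hours / HOURS_PER_YEAR

def mtbf_to_afr(mtbf_hours):
    """Approximate annualized failure rate implied by an MTBF spec."""
    return HOURS_PER_YEAR / mtbf_hours

# Spec-sheet MTBF of ~1 million hours:
print(mtbf_to_years(1_000_000))        # ~114 years
print(100 * mtbf_to_afr(1_000_000))    # ~0.9% AFR

# "All disks fail at 5 years" would instead imply an MTBF of only:
print(5 * HOURS_PER_YEAR)              # 43,800 hours, i.e. ~45K hours
```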
Backblaze is not in the business of selling disks; and in their blog they have even explained that they mostly ignore their reliability statistics themselves when making purchasing decisions. If anyone else tries to use the Backblaze data to make purchase decisions, they have to understand the data first.
Statistics can be sales tricks, but it all starts with data. In the link I provided, the 4TB WD had an 8.87% failure rate, but the number of drives tested (45) with 4113 drive days indicates that they had been used for less than 100 days each. Note that the HGST 8TB drive had a similar number of drives tested and drive days, but no failures.
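For reference, here is a minimal sketch of the annualized failure rate formula Backblaze describes in its blog (failures divided by drive-years of service), applied to the figures quoted above. The 8.87% is consistent with a single failure (an assumption on my part) across only ~4100 drive days:

```python
def annualized_failure_rate(failures, drive_days):
    """Backblaze-style AFR: failures per drive-year of service, as a percentage."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# 45 WD 4TB drives, 4113 drive days (~91 days per drive), assuming 1 failure:
print(annualized_failure_rate(1, 4113))      # ~8.87%

# The same single failure spread over a full year of service for 45 drives:
print(annualized_failure_rate(1, 45 * 365))  # ~2.2%
```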
That doesn't surprise me at all. Things like this do happen.
Anecdote from my former professional life: I was involved with shipping a product that contained several thousand disk drives, all of the same manufacturer and model (I will not disclose which manufacturer and which model, nor what the product or the customer were). Within the first few weeks of operation, we had a failure rate of roughly 10% (which for a system with that many disks is a lot of dead disks). This is for good quality enterprise disk drives from a reputable manufacturer, which had been burned in by the disk manufacturer, and then "burned in" again by the system integrator (where burn-in means: a quick multi-hour test before shipping the system to the customer). We ended up replacing all the disks with product from a competing disk manufacturer. Why am I telling this story? To demonstrate that sometimes real-world problems occur that are specific to one disk model, or to a specific production batch of disks. In that sense, it would not surprise me if Backblaze had observed an 8.87% failure rate of one specific batch of disks within 100 days (had it been statistically significant); been there, done that, got the T-shirt, in a statistically significant unintentional experiment.
Dell used to "burn in" a newly ordered computer, but I believe the term was misleading. Manufacturing defects tend to fail early, so I view Dell's process less as burning in and more as weeding out manufacturing defects. Was the early WD failure due to poor engineering and materials, or a manufacturing defect covered under warranty?
Burn-in for disk drives is more complicated. Today's disk drives are supposed to be limited to ~550 TByte of total IO in a year. At "full speed" (about 250 MByte/s for fully sequential IO), it takes only ~4 weeks to reach that annual limit. On the other hand, we also know that initial failure of disk drives can often take several weeks, if the failure is caused by problems with contamination, the spindle bearing, the seals of the enclosure, or the lubrication layer on the platters. So a complete burn-in that is likely to catch the bulk of early failures is no longer possible without exceeding the annual workload of the disk. From this viewpoint, a systems integrator (such as Dell) no longer has the capability of performing burn-in of disk drives, and simply has to trust the disk manufacturer. And as the examples above show, things can go wrong with that trust relationship.
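A small sketch of the arithmetic behind the ~4-week figure, assuming the ~550 TByte/year workload rating and ~250 MByte/s sustained sequential throughput mentioned above:

```python
def days_to_reach_annual_workload(workload_tb_per_year=550, throughput_mb_per_s=250):
    """Days of continuous full-speed IO needed to exhaust the annual workload rating."""
    total_bytes = workload_tb_per_year * 1e12
    bytes_per_day = throughput_mb_per_s * 1e6 * 86_400
    return total_bytes / bytes_per_day

print(days_to_reach_annual_workload())  # ~25.5 days, i.e. roughly 4 weeks
```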