For those looking for hard drives to buy or test, this storage company just posted their reliability numbers for the hard drives they use: http://blog.backblaze.com/2014/01/21/wh ... uld-i-buy/
ralphbsz said:
Their new data is interesting. But it has to be taken with a HUGE grain of salt. Disk failure mechanisms are complicated, and very dependent on environment (temperature, vibration, power quality, power cycle count) and workload (r/w ratio, small updates, spin down). As a matter of fact, they hint at that when they explain why they don't use WD Green drives. The biggest factor is that Backblaze uses their own enclosures, which are very different from consumer-grade computer cases, and also quite different from enterprise-grade JBODs. They also run a very large number of drives in close physical proximity, mechanically strongly coupled (sheet-metal resonator, ahem, enclosure), and consumer-type drives in particular are very sensitive to vibration, especially sympathetic vibration while writing. While I'm not saying this is a bad thing to do (it works great for their business model, and these guys are really smart), it also means that their conclusions about reliability, in particular reliability/$, are not applicable to other users and other uses.

Exactly. I discussed this in some detail in my RAIDzilla II / Backblaze Pod comparison.
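For anyone comparing the numbers in that post: Backblaze reports reliability as an annualized failure rate (AFR), roughly failures divided by accumulated drive-years. Here's a minimal sketch of that arithmetic (the figures below are made up for illustration, not Backblaze's actual data):

```python
def annualized_failure_rate(failures: int, drive_days: float) -> float:
    """Return AFR as a percentage: failures per accumulated drive-year."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical example: 5 failures across 1,000 drives running for one year
# (1,000 drives * 365 days = 365,000 drive-days).
print(annualized_failure_rate(5, 365_000))  # → 0.5 (%)
```

Note that drive-days, not drive count, is the denominator, which is why a model deployed for only a few months can show a scary-looking AFR from just a handful of failures.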
Another thing that skews the Backblaze data is their method of acquiring disk drives - they often "farm" them (Link). One of the main influences on out-of-the-box and early failure rates is how the drives were handled before being installed. One major online retailer (N****g) seems to ship many drives with inadequate protection. And when Backblaze started asking users to buy drives and ship them in, that added another shipment and another chance for the drives to be damaged.

Uniballer said:
Granted that the complete conditions are not stated, but at least Backblaze names brands and models in the linked post above. Google's paper failed to do so, but partly made up for it by debunking some of the theories about drive activity and environmental conditions causing failures. In the short run, specific models that are failing rapidly are the most useful data. In the long run, the environmental and statistical information is far more useful.
That's the big problem with reviews that "name names" - by the time there's enough real-world experience with a drive series, the manufacturer has moved on to the next thing. And, in some cases, even drives with the same model number will have major internal differences that affect reliability - I've seen drives with different numbers of platters for the same model, depending on date code or assembly location. After all, who would even think of buying the Seagate 1.5TB Barracuda Green model now that bigger drives are available from all manufacturers?
Terry_Kennedy said:
And it isn't possible to make a blanket statement that "All Brand X drives are good, while all Brand Y ones are junk".
ralphbsz said:
Manufacturers can go through bad periods, and good periods.

Absolutely. Take Fujitsu - anybody who still wants to run M2351 "Eagle" drives is running them - they just don't fail, even after 25+ years of 24x7x365 operation. I didn't even know they had a filter that was supposed to be cleaned regularly until a decade after I installed them. :O
ralphbsz said:
The other indestructible drive was the CDC Wren. Full-height 5 1/4" form factor, 600 MB. I must have had dozens of them, all in individual cases with 50-pin SCSI connectors in the back, connected to a variety of minicomputers and workstations. I don't remember ever hearing of one failing.

Those were good drives, too. Most of the CDC/Imprimis/MPI/Seagate stuff in that lineage was good.