Any Recommendations For A New Hard Drive Purchase?

It always amazes me how ralphbsz is able to remember the name and model of almost every single piece of hardware he has touched in his life. I barely remember the brand of the disk I am using now.
 
In automobile terms, you've been riding on Empty too long.....
As long as you're happy, I am happy.
At 50K, mine are retired.

When a disk has run for 5 years, it will very likely run for the next 5 years. And there is a mirror, and there is a backup, and there is a copy of the backup, and those things that really cannot be recreated without some effort go on a stick, and there is another stick, and at least one of them should usually be offsite. So there are two possible risks:
1) somebody runs a cruise missile into my home when I'm away. Then maybe only a stick survives, and some valuable things are probably gone, and it takes a week or two to rebuild. (But then there will be other things that take longer to rebuild.)
2) somebody runs a cruise missile into my home when I'm at home. Then there is no problem at all. Anymore.

I just can't throw away good working hardware.
 
1) somebody runs a cruise missile into my home when I'm away. Then maybe only a stick survives, and some valuable things are probably gone, and it takes a week or two to rebuild. (But then there will be other things that take longer to rebuild.)
2) somebody runs a cruise missile into my home when I'm at home. Then there is no problem at all. Anymore.

That's one of the problems with amateurs doing backups: they tend to not think about realistic threat scenarios, and what would matter. For example, I'm not worried about a burglar coming into my house and stealing my server. Because if he does, he will probably steal a lot of other things that are much more emotionally important to me (family heirlooms, musical instruments), much more expensive (musical instruments), much more short-term useful (tools), or much more dangerous (guns and ammo, but those are in a really good safe). Plus, when they're done they will probably set fire to the house. Similarly, if the whole house burns down, and I lose the last few weeks of records from my weather station (because the off-site backup is only updated once in a while), that's really not a big problem. Not having a house, and not having any of my stuff is much more important. At that point, I will be super happy that I have things like bank records and important documents on the off-site backup disk.

On the other hand, if a disk dies (which has happened at least 3 times in the last 5 years, although in one case the problem wasn't the disk itself but the crappy USB 2.0 interface), I really don't want to lose data. Having to spend two days carefully restoring from the offsite backup in that case would be highly annoying. Instead, I just order a spare disk online, it shows up 2 days later, I swap it in (half an hour of work), start the resilver, and go back to my glass of wine.

People who make really good backups, and then store the backup right next to their computer, need to understand that even a small fire or electrical problem (lightning) can wipe them out. You need some geographic diversity. And you need to think about what "geographic" means; in view of recent California fires, having the backup in your neighbor's house is probably not good enough. Actually, in my previous job we had a very sad story about a big customer: They had a complete ready-to-go backup data center, in the other tower of the world trade center. Not good.

About two years ago, we actually had to evacuate our house because of a wildland fire that was scarily close (fortunately, ultimately nothing bad happened). The first thing that went into the car was the box with passports etc. from the safe, and the local backup disk. The next thing was a small box of family heirlooms. Only after that did we put in sensible things: spare clothes, sturdy shoes, blankets, musical instruments (because they are valuable and portable). Eventually, I actually put the whole server into the car. Then we had the smart idea that we should have some drinking water, in case we needed to sleep in the car on the side of the road, so we tossed a case of water bottles and a case of soda (Coca-Cola) on top of it. Unfortunately, a sharp corner on the computer poked a hole in a can of soda, and I ended up with a very sticky sugar-coated server. Fortunately, it was all external, and took only half an hour to clean up once we were back home unpacking.

P.S. Long-term memory: good. Short-term memory: Not so good. Alzheimer's disease is communicable; you get it from your parents.
 
I also have a Stacker disk-compression card. It still works; should I put that back in service?

Well, the point is: do you have a FreeBSD driver for it?

Among my valuables is an Orchid graphics card, built mid-'88 and sold as "Designer VGA": 800x600 at 256 colors, a beautifully short and very compact design for the PC/XT, with a Tseng ET3000 chip - and there is an X11 driver for that, although no longer actively distributed.
Or, similarly beautiful, built in 1990, very long, very tall, full of TTL chips: the WD7000FASST SCSI controller - and that one seems to be fully supported in base:
-rw-r--r-- 1 root wheel 36665 Feb 5 21:37 /usr/src/sys/dev/wds/wd7000.c
Then there is a bunch of beauties in PCI-X 64-bit design - for which it is now also getting increasingly difficult to find a place to plug them in.
 
Then there is a bunch of beauties in PCI-X 64-bit design
That is as far back as I have saved. I am keeping an old Gateway server board with a ServerWorks chipset and multiple PCI-X-133 slots.
The main reason I kept it is that it has 3 IDE 40-pin connectors, making it ideal for shuffling around disk contents.
I very much hope that your post about Stacker is meant as humor. If yes, you have succeeded, and I'm laughing.
Yes, the last ISA-slot device I personally installed was an Intellicall controller card, for a business job during the Y2K meltdown work.
I had several challenging legacy machines that needed special motherboards in 1999, in preparation for the big impending doom.
There was lots of money flowing in the computer world due to that hype.

I bought my Stacker compression card from Service Merchandise, bundled with a drive. I can't remember what drive.
 
Service Merchandise was awesome. Bought my first gun there as well as my first nice watch.
They were a retailer that sold computer parts in the early days. Catalog operation with a retail presence.
Much like Staples that the original poster alluded to.
Sometimes you have to get parts wherever you can.
I used to buy stuff from the OfficeMax/Office Depot chains. Mostly little stuff like CDs/DVDs and memory sticks.
For serious hardware it was Computer Shopper magazine for mail order; later CompUSA for comparison shopping.
Lately it's all eBay.
 
I like the Toshiba drives on the Backblaze list. Surprised to see them using security-DVR drives.
MD04ABA500V Toshiba 5TB 5400RPM
I also could not find a single retailer carrying it, or the 4TB version also on the list: MD04ABA400V
 
There is newer data from Backblaze at https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2019/
They are absolute heroes for publishing their failure statistics every quarter. It is the best openly available data for disk reliability, broken down by model and manufacturer. Unfortunately, the per-model data is not very useful for amateurs, since they only have good data after using a model for a considerable period, by which point that drive is often no longer available in the consumer market. They also only use nearline enterprise drives, while many amateurs use consumer drives. But one can easily draw conclusions that are generally predictive. My favorite graph is below (annual failure rate, lower is better). That pretty clearly tells you which disks to buy.
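Their raw data can actually be crunched at home. The sketch below is my own illustration: the column names `model` and `failure` match the schema of Backblaze's published daily CSVs (each row is one drive-day, loadable via `csv.DictReader`), but `afr_by_model` itself is a made-up helper, not anything Backblaze ships.

```python
from collections import defaultdict

def afr_by_model(rows):
    """Annualized failure rate per model, in percent.

    Each row is one drive-day (as in Backblaze's daily CSVs), with
    'failure' equal to 1 on the day a drive dies and 0 otherwise.
    AFR = failures / drive-years * 100.
    """
    drive_days = defaultdict(int)
    failures = defaultdict(int)
    for row in rows:
        drive_days[row["model"]] += 1
        failures[row["model"]] += int(row["failure"])
    return {m: 100.0 * failures[m] / (drive_days[m] / 365.0)
            for m in drive_days}
```

Ten drive-years of one model with a single failure comes out at a 10% AFR, which would be Barracuda-bad by the standards discussed elsewhere in this thread.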

[Attached image: Blog-Q1-2019-Trends.png]
 
I don't know if this was such a great suggestion. There are plenty of enterprise 5 year SATA drives.
They do carry a heavy price premium.

On the other side of the coin, I don't know that I would feel comfortable with the SMART data that PMc is showing.
So, much of this comes down to what you want to spend to be happy.
Yeah, I think I'll settle on WD Blue, since I don't exactly have the luxury to go big on this one. :)
 
There is newer data from Backblaze at https://www.backblaze.com/blog/backblaze-hard-drive-stats-q1-2019/
They are absolute heroes for publishing their failure statistics every quarter. It is the best openly available data for disk reliability, broken down by model and manufacturer. Unfortunately, the per-model data is not very useful for amateurs, since they only have good data after using a model for a considerable period, by which point that drive is often no longer available in the consumer market. They also only use nearline enterprise drives, while many amateurs use consumer drives. But one can easily draw conclusions that are generally predictive. My favorite graph is below (annual failure rate, lower is better). That pretty clearly tells you which disks to buy.

[Attached image: Blog-Q1-2019-Trends.png]
"I see", said the blind man. :) Thanks for that! :) I'll check out Backblaze more often. :D
 
Somewhere in the basement, I have two really good disks. One is a CDC/Imprimis/Seagate Wren, the other a Falcon (and I can't remember whether those were made by Maxtor or Fujitsu). They are both 1GB SCSI drives, bought in the late '80s or early '90s. The last time I booted those computers, they were both working; that was about 5 or 10 years ago.
Wow, that's also impressive! :D
 
Yepp - the Hitachi, or rather HGST, drives are IBM drives (the Deskstar just as well), until the whole division went to WD, and they will most likely dissolve into the WD portfolio.
For my part, I had a Hitachi and an original WD side by side in my desktop for quite a while, and there is a huge difference; I would trust a half-wrecked Hitachi/HGST a lot more than a brand-new WD.
Ok then... :) Something to consider. Thanks! :)
 
Not a flame, just a philosophical excursion about statistics.

The MTBF of disk drives is spec'ed by manufacturers as a million hours or more (sometimes 1.5 or 2 million for enterprise-grade drives). Having been a user of many thousands of disk drives professionally, and not being able to share accurate statistics, my summary is that the disk manufacturers are sort of honest. The actual measured MTBF is perhaps half or two thirds of the specified value. Now, some of those failures are probably not the fault of the drive (temporary overheating, too much vibration, flaws in power supplies), so this is not intended as criticism of Seagate, WD, Hitachi, Toshiba and friends. This is experience from drives that are installed in professional-grade enclosures (not computer cases but dedicated disk enclosures), with high-quality power supplies, and installed in well-managed data centers. Customers who spend millions on their storage systems tend to not risk those systems with inadequate environments; that would be dumb. So let's take a real-world MTBF (under professional conditions) of about 1/2 to 1 million hours:

Lesson #1: The MTBF of disk drives in good conditions is very high (about 50-100 years average lifetime, i.e. a 1% to 2% annual failure rate), and therefore amateurs with just a handful of drives should very rarely see disk failures.
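The arithmetic behind that 1-2% number can be sketched as below, assuming the usual constant-failure-rate (exponential) model; that modeling choice is my simplification, not something the manufacturers publish.

```python
import math

HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def annual_failure_rate(mtbf_hours):
    """Annualized failure rate (in percent) for a given MTBF,
    assuming a constant failure rate: AFR = 1 - exp(-t/MTBF)."""
    return 100.0 * (1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours))
```

A spec'ed 1M-hour MTBF comes out at roughly 0.9% per year, and the real-world 0.5M hours at roughly 1.7%, matching the 1% to 2% range above.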

Quick observation: it is quite possible to break a disk. Drop it (just a little bit) while it is spinning and seeking (laptop disks are better at surviving that). Drop it from a foot high onto a granite table while powered down. Cook it (the 68 degrees mentioned above is bad news). Vibrate it all the time while writing. Connect it to a power supply whose 12V line can be 11V or 14V depending on the phase of the moon. Swap the 5V and 12V pins when making your own power cables (did that once). Some of these things will kill the drive outright, sometimes with smoke coming out; others will just reduce its lifetime (and data reliability) massively. But in most cases, these extreme mistakes don't happen: few people solder their own power cables, and most buy good-quality cases and power supplies.

Lesson #2: But the MTBF as seen by amateurs is much below what large-systems professionals see, because amateurs don't have good environmental controls (temperature goes up and down), good enclosures, or good power supplies. Still, it is not catastrophic; drives should not die like flies unless you abuse them.

But even in those professional settings, you have to exclude certain effects to get to these MTBF numbers. First is infant mortality: you have to burn in new drives for a week or two, and drives that fail during that time will be cheerfully replaced by the manufacturer and do not count towards MTBF. Second is manufacturing problems that escape. We once had a whole delivery of drives (several thousand, single manufacturer, single model) with an infant mortality approaching several percent per week, and that mortality kept going for several months. This was a "manufacturing escape": due to a mistake, a whole batch of drives had skipped the manufacturer's quality control (oops), and happened to be low quality (double oops). The manufacturer cheerfully took the drives back, and (probably not so cheerfully) gave us several M$ to compensate us for our troubles and to make our customer less unhappy. This was human error, was acknowledged and corrected by the manufacturer, and should not be counted towards the MTBF number. Now, how would an individual amateur user who only bought 1 or 2 drives have handled it? They don't have the metrics to demonstrate to the vendor that the problem is pervasive, they don't have hardware and software teams that can do autopsies on defective drives, and they don't have teams of lawyers to negotiate settlements.

Another case where drives had high failure rates was a system that was shipped to a city in a tropical country with really bad air quality, and stored there in a non-airconditioned warehouse for half a year, unpacked. When it was finally turned on, many disks (about a third!) had electrical shorts, due to corrosion from sulfur in the atmosphere. Our field service technicians actually ended up finding drops of a corrosive liquid on the PC boards of the drives: condensation from the atmosphere, containing sulfuric acid. Again, because we were a big company, we were able to diagnose what had gone wrong and work with all stakeholders to come to an equitable solution. And again, this should not be counted towards the MTBF of the drive itself. But how would an amateur who lives in that city have handled it? He doesn't have ready access to air-chemistry analysis, he doesn't know how long the store had the drive on a shelf, and he doesn't have teams of lawyers to negotiate.

Lesson #3: For an amateur, systemic effects can mask the inherent good reliability of quality drives, and they may get lots of failures. Tough luck.

It is now widely known and reported that Seagate Barracuda drives (in particular the 1TB model) have had serious reliability problems. If you average those into Seagate's overall MTBF, the result looks pretty bad for Seagate. I don't know whether Seagate ever had programs where they refunded those drives, extended the warranty, or had other arrangements with large users (I never worked with large quantities of that model professionally). Real-world example: between a colleague of mine and me (he also worked on the disk subsystems for a large storage systems vendor), we went through 7 of these 1TB Barracuda drives at home (he had 5, I had 2), all of which died within a few years (in some cases within 2 years), and we know that our enclosures/power supplies/environment were at least OK. That failure rate is completely incompatible with the quoted 1M hours, and points more towards something in the range of a few 10K hours. He got some replaced under warranty, and we both threw the rest into the trash. But since I had been burned by that, I followed the fate of other Seagate drives later, and found that this problem did not repeat for other models.

Lesson #4: Some drive models just suck, and will die quickly. So quickly that even an amateur with a small number of drives (1...5) will have serious problems in a small number of years (1...5). But you can't extrapolate from a few bad models to all models in a series, and much less to a vendor.

And finally, look at the famous Backblaze data. It is the best data set on disk quality that is freely accessible; there is better data out there, but it is not accessible. You clearly see that on average, Seagate is less reliable than the others, and that's not just one model, but systematic. But Seagate does not suck: their annual failure rate may be up to 2% or 3% for some models, but it is nowhere near the 30% or 50% that my friend and I saw, and that would be catastrophic.

In summary: The reliability of drives is complicated, and at the amateur level just not predictable. Not enough statistics. You may get very unlucky.
What can you do about this?
  1. Think about the value of your data. If it is worth nothing, and you will not feel bad if it is all gone suddenly, and your time for re-installing the system after a disk failure is worth nothing, then stop reading. All others, keep going.
  2. Use RAID. At the very minimum, a mirror of two drives. If this is your only line of defense, it is not good enough, and a two-fault-tolerant system is better or even necessary.
  3. If you are mirroring or RAIDing, consider using different drives (different models or even vendors) in a pair. That way a systemic problem with a particular model is less likely to wipe you out. But that can be a bit tricky (different capacities, different performance, and in a RAID system the overall performance tends to be dominated by the slowest disk).
  4. Take backups. That way a disk failure becomes an inconvenience (down for an hour or a day), and perhaps a small data loss (the data from the last 23 hours may be gone), but not a catastrophe. Remember that RAID is not a panacea; it does not protect against correlated failures, nor is it 100% reliable, nor does it protect against human error.
  5. You have installed RAID already, right?
  6. Make a plan ahead of time: what will you do if a drive dies? Know the commands to resilver your RAID. Know where your backups are stored. Don't store the only documentation for how to restore from backup on the drive that you are backing up. Do a test run, regularly. A backup that has never been restored is not actually a backup; it might be a blank tape. Not a joke: it happened to my wife's company once. After their disks died, they discovered that their clueless sysadmin had set them up with RAID-0, and had dutifully written a blank tape every night, labelled it, and put it into the fireproof safe. Not fun.
  7. Your RAID is functioning well, you are monitoring disk health, and have set up an automatic monitoring system, right?
  8. Think about what other disasters you want to protect against (because disk failure is not a disaster, it is an expected operational situation). Are you worried about a fire or flood destroying the place where both your original disks and the backup are physically located? Are you worried about intruders stealing your hardware? Are you worried about someone snooping on you? A good backup system can deal with this, but at some cost.
  9. Inject a test fault into your RAID system (for fun, just pull a disk physically out), and watch it resilvering automatically, and your cellphone beeping because you got an e-mail from the monitoring system. That's when you can stop having anxiety attacks.
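For step 6 on a ZFS system (which several posters in this thread run), the drill can be sketched as below. The pool name `tank` and the `daN` device names are placeholders only, not anyone's actual configuration:

```shell
# Identify the dead or dying disk (look for FAULTED state or error counts).
zpool status -x tank

# After physically swapping in the new disk, rebuild onto it.
# Syntax: zpool replace <pool> <old-device> <new-device>
zpool replace tank da2 da3

# Watch the resilver progress, then scrub to verify the data reads back.
zpool status tank
zpool scrub tank
```

Having these four commands written down somewhere that is not on the pool itself is exactly the kind of "plan ahead of time" step 6 is about.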
True story: About 25 years ago, I interviewed for a job at the storage systems research department of one of the largest and most prestigious computer companies in the world (two letters, not three). My host and future manager gave me a little tour of the computer room for fun, and showed me the main server the group used (in those days, a group of 15-20 people used a single large computer), and the two "big" RAID arrays connected to it (in those days, "big" meant dozens of disks). He then proceeded to pull a disk out of the running production machine, and hand it to me. I was flabbergasted. What this really demonstrated was: the guy was (and continues to be) very smart, and knew the reliability of his systems, and the value of impressing a person they might potentially hire. I gave him back the disk, he put it back in, the disk array resilvered for a few more seconds, and everything was fine.
Wow..... That was quite a post, and I loved every second of it! :) Quite engaging, and informative! :) Thanks for sharing your wealth of knowledge and experience! :D
 
Yepp - the Hitachi, or rather HGST, drives are IBM drives (the Deskstar just as well), until the whole division went to WD, and they will most likely dissolve into the WD portfolio.
For my part, I had a Hitachi and an original WD side by side in my desktop for quite a while, and there is a huge difference; I would trust a half-wrecked Hitachi/HGST a lot more than a brand-new WD.
Thanks, PMc! :) Wait, did I already reply to your post? :(
 
If it’s for a server I would consider the WD reds instead of blue. We’ve been using them for years and they’ve been pretty solid.
 
I have WD Reds (the NAS line) in my NAS - guessing that's what you mean. They are quiet (SATA) and supposed to be long-lived. We shall see!
I have five WD reds in my ZFS server. They are 3 TB WD30EFRX. They are 6.5 years old. One failed a couple of years ago. Otherwise no problems (but I wish I had gone RAIDZ2, not RAIDZ1).

The Backblaze stats for 2017 had a fair sample of these exact drives. The annualized failure rate was 5.06%, which is 2.5 times the average across all their drives, so a poor outcome.

"WD Red" covers a multitude of different capacities, and I strongly suspect that one drive may not perform the same as another WD Red of a different capacity.

The HGST drives generally seem to score well, but they are expensive. The virtue of the Backblaze stats is that studying them does provide real insight into the bargains.
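Combining the five-drive pool above with that ~5% AFR gives a feel for the RAIDZ1-vs-RAIDZ2 regret. This is my own back-of-envelope model, not from the thread; it assumes small, independent, constant failure rates, and real failures are often correlated, so treat it as optimistic:

```python
HOURS_PER_YEAR = 8766  # 365.25 days * 24 hours

def p_loss_raidz1(n_disks, afr, resilver_hours):
    """Rough annual probability of data loss for a single-parity array:
    a first failure (any of n disks) followed by a second failure among
    the remaining n-1 disks inside the resilver window."""
    p_first = n_disks * afr
    p_second_during_rebuild = (n_disks - 1) * afr * (resilver_hours / HOURS_PER_YEAR)
    return p_first * p_second_during_rebuild
```

With 5 disks at 5% AFR and a 24-hour resilver this comes out around 0.01% per year; correlated failures and the much longer resilvers of today's large drives push the real number up considerably, which is why two-fault tolerance keeps being recommended.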

Cheers,
 