disk recommendation 2021

I am looking for an upgrade on my disks ... I do not follow the developments on disks. However, I have heard that conventional magnetic recording (CMR) disks are recommended for use with ZFS. And I have read that the WD Red disks I was quite satisfied with had some issues (users in various places said they "are not the real WD Reds"); I don't know whether those are just rumors. I do not need super-fast disks, but I found that 5400 rpm models are rare and not cheaper either. Reliability is my utmost goal, followed by noise. I will buy 4 of them and use them in a stripe of 2 zmirrors on top of geli-encrypted partitions; they will run in my workstation, which is online basically 24/7. Size per disk should be >10 TB. So far the "Toshiba Enterprise Capacity" disks look promising: 5-year warranty, 2.5 million hours MTTF, and the power draw also seems low, which is nice.

According to Backblaze's disk statistics, HGST seems to deliver high-quality disks; however, they are out of my budget, as their price per TB is more than twice that of the Toshiba.

What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding Toshiba and WD? Any recommendations?
 
However, I have heard that conventional magnetic recording (CMR) disks are recommended for use with ZFS.
I don't know where you got this recommendation from, but it's not entirely correct. There's nothing wrong with using SSDs with ZFS.

However, compared to HDDs, SSDs are still quite expensive if you want large storage capacity. So in this respect HDDs are still quite useful.
 
There are various blog entries and sites (e.g. here - final words); especially when resilvering a ZFS pool, SMR disks seem to deliver really bad performance (not that this is a planned scenario, but I had bad experiences with Seagate Archive HDDs). I am looking for spinning disks for data storage - personal photo storage and various VMs and jails for development at my job; everything performance-critical is running on NVMe SSD storage.
 
First step: Write down your requirements. How much capacity do you need? What speed do you need? Is your performance requirement sequential bandwidth, random seeks, a mix? How reliable do you need it? What is your backup strategy? And how much can you afford?

Reliability is my utmost goal,
...
I will buy 4 of them and use them in a stripe of 2 zmirrors on top of geli-encrypted partitions, ...
You say that you are interested in reliability. But then you build a system that uses 4 physical disks to give you 2 disks' worth of capacity, yet is only guaranteed to tolerate a single fault (losing both disks of the same mirror loses the pool). If the number "4" is determined by budget, and the number "2" by capacity need, then set up a 4-disk RAID-Z2 pool instead, and you get the ability to tolerate any two faults.
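
For illustration, a minimal sketch of such a pool (the pool name tank, the device names ada1..ada4, the partition layout, and the 4K sector size are all assumptions; adjust to your system):
Code:
# create one geli provider per disk (repeat for ada2p1..ada4p1)
geli init -s 4096 /dev/ada1p1
geli attach /dev/ada1p1
# one RAID-Z2 vdev across the four encrypted providers:
# any two of the four disks may fail without data loss
zpool create tank raidz2 ada1p1.eli ada2p1.eli ada3p1.eli ada4p1.eli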

According to Backblaze's disk statistics, HGST seems to deliver high-quality disks, ...
The Backblaze data is the most accurate disk reliability information that is available to the public. If you want reliability, follow their statistics. If you can't afford that, you won't get reliability.

What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding Toshiba and WD? Any recommendations?
Disk failure rates are under a percent per year (see the worked example below, based on the 2.5 million hours MTTF). That means someone would need several hundred disks to see even a few failures in two years (purchases in 2019 or 2020). Few amateurs who post here have that many disks. To measure disk failure rates accurately (more than 1 or 2 observed failures), someone would need tens of thousands of disks. There are organizations that have that many or more (for example EMC, HP, IBM, Oracle, or Amazon, Microsoft, Google, Tencent, Baidu), but they don't publish their statistics. Your best bet is looking at the Backblaze data.
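
As a rough worked example (this treats the quoted MTTF as if it converted directly into an annual failure rate, which is a simplification):
Code:
AFR ~= hours per year / MTTF = 8766 / 2500000 ~= 0.35% per year
0.35% x 300 disks x 2 years ~= 2 expected failures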

I could tell you that I've had two HGST disks (one since 2014, one since 2016), and neither has failed. Their predecessors were two Seagate disks (bought in 2009 or 2010), and both failed. But (a) you are interested in recent data for different manufacturers, and (b) nobody should draw conclusions from measurements based on just a handful of disks.
 
First step: Write down your requirements. How much capacity do you need? What speed do you need? Is your performance requirement sequential bandwidth, random seeks, a mix? How reliable do you need it? What is your backup strategy? And how much can you afford?
As I wrote: I need a minimum of 20 TB net storage. As mentioned before, performance is not important - the data transfer rates and access times of current disks via SATA are enough. The backup strategy need not concern you, since I am just asking about your experience with recent disks/manufacturers. Price: 20-30 € per TB.
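
To make that concrete (my own rough arithmetic, assuming 12 TB disks):
Code:
4 x 12 TB raw = 48 TB; stripe of 2 mirrors -> 24 TB net (>= 20 TB target)
at 20-30 EUR per raw TB: 48 x 20 = 960 EUR up to 48 x 30 = 1440 EUR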

I am well aware that there is not much meaningful statistical data available; I just wanted to know about your personal experience with particular disks/disk series/manufacturers.

And one more point that leaves room for discussion: what is your opinion on helium-filled disks? I mean, the physics behind them and the benefits are clear. However, I am a little sceptical about their end of life. A manufacturer generally designs to meet their datasheet and that's it - if a disk survives longer it is "just a nice benefit", but does anyone share my suspicion that helium leaking out over time leads to earlier failure compared to air-filled disks? I use older disks as cold/archive storage, which works fine - saving 160 GB of data on six 15-year-old 160 GB SATA drives just to give them some use. The Backblaze statistics for helium-filled HGST disks show I am wrong... at least for one highly priced manufacturer.
 
I store my stuff on CMR/PMR Seagate Enterprise drives. Used to be Barracuda ES, Constellation ES, "Enterprise Capacity", and now Exos 7e.

(I recommend these for reliability, you can determine if they meet your other parameters)
 
I am looking for an upgrade on my disks ... HGST seems to deliver high-quality disks; however, they are out of my budget, as their price per TB is more than twice that of the Toshiba.

What is your experience with recent(ish) disk purchases (2019, 2020), especially regarding Toshiba and WD? Any recommendations?

I used a number of WD Reds with ZFS and have had only good experiences on half a dozen systems for years. For the past year I've bought only the Seagate 4TB Terascale HDD, because they are very quiet, low-power, and give me very fast performance -- and I can get refurbished units from Amazon for $50 each. They are the best HDDs I've ever used. (Terascale is Seagate's older name for their enterprise 5900rpm drives.) I would rather run the used Terascale enterprise drives than any of the current new drives (that I can afford).

ZFS also runs great on SSDs -- I recently switched from Samsung EVO to SK hynix Gold S31 1TB drives -- they are a little faster than the Samsungs and represent some very good Korean tech. The 1 TB SSD is now about $100.
 
Hm, interesting, I hadn't thought about refurbished disks; increasing the redundancy level at lower cost sounds nice, I might consider that as well.
I haven't bought any disks since 2015 because I was just consolidating old hardware ... but from the time before that I can say I had a mixed bag with Seagate (older, lower-capacity ones were better reliability-wise; not so good an experience with 4 Seagate Archive drives from 2015), bad experiences with Samsung Spinpoints (maybe the reason they do not sell disks any more ;-), and good experiences with WD Red and the WD yellow enterprise storage.
 
In re "bad experiences" the drives of various families from the same company are literally completely different beasts. You're not going to use WD Blues, likewise, I would never use Seagate BarraCuda (no prefix, "DM") drives.
 
In re "bad experiences" the drives of various families from the same company are literally completely different beasts. You're not going to use WD Blues, likewise, I would never use Seagate BarraCuda (no prefix, "DM") drives.
Exactly that. I would never use Seagate BarraCuda and yet am completely happy with
Hm, interesting, I hadn't thought about refurbished disks; increasing the redundancy level at lower cost sounds nice...
A newer, and much faster (7200rpm), enterprise drive is the Seagate Exos 7E8. It has a 2 million hours MTBF. It is made with every nuance of engineering to get high capacity, high speed, AND very long life in cloud data centers (e.g. they are filled with helium gas). A new one is about $140 for 4TB (and 8TB / 12TB are options). I saw Exos 7E8 4TB refurbs from goHardDrive being sold on Amazon for $90. This drive gives incredible throughput -- about 250 MB/s sustained write.
 
And one more point that leaves room for discussion: what is your opinion on helium-filled disks?
Today, they are nearly unavoidable at the higher capacities. So buy them.

A manufacturer generally designs to meet their datasheet and that's it - if a disk survives longer it is "just a nice benefit", ...
No, that's wrong, and way too cynical. Manufacturers first design to meet their legal and financial obligations. So, for example, if a drive has a 5-year warranty, you can be pretty certain that it will last that many years, because the cost to a manufacturer of having to replace a drive (even after 4.9 years) is way too high. Profit margins on disks are razor thin, and customer returns would destroy those margins.

Once a manufacturer has done that, they design their drives to be useful to their customers. Who are their customers? Not you or me. We have to remember that over 90% of all enterprise-grade disks are sold to fewer than a dozen customers (the usual suspects, FAANG and their Chinese counterparts). So Seagate/WD/Toshiba all design disks that the likes of Microsoft, Amazon and Google want to use in-house, by the million. What do the big customers want? Foremost, reliability. While all of them use RAID-like techniques to make sure data isn't lost just because one disk drive fails (or just because a giant data center catches on fire or falls victim to a flood), the cost of having to store redundant information and of replacing disks is very high. Second, the big customers want low cost over the useful life of the disk, including the cost of providing power/cooling/physical space for the disk.

And hidden in that sentence is the key phrase: over the useful life of the disk. Today, disk drives are not used for longer than 5-7 years, because after that the cost of providing power and space for the disk exceeds its utility, and it becomes cheaper to replace it. So I'm quite sure that very few 1TB disks are still in use, and 4TB disks are on their way out.

So to answer your helium question: if you buy a new disk now, filled with helium, you can be quite sure that you will get good service out of it for 5-7 years. If you buy it with a 5-year warranty, it is very likely that it will not fail within that time (again, this is statistics only). After that, you might get lucky, or you might not.

I use older disks as cold/archive storage, which works fine - saving 160 GB of data on six 15-year-old 160 GB SATA drives just to give them some use.
For an amateur who doesn't care about space usage and for whom using computers is a hobby, that's a fine thing to do. Somewhere I have a 40MB Conner SCSI disk that was bought about 35 years ago; I should see whether it still works. I also have a handful of 1GB Falcon/Imprimis-class disks somewhere, which are probably the same vintage. But please don't expect manufacturers to design things so they remain usable after 30+ years; for them that is a waste of money, brains and time.

In re "bad experiences" the drives of various families from the same company are literally completely different beasts.
And this points out the fundamental problem of using the past (experience such as Backblaze's) to predict the future. You cannot extrapolate from disk model 12345, manufactured in 20XY at manufacturing plant ABC, having been very reliable, to other models from the same manufacturer (different technology, different models, different plants) also being reliable. This is why people who study disk reliability for a living (there are dozens of us!) use very fine-grained data, for example tracking which manufacturing location was used for different components. So does this mean that there is absolutely no data about disk drive reliability? To first order, yes.

Here's my personal answer: Look at what manufacturer or model line does consistently well on publicly available high-statistics data, such as Backblaze. Do not listen to anecdotes from individuals (like me), because the plural of "anecdote" is not "data". On the contrary: experiences from individuals tend to be biased and blown out of proportion. If a manufacturer or model line does consistently well, for many years, you can then trust them on average to do better in the future.

Finally: GOOD BACKUPS. Your disks will fail.
 
It very much depends on what you're looking for; I would, however, highly recommend avoiding SMR HDDs and brands that tend to screw around with SMART data.
When it comes to "consumer" 3.5" HDDs, I've found Toshiba to be reliable in general, but keep in mind that all HDDs do die at some point.

Their X300 and N300 are generally good HDDs which tick all the boxes (there are some differences between the two series, however), and they also work fine with HBAs such as LSI SAS2008-based ones, for instance. If you want quieter drives, WD Purple might be of interest; I haven't tested those myself, however. Also be aware that 10TB+ HDDs might have different physical dimensions (height).
 
There is nothing wrong with SMR disks, if one uses them correctly. Throwing traditional usage patterns at SMR disks in a performance critical situation is likely to be frustrating. On the other hand, most users are capacity or price sensitive. And most home users are so far away from being limited by disk performance that the performance impact of SMR doesn't matter.
 
And one more point that leaves room for discussion: what is your opinion on helium-filled disks?
I bought two HGST Ultrastar He12 disks on 2017-10-05, i.e. more than three years ago. Since then, the first one has been running 24/7 in my home server, used mostly for storing multimedia files, but also other generic data (the system itself is on a high-end NVMe SSD). The second one is used as a backup disk. No problems whatsoever (I do not use ZFS on them, though). The exact manufacturer ID is “HUH721212ALE600”. These are 12 TB helium disks with a SATA-III interface, 7200 rpm and PMR, certified for 24/7 duty. They run surprisingly quiet, even though that was not the highest priority for me (my home server sits in the pantry, so noise is not an issue).
Code:
ada1 at ahcich1 bus 0 scbus1 target 0 lun 0
ada1: <HGST HUH721212ALE600 LEGNT3D0> ACS-2 ATA SATA 3.x device
ada1: 600.000MB/s transfers (SATA 3.x, UDMA6, PIO 8192bytes)
ada1: Command Queueing enabled
ada1: 11444224MB (23437770752 512 byte sectors)
I don't think you need to worry about helium. According to the manufacturers, even if the disks do lose some helium, they will just run somewhat slower; they won't fail until the gauge is down to 25 %. Note that you can monitor the value with smartctl(8) (SMART attribute 22). For my HGST disks it is still at 100 % after three years.
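
For example, checking it on the disk shown above (the attribute label may vary with the smartmontools version):
Code:
# attribute 22 reports the remaining helium level; the fail threshold is 25
smartctl -A /dev/ada1 | grep -i helium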

My recommendation is to avoid SMR disks unless you have a specific workload that works well with SMR. Using ZFS excludes such workloads - SMR disks generally don't work very well with copy-on-write (CoW) file systems, which includes ZFS, and resilvering time is measured in days or even weeks with SMR disks, whereas it is a matter of hours for CMR or PMR disks. Note that resilvering time is critical, because a disk failure during this time is “not good”.
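
As a side note, if you ever need to watch a resilver, zpool status reports the progress and an estimated time to completion (pool name tank assumed):
Code:
zpool status tank    # look for the "scan: resilver in progress" line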
 
It's not only a matter of brand and time; your geographical location can also be an influence here.

I live in the Netherlands and I'm not a fan of WD disks, at all. I have simply seen too many of them die in my direct and indirect surroundings, both professionally and within my hobby. My favorite brand is Seagate; I've been using those since I had an XT with IDE disks and didn't know anything about anything just yet ;)

Anyway, that same IDE disk, which I got around the '90s or so, still works today, yet I've also had several WD disks die on me. Especially some of those "WD Book" external drives (though, mentioned in all fairness, I can't rule out the possibility that only the electronics died, because I could still revive some of those disks using a FreeBSD rescue system).

Anyway, point being: I'll take Seagate over WD any day of the week.

Yah... so about that. I have a few friends in the US who have exactly the opposite experience: several Seagate disks died on them, sometimes in the most bizarre ways possible, but WD turned out to be extremely reliable for them.

Therefore I can only conclude that there can be a severe difference between hardware on different continents. These disks are usually shipped in batches, and if there are a few issues, it's not unreasonable to assume that more devices could be affected.
 
There is nothing wrong with SMR disks, if one uses them correctly. Throwing traditional usage patterns at SMR disks in a performance critical situation is likely to be frustrating. On the other hand, most users are capacity or price sensitive. And most home users are so far away from being limited by disk performance that the performance impact of SMR doesn't matter.
True, but they perform horribly with ZFS in general :/
 
ZFS + SMR is not a good show, I agree with that. It might be OK for archival / sequential-write workloads; it might even be pretty darn good for that (because of the log-structured writes, as long as there are few deletions and the cleaner doesn't need to run). On the other hand, many file systems run pretty badly on SMR until you fix them. I think ext4 on SMR is pretty good these days, but I may be biased (since I often talk to the people in charge of ext4, and I met Aghayev at some conference).
 
Hehe, you are so right ... in the end it boils down to having data on disks I probably won't buy because they are old. So after an adequate amount of time I will have personal experience and data about those disks; however, this will again be useless, since that generation of disks will be out of date by the time I buy my next batch 🤣

Anyway, I am replacing a zpool of three disks in one of the servers I manage and will have a few weeks during the burn-in phase to do some tests concerning noise and speed ... I will order the Toshiba Enterprise Capacity 12TB disks - the specs seem fine and the price/performance looks excellent. Then I will at least have a feeling for the noise of the disks and whether I would like them in my workstation, too.
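
I will probably run something simple along these lines (device names are placeholders; smartctl comes from sysutils/smartmontools):
Code:
# SMART long self-test, then inspect the result and the error counters
smartctl -t long /dev/ada1
smartctl -a /dev/ada1
# full sequential read of the whole disk to surface weak sectors
dd if=/dev/ada1 of=/dev/null bs=1m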
 
In a case like yours I would consider buying disks of two different brands. While we don't really know the reliability of current disks, we certainly know that different disks are different - and they would probably not fail at the same moment in a common environment (which is well possible with disks of the same brand and batch).
The downside here is the performance difference. This does not hurt so much in a mirrored setup, where ZFS does some load-leveling; it is more unpleasant in a RAID-Z setup, where a full stripe must always be read from all disks, and any performance difference, however subtle, will always bring a penalty.
 
I am searching for an HDD for a home ZFS NAS and was going to buy the Seagate IronWolf 8TB model until I read a random post on the net that recommended the Seagate Exos. That changed my mind, because it has 60 months of warranty (instead of 36) for only a 5% cost increase (around $16 in local currency).
 
When comparing Seagate and Western Digital, it is weird that I always hear conflicting information from different people on which to choose.
 
I had never considered this: differences between models within a brand, as to failure rates. Personally, I have never had either a Seagate or a WD drive fail, but my use is on a PC only. My NAS has 2 WD Reds with a USB external drive as a backup. The external drive is a Seagate. The only mechanical drive I have ever had fail is a Maxtor, and that was 10-15 years ago. I am 100% SSD/NVMe now, except for my NAS, and I haven't had any of my flash media long enough (< 5 years) to see how it lasts.
 