ZFS replace zpool drive - best strategy?

Hi all,

I have a HP Microserver set up as a simple LAN storage unit running FreeBSD 14.2-RELEASE. It has four Seagate Enterprise 12TB drives configured in a ZFS pool.

One drive appears to have failed earlier today:
Code:
$ zpool status pool.0
  pool: pool.0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 5.62M in 18:22:33 with 0 errors on Mon Feb 24 21:33:09 2025
config:

    NAME        STATE     READ WRITE CKSUM
    pool.0      DEGRADED     0     0     0
      raidz2-0  DEGRADED     0     0     0
        ada0    ONLINE       0     0     0
        ada1    FAULTED    107     6     0  too many errors
        ada2    ONLINE       0     0     0
        ada3    ONLINE       0     0     0
The company from which I bought the Seagates no longer stocks this model, but they do have some of these: WD Ultrastar DC HC520 (SATA, 4Kn, SE), 12TB.

Can this WD (4Kn) 12TB drive simply be swapped in to replace the failing Seagate?

I don't have much experience with zpool drive replacement, so I appreciate any tips and your patience as I get up to speed on these matters. Any suggestions as to how to proceed most welcome.

TIA

Main drive details from smartctl:
Code:
=== START OF INFORMATION SECTION ===
Device Model:     ST12000NM0117-2GY101
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
 
The only problem you might face is if the new 12TB drive is a few blocks smaller than the old ones; then it will not be possible to use it as a replacement. This is one reason for creating a partition on your drives instead of using the whole drive: you can control the exact size of the partition (well, as long as it is smaller than the drive).
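A minimal sketch of that approach, assuming the replacement shows up as ada4 (the device name, label, and partition size here are illustrative, not from this thread). The partition is deliberately sized a few GiB below the raw ~11176 GiB capacity so a future drive of the same nominal size can hold an identical partition:

```shell
# Hypothetical replacement disk at ada4; label and size are examples.
# Leave headroom below the raw 12TB capacity so any slightly smaller
# "12TB" drive can still hold an identical partition later.
gpart create -s gpt ada4
gpart add -t freebsd-zfs -a 1m -s 11170g -l rz2-disk4 ada4

# Then replace the faulted member with the labeled partition:
zpool replace pool.0 ada1 gpt/rz2-disk4
```

This requires a working pool and a spare disk, so it is a procedure sketch rather than something to run as-is.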
 
Yeah, I've heard that can be a problem: two drives both sold as "12TB" can vary slightly in exact capacity. I had not heard of creating partitions first; very clever idea!
 
Searching further, I found an external forum thread about the zpool ashift value. Apparently if it is 9, you cannot mix 512-byte and 4Kn drives in a ZFS pool; if it is set to 12, it is possible. So, I queried this value:
Code:
$ zpool get ashift
NAME    PROPERTY  VALUE  SOURCE
pool.0  ashift    0      default
zroot   ashift    0      default

As you can see, it returns 0. Anyone have any idea why?
As indicated above, I'm running 14.2-RELEASE.
 
Thank you! It is 12, so I guess I can use a 4Kn drive. The question now is, as tingo points out above, whether the replacement drive would have at least the same number of sectors. As he says, if the replacement drive is even slightly too small, zpool cannot use it. I wonder if there is any way of looking up the precise size in bytes of a given drive online...

Code:
$ zdb -C | grep ashift
ashift: 12
ashift: 12
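If a candidate drive can be attached locally before committing to it, FreeBSD will report the exact byte capacity and sector sizes directly (ada4 here is a hypothetical device name, not one from this pool):

```shell
# Exact capacity in bytes, plus logical and physical sector sizes:
diskinfo -v /dev/ada4

# smartctl prints the same "User Capacity" figure shown earlier:
smartctl -i /dev/ada4
```

These commands need real hardware attached, so this is a sketch of the check rather than something to run as-is.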
 
I've heard it said that the major manufacturers have an agreement to build to standard sizes these days for NAS drives. Certainly my Seagate and Toshiba 8TB drives were exactly the same size.

I still partitioned though, to get swap partitions, and because some sort of label is useful.
 
I've heard it said that the major manufacturers have an agreement to build to standard sizes these days for NAS drives. Certainly my Seagate and Toshiba 8TB drives were exactly the same size.

Interesting! I asked ChatGPT about this:
Hard disk manufacturers have not standardized the exact usable capacity of their drives across different models or brands, even in 2025. While there is some consistency within a single product line or manufacturer, there are still variations in the actual usable capacity of drives due to differences in firmware, formatting, and how manufacturers calculate storage capacity.
[...]

Conclusion:

Hard disk manufacturers have not fully standardized drive capacities to make them universally interchangeable in RAID setups. However, drives of the same advertised capacity are often close enough in size to work together, especially if they are from the same manufacturer or product line. For best results in RAID configurations, it is still recommended to use identical drives from the same manufacturer and model series.
So I guess it hasn't completely caught on. Oh well.

I still partitioned though, to get swap partitions, and because some sort of label is useful.
Sounds like a good strategy. Won't help me in my present situation, but next time I set up a zpool I will do this too.
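For reference, a sketch of that layout at pool-creation time (all device names, sizes, and labels are illustrative, not taken from this thread):

```shell
# Per-disk layout: a small swap partition plus a ZFS partition sized
# below the raw capacity, with GPT labels for readable zpool output.
gpart create -s gpt ada0
gpart add -t freebsd-swap -a 1m -s 8g     -l swap0 ada0
gpart add -t freebsd-zfs  -a 1m -s 11170g -l disk0 ada0

# Repeat for ada1..ada3, then build the pool from the labels:
zpool create pool.0 raidz2 gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3
```

The undersized ZFS partition is what makes a future not-quite-identical "12TB" drive usable as a replacement.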
 
I had a look at the detailed specifications for the WD Ultrastar® DC HC520.

The exact capacity depends on how the drive is low-level formatted. Page 23 has the main details:
  • Worst case is 11,756,399,230,976 bytes.
  • Best case is 12,000,138,625,024 bytes.
The best case would work for you. However, I could not figure out the default formatting.

Will your supplier advise the exact capacity, as determined by smartctl(8) or similar (any disk formatter would do it)?

Otherwise you could post a new thread, "Exact Capacity of WD Ultrastar DC HC520", asking owners to report what their drives show?
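For scale, the gap between those two formats works out as follows (plain shell arithmetic, using the Seagate capacity from smartctl and the HC520 datasheet figures quoted above):

```shell
# Capacity of the existing Seagate (from smartctl) and the two
# HC520 low-level formats from the datasheet.
seagate=12000138625024
wd_best=12000138625024
wd_worst=11756399230976

# Best case matches the Seagate byte-for-byte; worst case falls short.
echo "shortfall: $(( seagate - wd_worst )) bytes"            # prints: shortfall: 243739394048 bytes
echo "that is $(( (seagate - wd_worst) / 1000000000 )) GB"   # prints: that is 243 GB
```

So the worst-case format is not "a few blocks" short but roughly 243 GB short, which is why pinning down the drive's actual formatting matters.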
 