ZFS replace zpool drive - best strategy?

Hi all,

I have a HP Microserver set up as a simple LAN storage unit running FreeBSD 14.2-RELEASE. It has four Seagate Enterprise 12TB drives configured in a ZFS pool.

One drive appears to have failed earlier today:
Code:
$ zpool status pool.0
  pool: pool.0
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
    Sufficient replicas exist for the pool to continue functioning in a
    degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
    repaired.
  scan: scrub repaired 5.62M in 18:22:33 with 0 errors on Mon Feb 24 21:33:09 2025
config:

    NAME        STATE     READ WRITE CKSUM
    pool.0      DEGRADED     0     0     0
      raidz2-0  DEGRADED     0     0     0
        ada0    ONLINE       0     0     0
        ada1    FAULTED    107     6     0  too many errors
        ada2    ONLINE       0     0     0
        ada3    ONLINE       0     0     0
The company from which I bought the Seagates no longer has this model, but they do have some of these: WD Ultrastar DC HC520 (SATA, 4Kn, SE), 12 TB.

Can this WD (4Kn) 12TB drive simply be swapped in to replace the failing Seagate?

I don't have much experience with zpool drive replacement, so I appreciate any tips and your patience as I get up to speed on these matters. Any suggestions as to how to proceed most welcome.

TIA

Main drive details from smartctl:
Code:
=== START OF INFORMATION SECTION ===
Device Model:     ST12000NM0117-2GY101
User Capacity:    12,000,138,625,024 bytes [12.0 TB]
 
The only problem you can face is if the new 12 TB drive is a few blocks smaller than the old ones; then it will not be possible to use it as a replacement. This is one reason for creating a partition on your drives instead of using the whole drive: you can control the exact size of the partition (well, as long as it is smaller than the drive).
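A rough sketch of what that looks like at pool-creation time, in case it helps; the pool name tank, the gpt labels, and the 11170g size are only placeholders, the point is just that every member gets the same fixed-size partition:
Code:
# Give each drive a GPT scheme and one fixed-size ZFS partition
gpart create -s gpt ada0
gpart add -t freebsd-zfs -a 1m -s 11170g -l disk0 ada0
# ... same for ada1..ada3, with labels disk1..disk3 ...

# Build the pool from the labeled partitions instead of whole disks
zpool create tank raidz2 gpt/disk0 gpt/disk1 gpt/disk2 gpt/disk3
With members like gpt/disk0, a future replacement only has to be big enough for that partition rather than for whatever the original whole drive happened to report. For an existing whole-disk pool like pool.0, the replacement (whole disk or partition on it) still has to be at least as large as the smallest existing member.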
 
Yeah, I've heard that can be a problem, that something like "12 TB" can actually vary slightly in exact capacity. I had not heard of creating partitions first, very clever idea!
 
Searching further, I found an external forum thread about the zpool ashift value. Apparently if it is 9 you cannot mix 512-byte and 4Kn drives in a ZFS pool, but if it is set to 12 it is possible (ashift is the base-2 log of the sector size: 9 means 512-byte sectors, 12 means 4096-byte sectors). So, I query this value:

Code:
$ zpool get ashift
NAME    PROPERTY  VALUE  SOURCE
pool.0  ashift    0      default
zroot   ashift    0      default


As you can see, it returns 0. Anyone have any idea why?
As indicated above, I'm running v14.2
 
Thank you! It is 12, so I guess I can use a 4Kn drive. The question now is, as tingo points out above, whether the replacement drive has at least the same number of sectors as the old one. As he says, if the replacement drive is too small, zpool cannot use it. I wonder if there is any way of looking up the precise size in bytes of a given drive online...

Code:
$ zdb -C | grep ashift
ashift: 12
ashift: 12
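Rather than hunting for exact byte counts online, you can read them off the drives once the new one is connected, and if it is at least as large as the old member the replacement itself is a single command. A sketch, assuming the old drives are ada0-ada3 as above and the new WD shows up as ada4 (that device name is just an assumption):
Code:
# Compare exact sizes: look at the "mediasize in bytes" and
# "mediasize in sectors" lines for an old member and the new drive
diskinfo -v ada0 | grep mediasize
diskinfo -v ada4 | grep mediasize

# If the new drive is at least as large as the old one,
# resilver it in place of the faulted ada1
zpool replace pool.0 ada1 ada4
zpool status pool.0     # watch the resilver progress
smartctl -i shows the same figure as "User Capacity", which is where the 12,000,138,625,024 bytes quoted above comes from.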
 