ZFS: Used 'add' instead of 'replace' when replacing a failing disk drive. Need a fix

I mistakenly used add instead of replace in a raidz2 pool:
Code:
# zpool add zroot /dev/ada3
invalid vdev specification
use '-f' to override the following errors:
mismatched replication level: pool uses raidz and new vdev is disk
# zpool add -f zroot /dev/ada3

How do I correct this error and have /dev/ada3 replace the removed ada3p4, which is what I should have done to begin with?
 
Your pool right now is made up of a RAIDZ and a disk without redundancy (a stripe); that is, if that disk fails, your pool will be destroyed.

mismatched replication level: pool uses raidz and new vdev is disk

Post the output of the following command:

zpool status zroot

Edit:

Right now that disk (stripe) cannot be removed since it is part of the pool.
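
To illustrate the difference: zpool add tacked ada3 onto the pool as a new, non-redundant top-level vdev, while zpool replace would have resilvered it into the existing raidz2 vdev in place of the failing member. A minimal sketch, where the first command is what was run and the second is what was intended (adaXp4 is a placeholder for the failing member, which we won't know until the status output is posted):

Code:
# zpool add zroot /dev/ada3
# zpool replace zroot adaXp4 /dev/ada3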
 
# zpool status zroot
pool: zroot
state: DEGRADED
status: One or more devices has been removed by the administrator.
Sufficient replicas exist for the pool to continue functioning in a
degraded state.
action: Online the device using 'zpool online' or replace the device with
'zpool replace'.
scan: scrub repaired 0B in 09:47:08 with 0 errors on Sat Apr 6 12:32:08 2024
config:

NAME          STATE     READ WRITE CKSUM
zroot         DEGRADED     0     0     0
  raidz2-0    DEGRADED     0     0     0
    ada1p4    ONLINE       0     0     0
    ada0p4    ONLINE       0     0     0
    ada2p4    ONLINE       0     0     0
    ada3p4    REMOVED      0     0     0
  ada3        ONLINE       0     0     0

errors: No known data errors
 
To be fair, it did try to warn you:

mismatched replication level: pool uses raidz and new vdev is disk

Any time zfs/zpool requires -f to do something, it's worthwhile to take a beat and make sure you're doing what you intended.

That said, zfs send/recv is your friend now, assuming you have space elsewhere to store a copy of the pool.
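
For example, a rough sketch of a full backup with send/recv, assuming a second pool (here called backup, just a placeholder) with enough free space:

Code:
# zfs snapshot -r zroot@migrate
# zfs send -R zroot@migrate | zfs receive -u backup/zroot-copy

The -R flag includes all descendant datasets, snapshots and properties; -u keeps the received copies from being mounted over your live filesystems.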
 
As other users mentioned, also check how much space your pool is using, then create a backup and recreate the pool.

zpool list -v zroot
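
For a more detailed view of what actually has to fit on the backup media, something like this shows the overall pool usage and the per-dataset space breakdown:

Code:
# zpool list -o name,size,alloc,free,health zroot
# zfs list -o space -r zroot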
 
I mistakenly used add instead of replace in a raidz2 pool:
That's why I carry out extensive simulations in a VM (VirtualBox) before I run commands on (home) production systems, sometimes even with commands I'm familiar with, just to make sure. Better safe than sorry. VMs are free; you can create (and destroy) any setup in a few minutes.
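
You don't even need a full VM for rehearsing pool surgery: a throwaway pool on file-backed vdevs does the job, too. A quick sketch (paths and sizes are arbitrary):

Code:
# truncate -s 1g /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3 /tmp/d4
# zpool create testpool raidz2 /tmp/d0 /tmp/d1 /tmp/d2 /tmp/d3
# zpool replace testpool /tmp/d3 /tmp/d4
# zpool destroy testpool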
 
zpool list -v zroot
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot        13.3T  2.43T  10.9T        -         -    41%    18%  1.00x  DEGRADED  -
  raidz2-0   10.6T  2.43T  8.19T        -         -    52%  22.9%      -  DEGRADED
    ada1p4       -      -      -        -         -      -      -      -    ONLINE
    ada0p4       -      -      -        -         -      -      -      -    ONLINE
    ada2p4       -      -      -        -         -      -      -      -    ONLINE
    ada3p4       -      -      -        -         -      -      -      -   REMOVED
  ada3       2.72T  19.1M  2.72T        -         -     0%  0.00%      -    ONLINE
 
Unfortunately, you will have to destroy and recreate the pool and fill it again from a backup.
There is no information about which FreeBSD version byrnejb is running, but recent ZFS has device removal / evacuation support.
It should be possible to remove ada3, and it should not cause much trouble if not much data has been written since the disk was added.
Then the disk can be cleared and used to replace the original removed disk.
See some details, for example, here https://www.illumos.org/issues/7614
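
If the installed ZFS version does support removal for this pool layout (see the restrictions discussed further down in the thread), the recovery could look roughly like this, as a sketch only: remove ada3, watch the evacuation finish with zpool status, clear the old label, then replace the removed member. If the new disk should carry the same partition layout as the others, repartition it with gpart and use ada3p4 in the replace instead of the whole disk:

Code:
# zpool remove zroot ada3
# zpool status zroot
# zpool labelclear -f /dev/ada3
# zpool replace zroot ada3p4 /dev/ada3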
 
There is no information about which FreeBSD version byrnejb is running, but recent ZFS has device removal / evacuation support.
It should be possible to remove ada3, and it should not cause much trouble if not much data has been written since the disk was added.
Then the disk can be cleared and used to replace the original removed disk.
See some details, for example, here https://www.illumos.org/issues/7614
zfs version
zfs-2.1.4-FreeBSD_g52bad4f23
zfs-kmod-2.1.4-FreeBSD_g52bad4f23
 
[...] recent ZFS has device removal / evacuation support.
It should be possible to remove ada3 and it should not create much trouble if there hasn't been much data written since adding the disk.
[...] See some details, for example, here https://www.illumos.org/issues/7614
As I read the Illumos Description, it seems to suggest that you're right. However, as you noted in your directly following message, the text sets clearly more restrictive limits; Matt Ahrens confirms this and proposes extending device removal to RAIDZ in device removal for RAID-Z #9013. It looks like the current implementation is geared towards "easy"/efficient redistribution of data: no RAIDZ top-level vdevs, and all vdevs must have equal ashift (= sector size) (see also: zpool cannot remove vdev #14312), specifically mentioned at 7:40 (vdev_removal.c) and slides 12 & 13 of:
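
For anyone wanting to check the equal-ashift condition on their own pool, the per-vdev ashift can be read from the pool configuration (output format varies a bit between versions):

Code:
# zdb -C zroot | grep ashift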
 