ZFS: Can't remove failed vdev -- "root pool can not have removed devices, because GRUB does not understand them"?

Greetings!!

Hey, I have a three-way mirrored root pool with a failed device. Fine, I should be able to simply remove the device for now (remotely, without a power cycle) and deal with "it" later.. I've still got a working two-way mirror running, and I recently tested the UEFI system partitions on all three devices, so there should be no reason I can't handle this remotely and without a reboot, right?
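
(For the record, I took the failed disk offline first with something along these lines; the dead device was ada0p3, as the status output below shows:)

sudo zpool offline rpool ada0p3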

[etimberl@pavlevin:~]% zpool status rpool
  pool: rpool
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 0 in 0 days 00:01:13 with 0 errors on Fri Nov 29 14:12:26 2019
config:

        NAME                     STATE     READ WRITE CKSUM
        rpool                    DEGRADED     0     0     0
          mirror-0               DEGRADED     0     0     0
            ada1p3               ONLINE       0     0     0
            5784112419671348060  OFFLINE      0     0     0  was /dev/ada0p3
            ada2p3               ONLINE       0     0     0

errors: No known data errors
[etimberl@pavlevin:~]%


Okay, I've offlined the failed vdev, and now I should just be able to remove it, right?

[etimberl@pavlevin:~]% sudo zpool remove rpool 5784112419671348060
cannot remove 5784112419671348060: root pool can not have removed devices, because GRUB does not understand them
[etimberl@pavlevin:~]%


Really.. GRUB..

My question is basically, "What the heck?" I'm booting via UEFI, thanks. And ZFS, thanks. What part does GRUB play in removing a single vdev from a ZFS pool, exactly?

Thanks in advance for any advice!!

HW is standard x86: ASRock Rack board, i3-6300, onboard SAS, and a stupid 22mm M.2 socket thing (the failed device, of course!) on the native SATA0 channel (aka ada0 under most circumstances)..

-ET-
 
Oh, the issue? RTFM...

I was using "remove" when I should have been using "detach".. Dumbass.. ;-)
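
For anyone else who lands here: "zpool detach" pulls a single disk out of a mirror, while "zpool remove" evacuates an entire top-level vdev, and that's the operation a root pool refuses because (per the error above) GRUB can't cope with what it leaves behind. A rough sketch of the difference (the second pool/vdev name is just a placeholder):

# detach: drop one member of a mirror vdev; the mirror itself stays intact
sudo zpool detach rpool 5784112419671348060

# remove: evacuate a whole top-level vdev -- the thing a root pool won't allow
sudo zpool remove somepool mirror-1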


[etimberl@pavlevin:~]% sudo zpool detach rpool 5784112419671348060
[etimberl@pavlevin:~]% zpool status rpool
  pool: rpool
 state: ONLINE
  scan: scrub repaired 0 in 0 days 00:00:52 with 0 errors on Mon Jan 13 02:45:46 2020
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0

errors: No known data errors
[etimberl@pavlevin:~]%
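
When the replacement M.2 arrives, the plan is to grow this back into a three-way mirror by attaching the new disk to one of the survivors. Something like the following, assuming the new drive shows up as a blank ada0 and gets the same partition layout as the others:

# clone the GPT partition layout from a surviving disk
# (the ESP contents still need to be copied over separately)
gpart backup ada1 | sudo gpart restore ada0

# attach the new third partition to re-form the three-way mirror
sudo zpool attach rpool ada1p3 ada0p3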


Ah, an end to those annoying /etc/periodic/zfs emails warning me about crappy hardware!!
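
(For reference, those mails come from the daily ZFS status check run by periodic(8); if memory serves, it's driven by the usual knob in /etc/periodic.conf:)

# /etc/periodic.conf -- the daily zpool status report that gets mailed to root
daily_status_zfs_enable="YES"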

Thanks all!
 