ZFS error "only inactive hot spares, cache, top-level, or log devices can be removed"

Hi, I accidentally added a disk to a pool and am trying to force remove it.

Code:
        NAME                     STATE     READ WRITE CKSUM
        vault                    UNAVAIL      0     0     0
          mirror-0               ONLINE       0     0     0
            label/disk1          ONLINE       0     0     0
            label/disk2          ONLINE       0     0     0
          mirror-1               ONLINE       0     0     0
            label/disk3          ONLINE       0     0     0
            label/disk4          ONLINE       0     0     0
          mirror-2               DEGRADED     0     0     0
            label/disk5          ONLINE       0     0     0
            3100350294457416364  OFFLINE      0     0     0  was /dev/label/disk6
          15419466379299849962   REMOVED      0     0     0  was /dev/label/disk7


cannot remove 15419466379299849962: only inactive hot spares, cache, top-level, or log devices can be removed

Would anyone happen to have a suggestion on how to remove it and clean up without rebooting the system?

I tried zpool export and import, but it's not working.
 
You can't remove top-level vdevs from a pool; you'll have to recreate the pool from scratch.
If the device were part of a mirror, it could be detached. But with the post's formatting lost, it was impossible to tell.

Edit: there is formatting now. Odd.
It's a top level vdev and you cannot remove it. Backup, recreate, and restore. You've hit one of the major pain points for ZFS.
 
I managed to get the device back, but now I think I added the disk as a top-level vdev.

Code:
  pool: vault
 state: DEGRADED
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 0 in 4h7m with 0 errors on Fri Jun 19 13:43:46 2015
config:

        NAME                     STATE     READ WRITE CKSUM
        vault                    DEGRADED     0     0     0
          mirror-0               ONLINE       0     0     1
            label/disk1          ONLINE       0     0     1
            label/disk2          ONLINE       0     0     1
          mirror-1               ONLINE       0     0     0
            label/disk3          ONLINE       0     0     0
            label/disk4          ONLINE       0     0     0
          mirror-2               DEGRADED     0     0     0
            label/disk5          ONLINE       0     0     0
            3100350294457416364  OFFLINE      0     0     0  was /dev/label/disk6
          label/disk7            ONLINE       0     0     0
 
As it stands, you cannot remove label/disk7 anymore without destroying the entire pool.
 
I wound up just rebuilding the pool and restoring from backup.

Is there a correct way to limit the amount of bandwidth or resources that zxfer uses when moving data? In my restore process I am using zxfer to move about 20 TB of data from another backup location to the newly rebuilt ZFS pool. The destination server keeps running out of buffer space.
 
You needed a zpool replace pool old_device new_device. Unfortunately you are now looking at backup / recreate / restore.

When dealing with mirrors, it's much better to get in the habit of using attach and detach instead of add, replace, and remove. You can attach multiple disks to a mirror vdev; they just expand the vdev to n-way mirroring. Attaching a drive to a single-drive vdev will convert it to a mirror vdev as well. Detaching a drive from a mirror will either shrink the mirror or convert it to a single-drive vdev.
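A quick sketch of the distinction (pool and device names here are illustrative, borrowed from the OP's layout; these commands require a live pool, so treat this as a reference, not something to paste blindly):

```shell
# attach: grow a mirror, or turn a single-disk vdev into a mirror
zpool attach vault label/disk7 label/disk8   # disk7's vdev becomes a mirror of disk7+disk8

# detach: shrink a mirror; a 2-way mirror becomes a single-disk vdev
zpool detach vault label/disk8

# add (the dangerous one in this thread): creates a NEW top-level vdev,
# which cannot be removed afterwards
# zpool add vault label/disk7
```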

Replacing not-quite-dead drives in a mirror is much nicer when you attach the new drive first (move from 2-way mirror to 3-way mirror), wait for the resilver to complete, then detach the old drive. That way, you never actually lose redundancy in the vdev. And it can read from both drives to find valid data for the resilver to the new drive. Attach/detach make working with mirror vdevs so much nicer than raidz vdevs. :)
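The attach-first workflow described above looks roughly like this (label/old and label/new are hypothetical device names):

```shell
# Grow the 2-way mirror to a 3-way mirror; resilvering starts automatically
zpool attach vault label/old label/new

# Watch until the resilver completes
zpool status vault

# Only then drop the failing drive; redundancy was never lost
zpool detach vault label/old
```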

To the OP: if you have room for another 2 drives in the chassis, you can attach another drive to label/disk7 to create a fourth mirror vdev. Then attach another drive to label/disk5 to convert it to a 3-way mirror. Then detach 3100350294457416364 to drop it back down to a 2-way mirror.
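Assuming the two new drives show up as label/disk8 and label/disk9 (names invented for illustration), the suggestion above amounts to:

```shell
# Turn the accidental top-level vdev (label/disk7) into a proper mirror
zpool attach vault label/disk7 label/disk8

# Grow mirror-2 to a 3-way mirror alongside the OFFLINE member
zpool attach vault label/disk5 label/disk9

# After the resilvers finish, drop the dead disk from mirror-2
zpool detach vault 3100350294457416364
```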

Otherwise, you'll have to destroy the pool and recreate it using just the 6 drives.
 
When dealing with mirrors, it's much better to get in the habit of using attach and detach instead of add, replace, and remove.

According to zpool(8), zpool replace "Replaces old_device with new_device. This is equivalent to attaching new_device, waiting for it to resilver, and then detaching old_device."

Assuming the man page is correct, I'd stick with zpool replace rather than doing the same steps manually with more chances for human error. The same verbiage is in the Illumos, Solaris, and ZoL man pages, so hopefully it's correct.
 