ZFS zpool attach no such pool or dataset

Hello!
Yesterday one of my pools, a mirror of two 2TB disks, became degraded when one disk failed. I replaced the failed disk with a 4TB disk (ada3). Now I want to add another 4TB disk (ada1) to grow the mirror to 4TB. Unfortunately, instead of just attaching ada1 to the existing mirror, I stupidly detached the remaining 2TB disk (ada2) from the pool. Here is the current status:
Code:
# zpool status
  pool: vm
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
    still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
    the pool may no longer be accessible by software that does not support
    the features. See zpool-features(7) for details.
  scan: resilvered 1.10T in 7h1m with 0 errors on Tue Nov 20 01:37:50 2018
config:

    NAME        STATE     READ WRITE CKSUM
    vm          ONLINE       0     0     0
      ada3      ONLINE       0     0     0

errors: No known data errors

Now, when I try to attach ada1 to the pool, zpool says:

Code:
# zpool attach vm ada3 ada1
cannot attach ada1 to ada3: no such pool or dataset

I tried specifying the absolute path to the device (/dev/ada3, /dev/ada1) with no luck.

How can I correctly attach ada1 to the pool to create a mirror?
Do I correctly understand that if I add ada1 to the pool like so
Code:
zpool add vm mirror ada3 ada1
zpool will create another mirror and the existing data on ada3 will be destroyed?
ada1 is a brand new disk with nothing on it.
Thank you.
 
IIRC, to convert a single-disk pool into a mirror you only specify zpool attach <poolname> <newprovider>.

Try that using the full path (/dev/gpt/<label>). You should also use GPT labels whenever possible; this simplifies maintenance _a lot_, because the generic disk/partition names (adaX) can change between boots.
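
For example, partitioning the new disk with a GPT label might look like this (the label name vm-disk1 is arbitrary; pick whatever fits your naming scheme):

Code:
# create a GPT scheme on the new disk and add one labeled ZFS partition
gpart create -s gpt ada1
gpart add -t freebsd-zfs -l vm-disk1 ada1
# attach by the stable label path instead of the adaX name
zpool attach vm ada3 /dev/gpt/vm-disk1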

zpool attach can always be reverted with zpool detach, so you can trial-and-error with relatively low risk. With zpool add, however, you should always use the -n switch first, to see what the command would actually do to your pool. Removing a top-level vdev is not yet supported, so you'd be stuck with e.g. a striped pool and would have to re-create the pool and send|receive the data back.
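
For example, before running the add command you're considering, a dry run would look like this (-n only prints the resulting layout, nothing is changed):

Code:
# preview what would happen; note this would add ada1 as a second
# top-level vdev (a stripe), NOT as a mirror of ada3
zpool add -n vm ada1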


edit:
I just remembered having fiddled with this not long ago on my desktop: you have to use the <device> as shown in the zpool list output (-> ada3), and the <new_device> must be specified using the full path.
So the full command that should work for you is zpool attach vm ada3 /dev/ada1
 
It's not obvious why attach isn't working, but stay away from add. If you're not careful you will end up creating a stripe across the two 4TB disks, and you'll need to re-create the pool to undo it.
 
One thing you could try is zdb -l /dev/ada3, which should show the ZFS label metadata on the disk. You should hopefully see something like the following, listing the details of the disk.

Code:
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 866417235753756636
        path: '/dev/md0'
        whole_disk: 1
        metaslab_array: 37
        metaslab_shift: 24
        ashift: 9
        asize: 129499136
        is_log: 0
        create_txg: 4
Then try running the following:

Code:
# zpool attach vm {guid_from_above} new_disk
 
There are two issues to address here: the first is of course the degraded pool, but the other is that your pool's feature flags are out of date for the FreeBSD version you're running. That can cause issues as well and should also be fixed. Keep in mind that if you upgrade a pool, it's safest to also re-install the bootcode using gpart bootcode.
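
If you do upgrade, the commands would look something like this (assuming a GPT scheme booting via gptzfsboot; adjust the -i partition index to wherever your freebsd-boot partition actually sits):

Code:
# enable all supported feature flags on the pool
zpool upgrade vm
# re-install the protective MBR and the ZFS boot code
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3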

I'd start there, and when that is fixed you can concentrate on attaching the replacement device.

You might also want to use -v when checking the pool's status; it often includes extra information that can be useful.
 
Thank you all for your advice. I was able to attach the new disk by booting into single-user mode and running the same attach command. The order in which the disks appeared in the system was the same. Using attach with the guid gave the same error. I still don't understand why the command would not work in multi-user mode.
 