ZFS Unable to remove log disk

Hello everyone

I moved my ZFS pool from Ubuntu with ZoL over to FreeBSD. This has worked nicely so far.

There was one issue, though: the Linux pool had a log device that isn't available to the new machine, so it showed up as UNAVAIL. Since this is a VM, I gave it a virtual disk and replaced the log device with that.

That worked. Now I would like to replace the virtual log device with a physical SSD; however, the SSD is smaller, and zpool replace refuses a replacement device that is smaller than the original.
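So my plan was roughly this (the new device name is just a placeholder):

Code:
zpool remove storage da1            # drop the virtual log device
zpool add storage log gpt/slog-ssd  # add the smaller SSD as the new log device

Here's the current state of the pool: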

Code:
pool: storage
 state: ONLINE
status: Some supported features are not enabled on the pool. The pool can
   still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
   the pool may no longer be accessible by software that does not support
   the features. See zpool-features(7) for details.
  scan: resilvered 0 in 0h6m with 0 errors on Mon Dec  4 22:46:01 2017
   NAME               STATE     READ WRITE CKSUM
   storage            ONLINE       0     0     0
     mirror-0         ONLINE       0     0     0
       gpt/Row1Slot1  ONLINE       0     0     0
       da4p1          ONLINE       0     0     0
     mirror-1         ONLINE       0     0     0
       da3p1          ONLINE       0     0     0
       da5p1          ONLINE       0     0     0
     mirror-2         ONLINE       0     0     0
       gpt/Row2Slot1  ONLINE       0     0     0
       da8p1          ONLINE       0     0     0
     mirror-3         ONLINE       0     0     0
       da7p1          ONLINE       0     0     0
       da9p1          ONLINE       0     0     0
   logs
     da1              ONLINE       0     0     0
   spares
     gpt/Row6Slot4    AVAIL

errors: No known data errors


When trying to remove the log device, I get an error:

Code:
zpool remove storage da1
cannot remove da1: pool already exists

Now I seem to be reading conflicting things online: both that I'll have to go through the backup, destroy pool, recreate pool routine, and that removing a log device should be possible at any time (which would make sense to me).

I see that FreeBSD's ZFS obviously has features ZoL doesn't. I was afraid of corrupting the data if I upgraded the pool. Do you think that would be a wise course of action, and might it even solve my issue?
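If I did upgrade, I gather it would go something like this (checking first what it would enable):

Code:
zpool upgrade          # list pools whose features are not all enabled
zpool upgrade storage  # enable all supported features on the pool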

Can you help me make sense of this?

Also, as you can see, the naming convention of the vdevs isn't uniform. I would like to get that fixed, especially if you think it might have something to do with my issue, but it's not a priority otherwise.
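From what I've read, I could relabel the plain daXpY partitions and re-import by label; a rough sketch with made-up labels:

Code:
zpool export storage
gpart modify -i 1 -l Row1Slot2 da4   # put a GPT label on partition 1 of da4
zpool import -d /dev/gpt storage     # import using the /dev/gpt labels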

Regards,

Marco
 
What about ignoring the logs thing?
I never had problems sharing ZFS pools created on Linux, or vice versa.

Regarding upgrading: when everything just works smoothly and you know of no actual reason to upgrade, keep in mind that things can then only get worse.
 
As to upgrading, that's kinda my thinking as well.

I'm not positive what you mean by ignoring the log. Just remove the disk and live with the error? That way I won't notice when a disk actually goes bad, because I'll get used to the pool being degraded.
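For instance, a periodic check like the one below (it only reports unhealthy pools) would then cry wolf forever:

Code:
zpool status -x   # prints "all pools are healthy" unless something is wrong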
 
To be more precise: what matters is that I never had problems using pools created by FreeBSD with ZoL.

And all those zfs and zpool commands, including everything regarding error logging, are commonly said to be most complete on FreeBSD (except for those of Solaris and illumos, of course...).
So I do believe you'll lose nothing by ditching that log thing.

I guess that log partition could be some redundant log storage set up by ZoL for some unknown reason, which can be ignored for practical purposes.
But I am a ZFS novice, so I might be wrong.

I'd advise waiting for the coming weekdays, when more ZFS gurus will be online who can probably tell you more.
 
I think you are confusing the ZIL with a logging partition for ZFS errors.

The ZIL is basically a write cache for synchronous writes, and it's not at all unique to ZoL.
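It only absorbs synchronous writes; on a pool with a separate log device you can watch it at work with something like:

Code:
zpool iostat -v storage 5   # per-vdev I/O stats; the log device is listed under "logs"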

And at the moment, there is no ditching without the zpool bitching, and I certainly don't want that :D.
 
Perhaps try removing it using the GUID of the log device (which you can find in the output of zdb)?
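If I remember right, something like this dumps the cached pool configuration, GUIDs included:

Code:
zdb -C storage | less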


Code:
        children[4]:
            type: 'disk'
            id: 4
            guid: 395862493941760413
            path: '/dev/da1'
            whole_disk: 1
            metaslab_array: 49
            metaslab_shift: 27
            ashift: 9
            asize: 107369463808
            is_log: 1
            DTL: 676
            create_txg: 66320
            com.delphix:vdev_zap_leaf: 675
            com.delphix:vdev_zap_top: 662

Code:
zpool remove storage 395862493941760413
cannot remove 395862493941760413: pool already exists


Afraid not...
 
Now, the error message looks a bit weird to me, and it makes me wonder if you ran into a bug or if we're overlooking something here. Just for the record: I never bothered with log devices myself either.

Even so, what happens if you use # zpool offline storage da1 (thanks SirDice) first, optionally followed by the remove command, provided no icky errors show up? (Note: I'm well aware that remove is considered the regular command for this operation.)
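So, roughly:

Code:
zpool offline storage da1   # take the log device offline first
zpool remove storage da1    # then retry the removal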
 
Code:
zpool offline storage da1
cannot offline da1: log device has unplayed intent logs

Do you think this might be due to the VMs running on this pool at the moment? What if I stopped all access to it?
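Or would exporting and re-importing replay the intent log once everything is quiet? Something like:

Code:
zpool export storage   # an export should commit/replay any outstanding intent log
zpool import storage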
 