ZFS drive cannot be removed


New Member


So I have been playing around with ZFS on my test computer before changing all my servers to ZFS.

Currently the boot device is nvd1p4.
gpart list nvd1
1. Name: nvd1p1
   Mediasize: 209715200 (200M)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   efimedia: HD(1,GPT,ac10b120-37c0-11eb-9882-7085c25db4b0,0x28,0x64000)
   rawuuid: ac10b120-37c0-11eb-9882-7085c25db4b0
   rawtype: c12a7328-f81f-11d2-ba4b-00a0c93ec93b
   label: efiboot0
   length: 209715200
   offset: 20480
   type: efi
   index: 1
   end: 409639
   start: 40
2. Name: nvd1p2
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 209735680
   Mode: r0w0e0
   efimedia: HD(2,GPT,ac19cd27-37c0-11eb-9882-7085c25db4b0,0x64028,0x400)
   rawuuid: ac19cd27-37c0-11eb-9882-7085c25db4b0
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 209735680
   type: freebsd-boot
   index: 2
   end: 410663
   start: 409640
3. Name: nvd1p3
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 210763776
   Mode: r1w1e0
   efimedia: HD(3,GPT,ac1f9718-37c0-11eb-9882-7085c25db4b0,0x64800,0x400000)
   rawuuid: ac1f9718-37c0-11eb-9882-7085c25db4b0
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: swap0
   length: 2147483648
   offset: 210763776
   type: freebsd-swap
   index: 3
   end: 4605951
   start: 411648
4. Name: nvd1p4
   Mediasize: 497749590016 (464G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 2358247424
   Mode: r1w1e1
   efimedia: HD(4,GPT,ac23c64c-37c0-11eb-9882-7085c25db4b0,0x464800,0x39f21800)
   rawuuid: ac23c64c-37c0-11eb-9882-7085c25db4b0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 497749590016
   offset: 2358247424
   type: freebsd-zfs
   index: 4
   end: 976773119
   start: 4605952
1. Name: nvd1
   Mediasize: 500107862016 (466G)
   Sectorsize: 512
   Mode: r2w2e3

gpart list nvd0

Geom name: nvd0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 976773127
first: 40
entries: 128
scheme: GPT
1. Name: nvd0p1
   Mediasize: 500107821056 (466G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r1w1e1
   efimedia: HD(1,GPT,47a22f3a-3f49-11eb-bf29-7085c25db4b0,0x28,0x3a385fe0)
   rawuuid: 47a22f3a-3f49-11eb-bf29-7085c25db4b0
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 500107821056
   offset: 20480
   type: freebsd-zfs
   index: 1
   end: 976773127
   start: 40
1. Name: nvd0
   Mediasize: 500107862016 (466G)
   Sectorsize: 512
   Mode: r1w1e2

I decided to do some simple testing by adding nvd0 to the nvd1 zpool:
zpool add zroot_nvme_1 nvd0p1

I did not try RAID or mirror because I wanted a simple test of adding a device to the zpool.
Unfortunately, after the add I cannot remove the drive:

zpool remove zroot_nvme_1 nvd0p1
cannot remove nvd0p1: invalid config; all top-level vdevs must have the same sector size and not be raidz.
nvd1p4 460G 38.4G 422G - - 4% 8%
nvd0p1 464G 63.6M 464G - - 0% 0%
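The "sector size" in that error message refers to each top-level vdev's ashift (log2 of the sector size ZFS chose when the vdev was created). A hedged way to inspect it, assuming the pool name from above, plus a small illustration of the ashift-to-sector-size relationship:

```shell
# Hypothetical check on the live system (requires the pool to exist):
#   zdb -C zroot_nvme_1 | grep ashift
# Device removal requires every top-level vdev to report the same ashift.
# ashift is log2 of the sector size, so the two common values differ:
for ashift in 9 12; do
  printf 'ashift=%d -> %d-byte sectors\n' "$ashift" $((1 << ashift))
done
```

If one vdev was created with ashift=9 (512-byte sectors) and the other with ashift=12 (4K sectors), `zpool remove` refuses with exactly this message even though device_removal is enabled.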

zpool upgrade -v
async_destroy (read-only compatible)
     Destroy filesystems asynchronously.
empty_bpobj (read-only compatible)
     Snapshots use less space.
lz4_compress
     LZ4 compression algorithm support.
multi_vdev_crash_dump
     Crash dumps to multiple vdev pools.
spacemap_histogram (read-only compatible)
     Spacemaps maintain space histograms.
enabled_txg (read-only compatible)
     Record txg at which a feature is enabled
hole_birth
     Retain hole birth txg for more precise zfs send
extensible_dataset
     Enhanced dataset functionality, used by other features.
embedded_data
     Blocks which compress very well use even less space.
bookmarks (read-only compatible)
     "zfs bookmark" command
filesystem_limits (read-only compatible)
     Filesystem and snapshot limits.
large_blocks
     Support for blocks larger than 128KB.
large_dnode
     Variable on-disk size of dnodes.
sha512
     SHA-512/256 hash algorithm.
skein
     Skein hash algorithm.
device_removal
     Top-level vdevs can be removed, reducing logical pool size.
obsolete_counts (read-only compatible)
     Reduce memory used by removed devices when their blocks are freed or remapped.
zpool_checkpoint (read-only compatible)
     Pool state can be checkpointed, allowing rewind later.
spacemap_v2 (read-only compatible)
     Space maps representing large segments are more efficient.


My question is: why can't I remove it? Do I really need to transfer the zvol to another drive, destroy the zpool, and recreate it as two independent zpools?

Any information based on past experience would be great.

The feature list above shows device_removal is supported on this system. Wasn't that feature implemented for exactly my test case, which is not a mirror, cache, hot spare, or raidz? Thank you in advance.



New Member


I have a similar issue:
root@rpi-4b:~ # zpool status -v
  pool: tank
 state: ONLINE
  scan: scrub in progress since Mon Nov 15 12:08:22 2021
        771G scanned at 1.23G/s, 507G issued at 830M/s, 16.3T total
        0B repaired, 3.04% done, 05:33:11 to go
remove: Removal of vdev 4 copied 5.13T in 28h18m, completed on Sat Nov 13 16:35:58 2021
    8.23M memory used for removed device mappings

        NAME                        STATE     READ WRITE CKSUM
        tank                        ONLINE       0     0     0
          diskid/DISK-740200010937  ONLINE       0     0     0
          diskid/DISK-NA761XBB      ONLINE       0     0     0
          da1                       ONLINE       0     0     0

errors: No known data errors
root@rpi-4b:~ # zpool remove tank diskid/DISK-NA761XBB
cannot remove diskid/DISK-NA761XBB: invalid config; all top-level vdevs must have the same sector size and not be raidz.


Aspiring Daemon


If I remember correctly, both of you have added devices in a striped configuration. Basically you've concatenated all the devices to look like one bigger device. Removing a device from that configuration breaks things because it's not RAID or a mirror, so I don't think you should be able to do that (at least not easily).

As for the wording of the error message, it sounds like it's not really saying what it should.

Alain De Vos



I think you cannot remove a device unless the device_removal feature flag is enabled on the pool.
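A hedged way to check whether that flag is actually enabled on the pool. The zpool get command is real but needs the live pool, so it is shown as a comment; the parsing below runs against a sample of the line it prints:

```shell
# On the live system (pool name from the thread) you would run:
#   zpool get feature@device_removal zroot_nvme_1
# Sample of the line it prints; extract the VALUE column:
sample='zroot_nvme_1  feature@device_removal  enabled  local'
state=$(echo "$sample" | awk '{print $3}')
echo "device_removal state: $state"  # "enabled" or "active" means removal is available
```

Note that even with the flag enabled, removal still fails if the top-level vdevs have mismatched ashift values, which is what the error message is complaining about.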


New Member


I have been using the system with this configuration for 2+ years, but for anyone looking for a solution: essentially, install the zvol/repos onto a new drive, zpool import the old RAID drives, and then just nuke it (zpool destroy) and rebuild. I did this with my SSD and NVMe: I simply installed the zvol onto the SSD and afterwards ran zpool destroy on the NVMe raidz (tested on a second PC). Anyway, it was a rookie mistake. It makes sense, at least to me, why this method avoids corrupting the raidz metadata.
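The rebuild described above can be sketched as a snapshot, send/receive, destroy sequence. This is a hedged sketch, not the poster's exact commands: the destination pool name and snapshot name are illustrative, and DRYRUN=echo only prints each command so nothing destructive runs as written.

```shell
# Sketch of migrating data off a pool before destroying it.
# SRC is the pool from the thread; DST and @migrate are illustrative.
DRYRUN=echo
SRC=zroot_nvme_1
DST=newpool
$DRYRUN zfs snapshot -r "$SRC@migrate"                           # snapshot all datasets
$DRYRUN sh -c "zfs send -R $SRC@migrate | zfs receive -F $DST"   # replicate to the new pool
$DRYRUN zpool destroy "$SRC"                                     # then nuke the old pool
```

Drop `DRYRUN=echo` (set `DRYRUN=`) only after verifying the printed commands against your own pool and dataset names.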