ZFS zpool: replace two devices with one

I have a real zpool with data on it, but I want to test first, so I created a simple zpool backed by files:
# zpool status -v
  pool: ztst
 state: ONLINE
  scan: resilvered 19.5K in 0h0m with 0 errors on Thu Mar 14 14:44:20 2019
config:

        NAME                  STATE     READ WRITE CKSUM
        ztst                  ONLINE       0     0     0
          /home/u/zfs/zpool1  ONLINE       0     0     0
          /home/u/zfs/zpool2  ONLINE       0     0     0
          /home/u/zfs/zpool3  ONLINE       0     0     0
          /home/u/zfs/zpool5  ONLINE       0     0     0

errors: No known data errors

I want to replace zpool3 and zpool5 with a single, bigger one.
Is that possible?
 
I can't think of any way you could do this. Also I'd only use a striped pool for testing. I'd advise using mirrors or raidz if you're actually storing any real data.
 
In theory, the latest version of ZFS in FreeBSD supports removing top-level vdevs. I have not used or tested it, and have no information beyond that. But if you have enough free space in the pool, you should, in theory, be able to add the larger "disk" and remove the other two.
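Something along these lines, perhaps (completely untested; zpool_big is a made-up name for a larger backing file):
Bash:
# untested sketch -- assumes the pool has enough free space left to
# evacuate both old vdevs; zpool_big is a hypothetical name
truncate -s 256M /home/u/zfs/zpool_big
zpool add ztst /home/u/zfs/zpool_big    # add the bigger "disk" as a new top-level vdev
zpool remove ztst /home/u/zfs/zpool3    # copies its data to the other vdevs, then detaches
zpool remove ztst /home/u/zfs/zpool5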

Since you're just playing around and learning, maybe you can be the guinea pig that tests it for us? ;) :) :D
 
It did not work in my test:
Bash:
~ % cd /tmp
/tmp % touch d1
/tmp % touch d2
/tmp % touch d3
/tmp % truncate -s 64M d1
/tmp % truncate -s 64M d2
/tmp % truncate -s 64M d3
/tmp % sudo zpool create tst /tmp/d1
/tmp % zpool status tst
  pool: tst
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tst         ONLINE       0     0     0
          /tmp/d1   ONLINE       0     0     0

errors: No known data errors
/tmp % sudo zpool add tst /tmp/d2   
/tmp % sudo zpool add tst /tmp/d3
/tmp % zpool status tst         
  pool: tst
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tst         ONLINE       0     0     0
          /tmp/d1   ONLINE       0     0     0
          /tmp/d2   ONLINE       0     0     0
          /tmp/d3   ONLINE       0     0     0

errors: No known data errors
/tmp % zpool list tst
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
tst    144M   580K   143M        -         -     3%     0%  1.00x  ONLINE  -
/tmp % sudo zpool remove tst /tmp/d3
cannot remove /tmp/d3: out of space
/tmp % sudo zpool remove -n tst /tmp/d3
Memory that will be used after removing /tmp/d3: 24
/tmp % sudo zpool remove tst /tmp/d1   
cannot remove /tmp/d1: out of space
/tmp % sudo zpool remove tst /tmp/d2
cannot remove /tmp/d2: out of space
 
You have created a RAID0 (striped) configuration, from which you will never be able to remove a device.
The zpool remove feature (IIRC) is only available for pools created in a raidz configuration.
To create a raidz pool you need to use the following command:

zpool create tst raidz1 /tmp/d1 /tmp/d2 /tmp/d3
 
RAIDZ does not work either:
Bash:
 /tmp % sudo zpool create tst raidz1 /tmp/d1 /tmp/d2 /tmp/d3
 /tmp % zpool status tst
  pool: tst
 state: ONLINE
  scan: none requested
config:

        NAME         STATE     READ WRITE CKSUM
        tst          ONLINE       0     0     0
          raidz1-0   ONLINE       0     0     0
            /tmp/d1  ONLINE       0     0     0
            /tmp/d2  ONLINE       0     0     0
            /tmp/d3  ONLINE       0     0     0

errors: No known data errors
 /tmp % sudo zpool remove tst /tmp/d3
cannot remove /tmp/d3: operation not supported on this type of pool
 
By the way, device removal is expected to work precisely in striped configurations, not raidz.
It would be cool if we could convert RAIDZ to RAIDZ2 or RAIDZ3 and vice versa, and also grow a RAIDZ by adding drives or shrink it by removing them.
This is currently not supported, which limits RAIDZ's flexibility.
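For what it's worth, top-level vdev removal is supposed to cover plain-disk and mirror vdevs, so a mirror variant of the same experiment might look like this (untested sketch; d4 would be a fourth 64M file):
Bash:
# untested -- zpool remove should be able to evacuate a whole
# top-level mirror vdev; mirror-1 is the auto-assigned vdev name
sudo zpool create tst mirror /tmp/d1 /tmp/d2 mirror /tmp/d3 /tmp/d4
sudo zpool remove tst mirror-1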
 
Did you enable the device_removal feature before running your tests? Check the output of zpool get all <poolname>|grep feature to make sure feature@device_removal is showing as enabled. I would think it's enabled based on the error message, but want to make sure.

zpool set feature@device_removal=enabled <poolname> will enable it, I believe.
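For example (illustrative output; tst is the test pool from above):
Bash:
% zpool get feature@device_removal tst
NAME  PROPERTY                VALUE    SOURCE
tst   feature@device_removal  enabled  local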
 
Some reading (there are also some useful examples in there)
 
Wow, good point. I did not know a feature must be enabled.
It still does not work though:
Bash:
 ~ % cd /tmp
 /tmp % touch d1
 /tmp % touch d2
 /tmp % touch d3
 /tmp % truncate -s 64M d1
 /tmp % truncate -s 64M d2
 /tmp % truncate -s 64M d3
 /tmp % sudo zpool create tst /tmp/d1 /tmp/d2 /tmp/d3
 /tmp % zpool status tst
  pool: tst
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tst         ONLINE       0     0     0
          /tmp/d1   ONLINE       0     0     0
          /tmp/d2   ONLINE       0     0     0
          /tmp/d3   ONLINE       0     0     0

errors: No known data errors
 /tmp % sudo zpool set feature@device_removal=enabled tst
 /tmp % sudo zpool remove tst /tmp/d3                   
cannot remove /tmp/d3: out of space
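A guess (pure speculation on my part): with 64M files these vdevs are at ZFS's minimum size, and removal reserves a chunk of free space for the evacuation, so a tiny test pool may always report "out of space". Retrying with bigger backing files might behave differently:
Bash:
# speculative retest -- same steps, just larger backing files
truncate -s 1G /tmp/big1 /tmp/big2 /tmp/big3
sudo zpool create tst2 /tmp/big1 /tmp/big2 /tmp/big3
sudo zpool remove tst2 /tmp/big3   # evacuation runs first, then the vdev detaches
zpool status tst2                  # progress shows up under "remove:" while it runs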
 
Are you looking to do something like:
touch d1 d2 d3;truncate -s 64M d1 d2 d3
zpool create tst /tmp/d1 /tmp/d2 /tmp/d3

[time passes, get new bigger d4 and want to replace old slow d2 and d3]
touch d4; truncate -s 128M d4
zpool add tst d4 remove d2 d3
<- if that worked

It would be an ugly hack, but you could build two partitions on d4, unmount tst, dd if=d2 of=d4s1; dd if=d3 of=d4s2, then use zpool replace. You may also be able to add d4s1 as a spare and tell zpool to replace d2 with it, then repeat with d4s2 and d3. That would require a disk that is at least double the size, since you need room for a partition table plus d2 plus d3. You might be able to get away with trimming an MBR-sized chunk off of d2 or d3 if your new disk is exactly double the size.
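A very rough sketch of that dd route on FreeBSD (untested; it exposes d4 via mdconfig and uses GPT partitions p1/p2 instead of the MBR slices above, so all device names are illustrative, and d4 needs to be a bit over 128M to leave room for the partition tables, as noted):
Bash:
# untested hack -- clone the old vdevs onto partitions of the new disk
sudo zpool export tst                         # take the pool offline first
md=$(sudo mdconfig -a -t vnode -f /tmp/d4)    # e.g. md0
sudo gpart create -s gpt $md
sudo gpart add -t freebsd-zfs -s 64M $md      # ${md}p1, target for d2's contents
sudo gpart add -t freebsd-zfs -s 64M $md      # ${md}p2, target for d3's contents
sudo dd if=/tmp/d2 of=/dev/${md}p1 bs=1M
sudo dd if=/tmp/d3 of=/dev/${md}p2 bs=1M
mv /tmp/d2 /tmp/d2.bak && mv /tmp/d3 /tmp/d3.bak   # hide the old labels
sudo zpool import -d /dev -d /tmp tst         # ZFS should find the cloned labels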
 
We're just testing the device removal feature of ZFS. On my box it does not work as advertised. Solaris probably has it working; at least the examples in their documentation suggest it does.
 