ZFS OpenZFS-2.3.x pkg and vdev expansion

I had posted this on another forum geared towards Linux, and I'm not sure why I didn't think to post it here earlier. I've modified it to include some more FreeBSD-specific information. Hope this helps others!

OpenZFS 2.3 brings with it the ability to expand a RAIDZ (1, 2, or 3) vdev by adding a disk to it. Previously you'd pretty much be forced to create another vdev if you needed to add space. In my case, 11 of 12 drive bays were already populated and in use in my backup NAS, so another vdev was not an option. Last night, while performing a monthly backup to my cold-storage NAS, I finally ran out of space. What little remained was taken up by metadata and reserved space.

zpool list zbackup
Code:
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zbackup  49.1T  48.9T   174G        -         -    46%    99%  1.00x    ONLINE  -
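If you want to see where those last gigabytes sit at the dataset level, zfs list -o space breaks it down (ZFS also keeps a slop reserve, 1/32 of the pool by default, which is why dataset-level AVAIL is a bit lower than the pool-level FREE):
Code:
# dataset-level view of where the space went; -o space is shorthand for
# name,avail,used,usedsnap,usedds,usedrefreserv,usedchild
zfs list -o space zbackup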

My zbackup array looks like this:
Code:
        NAME        STATE     READ WRITE CKSUM
        zbackup     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da6     ONLINE       0     0     0
The 7 drives in the raidz1-0 vdev are all 8TB SATA NAS drives and the 3 in the raidz1-1 vdev are all 4TB SATA NAS drives.

I dug through my assortment of unused drives and found a few 4TB drives. I grabbed one of them, put it in the last empty slot and figured out what I had to do to expand the raidz1-1 vdev.

In FreeBSD 14.2 (and earlier) you will have to pkg install openzfs to get OpenZFS 2.3.1 (at the time of this writing). You will then have to make the following changes to disable the built-in ZFS module and enable the newer kmod version.

Edit /boot/loader.conf to contain the following:
Code:
zfs_load="NO"
openzfs_load="YES"
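If you'd rather script those steps, sysrc(8) can make the same two loader.conf edits; a minimal sketch, assuming doas is configured for your user:
Code:
# install the userland package and switch loader.conf to the port's kmod
doas pkg install -y openzfs
doas sysrc -f /boot/loader.conf zfs_load="NO"
doas sysrc -f /boot/loader.conf openzfs_load="YES"
# reboot afterwards so the new module gets loaded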

A reboot later and the ZFS kernel module version was 2.3.1. I just had to alias zfs and zpool to the binaries in /usr/local/sbin to use the updated packaged userland.
Added to my ~/.cshrc file:
Code:
alias zpool /usr/local/sbin/zpool
alias zfs /usr/local/sbin/zfs
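You can also double-check that it's the port's module that loaded rather than the base zfs.ko (the port's module should show up as openzfs.ko); kldstat is in the base system, and zpool version prints the same information as -V:
Code:
# the port's module appears as openzfs.ko; the base module would be zfs.ko
kldstat | grep -i zfs
# report both the userland and kernel-module versions
/usr/local/sbin/zpool version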

Be aware that these aliases do not carry over through doas, so I used the full path to be sure the correct binary is used. Here's what I mean:
Code:
% which zpool
zpool:   aliased to /usr/local/sbin/zpool

% doas which zpool
/sbin/zpool

% zpool -V
zfs-2.3.1-1
zfs-kmod-2.3.1-1

% doas zpool -V
zfs-2.2.6-FreeBSD_g33174af15
zfs-kmod-2.3.1-1

% doas /usr/local/sbin/zpool -V
zfs-2.3.1-1
zfs-kmod-2.3.1-1

After this it was pretty simple to upgrade the pool to enable the latest feature flags, including raidz_expansion, and then attach the new drive at /dev/da7:
doas /usr/local/sbin/zpool upgrade zbackup
doas /usr/local/sbin/zpool attach zbackup raidz1-1 /dev/da7
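If you want to confirm the feature flag is active before attaching, and to keep an eye on the expansion afterwards, something along these lines should work (the flag should appear as feature@raidz_expansion):
Code:
# check that the expansion feature is enabled/active after the upgrade
/usr/local/sbin/zpool get feature@raidz_expansion zbackup
# watch progress; an "expand:" line shows up in the status output
/usr/local/sbin/zpool status zbackup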


The expansion completed after 17 hours and then initiated a scrub to verify data integrity.
zpool status zbackup
Code:
  pool: zbackup
 state: ONLINE
  scan: scrub in progress since Sat Apr 26 13:27:08 2025
        135G / 48.9T scanned at 6.42G/s, 0B / 48.9T issued
        0B repaired, 0.00% done, no estimated completion time
expand: expanded raidz1-1 copied 10.8T in 17:02:49, on Sat Apr 26 13:27:08 2025
config:

        NAME        STATE     READ WRITE CKSUM
        zbackup     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0
            da5     ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da4     ONLINE       0     0     0
            da6     ONLINE       0     0     0
            da7     ONLINE       0     0     0

errors: No known data errors

And I now have a hard drive's worth of free space!
zpool list zbackup
Code:
NAME      SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zbackup  52.8T  48.9T  3.81T        -         -    32%    92%  1.00x    ONLINE  -

The operation was quite easy and went without a hiccup! Thank you FreeBSD and OpenZFS contributors!
 

Is this a new implementation of raidz expansion? I thought that adding a fourth drive would give 2/3 of a drive of extra storage.
 
The descriptions I saw only mentioned the way the old data is stored, not that new data is stored more efficiently.

What's interesting is that it would be possible to free up quite a bit more space by progressively replacing the old files with new copies. That's less attractive in your case because most of your data is in the other raidz vdev.
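A minimal sketch of that rewrite-in-place idea, with a hypothetical file path; the old blocks are only actually freed if no snapshot still references them:
Code:
# copy the file (the copy's blocks are written at the new data-to-parity ratio),
# then replace the original; the path is just a placeholder
cp -p /zbackup/data/somefile /zbackup/data/somefile.new
mv /zbackup/data/somefile.new /zbackup/data/somefile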
 
Is this a new implementation of raidz expansion? I thought that adding a fourth drive would give 2/3 of a drive of extra storage.

Yes, such a limitation exists, but if I understood what I read correctly, it only applies partially.

You can read more here: raidz expansion feature #15022 and here: ZFS fans, rejoice—RAIDz expansion will be a thing very soon

As stated in the first link:
After the expansion completes, old blocks remain with their old data-to-parity ratio (e.g. 5-wide RAIDZ2, has 3 data to 2 parity), but distributed among the larger set of disks. New blocks will be written with the new data-to-parity ratio (e.g. a 5-wide RAIDZ2 which has been expanded once to 6-wide, has 4 data to 2 parity). However, the RAIDZ vdev's "assumed parity ratio" does not change, so slightly less space than is expected may be reported for newly-written blocks, according to zfs list, df, ls -s, and similar tools.
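Rough numbers for the second vdev in this thread (a 3-wide raidz1 of 4 TB drives expanded to 4-wide) to illustrate the quoted paragraph; this is purely illustrative arithmetic:
Code:
# old blocks keep 2 data : 1 parity; new blocks are written 3 data : 1 parity
echo "usable at old ratio: $(echo '16 * 2 / 3' | bc -l) TB of 16 TB raw"   # ~10.67
echo "usable at new ratio: $(echo '16 * 3 / 4' | bc -l) TB of 16 TB raw"   # 12.00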


If I understand correctly: 1) the old data stays in the same on-disk layout as before, and 2) new data is written in a layout consistent with the expanded vdev. How can I get all of the existing data into the new layout? I'm not sure, but I think I'd have to copy everything within the pool and then delete the old files. Could anyone confirm or deny this?
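Something like this is what I have in mind, sketched with placeholder dataset names; it needs enough free space in the pool for a full extra copy, and any existing snapshots will keep the old blocks allocated until they are destroyed:
Code:
# snapshot the dataset and replicate it within the pool; the received copy is
# written entirely at the new data-to-parity ratio
doas /usr/local/sbin/zfs snapshot zbackup/data@rewrite
doas sh -c '/usr/local/sbin/zfs send zbackup/data@rewrite | /usr/local/sbin/zfs receive zbackup/data.new'
# after verifying the copy, drop the original and swap the names
doas /usr/local/sbin/zfs destroy -r zbackup/data
doas /usr/local/sbin/zfs rename zbackup/data.new zbackup/data
doas /usr/local/sbin/zfs destroy zbackup/data@rewrite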
 