I had posted this on another forum geared towards Linux, and I'm not sure why I didn't think to post it here earlier. I've modified it to include some more FreeBSD-specific information. Hope this helps others!
OpenZFS 2.3 brings with it the ability to expand a RAIDZ (1, 2, or 3) vdev. Previously you'd pretty much be forced to create another vdev if you needed to add space. In my case, I had 11 of 12 drive bays populated and in use in my backup NAS, so another vdev was not an option. Last night, while performing a monthly backup to my cold-storage NAS, I finally ran out of space; whatever remained was taken up by metadata and reserved space.
zpool list zbackup
Code:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zbackup 49.1T 48.9T 174G - - 46% 99% 1.00x ONLINE -
My zbackup array looks like this:
Code:
NAME STATE READ WRITE CKSUM
zbackup ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
da0 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da5 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
raidz1-1 ONLINE 0 0 0
da1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da6 ONLINE 0 0 0
The 7 drives in the raidz1-0 vdev are all 8TB SATA NAS drives and the 3 in the raidz1-1 vdev are all 4TB SATA NAS drives. I dug through my assortment of unused drives and found a few 4TB drives. I grabbed one of them, put it in the last empty slot, and figured out what I had to do to expand the raidz1-1 vdev.
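If you're not sure which device node the new drive got, the base system tools will show it. This is just how I'd check; da7 is the node the drive ended up on in my case:
Code:
# list attached disks and their da/ada device nodes
camcontrol devlist
# show capacity and serial number for a specific disk
geom disk list da7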
In FreeBSD 14.2 (and earlier) you will have to pkg install openzfs to install OpenZFS 2.3.1 (at the time of this writing). You will then have to make the following changes to disable the built-in ZFS module and enable the newer kmod version. Edit /boot/loader.conf to contain the following:
Code:
zfs_load="NO"
openzfs_load="YES"
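If you'd rather not edit the file by hand, sysrc can make the same change; this should be equivalent to the edit above:
Code:
doas sysrc -f /boot/loader.conf zfs_load="NO" openzfs_load="YES"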
A reboot later and the ZFS kernel module version was 2.3.1. I then just had to alias zfs and zpool to the binaries in /usr/local/sbin so the updated packaged versions are used.
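If you want to confirm that the port's module is what actually got loaded (rather than the base zfs.ko), a quick look at kldstat should tell you; as I understand the port's layout, you should see openzfs.ko from /boot/modules:
Code:
kldstat | grep -i zfs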
I added the following to my ~/.cshrc file:
Code:
alias zpool /usr/local/sbin/zpool
alias zfs /usr/local/sbin/zfs
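The aliases only take effect in new shells; to pick them up in the current one (csh syntax):
Code:
source ~/.cshrc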
Be aware that this alias did not carry through doas, so I used the full path to be sure the correct binary is used. Here's what I mean:
Code:
% which zpool
zpool: aliased to /usr/local/sbin/zpool
% doas which zpool
/sbin/zpool
% zpool -V
zfs-2.3.1-1
zfs-kmod-2.3.1-1
% doas zpool -V
zfs-2.2.6-FreeBSD_g33174af15
zfs-kmod-2.3.1-1
% doas /usr/local/sbin/zpool -V
zfs-2.3.1-1
zfs-kmod-2.3.1-1
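An alternative I didn't test, so treat it as a sketch: have doas hand root a PATH that puts /usr/local/sbin first via doas.conf's setenv option. The username below is just a placeholder:
Code:
# /usr/local/etc/doas.conf
permit setenv { PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin } myuser as root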
After this it was pretty simple to upgrade the ZFS pool to enable the latest feature flags, including raidz_expansion, and then attach the new drive located at /dev/da7.
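If you want to double-check things first, zpool upgrade with no arguments lists pools with features that can be enabled, and you can query the specific flag on the pool (it should read "enabled" after the upgrade); the flag name is raidz_expansion, as far as I can tell:
Code:
doas /usr/local/sbin/zpool upgrade
doas /usr/local/sbin/zpool get feature@raidz_expansion zbackup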
Code:
doas /usr/local/sbin/zpool upgrade zbackup
doas /usr/local/sbin/zpool attach zbackup raidz1-1 /dev/da7

The expansion completed after 17 hours and then initiated a scrub to verify data integrity.
zpool status zbackup
Code:
pool: zbackup
state: ONLINE
scan: scrub in progress since Sat Apr 26 13:27:08 2025
135G / 48.9T scanned at 6.42G/s, 0B / 48.9T issued
0B repaired, 0.00% done, no estimated completion time
expand: expanded raidz1-1 copied 10.8T in 17:02:49, on Sat Apr 26 13:27:08 2025
config:
NAME STATE READ WRITE CKSUM
zbackup ONLINE 0 0 0
raidz1-0 ONLINE 0 0 0
da0 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0
da5 ONLINE 0 0 0
ada0 ONLINE 0 0 0
ada1 ONLINE 0 0 0
ada3 ONLINE 0 0 0
raidz1-1 ONLINE 0 0 0
da1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0
errors: No known data errors
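While the expansion (and later the scrub) was running, I just re-ran zpool status now and then. If you want something hands-off, a crude watch loop like this works too (sh syntax, polling every 5 minutes):
Code:
while true; do /usr/local/sbin/zpool status zbackup | grep -E 'expand|scan'; sleep 300; done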
And I now have a hard drive's worth of free space!
zpool list zbackup
Code:
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zbackup 52.8T 48.9T 3.81T - - 32% 92% 1.00x ONLINE -
The operation was quite easy and went without a hiccup! Thank you FreeBSD and OpenZFS contributors!