[Solved] Increase ZFS-on-root partition size

jbodenmann

Aspiring Daemon

Reaction score: 320
Messages: 572

I have a FreeBSD 13.0 bhyve VM running on a FreeBSD 13.0 host. Originally, the VM was created with a 64 GB disk, but I later increased that to 128 GB.

When logging into the VM, I see that the disk was indeed resized correctly:
Code:
root@poudriere01:~ # gpart show -lp
=>       40  268435440    vtbd0  GPT  (128G)
         40     532480  vtbd0p1  efiboot0  (260M)
     532520       2008           - free -  (1.0M)
     534528   16777216  vtbd0p2  swap0  (8.0G)
   17311744  116905984  vtbd0p3  zfs0  (56G)
  134217728  134217752           - free -  (64G)
Now I need to increase the size of partition 3. In the past I've done this using:
Code:
root@poudriere01:~ # gpart resize -i 3 -a 4k -s 64G vtbd0
gpart: Device busy
Now, obviously the device is busy as I'm running zfs-on-root.

What is the proper way of increasing the ZFS partition size from here?
 

covacat

Daemon

Reaction score: 536
Messages: 1,097

If the backing file is "raw" you can probably attach it with mdconfig(8) on the host and then gpart it from there.
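If that route is taken, a rough sketch (the image path is a placeholder, and the VM must be shut down first so nothing else writes to the image):

```shell
# On the host; /vm/poudriere01/disk0.img is a hypothetical path.
mdconfig -a -t vnode -f /vm/poudriere01/disk0.img   # attaches as e.g. md0
gpart recover md0       # relocate the backup GPT to the new end of the disk, if needed
gpart resize -i 3 -a 4k md0   # without -s, the partition grows into all free space
mdconfig -d -u md0      # detach the memory disk before starting the VM again
```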
 

sko

Aspiring Daemon

Reaction score: 446
Messages: 747

The partition must not be mounted (i.e. the pool it contains must not be imported) during the resize. You can shut down the VM and resize from the host, or boot the VM in single-user mode and resize the partition there.
After that just issue a zpool online -e <pool> <device> to expand the zfs pool.
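For the single-user-mode route the whole sequence is short. A sketch using this thread's device names and the common default pool name zroot (adjust to your layout):

```shell
# Inside the VM, booted to single-user mode:
gpart resize -i 3 -a 4k vtbd0    # without -s, partition 3 takes all free space
zpool online -e zroot vtbd0p3    # claim the new space for the pool
exit                             # continue booting multi-user
```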
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 13,141
Messages: 39,753

After that just issue a zfs online -e <pool> <device> to expand the zfs pool.
Double check if the autoexpand property is set to 'on' on the pool. If I recall correctly it should be on by default but I've had some systems where it was set to 'off'.
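Checking and changing it is a one-liner each; zroot is a placeholder pool name here:

```shell
zpool get autoexpand zroot       # shows on or off
zpool set autoexpand=on zroot    # optional; with it off, zpool online -e still works
```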
 
OP
jbodenmann

Aspiring Daemon

Reaction score: 320
Messages: 572

Thank you for your input guys.

Booting into single-user mode allowed me to resize the partition using gpart resize. I'm not sure how this works, though - the system is still booting from that location. Does single-user mode load everything into memory?

After resizing the partition I managed to successfully increase the zpool size as per sko 's post. However, the proper command is zpool online -e <pool> <device>, not zfs online [...] as we need to increase the zpool size, not the dataset size.

Double check if the autoexpand property is set to 'on' on the pool. If I recall correctly it should be on by default but I've had some systems where it was set to 'off'.
I've checked and it is indeed set to `off`.
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 13,141
Messages: 39,753

Does single-user-mode load everything into memory?
Only the root filesystem is mounted and it's read-only. That usually allows access to the partition as there's nothing potentially being written to it.
 

sko

Aspiring Daemon

Reaction score: 446
Messages: 747

However, the proper command is zpool online -e <pool> <device>, not zfs online [...] as we need to increase the zpool size, not the dataset size.
Sorry, fixed that. I've been dealing with snapshots all morning, so my muscle memory typed "zfs" instead of "zpool" 🙄


Regarding the "autoexpand" property, IIRC this only dictates whether the pool should automagically expand once all providers of a vdev have grown.
As per zpoolprops(8) (those split-up manpages are massively annoying BTW...)
expandsize
[...]
The space can be claimed for the pool by bringing it online with autoexpand=on or using zpool online -e.
[...]
autoexpand=on|off
Controls automatic pool expansion when the underlying LUN is
grown. If set to on, the pool will be resized according to the
size of the expanded device. If the device is part of a mirror
or raidz then all devices within that mirror/raidz group must be
expanded before the new space is made available to the pool. The
default behavior is off. This property can also be referred to
by its shortened column name, expand.
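So a quick way to see whether a pool has unclaimed space is to query both properties together (the pool name is a placeholder):

```shell
zpool get expandsize,autoexpand zroot
# a non-dash EXPANDSZ value is space claimable via autoexpand=on or zpool online -e
```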
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 13,141
Messages: 39,753

Regarding the "autoexpand" property, IIRC this only dictates if the pool should automagically expand if all providers of a vdev have increased in size.
It also controls the automatic resizing of a single-vdev pool. With multiple devices in a mirror or raidz, the pool will indeed only grow once all of them have been resized. For example, when I replaced my 4x2TB RAIDZ pool with 4x3TB disks, I had to replace each disk one by one and wait for it to resilver; the pool only became larger once all the disks were replaced.
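That replace-and-resilver cycle looks roughly like this; pool and disk names are hypothetical:

```shell
# Repeat for each disk in the raidz vdev, waiting for resilver in between:
zpool replace tank ada1          # after physically swapping the 2TB disk for a 3TB
zpool status tank                # wait until resilvering completes before the next
# ...repeat for ada2, ada3, ada4...
# once all members are larger, with autoexpand=off, claim the space:
zpool online -e tank ada1 ada2 ada3 ada4
```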
 

grahamperrin

Son of Beastie

Reaction score: 1,049
Messages: 3,528

… if the autoexpand property is set to 'on' on the pool. …

… `off`.

off also found at a 13.0-RELEASE-p4 virtual machine that was upgraded from 12.2-RELEASE-something.


Side note: the corresponding manual page for FreeBSD is in section 8, not 7. <https://www.freebsd.org/cgi/man.cgi?query=zpoolprops&sektion=8&manpath=FreeBSD+13.0-RELEASE>

… Does single-user mode load everything into memory? …

<https://docs.freebsd.org/en/books/handbook/boot/#boot-singleuser> (13.2.4.1) offers a partial explanation (read in combination with 13.2.4.2).

<https://en.wikipedia.org/wiki/Single-user_mode> includes a nice hint for users of FreeBSD.
 

sko

Aspiring Daemon

Reaction score: 446
Messages: 747

Side note: the corresponding manual page for FreeBSD is in section 8, not 7. <https://www.freebsd.org/cgi/man.cgi?query=zpoolprops&sektion=8&manpath=FreeBSD+13.0-RELEASE>
This is due to the (annoying) splitting of the zfs manpage into dozens of separate manpages. Up until 12.2-RELEASE you could search the single zfs manpage for what you wanted; now you first have to find out across which manpages the information you are looking for is scattered. I don't know why this was done, but it is definitely not an improvement...
 

Eric A. Borisch

Aspiring Daemon

Reaction score: 365
Messages: 596

For a situation like this, you can always just partition the new space and zpool-add(8) it to the existing pool. No reboots or anything fancy required.
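A sketch of that alternative with this thread's device (zroot is an assumed pool name; note the new partition becomes a second top-level vdev with no redundancy of its own):

```shell
gpart add -t freebsd-zfs -a 4k -l zfs1 vtbd0   # creates vtbd0p4 in the free space
zpool add zroot vtbd0p4                        # stripes the new partition into the pool
```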
 

Eric A. Borisch

Aspiring Daemon

Reaction score: 365
Messages: 596

Personally, I would not give partitions 3 and 4 of a single physical device to the same pool. If there's free space for a fourth partition, better to use that space for a larger third partition.

I'm not suggesting it's the cleanest solution, but in situations where rebooting into a recovery CD or other administrative action is impossible, undesirable, or confusing to the user, adding an additional partition as another top-level device is perfectly legal, and ZFS will be quite happy to run with it.

It’s also the only way to go if your partition map was set up such that you have a partition you want to retain between the zpool partition and the extra space you’ve been able to add at the end through VM resize. (Not the case here, to be sure, but a very similar one.)
 