[Solved] Grow ZFS partition on a mirrored pool

Hello everybody,

I am experimenting with a FreeBSD-9-STABLE/amd64 installation in VirtualBox that uses gptzfsboot on a RAID-1 (mirrored) ZFS pool. My problem is that I need to grow the pool to use the full size of its ZFS partitions. I followed this guide, which is for FreeNAS, and ran into a few problems.

Let me give you some information about my setup before explaining my problems:

Code:
# gpart show
=>      34  40959933  ada0  GPT  (19G)
        34       128     1  freebsd-boot  (64k)
       162  35651584     2  freebsd-zfs  (17G)
  35651746   5308221     3  freebsd-swap  (2.5G)

=>      34  40959933  ada1  GPT  (19G)
        34       128     1  freebsd-boot  (64k)
       162  35651584     2  freebsd-zfs  (17G)
  35651746   5308221     3  freebsd-swap  (2.5G)

# zpool status
  pool: zroot
 state: ONLINE
  scan: resilvered 912M in 1h3m with 0 errors on Sat Mar 10 14:01:17 2012
config:

	NAME        STATE     READ WRITE CKSUM
	zroot       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    ada0p2  ONLINE       0     0     0
	    ada1p2  ONLINE       0     0     0

errors: No known data errors

# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zroot  5.97G  3.69G  2.28G    61%  1.00x  ONLINE  -

As you can see, gpart shows that my ada0p2 and ada1p2 partitions (used in zroot) have a size of 17G, while zpool list shows that zroot has a size of 5.97G (the initial size of the virtual machine's disks, before I resized them).

The problem I encountered when following the aforementioned procedure was that I was unable to export zroot (the procedure says to export the pool, "resize" the partitions with gparted, and then import the pool), because I kept getting a message that some of my filesystems were busy (in single-user mode, "/" was busy). To get around this, I booted from a FreeBSD 9.0-RELEASE CD-ROM, imported (-f) my zpool, and followed the procedure to resize my partitions.
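For reference, the delete-and-recreate that the guide implies would look roughly like this from the live CD (a sketch only, using my device names and sizes, done before importing the pool; the recreated partition must start at exactly the same block, or the pool's data is gone):

Code:
# gpart delete -i 3 ada0                       # drop the old swap behind the ZFS partition
# gpart delete -i 2 ada0                       # drop the table entry only; on-disk data stays
# gpart add -t freebsd-zfs -b 162 -s 17G ada0  # recreate from the same start block, larger
# gpart add -t freebsd-swap ada0               # recreate swap in the remaining space
# (repeat for ada1, then zpool import -f zroot)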

Does anyone have a better idea of what I should do to make zpool see all the available space in the partitions it is using?

Thank you all for your time in advance,

mamalos
 
Ah,

and not to forget: I have enabled the autoexpand property on the pool (to be honest, I've enabled, disabled, and re-enabled it many times, because I read somewhere that it might sometimes be needed), with no luck.
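For anyone following along, this is how the property is set and checked (standard zpool syntax, nothing specific to my setup):

Code:
# zpool set autoexpand=on zroot
# zpool get autoexpand zroot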
 
Since nobody has an answer so far, let me ask another thing. Instead of deleting ada0p2 and ada1p2 and then recreating them from the same starting block but with a greater size, could I have just created two new partitions (ada0p3 and ada1p3) and added them to the pool as a second mirror? If that works, I could try it, since it seems to have the same result.

Not that this answers my question, but at least it would be a workaround. A sketch of what I mean is below.
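The command would presumably be something like this (assuming the new partitions already exist):

Code:
# zpool add zroot mirror ada0p3 ada1p3

One caveat: top-level vdevs cannot be removed from a pool on these ZFS versions, so the addition would be permanent.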
 
Resolved! As Marco van Tol answered on the freebsd-stable mailing list, by running:

Code:
# zpool offline zroot ada0p2
# zpool online -e zroot ada0p2
# zpool offline zroot ada1p2
# zpool online -e zroot ada1p2

zpool sees all available space.
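A quick verification step (my addition, not part of the mailing-list answer):

Code:
# zpool list zroot    # SIZE should now reflect the 17G partitions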
 
On a non-mirrored pool using a single vdev with autoexpand=on, where the same issue occurred (resizing the underlying GPT partition to a larger size didn't result in the ZFS pool auto-expanding to occupy the enlarged partition), running zpool online -e <pool> <vdev> resulted in the pool expanding to the partition size.

Thanks to the previous posters for reporting this behavior and posting their solutions!
 
Update: the zpool offline step (as of 10.3 at least) seems not to be required. I just ran these commands to replace a pair of mirrored 256 GB SSDs with a pair of mirrored 512 GB SSDs:

Code:
# zpool replace zroot gptid/fbce0... gptid/a60c30...
# zpool replace zroot gptid/fd640... gptid/a660a1...

# zpool online -e zroot gptid/a60c308...
# zpool online -e zroot gptid/a660a1c...
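
One caveat worth adding (my note, not from the original posts): zpool replace triggers a resilver, so it's safest to confirm the first one has finished before replacing the second mirror member:

Code:
# zpool status zroot    # wait for "resilvered ... with 0 errors" before the next replace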
 