Is it possible to increase the size of /usr/home on ZFS without rebooting?

Hi!

Can anyone tell me how to increase the disk space allocated to /usr/home?


Code:
# freebsd-version
11.0-RELEASE-p8

Code:
 # cat /etc/rc.conf | grep zfs
zfs_enable="YES"

Code:
# cat /etc/rc.conf | grep vmw
vmware_guest_vmblock_enable="NO"
vmware_guest_vmhgfs_enable="NO"
vmware_guest_vmmemctl_enable="NO"
vmware_guest_vmxnet_enable="NO"
vmware_guestd_enable="YES"

Code:
# zfs list

NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               3,10G  4,59G    96K  /zroot
zroot/ROOT           954M  4,59G    96K  none
zroot/ROOT/default   954M  4,59G   954M  /
zroot/tmp            144K  4,59G   144K  /tmp
zroot/usr           2,16G  4,59G    96K  /usr
zroot/usr/home       136K  4,59G   136K  /usr/home
zroot/usr/ports      976M  4,59G   976M  /usr/ports
zroot/usr/src       1,21G  4,59G  1,21G  /usr/src
zroot/var            652K  4,59G    96K  /var
zroot/var/audit       96K  4,59G    96K  /var/audit
zroot/var/crash       96K  4,59G    96K  /var/crash
zroot/var/log        172K  4,59G   172K  /var/log
zroot/var/mail        96K  4,59G    96K  /var/mail
zroot/var/tmp         96K  4,59G    96K  /var/tmp
root@gitlab ~ #

Code:
~ # gpart show
=>      40  16777136  da0  GPT  (8.0G)
        40      1024    1  freebsd-boot  (512K)
      1064       984      - free -  (492K)
      2048  16773120    2  freebsd-zfs  (8.0G)
  16775168      2008      - free -  (1.0M)

=>     40  4194224  da1  GPT  (2.0G)
       40  4194224    1  freebsd-swap  (2.0G)

Code:
#  gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 16777175
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: eb5f5a84-1ad0-11e7-a0d8-000c29945625
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da0p2
   Mediasize: 8587837440 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e1
   rawuuid: eb680ce3-1ad0-11e7-a0d8-000c29945625
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 8587837440
   offset: 1048576
   type: freebsd-zfs
   index: 2
   end: 16775167
   start: 2048
Consumers:
1. Name: da0
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 4194263
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147442688 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r1w1e0
   rawuuid: f0da7df4-1ad3-11e7-b648-000c29945625
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147442688
   offset: 20480
   type: freebsd-swap
   index: 1
   end: 4194263
   start: 40
Consumers:
1. Name: da1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Mode: r1w1e1

Code:
# zpool status
  pool: zroot
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0    0     0
          da0p2     ONLINE       0    0     0
errors: No known data errors

Code:
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot  7,94G  3,01G  4,93G         -   24%    37%  1.00x  ONLINE  -

Code:
# zpool get autoexpand
NAME   PROPERTY    VALUE  SOURCE
zroot  autoexpand  on      local



I increased the disk size to 15 G (+7 G) in the VMware settings.



Gpart does not see the added disk space

Code:
# gpart show  
=>      40  16777136  da0  GPT  (8.0G)
        40      1024    1  freebsd-boot  (512K)
      1064       984      - free -  (492K)
      2048  16773120    2  freebsd-zfs  (8.0G)
  16775168      2008      - free -  (1.0M)

=>     40  4194224  da1  GPT  (2.0G)
       40  4194224    1  freebsd-swap  (2.0G)

Is it possible to increase the size of /usr/home without rebooting the server?
 
I am no ZFS expert, but you should be able to expand the pool by adding more vdevs.
Alternatively, instead of increasing the size of the existing disk, add a separate 15G disk, partition it appropriately, newfs it, copy everything off the old /home to the new one, then remount /home onto the new disk.
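Roughly, that copy-to-a-new-disk route could look like this (untested sketch; the device name da2, the GPT label homefs and the temporary mount point are just placeholders):

Code:
gpart create -s gpt da2                    # GPT scheme on the new disk
gpart add -t freebsd-ufs -l homefs da2     # one partition for the home data
newfs -U /dev/gpt/homefs                   # UFS2 with soft updates
mount /dev/gpt/homefs /mnt                 # temporary mount point
(cd /usr/home && tar cf - .) | (cd /mnt && tar xpf -)   # copy, preserving permissions
zfs set mountpoint=none zroot/usr/home     # retire the old dataset's mountpoint
echo '/dev/gpt/homefs /usr/home ufs rw 2 2' >> /etc/fstab
umount /mnt && mount /usr/home             # /usr/home now lives on the new disk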
 
Can anyone tell me how to increase the disk space allocated to /usr/home?
ZFS doesn't allocate space to each filesystem the way traditional partitioning does; all datasets share one storage pool, which is what you're seeing above. The only way to gain more space is to expand the pool, for example by adding new hard disks to it. Depending on the hardware, that might be possible without rebooting.
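For example, adding a whole extra disk as a second vdev could look roughly like this (da2 is hypothetical; note that a pool striped over two single-disk vdevs is lost if either disk fails, and zpool add cannot be undone):

Code:
zpool add zroot da2     # new top-level vdev; every dataset in the pool sees the extra space
zpool list zroot        # SIZE and FREE grow immediately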
 
When you expand your drive, you need to reboot the machine in order for that space to become available.
 
I am no ZFS expert, but you should be able to expand the pool by adding more vdevs.
Thank you for the reply.

I'd rather not increase the number of disks, however your recommendation must be right: https://www.freebsd.org/doc/handbook/zfs-zpool.html
"19.3.2. Adding and Removing Devices
There are two cases for adding disks to a zpool: attaching a disk to an existing vdev with zpool attach, or adding vdevs to the pool with zpool add. Only some vdev types allow disks to be added to the vdev after creation."

If I understand it correctly, for my case that would be, for example:
zpool attach zroot da0p2 da2p1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da2
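
Or, if I read the handbook right and attaching only mirrors the existing vdev without adding capacity, maybe I would need to prepare the new disk and use zpool add instead - something like this (da2 assumed, untested):

Code:
gpart create -s gpt da2
gpart add -t freebsd-zfs -l zfs1 da2
zpool add zroot gpt/zfs1    # second top-level vdev; the pool grows by its size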

Alternatively, instead of increasing the size of the existing disk, add a separate 15G disk, partition it appropriately, newfs it, copy everything off the old /home to the new one, then remount /home onto the new disk.

When it's about 15G it's all simple and fast, but when it comes to 15T it becomes more complicated.
I would like to find a way to increase the size without rebooting or adding disks.
 
When it's about 15G it's all simple and fast, but when it comes to 15T it becomes more complicated. I would like to find a way to increase the size without rebooting or adding disks.
I can't help but get the impression that you're using the wrong tool for the job here.

Instead of ZFS, I'd look into a NAS-based solution, one which supports hot disk swapping and dynamic expansion. Then make that accessible to your FreeBSD server and you've got what you're looking for.
 
I can think of one possible way, but using UFS rather than ZFS, which is what you specifically asked for. I've never tried this; it's just an idea:

One problem is GPT. It puts metadata at the start and at the end of the disk. When you grow the disk, it's effectively the same as doing a bitwise copy from the smaller disk to a larger one: GPT will complain because the backup metadata is no longer at the end, but I think you can gpart recover that situation.

Now, I've never done this, but you can resize a GPT partition; I have my doubts that ZFS will like that, though. However, if you are using UFS, you can growfs(8) it.
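
Untested, but I imagine the sequence for a grown da0 with UFS on da0p2 would look something like this:

Code:
gpart recover da0        # rewrite the backup GPT at the new end of the disk
gpart resize -i 2 da0    # grow partition 2 into the newly available space
growfs /dev/da0p2        # grow the UFS filesystem; recent FreeBSD can do this on a mounted fs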
 
ZFS doesn't allocate space to each filesystem the way traditional partitioning does; all datasets share one storage pool, which is what you're seeing above. The only way to gain more space is to expand the pool, for example by adding new hard disks to it. Depending on the hardware, that might be possible without rebooting.
Thank you for the reply.

Thank you for pointing out that I can't expand space for /usr/home alone, since I have just one pool, zroot.

I use VMware ESXi 5.5 and I can add space to my existing Hard disk 1 (8+7G = 15G in this example).
But FreeBSD does not see the new space.



When you expand your drive, you need to reboot the machine in order for that space to become available.
It's hard :(.

Have you tried a camcontrol reprobe [device id]? See camcontrol(8).
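For example (assuming the resized disk is da0):

Code:
# camcontrol reprobe da0
# diskinfo -v da0 | grep mediasize

diskinfo(8) should then report the new mediasize if the kernel picked up the change.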
Before camcontrol readcap:

Code:
# gpart show
=>      40  16777136  da0  GPT  (8.0G)
        40      1024    1  freebsd-boot  (512K)
      1064       984      - free -  (492K)
      2048  16773120    2  freebsd-zfs  (8.0G)
  16775168      2008      - free -  (1.0M)

=>     40  4194224  da1  GPT  (2.0G)
       40  4194224    1  freebsd-swap  (2.0G)

Code:
# camcontrol readcap da0 -h
Device Size: 15 G, Block Length: 512 bytes

Code:
root@zfstest ~ # gpart show
=>      40  16777136  da0  GPT  (8.0G)
        40      1024    1  freebsd-boot  (512K)
      1064       984      - free -  (492K)
      2048  16773120    2  freebsd-zfs  (8.0G)
  16775168      2008      - free -  (1.0M)

=>     40  4194224  da1  GPT  (2.0G)
       40  4194224    1  freebsd-swap  (2.0G)

root@zfstest ~ # gpart resize -i 2 da0
da0p2 resized
root@zfstest ~ # gpart show
=>      40  16777136  da0  GPT  (8.0G)
        40      1024    1  freebsd-boot  (512K)
      1064       984      - free -  (492K)
      2048  16775128    2  freebsd-zfs  (8.0G)

=>     40  4194224  da1  GPT  (2.0G)
       40  4194224    1  freebsd-swap  (2.0G)

Code:
# gpart list
Geom name: da0
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 16777175
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da0p1
   Mediasize: 524288 (512K)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r0w0e0
   rawuuid: d9bc5e16-1c6f-11e7-a547-000c29eb6f79
   rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
   label: gptboot0
   length: 524288
   offset: 20480
   type: freebsd-boot
   index: 1
   end: 1063
   start: 40
2. Name: da0p2
   Mediasize: 8588865536 (8.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 1048576
   Mode: r1w1e1
   rawuuid: d9c88c09-1c6f-11e7-a547-000c29eb6f79
   rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
   label: zfs0
   length: 8588865536
   offset: 1048576
   type: freebsd-zfs
   index: 2
   end: 16777175
   start: 2048
Consumers:
1. Name: da0
   Mediasize: 8589934592 (8.0G)
   Sectorsize: 512
   Mode: r1w1e2

Geom name: da1
modified: false
state: OK
fwheads: 255
fwsectors: 63
last: 4194263
first: 40
entries: 152
scheme: GPT
Providers:
1. Name: da1p1
   Mediasize: 2147442688 (2.0G)
   Sectorsize: 512
   Stripesize: 0
   Stripeoffset: 20480
   Mode: r1w1e0
   rawuuid: 3486078b-1c73-11e7-a758-000c29eb6f79
   rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
   label: (null)
   length: 2147442688
   offset: 20480
   type: freebsd-swap
   index: 1
   end: 4194263
   start: 40
Consumers:
1. Name: da1
   Mediasize: 2147483648 (2.0G)
   Sectorsize: 512
   Mode: r1w1e1

Device Size: 15 G, Block Length: 512 bytes - can I make gpart see the size change without rebooting?
 
I can't help but get the impression that you're using the wrong tool for the job here.

I got the same impression. The first clue for me was that OP does not want to reboot; the second was that OP is unfamiliar with ZFS dataset and pool management. This is a virtual machine, though, so one would think that experimenting freely and rebooting would be trivial, unless the VM is being used in production.

There are several concerns with this setup. First and foremost, using ZFS with 8GB of storage is not necessarily a bad idea, but there isn't much benefit to it either. Many of the nifty features that make ZFS worthwhile consume extra disk space, while a UFS filesystem will consume space more predictably and consistently, and may offer higher and more consistent performance as well. A UFS filesystem that small, with soft updates enabled, is also already going to be pretty resistant to disastrous corruption in the event of an unclean shutdown.

Moreover, many of the features that make ZFS worthwhile just don't fit too well with virtual machines. The environment practically nullifies them. Snapshots and clones? VMWare can already do that on its own, and using ZFS snapshots and clones inside the virtual machine would eat up what little storage space you have. Data checksums? Those will tell you that data on the virtual disk is corrupted, but the most likely cause of corruption on a virtual disk is probably going to be something that happens on the host filesystem---basically, ZFS will tell you that the virtual disk itself is corrupted, and there's nothing ZFS can do about it. Of course, if the host system is already running atop ZFS, then using ZFS inside the virtual machine is superfluous. Guaranteed filesystem/metadata integrity? It's nice, sure, but then that only helps you in the event of a crash, and the most probable cause of a crash on that 8GB virtual machine is going to be ZFS eating up all the storage space and RAM. ZFS will be protecting your data from crashes caused by ZFS. There's also the fact that ZFS likes to have total control over the underlying storage devices, and the virtual machine might interfere with that.

ZFS was designed with storage servers in mind. You can certainly use it on laptops and desktops, but the stricter the resource limits you put on ZFS, the more impractical it becomes. If you need a small virtual machine that's running all the time without constant supervision, ZFS is almost certainly not the right tool for the job. I don't doubt there are exceptions where ZFS is useful in VMs, but to be frank, if you're asking for help with ZFS basics here, then your situation is probably not one of those exceptions.
 
When you expand your drive, you need to reboot the machine in order for that space to become available.

With autoexpand=on set on the pool _before_ the providers get resized, the pool will automatically increase in size as soon as all providers of the vdev have been resized.
A zpool online -e might be necessary to trigger the resize if autoexpand is/was disabled.
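On a setup like yours the whole procedure on the running system might look roughly like this (just a sketch; da0 and partition index 2 assumed, and camcontrol needs the reprobe subcommand):

Code:
camcontrol reprobe da0         # let CAM/GEOM pick up the new device capacity
gpart recover da0              # move the backup GPT to the new end of the disk
gpart resize -i 2 da0          # grow the freebsd-zfs partition into the new space
zpool online -e zroot da0p2    # expand the pool onto the grown provider (or rely on autoexpand)
zpool list zroot               # SIZE/FREE should now include the extra space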

I'm using zvols on my storage server as iSCSI and/or FibreChannel targets for VMs/testing, and I have also created several ZFS pools on top of them (again, for testing purposes). I have resized them this way on running systems several times without rebooting the guests. Fragmentation, however, is horrible, and performance degrades _massively_ over time and gets worse every time you increase the size of a zvol with ZFS on top! Roughly 1/4 of the read/write performance of the host's filesystem is normal with this configuration, and with additional load on the host it drops even more.
Hence, for production use, don't create pools on top of virtual block devices - it just defeats the error-detection/correction features of ZFS and often causes more problems than it solves.
 
With autoexpand=on set on the pool _before_ the providers get resized, the pool will automatically increase in size as soon as all providers of the vdev have been resized.
A zpool online -e might be necessary to trigger the resize if autoexpand is/was disabled.
I was not aware of that, although I was aware that zpool online -e is necessary.
 