ZFS: How can I increase a sparse zvol's available space?

This is FreeBSD 14.0-RELEASE-p5.

I have a pool with ample free space:

Code:
# zpool list r
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
r     3.48T  2.21T  1.27T        -         -    47%    63%  1.00x    ONLINE  -

There are four zvols (that I'm asking about) in the pool:

Code:
# zfs list -t volume -o space
NAME        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
r/swapVolA  8.25G  8.25G        0B     12K          8.25G         0B
r/vmfsA      591G   870G     8.10G    271G           591G         0B
r/vmfsB      591G   802G     7.85G    203G           591G         0B
r/vmfsC        0B   938G        0B    938G             0B         0B
r/vmfsD        0B   839G        0B    839G             0B         0B

r/vmfsA and r/vmfsB were made like this:

Code:
# zfs create -V 600G r/vmfsX

While r/vmfsC and r/vmfsD were made like this:

Code:
# zfs create -s -V 2t r/vmfsX
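
As I understand it, a plain zfs create -V gives the zvol a refreservation equal to its volsize, while -s skips the reservation entirely; that difference should show up with something like:

Code:
# zfs get volsize,refreservation r/vmfsA r/vmfsC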

The initiator for these volumes is VMware ESXi, which formats them as VMFS datastores. Currently, I/O to r/vmfsC and r/vmfsD is blocked.

Given that the two blocked zvols were both made with -s -V 2t, that there are no explicit reservations, and that there's ample free pool space, why are their AVAIL values both 0?

And how do I increase their AVAIL values to unblock I/O to VMFS?
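
For reference, these are the checks I'd run to see whether some quota or reservation is what's pinning AVAIL at 0:

Code:
# zfs get quota,refquota,reservation,refreservation,volsize r/vmfsC r/vmfsD
# zpool get capacity,fragmentation,freeing r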

Thank you very much.
 
Update: I lucked out and solved the problem for the moment, though I still don't understand it.

So, I had snapshots of r/vmfsA and r/vmfsB that I thought were inconsequential:

Code:
# zfs list -t snapshot | grep vmfs
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
r/vmfsA@2023Dec13                         8.10G      -   252G  -
r/vmfsB@2023Dec13                         7.85G      -   183G  -

I nuked them on a whim and suddenly had loads of AVAIL on all four zvols.

Code:
# zfs destroy r/vmfsA@2023Dec13
# zfs destroy r/vmfsB@2023Dec13
# zfs list -t volume -o space
NAME        AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
r/swapVolA   443G  8.25G        0B     12K          8.25G         0B
r/vmfsA      782G   619G        0B    271G           347G         0B
r/vmfsB      850G   619G        0B    203G           415G         0B
r/vmfsC      434G   937G        0B    937G             0B         0B
r/vmfsD      434G   839G        0B    839G             0B         0B

ESXi panicked (PSOD) the moment the AVAIL showed up, but after a reboot both it and VMFS were happy again.

With the snapshots' USED values so small (roughly 8G each), I don't understand why removing them made AVAIL jump so drastically on the other zvols.

Maybe I got the meaning of USED and REFER mixed up.
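
For next time, I suppose the breakdown to compare is the per-dataset used* properties rather than the snapshots' USED column, something like:

Code:
# zfs get used,referenced,usedbysnapshots,usedbydataset,usedbyrefreservation r/vmfsA r/vmfsB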
 