ZFS: How to extend a ZFS GELI-encrypted disk? Space not showing

I have a system almost running out of space - the SSD it is installed on is larger than the partition on which ZFS is installed for the OS.

Here is the output, which seems to suggest that the disk is only 290-300G:
gpart show ada0
=> 40 625142375 ada0 GPT (466G) [CORRUPT]
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 16777216 2 freebsd-swap (8.0G)
16779264 608362496 3 freebsd-zfs (290G)
625141760 655 - free - (328K)
However, the disk is a 500G SSD:
ada0: <Samsung SSD 860 EVO 500GB RVT04B6Q> ACS-4 ATA SATA 3.x device

How do I extend the zfs file system to make use of space on the disk?
 
It looks like when ada0 was initially partitioned only 290G was used for freebsd-zfs.
The odd thing to me is the free space after the freebsd-zfs: it's only 328K.
That could be due to the "CORRUPT" from the first line; that usually means the backup gpart info is corrupted, which can happen if you create something like a gmirror after you've partitioned. It could also be due to geli; I've not done that, so it's speculation on my part.
In theory you could use gpart to resize the freebsd-zfs partition, and ZFS would expand automatically to fill it. I've done this to expand a mirror: added a bigger disk to create a 3-way mirror, let it resilver, removed one of the smaller disks, added another bigger disk, let it resilver, then removed the last smaller disk. That left me with a mirror of bigger disks.
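Roughly, that swap went like this (the pool and disk names below are just placeholders, not from this system):

zpool attach tank ada1p3 ada2p3       # attach a bigger disk next to an existing member -> 3-way mirror
zpool status tank                     # wait for the resilver to finish
zpool detach tank ada0p3              # drop one of the small disks
zpool attach tank ada2p3 ada3p3       # attach the second bigger disk
zpool status tank                     # wait for the resilver again
zpool detach tank ada1p3              # drop the last small disk
zpool online -e tank ada2p3 ada3p3    # expand the pool onto the larger devices (or set autoexpand=on beforehand)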

But I think the GELI complicates the matter.

What I would do is see what is using up the 290G: lots of snapshots or boot environments can start to use up space but keep in mind that destroying snapshots will only free space when "all the snapshots referencing the blocks are destroyed".
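To see where that space is going, something like this (assuming the usual zroot layout; adjust the pool name to yours):

zfs list -o space -r zroot            # USEDSNAP vs USEDDS shows how much is tied up in snapshots per dataset
zfs list -t snapshot -r zroot/ROOT    # the individual snapshots and what each one uses
bectl list                            # the boot environments and their space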
 
Looks like a corrupt partition table. Check gparted on Linux. Otherwise reformat...
Careful about blanket statements. gpart puts its metadata at the beginning of the device and also keeps a backup copy at the end of the device. Other tools, like some of the GEOM classes, specifically put things at the end of the device, which means the order of operations is important: GEOM first, then gpart, is ok; gpart first, then GEOM, corrupts the backup copy.

Typically that CORRUPT in the gpart output means "I have a good primary partition table but my backup doesn't match my primary". So not quite corrupt, but "hmm backup doesn't match".

I don't know if GELI or "GELI on ZFS" does this, but if it does, it's not a true corruption.
gpart also has a "recover" command that will "fix" it. But I don't know if it will screw up the GELI stuff. So you might fix the gpart corruption, screw up the GELI, and wind up with an unbootable partition.

Anyway, experience has taught me a little caution about generalizations. They hold true 90% of the time, but if you are in that 10% you hurt yourself. :)
 
The "CORRUPT" does also happen whenever copying an image to a larger disk. In that case it can be fixed with gpart recover.
Then, when geli is involved, there is an option geli resize. So, given that fixing the gpart GPT data works well, and enlarging the gpart partition works well, and only the freebsd-zfs partition is geli-encrypted below the zfs, then geli should be able to also enlarge the encrypted space. Then, if that works out, ZFS should be able to adjust the useable space via the expandsize feature as usual.

I did this before; it should work that way. But I strongly suggest that, before trying it on live data, you grab a USB stick and run a test case first.

Also, if this is the boot disk, it may complicate matters, and I would suggest booting from somewhere else while doing this operation.
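Roughly, the sequence would look like this (partition index 3 and device ada0 come from the gpart output above; the pool name and the old partition size are placeholders you would substitute from your own system):

gpart recover ada0                            # repair the backup GPT so the full disk is usable
gpart resize -i 3 ada0                        # grow the freebsd-zfs partition into the free space
geli resize -s <old-partition-size> ada0p3    # tell geli the provider's previous size so it can relocate its metadata
zpool online -e <pool> ada0p3.eli             # let ZFS expand into the enlarged provider
zpool list <pool>                             # SIZE/EXPANDSZ should reflect the extra space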
 
It looks like when ada0 was initially partitioned only 290G was used for freebsd-zfs.
Yes - this was the case as far as I remember. Although there is a non-zero possibility of it having had another OS on the remaining space - but that would have shown up, right?
That could be due to the "CORRUPT" from the first line; that usually means the backup gpart info is corrupted, which can happen if you create something like a gmirror after you've partitioned. It could also be due to geli; I've not done that, so it's speculation on my part.
No gmirror was created - I'm not even sure what that means tbh.
What I would do is see what is using up the 290G: lots of snapshots or boot environments can start to use up space but keep in mind that destroying snapshots will only free space when "all the snapshots referencing the blocks are destroyed".
There are a few snapshots that automatically get added during a system upgrade - from 11 -> 13, a few upgrades do take up space. However, when attempting to delete one of them, it complains that it relies on the currently mounted snapshot - which I think wouldn't be a wise thing to delete? Here is the output of beadm list:
beadm list
BE Active Mountpoint Space Created
default - - 7.9G 2019-04-22 05:03
12.0-p11 - - 127.5M 2019-11-10 23:58
12.3-RELEASE-p1_2022-02-02_032518 - - 24.7G 2022-02-02 03:25
12.3-RELEASE-p1_2022-03-18_164224 - - 267.0M 2022-03-18 16:42
12.3-RELEASE-p3_2022-03-23_175807 - - 83.7M 2022-03-23 17:58
12.3-RELEASE-p4_2022-04-06_232036 - - 656.0M 2022-04-06 23:20
12.3-RELEASE-p5_2022-07-01_212910 - - 8.3G 2022-07-01 21:29
13.0-RELEASE-p11_2022-07-01_213226 - - 90.0M 2022-07-01 21:32
12.3-RELEASE-p5_2022-08-10_011525 - - 610.0M 2022-08-10 01:15
12.3-RELEASE-p6_2022-09-03_171127 - - 525.0M 2022-09-03 17:11
12.3-RELEASE-p5_2022-09-10_190846 - - 3.0M 2022-09-10 19:08
12.3-to-13.1 - - 1.4M 2022-09-10 19:12
12.3-RELEASE-p7_2022-09-10_230907 - - 1.4M 2022-09-10 23:09
13.1-RELEASE-p2_2022-09-10_231247 - - 5.9M 2022-09-10 23:12
13.1-RELEASE-p2_2022-09-10_232433 NR / 100.8G 2022-09-10 23:24
13.1-RELEASE-p2_2022-09-11_220401 - - 36.6M 2022-09-11 22:04

How do I proceed?
 
Ahh, so this has been through a few upgrades :) The current boot environment looks to be 13.1-RELEASE-p2_2022-09-10_232433, then we have 13.1-RELEASE-p2_2022-09-11_220401. Based on that I assume that all of the 12.X and default are boot environments no longer needed.
I would do:
bectl destroy -o XYZ
where XYZ is one of:
default
12.0-p11
12.3-RELEASE-p1_2022-02-02_032518
12.3-RELEASE-p1_2022-03-18_164224
12.3-RELEASE-p3_2022-03-23_175807
12.3-RELEASE-p4_2022-04-06_232036
12.3-RELEASE-p5_2022-07-01_212910
13.0-RELEASE-p11_2022-07-01_213226
12.3-RELEASE-p5_2022-08-10_011525
12.3-RELEASE-p6_2022-09-03_171127
12.3-RELEASE-p5_2022-09-10_190846
12.3-to-13.1
12.3-RELEASE-p7_2022-09-10_230907
13.1-RELEASE-p2_2022-09-10_231247

Snapshots/clones are interesting in how they use space. They wind up holding the "blocks changed", so when you destroy a snapshot/clone/boot environment the space "moves" to be accounted in another one. When the last one referring to a block is deleted, the space is reclaimed.
If you do a quick sum of the space in those snapshots there is at least 40G that is probably recoverable.
The first line is showing it to be corrupt. Should I just run this as sudo?
Because of GELI I would hold off.
 
Based on that I assume that all of the 12.X and default are boot environments no longer needed.
I actually didn't use them much earlier, but after installing the beadm boot environment manager they were automatically being snapshotted at every upgrade (which was kinda nice - but due to my ignorance the default was, well.... default). Only recently I read about it again and activated 13.1-RELEASE-p2_2022-09-10_232433 as the active boot environment.
bectl destroy -o XYZ
I've actually never used bectl - only beadm (after reading Absolute FreeBSD). That suggested using the destroy option, but this is what happens when I attempt it, and I'm not sure if I should type 'y' - looks scary! That's my current boot environment.
beadm destroy 12.0-p11
Are you sure you want to destroy '12.0-p11'?
This action cannot be undone (y/[n]): y
Boot environment '12.0-p11' was created from existing snapshot
Destroy '13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0' snapshot? (y/[n]):
Should I be choosing y? Will my current boot environment get affected?
 
beadm/bectl "same same". 2 different tools doing the same thing. bectl is in base, takes same arguments as beadm.
beadm gives the same output for the list -s command.

It should not affect your current boot environment. It feels backwards, but when you run freebsd-update and it creates a snapshot, it's more of a marker for the previous environment.
If you do bectl list -s on your system, you'll see what boot environment is using what snapshot, it gives you a better idea of what you would be deleting.
Feel free to post the bectl/beadm list -s output before you do anything and we'll try to help you understand what it's showing and what the destroy would do.

"default" is the name given when you do an install.
Look at the dates on them, so start with getting rid of the BE named default, which is basically your 11 install, then your 12.0-p11, ... Delete in date order.
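One way to work through that, oldest first (names taken from the list you posted; a sketch only):

bectl destroy default
bectl destroy 12.0-p11
bectl destroy 12.3-RELEASE-p1_2022-02-02_032518
# ...and so on through the 12.X ones and the 13.0 one, in date order,
# leaving the active (NR) 13.1-RELEASE-p2_2022-09-10_232433 alone;
# add -o if you also want each BE's origin snapshot removed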


Here's one from my system. bectl list -s shows the snapshots related to the boot environment.
I know 13.1-RELEASE-p0 is my older one; when I did the update to -p2, freebsd-update created a snapshot.
So if I bectl destroy -o 13.1-RELEASE-p0, it would delete the snapshot 13.1-RELEASE-p2@2022-09-11-06:26:38-0, which is ok because just the snapshot gets deleted, not the dataset that was snapshotted.

bectl list -s
BE/Dataset/Snapshot Active Mountpoint Space Created

13.1-RELEASE-p0
zroot/ROOT/13.1-RELEASE-p0 - - 8K 2022-09-11 06:26
zroot/ROOT/13.1-RELEASE-p2@2022-09-11-06:26:38-0 - - 653M 2022-09-11 06:26

13.1-RELEASE-p2
zroot/ROOT/13.1-RELEASE-p2 NR / 7.98G 2022-05-17 10:28
13.1-RELEASE-p2@2022-09-11-06:26:38-0 - - 653M 2022-09-11 06:26
 
beadm/bectl "same same". 2 different tools doing the same thing. bectl is in base, takes same arguments as beadm.
Thanks - wasn't aware of that.

If you do bectl list -s on your system, you'll see what boot environment is using what snapshot, it gives you a better idea of what you would be deleting.
Feel free to post the bectl/beadm list -s output before you do anything and we'll try to help you understand what it's showing and what the destroy would do.

Ok - here is the output below - please let me know if it looks fine?
I'm not sure because this is the first time I'm doing it.
bectl list -s
BE/Dataset/Snapshot Active Mountpoint Space Created

12.0-p11
zroot/ROOT/12.0-p11 - - 126M 2019-11-10 23:58
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0 - - 1.49M 2022-09-10 23:24

12.3-RELEASE-p1_2022-02-02_032518
zroot/ROOT/12.3-RELEASE-p1_2022-02-02_032518 - - 8K 2022-02-02 03:25
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0 - - 24.7G 2022-02-02 03:25

12.3-RELEASE-p1_2022-03-18_164224
zroot/ROOT/12.3-RELEASE-p1_2022-03-18_164224 - - 8K 2022-03-18 16:42
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-03-18-16:42:24-0 - - 267M 2022-03-18 16:42

12.3-RELEASE-p3_2022-03-23_175807
zroot/ROOT/12.3-RELEASE-p3_2022-03-23_175807 - - 8K 2022-03-23 17:58
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-03-23-17:58:07-0 - - 83.7M 2022-03-23 17:58

12.3-RELEASE-p4_2022-04-06_232036
zroot/ROOT/12.3-RELEASE-p4_2022-04-06_232036 - - 8K 2022-04-06 23:20
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-04-06-23:20:36-0 - - 656M 2022-04-06 23:20

12.3-RELEASE-p5_2022-07-01_212910
zroot/ROOT/12.3-RELEASE-p5_2022-07-01_212910 - - 8.29G 2022-07-01 21:29
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-07-01-21:29:10-0 - - 7.34M 2022-07-01 21:29
12.3-RELEASE-p5_2022-07-01_212910@2022-09-10-19:08:46-0 - - 2.95M 2022-09-10 19:08

12.3-RELEASE-p5_2022-08-10_011525
zroot/ROOT/12.3-RELEASE-p5_2022-08-10_011525 - - 8K 2022-08-10 01:15
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-08-10-01:15:25-0 - - 610M 2022-08-10 01:15

12.3-RELEASE-p5_2022-09-10_190846
zroot/ROOT/12.3-RELEASE-p5_2022-09-10_190846 - - 8K 2022-09-10 19:08
zroot/ROOT/12.3-RELEASE-p5_2022-07-01_212910@2022-09-10-19:08:46-0 - - 2.95M 2022-09-10 19:08

12.3-RELEASE-p6_2022-09-03_171127
zroot/ROOT/12.3-RELEASE-p6_2022-09-03_171127 - - 8K 2022-09-03 17:11
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-03-17:11:27-0 - - 525M 2022-09-03 17:11

12.3-RELEASE-p7_2022-09-10_230907
zroot/ROOT/12.3-RELEASE-p7_2022-09-10_230907 - - 8K 2022-09-10 23:09
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:09:07-0 - - 1.41M 2022-09-10 23:09

12.3-to-13.1
zroot/ROOT/12.3-to-13.1 - - 8K 2022-09-10 19:12
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-19:12:44 - - 1.40M 2022-09-10 19:12

13.0-RELEASE-p11_2022-07-01_213226
zroot/ROOT/13.0-RELEASE-p11_2022-07-01_213226 - - 8K 2022-07-01 21:32
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-07-01-21:32:26-0 - - 90.0M 2022-07-01 21:32

13.1-RELEASE-p2_2022-09-10_231247
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_231247 - - 8K 2022-09-10 23:12
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:12:47-0 - - 5.88M 2022-09-10 23:12

13.1-RELEASE-p2_2022-09-10_232433
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433 NR / 101G 2022-09-10 23:24
13.1-RELEASE-p2_2022-09-10_232433@2019-07-after-problem - - 5.05G 2019-07-10 19:31
13.1-RELEASE-p2_2022-09-10_232433@2019-11-11-01:28:22 - - 5.84G 2019-11-10 23:58
13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0 - - 24.7G 2022-02-02 03:25
13.1-RELEASE-p2_2022-09-10_232433@2022-03-18-16:42:24-0 - - 267M 2022-03-18 16:42
13.1-RELEASE-p2_2022-09-10_232433@2022-03-23-17:58:07-0 - - 83.7M 2022-03-23 17:58
13.1-RELEASE-p2_2022-09-10_232433@2022-04-06-23:20:36-0 - - 656M 2022-04-06 23:20
13.1-RELEASE-p2_2022-09-10_232433@2022-07-01-21:29:10-0 - - 7.34M 2022-07-01 21:29
13.1-RELEASE-p2_2022-09-10_232433@2022-07-01-21:32:26-0 - - 90.0M 2022-07-01 21:32
13.1-RELEASE-p2_2022-09-10_232433@2022-08-10-01:15:25-0 - - 610M 2022-08-10 01:15
13.1-RELEASE-p2_2022-09-10_232433@2022-09-03-17:11:27-0 - - 525M 2022-09-03 17:11
13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-19:12:44 - - 1.40M 2022-09-10 19:12
13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:09:07-0 - - 1.41M 2022-09-10 23:09
13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:12:47-0 - - 5.88M 2022-09-10 23:12
13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0 - - 1.49M 2022-09-10 23:24
13.1-RELEASE-p2_2022-09-10_232433@2022-09-11-22:04:01-0 - - 36.6M 2022-09-11 22:04

13.1-RELEASE-p2_2022-09-11_220401
zroot/ROOT/13.1-RELEASE-p2_2022-09-11_220401 - - 8K 2022-09-11 22:04
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-11-22:04:01-0 - - 36.6M 2022-09-11 22:04

default
zroot/ROOT/default - - 2.09G 2019-04-22 05:03
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2019-11-11-01:28:22 - - 5.84G 2019-11-10 23:58

I guess I'll do the beadm destroy - if it looks fine. Just feeling a bit nervous because the prompt says it references the current boot environment and I'm wanting to "destroy" it 😨 Sounds ominous.
 
Ok - I have to admit - this is quite weird..... I ended up destroying it (so far it seems to have worked; hopefully my machine restarts too) - but the strange thing is it destroyed it even though I selected the 'n' option!!

Here is the output when running as sudo (a normal user is denied permission):
sudo !!
sudo beadm destroy 12.0-p11
Password:
Are you sure you want to destroy '12.0-p11'?
This action cannot be undone (y/[n]): y
Boot environment '12.0-p11' was created from existing snapshot
Destroy '13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0' snapshot? (y/[n]): n
Origin snapshot '13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0' will be preserved
Destroyed successfully
I think it meant to ask if I wanted to destroy the current boot environment too? Why would someone want to do that?

So confusing.
 
Yep, you need to su or sudo. Yep it is confusing.

It references a snapshot of the current boot environment. Snapshots are sometimes a bit weird/hard to understand. ZFS is Copy On Write.
This is going to be "roughly correct", but not strictly correct.
Let's say you have a dataset XYZ. "Right now" it has a file hello in it. You take a snapshot; the snapshot is basically references to all the blocks that make up XYZ "right now". You delete the file hello. What happens? If you look at dataset XYZ, the file hello is not there. But in your snapshot it still exists as it did before you deleted the file.

So snapshots basically have the differences between when the snapshot was taken and the state of the dataset now.
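As a concrete (hypothetical) illustration with a throwaway dataset - the dataset name and mountpoint here are made up:

zfs create -o mountpoint=/tmp/xyz zroot/xyz   # scratch dataset just for the demonstration
echo hi > /tmp/xyz/hello
zfs snapshot zroot/xyz@before                 # the snapshot references the blocks as they are right now
rm /tmp/xyz/hello
ls /tmp/xyz                                   # hello is gone from the live dataset
ls /tmp/xyz/.zfs/snapshot/before              # ...but still readable through the snapshot
zfs destroy zroot/xyz@before                  # only now are those blocks actually freed
zfs destroy zroot/xyz                         # clean up the scratch dataset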

You can always say "n" to destroying the origin snapshot (which is not the boot environment; it's a "this is the state of that dataset at that point in time"). You won't reclaim much, if any, space, but then you'll have a bunch of snapshots taking up space and not really being used. That's ok, though: you can also delete them by hand later, which may be easier to understand.

Maybe do beadm destroy all the 12.X and the 13.0 boot environments, say "no" to deleting the origin. Then we can clean up from there.
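On deleting the leftover origin snapshots by hand later, something like this (a sketch; the snapshot name is one from your earlier listing, and the -n makes it a dry run first):

zfs list -t snapshot -r zroot/ROOT                                                     # see what's left and what each snapshot uses
zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0     # dry run: shows what would happen
zfs destroy zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0         # then for real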
 
Thanks for all the help - I only wish it were a little more explicit about trying to delete the current boot environment (maybe a warning that your system won't boot!).

So another strange thing happened - I deleted a 24.7G snapshot but the disk space available didn't change much 🤔

beadm list
BE Active Mountpoint Space Created
12.3-RELEASE-p1_2022-02-02_032518 - - 24.7G 2022-02-02 03:25
12.3-RELEASE-p1_2022-03-18_164224 - - 267.0M 2022-03-18 16:42
12.3-RELEASE-p3_2022-03-23_175807 - - 83.7M 2022-03-23 17:58
After this I checked the disk space:
df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433 26G 23G 3.0G 88% /
--snip--
Then deleted
sudo beadm destroy 12.3-RELEASE-p1_2022-02-02_032518
Password:
Are you sure you want to destroy '12.3-RELEASE-p1_2022-02-02_032518'?
This action cannot be undone (y/[n]): y
Boot environment '12.3-RELEASE-p1_2022-02-02_032518' was created from existing snapshot
Destroy '13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0' snapshot? (y/[n]): n
Origin snapshot '13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0' will be preserved
Destroyed successfully
Then checked the list again - not there, as expected:
beadm list
BE Active Mountpoint Space Created
12.3-RELEASE-p1_2022-03-18_164224 - - 267.0M 2022-03-18 16:42
12.3-RELEASE-p3_2022-03-23_175807 - - 83.7M 2022-03-23 17:58
However the disk space available didn't change much 🤔 - Why?
df -h
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433 26G 23G 3.1G 88% /

The Capacity and Avail columns didn't change - I was expecting them to reflect the change. So weird again.
 
gpart show ada0
=> 40 625142375 ada0 GPT (466G) [CORRUPT]
40 1024 1 freebsd-boot (512K)
1064 984 - free - (492K)
2048 16777216 2 freebsd-swap (8.0G)
16779264 608362496 3 freebsd-zfs (290G)
625141760 655 - free - (328K)

625142375 sectors - 40 sectors = 625142335 sectors; 625142335 sectors * 512 bytes per sector = 320072875520 bytes ≈ 298G

What is the output of diskinfo -v ada0, gpart list, and geli list -a?
 
However the disk space available didn't change much 🤔 - Why?
Sorry, had to sleep.
The "Why" it's the blessing and the curse of snapshots. The space isn't reclaimed until all snapshots using the blocks are destroyed. Until then, the used blocks just move.
 
What is the output of diskinfo -v ada0
Output :
sudo diskinfo -v ada0
Password:
ada0
512 # sectorsize
500107862016 # mediasize in bytes (466G)
976773168 # mediasize in sectors
0 # stripesize
0 # stripeoffset
969021 # Cylinders according to firmware.
16 # Heads according to firmware.
63 # Sectors according to firmware.
Samsung SSD 860 EVO 500GB # Disk descr.
S4FNNJ0N101593D # Disk ident.
ahcich0 # Attachment
id1,enc@n3061686369656d30/type@0/slot@1/elmdesc@Slot_00 # Physical path
Yes # TRIM/UNMAP support
0 # Rotation rate in RPM
Not_Zoned # Zone Mode

and gpart list
Output :
gpart list
Geom name: ada0
modified: false
state: CORRUPT
fwheads: 16
fwsectors: 63
last: 625142414
first: 40
entries: 128
scheme: GPT
Providers:
1. Name: ada0p1
Mediasize: 524288 (512K)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 20480
Mode: r0w0e0
efimedia: HD(1,GPT,66ddba09-649a-11e9-8598-c80aa9338a75,0x28,0x400)
rawuuid: 66ddba09-649a-11e9-8598-c80aa9338a75
rawtype: 83bd6b9d-7f41-11dc-be0b-001560b84f0f
label: gptboot0
length: 524288
offset: 20480
type: freebsd-boot
index: 1
end: 1063
start: 40
2. Name: ada0p2
Mediasize: 8589934592 (8.0G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 1048576
Mode: r1w1e1
efimedia: HD(2,GPT,6732bd2b-649a-11e9-8598-c80aa9338a75,0x800,0x1000000)
rawuuid: 6732bd2b-649a-11e9-8598-c80aa9338a75
rawtype: 516e7cb5-6ecf-11d6-8ff8-00022d09712b
label: swap0
length: 8589934592
offset: 1048576
type: freebsd-swap
index: 2
end: 16779263
start: 2048
3. Name: ada0p3
Mediasize: 311481597952 (290G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 8590983168
Mode: r1w1e1
efimedia: HD(3,GPT,675f422b-649a-11e9-8598-c80aa9338a75,0x1000800,0x2442e000)
rawuuid: 675f422b-649a-11e9-8598-c80aa9338a75
rawtype: 516e7cba-6ecf-11d6-8ff8-00022d09712b
label: zfs0
length: 311481597952
offset: 8590983168
type: freebsd-zfs
index: 3
end: 625141759
start: 16779264
Consumers:
1. Name: ada0
Mediasize: 500107862016 (466G)
Sectorsize: 512
Mode: r2w2e4
 
Sorry, had to sleep.
The "Why" it's the blessing and the curse of snapshots. The space isn't reclaimed until all snapshots using the blocks are destroyed. Until then, the used blocks just move.
No worries - I appreciate all the help. I'm not sure why the recovered space isn't showing up after destroying the 24.7G snapshot. I didn't quite understand what you meant. I still have a few snapshots that I am going to delete - but I'm just trying to understand one step at a time, conceptually.
 
Let's try something non-destructive; it may help understanding. I left out a little bit: boot environments are not just snapshots, they are also clones based on a snapshot. Why? Snapshots are not writeable; you can roll back to them, but you can't write to them. Clones are writeable, so you take a snapshot, then create a clone, and modify the clone. Leave that aside for a moment.
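For what it's worth, the relationship looks roughly like this at the ZFS level (the names here are hypothetical, just to illustrate):

zfs snapshot zroot/ROOT/current@marker                   # read-only point-in-time state of the dataset
zfs clone zroot/ROOT/current@marker zroot/ROOT/new-be    # a writable dataset based on that snapshot
zfs get origin zroot/ROOT/new-be                         # reports zroot/ROOT/current@marker as its origin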

You're concerned about the snapshot zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0 which is related to your 13.1-RELEASE-p2_2022-09-11_220401 boot environment.

Take note of the "-nv": the "n" means "don't do anything" and the "v" means verbose. The output tells you what the command would do - things like how much space would be reclaimed, or whether the snapshot is in use by something else.

zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0

Here's what I get on my system. Notice my active BE and the snapshot under it.
bectl list -s
BE/Dataset/Snapshot Active Mountpoint Space Created

13.1-RELEASE-p0
zroot/ROOT/13.1-RELEASE-p0 - - 8K 2022-09-11 06:26
zroot/ROOT/13.1-RELEASE-p2@2022-09-11-06:26:38-0 - - 660M 2022-09-11 06:26

13.1-RELEASE-p2
zroot/ROOT/13.1-RELEASE-p2 NR / 7.99G 2022-05-17 10:28
13.1-RELEASE-p2@2022-09-11-06:26:38-0 - - 660M 2022-09-11 06:26

Then see what it says when I do the following:
zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2@2022-09-11-06:26:38-0
cannot destroy 'zroot/ROOT/13.1-RELEASE-p2@2022-09-11-06:26:38-0': snapshot has dependent clones
use '-R' to destroy the following datasets:
zroot/ROOT/13.1-RELEASE-p0
would destroy zroot/ROOT/13.1-RELEASE-p2@2022-09-11-06:26:38-0

See how it says that my 13.1-RELEASE-p0 depends on the snapshot? That means my -p2 BE does not so I could safely answer "yes" on beadm destroy 13.1-RELEASE-p0 when it asks about the origin snapshot.

If you compare the output of the zfs destroy -nv command with the output of bectl list -s, you should see a pattern.
Basically, if you beadm destroy all the 12.X and the one named "default", then do the zfs destroy -nv command, it will say roughly "can delete and would reclaim XXX space".
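For comparison, once nothing depends on a snapshot any more, the dry run looks roughly like this (the size shown is just an example):

zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0
would destroy zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-02-02-03:25:18-0
would reclaim 24.7G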
 
Thanks - I wasn't really aware of the difference between snapshots and clones - I'll try to keep that in mind.

I tried to follow the commands, but it says the snapshot doesn't exist - Did I nuke something? 😱
bectl list
BE Active Mountpoint Space Created
12.3-RELEASE-p1_2022-03-18_164224 - - 267M 2022-03-18 16:42
12.3-RELEASE-p3_2022-03-23_175807 - - 83.7M 2022-03-23 17:58
12.3-RELEASE-p4_2022-04-06_232036 - - 656M 2022-04-06 23:20
12.3-RELEASE-p5_2022-07-01_212910 - - 8.29G 2022-07-01 21:29
12.3-RELEASE-p5_2022-08-10_011525 - - 610M 2022-08-10 01:15
12.3-RELEASE-p5_2022-09-10_190846 - - 2.96M 2022-09-10 19:08
12.3-RELEASE-p6_2022-09-03_171127 - - 525M 2022-09-03 17:11
12.3-RELEASE-p7_2022-09-10_230907 - - 1.42M 2022-09-10 23:09
12.3-to-13.1 - - 1.41M 2022-09-10 19:12
13.0-RELEASE-p11_2022-07-01_213226 - - 90.0M 2022-07-01 21:32
13.1-RELEASE-p2_2022-09-10_231247 - - 5.89M 2022-09-10 23:12
13.1-RELEASE-p2_2022-09-10_232433 NR / 101G 2022-09-10 23:24
13.1-RELEASE-p2_2022-09-11_220401 - - 36.6M 2022-09-11 22:04
[c1utt4r@toaster /usr/src]$ zfs destroy -nv 12.3-RELEASE-p3_2022-03-23_175807
cannot open '12.3-RELEASE-p3_2022-03-23_175807': dataset does not exist
[c1utt4r@toaster /usr/src]$ zfs destroy -nv 13.0-RELEASE-p11_2022-07-01_213226
cannot open '13.0-RELEASE-p11_2022-07-01_213226': dataset does not exist
[c1utt4r@toaster /usr/src]$ zfs destroy -nv 13.1-RELEASE-p2_2022-09-10_232433
cannot open '13.1-RELEASE-p2_2022-09-10_232433': dataset does not exist
 
geli list -a
geli list -a
Geom name: ada0p3.eli
State: ACTIVE
EncryptionAlgorithm: AES-XTS
KeyLength: 256
Crypto: accelerated software
Version: 7
UsedKey: 0
Flags: BOOT, GELIBOOT
KeysAllocated: 73
KeysTotal: 73
Providers:
1. Name: ada0p3.eli
Mediasize: 311481593856 (290G)
Sectorsize: 4096
Mode: r1w1e1
Consumers:
1. Name: ada0p3
Mediasize: 311481597952 (290G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 8590983168
Mode: r1w1e1

Geom name: ada0p2.eli
State: ACTIVE
EncryptionAlgorithm: AES-XTS
KeyLength: 128
Crypto: accelerated software
Version: 7
Flags: ONETIME, W-DETACH, W-OPEN, AUTORESIZE
KeysAllocated: 2
KeysTotal: 2
Providers:
1. Name: ada0p2.eli
Mediasize: 8589934592 (8.0G)
Sectorsize: 4096
Mode: r1w1e0
Consumers:
1. Name: ada0p2
Mediasize: 8589934592 (8.0G)
Sectorsize: 512
Stripesize: 0
Stripeoffset: 1048576
Mode: r1w1e1
 
tried to follow the commands, but it says the snapshot doesn't exist - Did I nuke something?
Nope, you haven't specified the snapshot name correctly. You've specified the name of the BE, which I believe is the name of the clone.

This is based on your output of bectl list -s earlier, this is a snapshot off your currently active BE. notice how it has zroot/ROOT/blahblahblah?

zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0
 
Nope, you haven't specified the snapshot name correctly. You've specified the name of the BE, which I believe is the name of the clone.

This is based on your output of bectl list -s earlier, this is a snapshot off your currently active BE. notice how it has zroot/ROOT/blahblahblah?

zfs destroy -nv zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433@2022-09-10-23:24:33-0
Correct - that makes sense. Although I'm still not sure why, after destroying the 24.7G boot environment, the freed-up space isn't reflected in the available disk space. The current environment still shows much less available space than what was destroyed.
Filesystem Size Used Avail Capacity Mounted on
zroot/ROOT/13.1-RELEASE-p2_2022-09-10_232433 26G 23G 2.9G 89% /
---snip---
 

Read PMc's post #5 about resizing the geli first.

I suggest you make a full backup first. Then run gpart recover ada0 and see if the free space shows up at the end of the disk. If you see the free space after ada0p3, you can resize it with gpart resize -i 3 -a 4k -s 450G ada0, then set zpool set autoexpand=on zroot, followed by zpool online -e zroot ada0p3.eli. Then verify the pool size via zpool list.

Note: in your geli list output only the swap provider (ada0p2.eli) shows the AUTORESIZE flag; ada0p3.eli does not, so it may not pick up the new partition size automatically and you may need a geli resize on it (see PMc's post).

To summarize:

BACKUP first
gpart recover ada0
gpart resize -i 3 -a 4k -s 450G ada0
zpool set autoexpand=on zroot
zpool online -e zroot ada0p3.eli
zpool list
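If the .eli provider does not grow on its own after the gpart resize (see the note above about the missing AUTORESIZE flag), a possible extra step between the gpart resize and the zpool commands would be:

geli resize -s 311481597952 ada0p3    # the partition's previous size in bytes, taken from the gpart list output above

Afterwards, each layer can be checked:

gpart show ada0
geli list ada0p3.eli
zpool list zroot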
 