How to clean ZFS safely

I have a FreeBSD 13.2 server on ESXi.
Only about 5 GB is in use, but my VM backup is too big.

Code:
# df -m
Filesystem         1M-blocks Used Avail Capacity  Mounted on
zroot/ROOT/default     12375 5038  7337    41%    /
devfs                      0    0     0   100%    /dev
zroot/tmp               7337    0  7337     0%    /tmp
zroot                   7337    0  7337     0%    /zroot
zroot/usr/src           7337    0  7337     0%    /usr/src
zroot/usr/home          7373   36  7337     0%    /usr/home
zroot/var/audit         7337    0  7337     0%    /var/audit
zroot/var/crash         7337    0  7337     0%    /var/crash
zroot/var/log           7363   26  7337     0%    /var/log
zroot/usr/ports         7337    0  7337     0%    /usr/ports
zroot/var/mail          7337    0  7337     0%    /var/mail
zroot/var/tmp           7337    0  7337     0%    /var/tmp

I see that ZFS reports a strange size:

Code:
# zfs list
NAME                                            USED  AVAIL     REFER  MOUNTPOINT
zroot                                          19.7G  7.17G       88K  /zroot
zroot/ROOT                                     19.6G  7.17G       88K  none
zroot/ROOT/12.3-RELEASE-p12_2023-08-19_090920     8K  7.17G     12.6G  /
zroot/ROOT/12.4-RELEASE-p4_2023-08-19_091757      8K  7.17G     12.6G  /
zroot/ROOT/12.4-RELEASE-p4_2023-08-20_133240      8K  7.17G     13.7G  /
zroot/ROOT/13.2-RELEASE-p2_2023-08-20_133623      8K  7.17G     13.8G  /
zroot/ROOT/13.2-RELEASE-p2_2023-08-20_150850      8K  7.17G     14.1G  /
zroot/ROOT/default                             19.6G  7.17G     4.92G  /
zroot/tmp                                       328K  7.17G      328K  /tmp
zroot/usr                                      36.8M  7.17G       88K  /usr
zroot/usr/home                                 36.5M  7.17G     36.5M  /usr/home
zroot/usr/ports                                  88K  7.17G       88K  /usr/ports
zroot/usr/src                                    88K  7.17G       88K  /usr/src
zroot/var                                      26.8M  7.17G       88K  /var
zroot/var/audit                                  88K  7.17G       88K  /var/audit
zroot/var/crash                                  88K  7.17G       88K  /var/crash
zroot/var/log                                  26.4M  7.17G     26.4M  /var/log
zroot/var/mail                                  120K  7.17G      120K  /var/mail
zroot/var/tmp                                    88K  7.17G       88K  /var/tmp
Is it possible to free up space on ZFS?

If I understand correctly, I have additional images, but how do I properly and safely delete them?

Code:
# bectl list
BE                                 Active Mountpoint Space Created
12.3-RELEASE-p12_2023-08-19_090920 -      -          171M  2023-08-19 09:09
12.4-RELEASE-p4_2023-08-19_091757  -      -          7.40M 2023-08-19 09:17
12.4-RELEASE-p4_2023-08-20_133240  -      -          7.17M 2023-08-20 13:32
13.2-RELEASE-p2_2023-08-20_133623  -      -          6.43M 2023-08-20 13:36
13.2-RELEASE-p2_2023-08-20_150850  -      -          31.9M 2023-08-20 15:08
default                            NR     /          19.6G 2017-10-10 14:45
The default snapshot is the largest: it is active and shows 19.6 GB.
I don't understand this, because all the data I have on disk is only about 5 GB. Why is this snapshot so big?
 
Code:
     destroy [-Fo] beName[@snapshot]
               Destroy the given beName boot environment or beName@snapshot
               snapshot without confirmation, unlike in beadm(1).  Specifying
               -F will automatically unmount without confirmation.

               By default, bectl will warn that it is not destroying the
               origin of beName.  The -o flag may be specified to destroy the
               origin as well.
bectl(8)
 
I don't understand what the right thing to delete is. The default image is active. Should I delete it? Can I delete all the images, or do I have to keep some?
The oldest image is the largest one.
 
If you look at bectl list you'll see that default is marked as the Running BE and the Next boot. So you'll want to keep that one. The others can be removed if you don't need them any more. They're automatically created with freebsd-update(8).

Code:
     CreateBootEnv            The single parameter following this keyword must
                              be “yes” or “no” and specifies whether
                              freebsd-update(8) will create a new boot
                              environment using bectl(8) when installing
                              patches.
freebsd-update.conf(5)
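For example, one of the old, inactive boot environments could be removed like this (just a sketch; the name comes from the bectl list output above, and the BE marked NR has to stay):

Code:
# confirm which BE is marked NR (Running / Next boot) and keep that one
bectl list
# destroy an inactive boot environment by name
bectl destroy 12.3-RELEASE-p12_2023-08-19_090920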
 
df won't give any meaningful information on ZFS.

Also I don't see anything that would imply you are only using 5GB; zfs list clearly states there's 19.7GB used.
What is the output of zfs list -o space? (and maybe also zpool list)
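For reference, both are plain one-liners and show different views of the same pool:

Code:
# per-dataset space accounting, including space held by snapshots (USEDSNAP)
zfs list -o space
# pool-level totals: size, allocated and free space
zpool list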
 
Yes. df tells you how much space is occupied and in use, and that's 5 GB.

ZFS stores snapshots as well, but I don't understand why default is larger than all the files on the system.

Code:
 # zfs list -o space
NAME                                           AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot                                          7.17G  19.7G        0B     88K             0B      19.7G
zroot/ROOT                                     7.17G  19.6G        0B     88K             0B      19.6G
zroot/ROOT/12.3-RELEASE-p12_2023-08-19_090920  7.17G     8K        0B      8K             0B         0B
zroot/ROOT/12.4-RELEASE-p4_2023-08-19_091757   7.17G     8K        0B      8K             0B         0B
zroot/ROOT/12.4-RELEASE-p4_2023-08-20_133240   7.17G     8K        0B      8K             0B         0B
zroot/ROOT/13.2-RELEASE-p2_2023-08-20_133623   7.17G     8K        0B      8K             0B         0B
zroot/ROOT/13.2-RELEASE-p2_2023-08-20_150850   7.17G     8K        0B      8K             0B         0B
zroot/ROOT/default                             7.17G  19.6G     14.7G   4.92G             0B         0B
zroot/tmp                                      7.17G   328K        0B    328K             0B         0B
zroot/usr                                      7.17G  36.8M        0B     88K             0B      36.7M
zroot/usr/home                                 7.17G  36.5M        0B   36.5M             0B         0B
zroot/usr/ports                                7.17G    88K        0B     88K             0B         0B
zroot/usr/src                                  7.17G    88K        0B     88K             0B         0B
zroot/var                                      7.17G  26.3M        0B     88K             0B      26.2M
zroot/var/audit                                7.17G    88K        0B     88K             0B         0B
zroot/var/crash                                7.17G    88K        0B     88K             0B         0B
zroot/var/log                                  7.17G  25.8M        0B   25.8M             0B         0B
zroot/var/mail                                 7.17G   120K        0B    120K             0B         0B
zroot/var/tmp                                  7.17G    88K        0B     88K             0B         0B
 

From the (almost unreadable - please use code blocks for such output) output of zfs list -o space you can see that the zroot/ROOT/default dataset uses 14.7G for snapshots.

Regarding df/du: they can only show (roughly) the amount of referenced data on ZFS. Both tools have absolutely no concept of anything ZFS-related such as compression, clones, or snapshots, let alone can they take unmounted datasets into account, so they will never show numbers that reflect actual on-disk usage (or the size of your VM's disk image).
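If you want to see which snapshots are actually holding that 14.7G, you can list them directly (a standard zfs invocation; the names will be the snapshots that bectl/freebsd-update created):

Code:
# list every snapshot under zroot/ROOT together with its space usage
zfs list -t snapshot -r -o name,used,refer zroot/ROOT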
 
Look. I understand what you're saying.

What I can't understand is why default was bigger than all the files.

So, I deleted everything but default.

Here's what we have:

Code:
# bectl list
BE      Active Mountpoint Space Created
default NR     /          4.92G 2017-10-10 14:45

Now we have 5 GB in ZFS and 5 GB in df.

The question is why the size was shown to be so large. Is it counting the size of all previous snapshots?

171 MB + 7 MB + 7 MB + 6 MB + 31 MB + 5 GB (df) < 19 GB
Now I have:
Code:
 # zfs list -o space
NAME                AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
zroot               21.9G  5.00G        0B     88K             0B      5.00G
zroot/ROOT          21.9G  4.92G        0B     88K             0B      4.92G
zroot/ROOT/default  21.9G  4.92G        0B   4.92G             0B         0B
zroot/tmp           21.9G   328K        0B    328K             0B         0B
zroot/usr           21.9G  36.8M        0B     88K             0B      36.7M
zroot/usr/home      21.9G  36.5M        0B   36.5M             0B         0B
zroot/usr/ports     21.9G    88K        0B     88K             0B         0B
zroot/usr/src       21.9G    88K        0B     88K             0B         0B
zroot/var           21.9G  26.3M        0B     88K             0B      26.2M
zroot/var/audit     21.9G    88K        0B     88K             0B         0B
zroot/var/crash     21.9G    88K        0B     88K             0B         0B
zroot/var/log       21.9G  25.9M        0B   25.9M             0B         0B
zroot/var/mail      21.9G   120K        0B    120K             0B         0B
zroot/var/tmp       21.9G    88K        0B     88K             0B         0B
 
Those BEs are created based on ZFS snapshots.
Before deleting, it was:
snapshot 1: 171 MB
snapshot 2: 7 MB
snapshot 3: 7 MB
snapshot 4: 6 MB
snapshot 5: 31 MB
snapshot default: 19 GB

I deleted 1, 2, 3, 4 and 5 = 222 MB.
Why did the default shrink to 5 GB after this procedure?

Where did 12 GB go, even though I didn't touch the default?
 
This is my understanding; some of it may not be strictly correct according to the code, but it should be reasonably close.

ZFS is copy-on-write. A snapshot is initially little more than a list of the blocks that exist at a specific point in time, which means the space used by a fresh snapshot is almost nothing.
As things change, copy-on-write comes into play and blocks wind up migrating owners: when data is changed in the dataset, the space taken by the old blocks ends up being charged to the snapshot.

In your specific example, the initial install created the boot environment (snapshot/clone) named default. You ran freebsd-update, which created a new boot environment and then modified files in the BE named default. The blocks that were originally owned by "default" moved to being owned by the new BE, so the size of that snapshot increases.
Repeat that for every file that gets overwritten during the update of "default", and the size of the new BE keeps going up.

An important part of understanding snapshots and copy-on-write is that a block is not freed until all references to it have been removed.
If a block is referenced by 5 snapshots, the block is not removed until all 5 snapshots have been deleted.
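A small sketch, in case it helps, to watch this happen on a throwaway dataset (the name zroot/demo is made up for the example; don't run it against anything you care about):

Code:
# create a scratch dataset and write ~100 MB into it
zfs create zroot/demo
dd if=/dev/urandom of=/zroot/demo/file bs=1m count=100
# a fresh snapshot uses almost no space
zfs snapshot zroot/demo@before
# overwrite the file: the old blocks are now held only by the snapshot
dd if=/dev/urandom of=/zroot/demo/file bs=1m count=100
zfs list -o space zroot/demo
# once the last reference is gone, the space is freed again
zfs destroy zroot/demo@before
zfs destroy zroot/demo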
 