ZFS Snapshot deletion dilemma - What to check before deletion

I am yet another member with the same old question. I do not have a recent backup, so I need to ask the following:

Following are the outputs of
Code:
#zfs list
NAME                                           USED  AVAIL     REFER  MOUNTPOINT
zroot                                          457G  55.2M       96K  /zroot
zroot/ROOT                                     452G  55.2M       96K  none
zroot/ROOT/13.1-RELEASE-p2_2022-11-02_174902     8K  55.2M     80.6G  /
zroot/ROOT/13.1-RELEASE-p3_2022-11-19_121059     8K  55.2M     89.2G  /
zroot/ROOT/13.1-RELEASE-p4_2022-11-23_174834     8K  55.2M     93.1G  /
zroot/ROOT/13.1-RELEASE-p4_2022-11-30_161004     8K  55.2M      117G  /
zroot/ROOT/13.1-RELEASE-p5_2023-02-13_104743     8K  55.2M      201G  /
zroot/ROOT/13.1-RELEASE-p6_2023-02-17_045441     8K  55.2M      202G  /
zroot/ROOT/13.1-RELEASE-p7_2023-06-28_133616     8K  55.2M      191G  /
zroot/ROOT/13.1-RELEASE_2022-10-06_170110        8K  55.2M     1.10G  /
zroot/ROOT/default                             452G  55.2M      173G  /
zroot/tmp                                      468K  55.2M      468K  /tmp
zroot/usr                                     3.75G  55.2M       96K  /usr
zroot/usr/home                                3.00G  55.2M     3.00G  /usr/home
zroot/usr/ports                                 96K  55.2M       96K  /usr/ports
zroot/usr/src                                  760M  55.2M      760M  /usr/src
zroot/var                                     1.29G  55.2M       96K  /var
zroot/var/audit                                 96K  55.2M       96K  /var/audit
zroot/var/crash                               1.29G  55.2M     1.29G  /var/crash
zroot/var/log                                 1.87M  55.2M     1.87M  /var/log
zroot/var/mail                                 160K  55.2M      160K  /var/mail
zroot/var/tmp                                  120K  55.2M      120K  /var/tmp


#zfs list -rt snapshot zroot | sort -k 2
NAME                                       USED  AVAIL     REFER  MOUNTPOINT
zroot/ROOT/default@2023-02-13-10:47:43-0  16.5G      -      201G  -
zroot/ROOT/default@2023-02-17-04:54:41-0  17.7G      -      202G  -
zroot/ROOT/default@2022-11-02-17:49:02-0  19.3G      -     80.6G  -
zroot/ROOT/default@2023-06-28-13:36:16-0  21.1G      -      191G  -
zroot/ROOT/default@2022-11-19-12:10:59-0  3.17G      -     89.2G  -
zroot/ROOT/default@2022-11-30-16:10:04-0  34.5G      -      117G  -
zroot/ROOT/default@2022-11-23-17:48:34-0  4.13G      -     93.1G  -
zroot/ROOT/default@2022-10-06-17:01:10-0  76.9M      -     1.10G  -


#bectl list
BE                                Active Mountpoint Space Created
13.1-RELEASE-p2_2022-11-02_174902 -      -          19.3G 2022-11-02 17:49
13.1-RELEASE-p3_2022-11-19_121059 -      -          3.17G 2022-11-19 12:10
13.1-RELEASE-p4_2022-11-23_174834 -      -          4.13G 2022-11-23 17:48
13.1-RELEASE-p4_2022-11-30_161004 -      -          34.5G 2022-11-30 16:10
13.1-RELEASE-p5_2023-02-13_104743 -      -          16.5G 2023-02-13 10:47
13.1-RELEASE-p6_2023-02-17_045441 -      -          17.7G 2023-02-17 04:54
13.1-RELEASE-p7_2023-06-28_133616 -      -          21.1G 2023-06-28 13:36
13.1-RELEASE_2022-10-06_170110    -      -          76.9M 2022-10-06 17:01
default                           NR     /          452G  2022-10-06 16:51


Q1] What additional checks do I need to do to ensure that a particular snapshot is safe to delete?
Q2] Is it advisable to delete a more recent snapshot before an older one (provided it is not active, i.e. marked as NR)?
Q3] Can I delete the ones which are not marked NR?
Q4] In case one deletes a snapshot that they should not have, for example the one marked NR, does that mean their data is lost, or is it that one can't boot but the data is safe?

I deleted a lot of large files and realized it was of no help, since the snapshots still reference that data. I need to make space. My disk is reaching a point where I will need space to make some space.

Any suggestions?
 
Snapshots that are related to Boot Environments should be deleted via the bectl command.

From the outputs you show, I think you are safe running

Code:
bectl destroy -o <beName>

for every BE listed except the one named default, since that's what you're currently using. The -o flag also destroys the BE's origin snapshot, which is where the space is actually held.
I tend to delete BEs in chronological order, oldest to latest.
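
A minimal sketch following that advice (BE names taken from the bectl list output above; double-check against bectl list before running, and skip any you want to keep):

Code:
# Destroy inactive boot environments, oldest first.
# -o also destroys each BE's origin snapshot, which is
# where most of the reclaimable space actually lives.
for be in 13.1-RELEASE_2022-10-06_170110 \
          13.1-RELEASE-p2_2022-11-02_174902 \
          13.1-RELEASE-p3_2022-11-19_121059 \
          13.1-RELEASE-p4_2022-11-23_174834 \
          13.1-RELEASE-p4_2022-11-30_161004 \
          13.1-RELEASE-p5_2023-02-13_104743 \
          13.1-RELEASE-p6_2023-02-17_045441 \
          13.1-RELEASE-p7_2023-06-28_133616; do
    bectl destroy -o "$be"
done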
 
Thank you for responding.
Okay so I'll keep this one
default NR / 452G 2022-10-06 16:51

And the rest will be deleted.


Since BE = boot environment, even if someone uses -F and deletes the active one, the maximum damage would be that they are unable to boot, right? I take it the rest of the data, namely /usr, /root, etc., would be intact. Correct?

I'll update you after deletion and a reboot.
 
Never use -F unless you have a very good (known issue you are working around) reason.

Snapshots are only there to enable you to get to old versions, in general, so, if you have no need to get to versions, it is “safe” to delete the snapshots.
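
Before deleting anything, note that zfs destroy has a dry-run mode, so you can see what removing a snapshot would reclaim without touching it (snapshot name below is taken from your listing):

Code:
# -n: dry run, -v: verbose. Prints what would be destroyed and
# how much space would be reclaimed, without destroying anything.
zfs destroy -nv zroot/ROOT/default@2022-10-06-17:01:10-0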

One twist is with clones (which are used for boot environments): ZFS won't let you delete a snapshot that is the origin of an existing clone.
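
You can see those clone relationships directly: the origin property of each BE dataset names the snapshot it was cloned from.

Code:
# List the origin snapshot of every dataset under zroot/ROOT.
# A value of "-" means the dataset is not a clone.
zfs get -r origin zroot/ROOT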
 
Thank you mer for helping out.

Here's what it looks like now
Code:
# bectl list
BE                                Active Mountpoint Space Created
13.1-RELEASE-p7_2023-06-28_133616 -      -          21.8G 2023-06-28 13:36
default                           NR     /          195G  2022-10-06 16:51

# df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    430G    173G    257G    40%    /
devfs                 1.0K    1.0K      0B   100%    /dev
/dev/nvd0p1           260M    1.8M    258M     1%    /boot/efi
fdescfs               1.0K    1.0K      0B   100%    /dev/fd
procfs                4.0K    4.0K      0B   100%    /proc
linprocfs             4.0K    4.0K      0B   100%    /compat/linux/proc
linsysfs              4.0K    4.0K      0B   100%    /compat/linux/sys
tmpfs                  30G    4.0K     30G     0%    /compat/linux/dev/shm
zroot/tmp             257G    9.1M    257G     0%    /tmp
zroot/var/audit       257G     96K    257G     0%    /var/audit
zroot/var/log         257G    1.9M    257G     0%    /var/log
zroot/usr/ports       257G     96K    257G     0%    /usr/ports
zroot                 257G     96K    257G     0%    /zroot
zroot/usr/src         258G    760M    257G     0%    /usr/src
zroot/usr/home        260G    3.0G    257G     1%    /usr/home
zroot/var/crash       259G    1.3G    257G     0%    /var/crash
zroot/var/mail        257G    160K    257G     0%    /var/mail
zroot/var/tmp         257G    120K    257G     0%    /var/tmp
linprocfs             4.0K    4.0K      0B   100%    /compat/ubuntu/proc
linsysfs              4.0K    4.0K      0B   100%    /compat/ubuntu/sys
devfs                 1.0K    1.0K      0B   100%    /compat/ubuntu/dev
fdescfs               1.0K    1.0K      0B   100%    /compat/ubuntu/dev/fd
tmpfs                  30G    107M     30G     0%    /compat/ubuntu/dev/shm
/tmp                  257G    9.1M    257G     0%    /compat/ubuntu/tmp
procfs                4.0K    4.0K      0B   100%    /proc


I let the most recent one live in case something went wrong, and as a future means to reclaim 21 GB if needed. Also, too much free space is tempting. It sucks how a naive, uninformed user who just uses the OS, running update after update, would end up in a situation through no doing of their own where the disk gets full, leaving them without updates. FreeBSD should know better than to act like MS Windows. At the same time, this space hoarding is rewarding for those who know what to delete.
I feel like someone bought me an additional SSD. :) Thank you for helping me reclaim around nearly 250 GB of disk space.
 
the maximum damage would be that they are unable to boot, right? I take it the rest of the data, namely /usr, /root, etc., would be intact. Correct?
They would not be able to boot.
Anything in a dataset that is not part of the BE would be preserved.
The default install creates a number of datasets, including one for /usr, but the /usr dataset itself is typically not mounted.
That means anything created under /usr, such as /usr/local, winds up in the "root dataset", which is your BE.
Some things like /usr/home are their own mounted dataset, so they should still be there; a lot of things under /var should be too.
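
You can check which datasets actually mount, and therefore hold data outside the BE, via the canmount and mounted properties:

Code:
# Datasets with canmount=off (like zroot/usr itself) never mount,
# so files written under their paths land in the BE's root dataset.
zfs get -r canmount,mounted zroot/usr zroot/var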

It sucks how a naive, uninformed user who just uses the OS, running update after update, would end up in a situation through no doing of their own where the disk gets full, leaving them without updates. FreeBSD should know better than to act like MS Windows.
Well, one could argue that "if you are running FreeBSD, the probability that you are an uninformed user is low" :)
But I don't disagree with your point. I'm not sure if there are plans or anything regarding "modifying freebsd-update to only keep X number of previous BEs".

Why? Some people like to keep them around for testing, others like to have as few as possible. I don't think there is a single correct opinion on this.
Me personally? I tend to keep only one. Run freebsd-update, get a new BE, run the new one for a bit to truly verify everything works, then delete the previous one.
If the update brings a need to update bootloader bits, say going from 12.x to 13.x, I'm more cautious. But I will also create a new BE if I am mucking with system config or anything else that could make the system unbootable.

Boot Environments are one of the best features of using ZFS in my opinion. They can make upgrading as painless as possible.
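
The create-before-change workflow mentioned above is only a couple of commands; a minimal sketch (the BE name is illustrative):

Code:
# Capture the running system as a new BE before risky changes.
bectl create pre-config-change
# ...make the changes. If the system misbehaves, boot back into it:
bectl activate pre-config-change
shutdown -r now
# Once satisfied the new state works, clean up:
bectl destroy -o pre-config-change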
 