ZFS How to create a graphical menu, like the GRUB one, where one can choose which ZFS snapshot to boot FreeBSD from.

Hello.

I've been a FreeBSD user for some years. After having used Linux for 20 years, I decided to mostly stop using it for technical reasons, mainly tied to package management. Now that I'm using FreeBSD, I don't regret that choice, because FreeBSD's package management rocks, while that of some Linux distributions is unnecessarily complicated and plagued by unresolved dependencies. I've also used NixOS for some time and I was impressed by its system management, so I would like to ask whether it is possible, in some way, to create a graphical menu for FreeBSD, like the GRUB one, where users can choose, before FreeBSD starts booting, which ZFS (or UFS) snapshot they want to use. Thanks.
 
I would like to ask whether it is possible, in some way, to create a graphical menu for FreeBSD, like the GRUB one, where users can choose, before FreeBSD starts booting, which ZFS ... snapshot they want to use.
The FreeBSD boot menu already has this. When boot environments are created with bectl(8) or sysutils/beadm, they appear as
Code:
Options:
6.
7.
8. Boot Environments
When 8 is chosen, another menu is displayed with a 2. Active: option to choose the boot environment.
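For example, a boot environment created on the command line with bectl(8) shows up in that menu automatically on the next boot. A minimal sketch (the name "pre-upgrade" is just an example):
Code:
# create a new boot environment, e.g. before a risky upgrade
bectl create pre-upgrade

# list all boot environments; these are the entries offered under
# "8. Boot Environments" in the loader menu
bectl list

# mark one as the default for the next boot
bectl activate pre-upgrade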

(or UFS) snapshot they want to use.
UFS has no official boot environment support. vermaden has written a shell script to create UFS boot environments, but the last time I checked it worked on BIOS systems, not UEFI. It requires multiple whole FreeBSD systems installed in parallel, no boot menu entry is created, and the UFS boot environment has to be activated from a running system on the command line.
 
It depends on what is meant by "graphical".
Standard GRUB provides a text list of the boot options.
The standard FreeBSD loader has a text representation of boot environments, but if I recall correctly it may not show a list of all of them; instead there is an option to cycle through them.
 
I still do not understand why, when I remove files on a ZFS disk with commands like rm and rm -r, the free space does not increase but actually decreases. I would like to know what I can do to free some space when removing files does not work, and why a ZFS disk runs out of space so fast. This does not happen with a UFS disk.

From what I've understood, every snapshot created with the ZFS mechanisms contains a full FreeBSD system. Am I right? Do you confirm this? If it is true, ZFS consumes far too much space, and that would explain why the disk fills up so fast. That's not good for sure.

Recently I've found a Linux distro called "NixOS" which uses another method to create snapshots. It is based on heavy use of symbolic (soft) and hard links to keep different versions of the system libraries, which is useful when the system is upgraded. I would like to ask whether you would like to see this kind of solution on FreeBSD too.
 
ZFS is copy on write; blocks are not deleted until they are completely dereferenced.
Let's say you have created a snapshot of a dataset (a snapshot is of a dataset, not the entire system, unless you recursively snapshot the root pool).
The snapshot basically holds references to the dataset as it was at the moment you created the snapshot. If you now start deleting files in the dataset, the "space" used by those files is claimed by the snapshot (because that's what snapshots do), while the space used by the dataset goes down.
If you delete the snapshot, then the space used by those files is reclaimed.

If you never create a snapshot and never create a boot environment, ZFS does not magically use disk space.
Problems happen when a user creates hundreds of snapshots or thousands of boot environments and forgets that the blocks in use simply move around.
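If you want to see this behaviour for yourself, here is a minimal sketch; the dataset name zroot/demo and the mountpoint /demo are invented for the example:
Code:
# create a throwaway dataset and put roughly 1 GB of data in it
zfs create -o mountpoint=/demo zroot/demo
dd if=/dev/random of=/demo/bigfile bs=1M count=1024

# take a snapshot, then delete the file from the live dataset
zfs snapshot zroot/demo@before
rm /demo/bigfile

# the space does not come back: it has moved from the dataset (USEDDS)
# to the snapshot (USEDSNAP)
zfs list -o space zroot/demo
zfs list -t snapshot -r zroot/demo

# destroying the snapshot (and then the test dataset) releases the blocks
zfs destroy zroot/demo@before
zfs destroy zroot/demo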

As for what you found in NixOS: in my opinion, no, I don't want to see that on FreeBSD.
 
Do you confirm this?
No.

The snapshot holds nothing when created; it only holds things overwritten or deleted from then on. So if you delete a large file, the file will be gone from the file system but the snapshot will retain it. If you overwrite a file, the changes are written to the visible file system and the snapshot retains the original content.

Also, when you remove a file, it may take a few seconds for the space to be reclaimed. If a power failure occurs directly after the delete, the file will be there again, because the updated metadata was not yet written for that directory and perhaps for the higher-level directories as well.
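That also means a deleted file can simply be copied back out of the snapshot, which is read-only and visible under the .zfs directory of the dataset's mountpoint. A sketch, assuming a home dataset mounted on /usr/home with a snapshot named "before" and a file important.txt (all three names are made up here):
Code:
# the deleted file is still present in the read-only snapshot directory
ls /usr/home/.zfs/snapshot/before/

# copy it back into the live file system
cp /usr/home/.zfs/snapshot/before/important.txt /usr/home/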
 
I never created a snapshot manually; it does that automatically. Do you know the command to disable the automatic generation of snapshots? I don't want to remove old snapshots again and again to gain space. Even when I don't delete the old snapshots, I run out of space very fast. I need to find a solution for this problem, otherwise I will stop using ZFS and use only UFS. And why don't you like the NixOS backup method? Maybe ZFS is not good for me, because I don't have a disk with a lot of space.
 
The only automatic snapshot mechanism in the base system that I'm aware of is that freebsd-update will create a new boot environment if you are using ZFS for the root file system. There are ports that can be configured to create snapshots as part of a backup routine (Alain De Vos has a few threads about this), but one has to install the port, configure it, and set it up to run.
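When such boot environments pile up, they can be listed and the old ones removed with bectl(8). A sketch; the environment name below is only an example of the kind of name freebsd-update generates:
Code:
# show all boot environments and which one is active
bectl list

# once the current environment is known to be good, destroy an old one
bectl destroy 13.1-RELEASE-p4_2022-11-03_120000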
 
No.

The snapshot holds nothing when created; it only holds things overwritten or deleted from then on. So if you delete a large file, the file will be gone from the file system but the snapshot will retain it. If you overwrite a file, the changes are written to the visible file system and the snapshot retains the original content.

OK, but what's the best practice to prevent the disk from quickly becoming full? At the moment I have a ZFS disk with very little space available and only one snapshot. I can no longer remove files. What should I do to free some space?
 
Find out where that space is consumed.
You can find snapshots under "/.zfs", for example (any ZFS mountpoint has this directory). From a user perspective, gdmap can show you nicely where your space is wasted; a directory like ".local/Trash" or something of that sort is a prime candidate. You may also look at the output of "df -h" to get a human-readable overview of how full the disks are. On ZFS the "available" figure may be misleading, because it is the free space in the pool that can still be assigned to any dataset, so it is the same for all of them (exceptions may apply with some settings). If you can't make sense of the "df" output, post it here and we will explain it.
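Two generic commands that help with that (adjust the path to wherever you suspect the space is going; /usr/home is just an example):
Code:
# overall fill level of every mounted file system
df -h

# largest directories one level below /usr/home, staying on one
# file system, sorted by size
du -hxd 1 /usr/home | sort -h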
 
I would start with getting the output of the following as root:
Code:
bectl list
zfs list -t snapshot

The first one tells us how many boot environments you have, the second should list all the snapshots.
 
This is the situation. This is the disk:

Code:
=>       40  976773095  ada0  GPT  (466G)
         40     532480     1  efi  (260M)
     532520       1024     2  freebsd-boot  (512K)
     533544        984        - free -  (492K)
     534528    4194304     3  freebsd-swap  (2.0G)
    4728832  972044288     4  freebsd-zfs  (464G)
  976773120         15        - free -  (7.5K)

In the directory /mnt/zroot/.zfs/snapshot I have two snapshots:

1) 2022-12-27-16:07:33-0 = 403 GB
2) 2023-01-12-23:57:31-0 = 502 GB

First of all, how can the disk contain 900 GB of data if its total capacity cannot be more than 466 GB? I'm thinking of removing the first snapshot (with a simple rm -r 2022-12-27-16:07:33-0), since it seems older than the second. What happens if I do that? Will the space on the disk increase?

It's not so easy:

Code:
cd /mnt/zroot/.zfs/snapshot/2022-12-27-16:07:33-0
rm -r Backup
override rwxr-xr-x root/wheel uarch for Backup? y
override rwxr-xr-x root/wheel uarch for Backup/UZFS? y
override rwxr-xr-x root/wheel uarch for Backup/UZFS/lib64? ^C
 
Before removing the snapshots, please run the bectl list command and make sure they aren't part of a BE.
 
I can't run bectl list, I'm booted from the UFS disk. Anyway:

Code:
root@marietto:/mnt/zroot/.zfs/snapshot/2022-12-27-16:07:33-0 # zfs list -t snapshot

NAME                                                                 USED  AVAIL     REFER  MOUNTPOINT

zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2022-12-27-16:07:33-0  16.7G      -      300G  -
zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2023-01-12-23:57:31-0  77.3G      -      376G  -
 
"First of all,how can be that the disk can contains 900 GB of data if the total space of the disk can't be more than 466 GB ?"

Copy on write is the magic.
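The two snapshots reference mostly the same blocks, so their REFER values overlap and cannot simply be added up; the USED column only counts blocks unique to each snapshot. If you want to see how much new data each snapshot actually introduced compared to the previous one, the written property gives that figure (a sketch, run against the datasets from your listing):
Code:
# 'written' = data written between the previous snapshot and this one,
# i.e. space not shared with the older snapshot
zfs get -r -t snapshot written zroot/ROOT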
 
I'm confused as to what your system has.
Is your "boot device"/system disk UFS, and do you have a data disk that is ZFS?
 
zfs destroy zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2022-12-27-16:07:33-0 is the magic, too?
 
I have multiple FreeBSD installations: one of them is on the ZFS disk, the other on a UFS disk. Right now I'm using the UFS one and I'm trying to remove the useless data located on the ZFS disk.
 
Code:
root@marietto:/mnt/zroot/.zfs/snapshot/2022-12-27-16:07:33-0 # zfs list

NAME                                           USED  AVAIL     REFER  MOUNTPOINT

zroot                                          446G  20.6M       96K  /mnt/zroot/zroot
zroot/ROOT                                     419G  20.6M       96K  none
zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731   419G  20.6M      139G  /mnt/zroot
zroot/tmp                                     32.2M  20.6M     32.2M  /mnt/zroot/tmp
zroot/usr                                     23.7G  20.6M      120K  /mnt/zroot/usr
zroot/usr/home                                14.4G  20.6M     14.4G  /mnt/zroot/usr/home
zroot/usr/ports                               9.29G  20.6M     9.29G  /mnt/zroot/usr/ports
zroot/usr/src-                                  96K  20.6M       96K  /mnt/zroot/usr/src-
zroot/var                                     2.46G  20.6M      136K  /mnt/zroot/var
zroot/var/audit                                 96K  20.6M       96K  /mnt/zroot/var/audit
zroot/var/crash                               1.11G  20.6M     1.11G  /mnt/zroot/var/crash
zroot/var/log                                 4.16M  20.6M     4.16M  /mnt/zroot/var/log
zroot/var/mail                                1.33G  20.6M     1.33G  /mnt/zroot/var/mail
zroot/var/tmp                                 18.1M  20.6M     18.1M  /mnt/zroot/var/tmp
 
How much space do I free if I destroy this snapshot? --->

zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2022-12-27-16:07:33-0
 
Code:
root@marietto:/mnt/zroot/.zfs/snapshot/2022-12-27-16:07:33-0 # zfs destroy -vn zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2022-12-27-16:07:33-0

would destroy zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2022-12-27-16:07:33-0
would reclaim 16.7G

Not enough space freed. So even if I also destroyed zroot/ROOT/13.1-RELEASE-p5_2023-01-12_235731@2023-01-12-23:57:31-0, I would free only 77.3G. Not good in any case. There is a lot of hidden data and I don't understand where it is.
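To see where that hidden data sits, the space view of zfs list splits USED into the live dataset, its snapshots and its children (a generic sketch; run it against your own pool):
Code:
# USEDDS    = space used by the live data of each dataset
# USEDSNAP  = space that would be freed if all snapshots of the dataset
#             were destroyed
# USEDCHILD = space used by the child datasets
zfs list -o space -r zroot
Note that the USED value of a single snapshot only counts blocks unique to that snapshot; blocks shared by both snapshots but no longer referenced by the live file system show up in neither figure, so destroying both can free considerably more than 16.7G + 77.3G. USEDSNAP shows the combined amount.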
 

With all of this you should be able to find what you are looking for: find the largest snapshots and remove them.
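A possible way to do that (a sketch; the snapshot name in the second command is a placeholder to be replaced with a real one from the listing):
Code:
# list every snapshot in the pool, sorted by the space unique to each
zfs list -t snapshot -o name,used,refer -s used -r zroot

# dry-run: show what destroying a given snapshot would reclaim
zfs destroy -nv zroot/some/dataset@some-snapshot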
 
Did you modify the value of copies?
doas zfs get copies zpool_name
If it is not set to 1 (the default setting), every block is stored multiple times, which inflates the space used; setting it back to 1 reduces the space needed for data written from then on.
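Checking and resetting it could look like this (a sketch, assuming the pool is called zroot):
Code:
# show the current value of the copies property
zfs get copies zroot

# reset it to the default; this only affects data written from now on,
# existing blocks keep their extra copies
zfs set copies=1 zroot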
 