How to manually set up FreeBSD on a ZFS root?

Some things cannot be done later, like changing the utf8only property.
Can you back that up with a link to documentation, please? And BTW, it's better to use [ICODE]utf8only[/ICODE] rather than [cmd]utf8only[/cmd]... that makes it more readable:
(screenshot of the rendered formatting)
 
man -P cat 7 zfsprops

"… three properties cannot be changed after the file system is created, …"
Tried Ctrl-F on "properties cannot be changed after the file" in the manpage. And that's just not true. If you carefully read zfsprops(7), you'll discover that some properties cannot be changed after the dataset is created. A dataset is NOT the same as a filesystem. And datasets can be created or destroyed at any time after installation. Not to mention that with a bit of planning, you can even preserve the contents of the dataset, like with tar. 😤 :p

Correction: OK, discovered one instance... And, per the manpage, this is still per-dataset. Which means you can still re-do the dataset after the installation, and even preserve all the data on it (if you do a bit of planning):
1. Back up the data.
2. Destroy the dataset with zfs destroy.
3. Re-create the dataset with zfs create -o utf8only=on my_dataset (turning on utf8only, if by default the dataset was created with utf8only set to off).
4. Put the data back in the re-created dataset.

Yep, ZFS is that convenient! 😂
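The four steps above can be sketched roughly like this; the pool and dataset names (rpool/data mounted at /data) are made up for illustration, so adjust them to your layout:

```shell
# Hypothetical names; assumes /var/tmp has room for the archive.
tar -C /data -cf /var/tmp/data-backup.tar .   # 1. back up the data
zfs destroy rpool/data                        # 2. destroy the dataset
zfs create -o utf8only=on rpool/data          # 3. re-create it with utf8only=on
tar -C /data -xf /var/tmp/data-backup.tar     # 4. put the data back
```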
 
Good luck migrating your entire ZFS hierarchy to a new root.
Not easy and, as far as I know, a single mistake can destroy the system.
 
Also, in ZFS terminology datasets are kind of the same as a filesystem; id est, "dataset" is a superset of the term "filesystem".
 
Also, in ZFS terminology datasets are kind of the same as a filesystem; id est, "dataset" is a superset of the term "filesystem".
Datasets are NOT the same thing as a filesystem.

They are a subset of ZFS... Some ZFS commands apply to pools and zvols... but not to datasets.
 
dataset
A generic name for the following ZFS components: clones, filesystems, snapshots, and volumes.

Each dataset is identified by a unique name in the ZFS namespace. Datasets are identified using the following format:

pool/path[@snapshot]

pool
Identifies the name of the storage pool that contains the dataset

path
Is a slash-delimited path name for the dataset component

snapshot
Is an optional component that identifies a snapshot of a dataset
Source: https://docs.oracle.com/cd/E36784_01/html/E36835/ftyue.html#scrolltoc
 
Good luck migrating your entire ZFS hierarchy to a new root.
Not easy and, as far as I know, a single mistake can destroy the system.
Well, no need to move the entire hierarchy - especially with hardware drivers living in /boot/modules in one of the datasets, and the like. And migrating any given dataset - it's doable, just google around for instructions. :p
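For what it's worth, migrating a single dataset usually boils down to a snapshot plus send/receive. A rough sketch, with hypothetical dataset names (the target parent, rpool/new here, must already exist):

```shell
# Copy rpool/old/data to rpool/new/data, preserving properties (-p).
zfs snapshot rpool/old/data@migrate
zfs send -p rpool/old/data@migrate | zfs receive rpool/new/data
# Only after verifying the copy, retire the original.
zfs destroy -r rpool/old/data
```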
 
A dataset is NOT the same as a filesystem.
zfs(8):
Code:
     A dataset can be one of the following:

     file system  Can be mounted within the standard system namespace
                  and behaves like other file systems. While ZFS file
                  systems are designed to be POSIX-compliant, known
                  issues exist that prevent compliance in some cases.
                  Applications that depend on standards conformance
                  might fail due to non-standard behavior when checking
                  file system free space.

     volume       A logical volume exported as a raw or block device.
                  This type of dataset should only be used when a block
                  device is required. File systems are typically used
                  in most environments.

     snapshot     A read-only version of a file system or volume at a
                  given point in time. It is specified as
                  filesystem@name or volume@name.

     bookmark     Much like a snapshot, but without the hold on on-disk
                  data. It can be used as the source of a send (but not
                  for a receive). It is specified as filesystem#name or
                  volume#name.
 
did you know that zfs destroy can be used on a ZFS dataset, but not a ZFS volume?
Eh?

And as Erichans correctly quoted, a ZFS volume is a sub-case of a ZFS dataset.
So, your statement is not only incorrect, it simply does not make sense.

And a ZFS filesystem is a sub-case of ZFS dataset as well.
In fact, ZFS developers and users frequently use "dataset" to mean either a filesystem or a volume (as opposed to a snapshot or bookmark),
but sometimes they use dataset to mean all types.
It's a little bit confusing but it's usually clear from the context.

BTW: what do you think a 'Filesystem' even means?

It can mean UFS, ReiserFS, BTRFS, NTFS - or it can mean the hierarchy of folders and file paths like /usr/ports/. In the case of the ZFS manpages, filesystem means the latter.
In common usage, "filesystem" means either a "filesystem [type]" or a "filesystem [instance]".
When someone says that FreeBSD has native support for UFS, FAT32, ZFS filesystems, they mean filesystem types, of course.
When they say that /var and /usr can be separate filesystems, they mean filesystem instances (without specifying their types).
No need to muddy the waters by conflating those two usages.
 
In fact, ZFS developers and users frequently use "dataset" to mean either a filesystem or a volume (as opposed to a snapshot or bookmark),
but sometimes they use dataset to mean all types.
It's a little bit confusing but it's usually clear from the context.
E.g., see zfs-destroy(8) (relevant to the recent comments):
NAME
zfs-destroy – destroy ZFS dataset, snapshots, or bookmark

SYNOPSIS
zfs destroy [-Rfnprv] filesystem|volume
zfs destroy [-Rdnprv] filesystem|volume@snap[%snap[,snap[%snap]]]…
zfs destroy filesystem|volume#bookmark
Here 'dataset' is used to mean only a filesystem or volume.
While, as quoted earlier, zfs(8) uses 'dataset' to mean all four things.
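To illustrate the synopsis above, each form targets a different kind of dataset; the names below are hypothetical:

```shell
zfs destroy rpool/scratch              # a filesystem (or volume)
zfs destroy rpool/scratch@monday       # a snapshot
zfs destroy rpool/scratch#checkpoint   # a bookmark
```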
 
Are you sure you mounted ZFS before that? If /mnt is empty, maybe it didn't mount correctly. Try running zfs mount manually after entering the shell, or check whether the system even sees the ZFS pool.
 
Andriy: Thanks for the corrections. Yeah, that was me not reading the manpages very carefully. Live and learn.

All this still doesn't change the fact that one can do just about anything (even turn the utf8only property on or off) at any time, after the OS installation.

Some tasks take a bit more manpage reading and planning than others. Yes, a mistake can destroy the whole system.
 
After the geli setup, create the ZFS pool with the correct mountpoint:
I have been following this guide https://www.c0ffee.net/blog/freebsd-full-disk-encryption-uefi to set up full-disk encryption on my FreeBSD machine, up until the part about the root, as I want it to be a ZFS filesystem. Thus, in summary, I wrote the following commands in the shell-partitioning part:
Code:
gpart show
gpart destroy -F ada0
gpart create -s gpt ada0
gpart add -t efi -l freebsd-efi -a 4k -s 800k ada0
newfs_msdos /dev/gpt/freebsd-efi
mount -t msdosfs /dev/gpt/freebsd-efi /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/boot1.efi /mnt/EFI/BOOT/BOOTX64.efi
echo BOOTx64.efi > /mnt/EFI/BOOT/STARTUP.NSH
umount /mnt
gpart add -t freebsd-ufs -l freebsd-boot -a 4k -s 1g ada0
newfs -U -L bootfs /dev/gpt/freebsd-boot
gpart add -t freebsd-ufs -l freebsd-root -a 4k ada0
geli init -b -e AES-XTS -l 256 -s 4096 /dev/gpt/freebsd-root
geli attach /dev/gpt/freebsd-root
This is the moment where I detach from the guide and start to set up a ZFS root like:
Code:
zpool create rpool /dev/gpt/freebsd-root.eli
zfs create -o mountpoint=none rpool/ROOT
zfs create -o mountpoint=/mnt -o compression=lz4 rpool/ROOT/default
then I go back to the guide and run
Code:
mkdir /mnt/bootfs
mount /dev/gpt/freebsd-boot /mnt/bootfs
cd /mnt
mkdir bootfs/boot
ln -s bootfs/boot
Then I added /dev/gpt/freebsd-boot /bootfs ufs rw 0 0 to the fstab file, but have not added anything to /boot/loader.conf yet, planning to configure it later.
When the installation ended I selected the option to open a shell to make some final adjustments, and /mnt was empty.

Thus I have no idea what went wrong.

I think there are some issues with your mounts. Check your mounts in order. You set zfs create -o mountpoint=/mnt for the ZFS dataset, but this should be adjusted to use / as the mountpoint. The ZFS dataset should mount at root (/) when the system boots, not at /mnt.



Code:
# Create the pool (with altroot /mnt) if it does not exist yet
zpool list -H -o name 2>/dev/null | grep -q "^rpool$" || \
    zpool create -R /mnt rpool /dev/gpt/freebsd-root.eli
# Create the root dataset with mountpoint=/ if it is missing
zfs list -H -o name 2>/dev/null | grep -q "^rpool/ROOT/default$" || \
    zfs create -o mountpoint=/ rpool/ROOT/default
# Make sure the mountpoint property is /
zfs get -H mountpoint rpool/ROOT/default | awk '{print $3}' | grep -q "^/$" || \
    zfs set mountpoint=/ rpool/ROOT/default
# Mount the boot filesystem if it is not already mounted
mount | grep -q "/dev/gpt/freebsd-boot" || \
    { mkdir -p /mnt/bootfs; mount /dev/gpt/freebsd-boot /mnt/bootfs; }
# Add the boot filesystem to fstab once
grep -q "/dev/gpt/freebsd-boot" /mnt/etc/fstab || \
    echo "/dev/gpt/freebsd-boot /bootfs ufs rw 0 0" >> /mnt/etc/fstab
# Required loader.conf entries, added only if absent
grep -q 'zfs_load="YES"' /mnt/boot/loader.conf || \
    echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
grep -q 'geom_eli_load="YES"' /mnt/boot/loader.conf || \
    echo 'geom_eli_load="YES"' >> /mnt/boot/loader.conf
grep -q 'vfs.root.mountfrom="zfs:rpool/ROOT/default"' /mnt/boot/loader.conf || \
    echo 'vfs.root.mountfrom="zfs:rpool/ROOT/default"' >> /mnt/boot/loader.conf

 
… use / …

No, the mountpoint value should be none.

bectl(8)

An example:

Code:
% bectl list -c creation | head -n 1 ; bectl list -c creation | tail -n 12
BE                        Active Mountpoint Space Created
1500023-073-base          -      -          2.80G 2024-09-19 19:26
1500023-074-base-ports    -      -          26.4M 2024-09-21 18:40
1500023-075-kde6          -      -          606M  2024-09-21 22:44
1500023-076-base          -      -          2.79G 2024-09-22 05:37
1500023-077-base          -      -          2.83G 2024-09-22 16:10
1500023-078-base          -      -          2.79G 2024-09-23 07:11
1500023-079-base          -      -          2.80G 2024-09-24 10:07
1500023-080-base          -      -          49.0M 2024-09-24 19:13
1500023-081-ports         -      -          40.3M 2024-09-24 23:12
1500023-082-base          -      -          2.81G 2024-09-25 07:39
1500023-083-base          NR     /          419G  2024-09-26 01:51
1500023-084-ports         T      -          373M  2024-09-26 06:37
%
  • 1500023-074-base-ports was a pkg upgrade to both FreeBSD and ports (first base, then ports)
  • 1500023-083-base was an upgrade to FreeBSD alone, again using pkg – note the mountpoint, /
  • 1500023-084-ports is a new environment, not yet booted, ports alone were upgraded.
Code:
% zfs get canmount,mountpoint august/ROOT/1500023-083-base
NAME                          PROPERTY    VALUE       SOURCE
august/ROOT/1500023-083-base  canmount    noauto      local
august/ROOT/1500023-083-base  mountpoint  none        inherited from august
%
  • note the mountpoint, none.
 
Cath O'Deray, that's debatable. Because of canmount=noauto, either way works.

I prefer to have mountpoint set, so that I can use zfs mount <dataset> when I need to mount an inactive BE for whatever reason.
To me, it's just more convenient than mount -t zfs <dataset> <mountpoint>.
 
In addition to some settings not being adjustable after creation of a pool or dataset (and adjusting them after data is written not applying the new properties to the old data), some properties can only be adjusted during a zfs send/recv. Attempting to decrease recordsize from 128k to 64k will set the new recordsize property on the filesystem, but the received data keeps the old record size; changing the compression setting, however, will recompress the old data to the new setting (using zfs send/recv's own compression can further interfere with the results). My understanding is that you can decrease a larger record size down to 128k by transferring it without the property for larger record sizes, which is confusing since it means you can "sometimes" modify record sizes.
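A rough sketch of the send/recv property juggling described above; the dataset names are hypothetical, and behavior varies between OpenZFS versions, so verify on your own system:

```shell
zfs snapshot rpool/data@xfer
# Receiving with a different recordsize: already-written blocks keep their
# original record size; the property only governs future writes.
zfs send rpool/data@xfer | zfs receive -o recordsize=64k rpool/data64
# Compression, by contrast, is applied as blocks are written on receive
# (with a plain, non -c send stream):
zfs send rpool/data@xfer | zfs receive -o compression=zstd rpool/datazstd
```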

When you modify settings that you want reapplied to existing data, https://github.com/pjd/filerewrite may help. It issues new writes to the file, which can have an impact on the space used by block cloning and snapshots. You will be able to achieve new compression settings, but I don't know whether a file's record size is adjusted during such a rewrite in place.
 