FreeBSD on ZFS without Boot and Swap.

As Menelkir points out, UFS also uses cache/buffers. It's just the way filesystems in general work. Physical device as the ultimate backing store, read/write data flows through in memory structures (buffers/cache). Typically reads and writes are held around in that memory structure for some amount of time to satisfy a "next" read because it's quicker going to memory than to physical device.

Obviously you need buffers for buffered I/O etc, but we are talking about writing-out a VM memory page to a swap backing store. The data is in a page of RAM and it's not obvious that that page would need to be copied to another page of RAM - particularly when cache, VM and swap are so closely integrated in FreeBSD if UFS is used.

ZFS does need to get memory to write out to swap, which can make swap on ZFS useless under very low memory conditions. This has been well publicized, but I've never seen any such warning about swap-files. Do you actually know for a fact that this is so?
 
Thanks to Alain De Vos I was able to remove /tmp and /home datasets from ZFS-pool:
Scrn-001.png

/tmp is moved to RAM (tmpfs), and
/home is moved to a separate SSD addressed by the GPT label 'homes'

My next goal is to change how the RAID-Z1's drives are addressed:
Scrn-002.png

Instead of ada0p2, ada2p2 and ada3p2 I want to use GPT-labels also.

And I'm also interested in encrypting the SSD drive that contains the /home partition.
 
GPT labeling:
Code:
gpart modify -i 2 -l zdiska /dev/ada0
gpart modify -i 2 -l zdiskb /dev/ada2
gpart modify -i 2 -l zdiskc /dev/ada3
I don't know if you can safely detach one disk from the pool; otherwise it might be needed to recreate the whole pool with GPT labels.
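A quick way to check that the labels took, and that the pool picks them up (a sketch; `zroot` is the pool name used later in this thread, and re-importing by label is only practical for a non-root pool or from a live environment):

```shell
# Show the partition tables together with their GPT labels.
gpart show -l ada0 ada2 ada3

# From a live environment (the root pool can't be exported while in use):
zpool export zroot
zpool import -d /dev/gpt zroot   # search /dev/gpt so vdevs show up as gpt/zdiska etc.
zpool status zroot
```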

encrypt a zfs dataset,
Code:
zfs create -o encryption=on -o keylocation=prompt -o keyformat=passphrase zroot/myencrypt
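Once created, the dataset's key has to be loaded and the dataset mounted after each boot (standard OpenZFS commands, using the `zroot/myencrypt` dataset from above):

```shell
zfs load-key zroot/myencrypt       # prompts for the passphrase (keylocation=prompt)
zfs mount zroot/myencrypt          # mount the dataset once the key is loaded
zfs get keystatus zroot/myencrypt  # shows "available" while unlocked
```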
 
I don't know if you can safely detach one disk from the pool; otherwise it might be needed to recreate the whole pool with GPT labels.
If you are careful and maintain enough redundancy, it's possible. I've done this with mirrors, you need to pay attention to the commands you use.
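For a mirror, that could look like the following (a sketch with a hypothetical pool `tank` on ada0p2/ada1p2; one disk at a time, and note the pool temporarily runs without redundancy -- raidz members can't be detached this way):

```shell
zpool detach tank ada1p2               # drop one side of the mirror
gpart modify -i 2 -l zdiskb /dev/ada1  # label the now-free partition
zpool attach tank ada0p2 gpt/zdiskb    # re-attach it by label; triggers a resilver
zpool status tank                      # wait for the resilver before the next disk
```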
 
ZFS does need to get memory to write out to swap, which can make swap on ZFS useless under very low memory conditions. This has been well publicized, but I've never seen any such warning about swap-files. Do you actually know for a fact that this is so?

On using a swap file with ZFS, zfs-create(8):
Code:
ZFS Volumes as Swap
     ZFS volumes may be used as swap devices.  After creating the volume with
     the zfs create -V command, enable the swap area using the swapon(8)
     command.  Swapping to files on ZFS filesystems is not supported.
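If swap on ZFS is wanted despite the caveats above, the man page route is a zvol. A sketch, assuming a `zroot` pool; the property set follows what the FreeBSD wiki suggests for swap zvols, and `org.freebsd:swap=on` lets rc.d(8) enable it automatically at boot:

```shell
# Create a 4 GB zvol tuned for swap use (sizes and names are examples).
zfs create -V 4G -o org.freebsd:swap=on -o checksum=off \
    -o compression=off -o sync=disabled -o primarycache=none zroot/swap
swapon /dev/zvol/zroot/swap   # enable it immediately for the running system
```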
 
My next goal is to change how the RAID-Z1's drives are addressed:
Scrn-002.png

Instead of ada0p2, ada2p2 and ada3p2 I want to use GPT-labels also.

GPT labels as names for ZFS storage pool partitions won't work with geli(8) encrypted providers (e.g. ada0p2.eli), at least not permanently.

It's possible to rename them on a running system, but the labels won't persist across a reboot.

And I'm also interested in encrypting the SSD drive that contains the /home partition.
Are you asking for a how to?
 
GPT labels as names for ZFS storage pool partitions won't work with geli(8) encrypted providers (e.g. ada0p2.eli), at least not permanently.

It's possible to rename them on a running system, but the labels won't persist across a reboot.

This is really sad to hear. Maybe I could use another type of labels, I mean not GPT-based?
Or it doesn't matter which type of labels I use?
Is there any alternative to GELI?
 
Maybe I could use another type of labels, I mean not GPT-based?
There is glabel(8), but it works only on non-OS disks. The loader can't unlock a glabel(8)ed GELI encrypted OS disk (just as it can't unlock GPT labeled GELI encrypted disks), and consequently can't find a kernel to boot.

Non-OS glabel(8)ed disks aren't problematic. Those are unlocked after the kernel is booted.

I did some testing: I created one GELI (unlabeled) encrypted OS disk and 3 glabel(8)ed GELI encrypted whole disks (all ZFS). The ZFS pool labels persist across a reboot:

geli-glabel.png

Is there any alternative to GELI?
There is no other full disk OS encryption for FreeBSD.

There is gbde(8), but it's not recommended to use it.
Code:
DESCRIPTION
     NOTICE: Please be aware that this code has not yet received much review
     and analysis by qualified cryptographers and therefore should be
     considered a slightly suspect experimental facility.
 
Thank you T-Daemon for taking the time to clarify these points.
I'm afraid I have to drop encryption for the moment; it wasn't a requirement for me, just an option.
 
Meanwhile, I have to reinstall my FreeBSD, because I messed up something with my VGA driver and I forgot to make a snapshot.
This time I want to install FreeBSD without the installer. I have these drives in my PC.
IMG_20230219_005814.jpg

1. ada0 (GPT label firstSSD240), ada2 (GPT label secondSSD240) and ada3 (GPT label thirdSSD240) will be used as ZFS partitions with RAID-Z1 on them, but without swap, /tmp and /home.
2. ada1 (GPT label Home) will be used as the home partition, also on ZFS but in a different pool (zhome). Later it will be replaced with RAID-Z1.
3. da1 (GPT label EFI_boot) is a USB MicroSD card reader with a 4 GB SD card in it. I want to use it as an EFI partition (if possible).
4. da0 is the installation media.

So I read approximately 15 installation instructions, but didn't find how to install FreeBSD with a separate EFI partition. Is it possible at all?
Can someone help me get started with number 3?

I must say that I have never met such a kind community as this one. I am an embedded software engineer, but never had a chance to work with FreeBSD. I used Ubuntu Linux for a while, but most of my life I worked with Windows.
 
This time I want to install FreeBSD without installer.
...
how to install FreeBSD with separate EFI-partition. Is it possible anyway?
I have tried many configurations by creating the EFI partition on a different disk than the OS disk(s), but apparently the loader expects the kernel to be on the same disk as the ESP. What works is creating a boot partition on the same disk as the ESP.

That boot partition needs special attention when updating the system. Every time an update includes the kernel, kernel modules, etc., the boot partition needs to be synchronized with the updated files.

On a major update the loader in the ESP needs to be updated as well.

Try the following guide. It includes additional steps and options missing in the linked wiki. The ZFS pool and dataset creation is taken from the guided installation installer log /var/log/bsdinstall_log.

Boot FreeBSD installer, drop to "Live CD", enter as user root.

Create EFI and boot partition:
Code:
 gpart destroy -F da1
 gpart create -s gpt da1
 gpart add -t efi -a 4k -s 260m -l EFI_boot da1
 gpart add -t freebsd-ufs da1

 newfs_msdos -c 1 -F 32 /dev/da1p1
 mount_msdosfs /dev/da1p1 /mnt
 mkdir -p /mnt/efi/boot
 mkdir /mnt/efi/freebsd
 cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
 cp /boot/loader.efi /mnt/efi/freebsd
 umount /mnt

 newfs -jU /dev/da1p2

Create raidz1 OS pool and datasets:
Code:
 gpart destroy -F ada0
 gpart destroy -F ada2
 gpart destroy -F ada3
 gpart create -s gpt ada0
 gpart create -s gpt ada2
 gpart create -s gpt ada3

 gpart add -t freebsd-zfs -a 1m -l firstSSD240 ada0
 gpart add -t freebsd-zfs -a 1m -l secondSSD240 ada2
 gpart add -t freebsd-zfs -a 1m -l thirdSSD240 ada3

 zpool create -o ashift=12 -o altroot=/mnt -O compress=lz4 -O atime=off -m none -f zroot raidz1 gpt/firstSSD240 gpt/secondSSD240 gpt/thirdSSD240

 zfs create -o mountpoint=none zroot/ROOT
 zfs create -o mountpoint=/ zroot/ROOT/default
 zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
 zfs create -o mountpoint=/usr -o canmount=off zroot/usr
 zfs create -o setuid=off zroot/usr/ports
 zfs create  zroot/usr/src
 zfs create -o mountpoint=/var -o canmount=off zroot/var
 zfs create -o exec=off -o setuid=off zroot/var/audit
 zfs create -o exec=off -o setuid=off zroot/var/crash
 zfs create -o exec=off -o setuid=off zroot/var/log
 zfs create -o atime=on zroot/var/mail
 zfs create -o setuid=off zroot/var/tmp
 zfs set mountpoint=/zroot zroot

 mkdir -p /mnt/tmp
 chmod 1777 /mnt/tmp
 mkdir -p /mnt/var/tmp
 chmod 1777 /mnt/var/tmp
 
 zpool set bootfs=zroot/ROOT/default zroot

 mkdir -p /mnt/boot/zfs
 zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
 
 zfs set canmount=noauto zroot/ROOT/default

 mkdir /mnt/etc
 Create /mnt/etc/rc.conf:
    zfs_enable="YES"
 
 Create an empty fstab file (otherwise the system complains about it missing):
    touch /mnt/etc/fstab

 Create /mnt/boot/loader.conf
    kern.geom.label.disk_ident.enable="0"
    kern.geom.label.gptid.enable="0"
    zfs_load="YES"

Install system:
 cd /usr/freebsd-dist
 tar xfC base.txz /mnt
 tar xfC kernel.txz /mnt

 Edit /mnt/etc/sysctl.conf:
    vfs.zfs.min_auto_ashift=12

Copy /mnt/boot to boot partition on SD-card:
 mkdir /tmp/a
 mount /dev/da1p2 /tmp/a
 cp -a /mnt/boot      /tmp/a

 Edit /tmp/a/boot/loader.conf
   ...
   currdev="zfs:zroot/ROOT/default:"   Mind the colon at the end of the variable.

Create "Home" pool:
Code:
 gpart destroy -F ada1
 gpart create -s gpt ada1
 gpart add -t freebsd-zfs -a 1m -l Home ada1

 zpool create -o ashift=12 -O compress=lz4 -O atime=off -m none zhome gpt/Home
 zfs set mountpoint=/usr/home zhome

Check boot order (efibootmgr(8)), SD-Card must be the first.
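On FreeBSD that check could look like this with efibootmgr(8) (the boot entry numbers below are made up; use the ones your firmware reports):

```shell
efibootmgr -v                 # list boot entries and the current BootOrder
efibootmgr -o 0003,0001,0002  # example: put entry 0003 (the SD-Card) first
```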

Reboot system.

Configure new system (bsdconfig(8)).
 
So, today I had a time to implement all commands that T-Daemon provided.
Everything went ok, but after restart system stopped booting at this point:
Scrn-004.png

If I enter "zfs:zroot/ROOT/default" FreeBSD continues to boot and after a couple of seconds I see "login" prompt.
After I enter my login everything seems working good.
After restart booting stops at the same point.
So, what could it be?
I appreciate any help. THX.
 
what could it be?

Have you added the "currdev" variable to the SD-Card /boot/loader.conf (not raidz1 pool /boot/loader.conf) and made sure there is a colon at the end of the variable (that's important)?
Rich (BB code):
currdev="zfs:zroot/ROOT/default:"


I have changed the order of one of the steps in the guide. Please check /etc/sysctl.conf, see if vfs.zfs.min_auto_ashift=12 is in place.

That file must be edited after base.txz is extracted. I didn't realize base.txz comes with it. A present /etc/sysctl.conf gets overwritten when it is created before the extraction.

I would like to remind you once again: after a system update involving /boot files, synchronize the SD-Card and raidz1 boot directories. After synchronizing, ensure the SD-Card /boot/loader.conf contains all variables you wish to set.

Also kernel modules from third party applications (package, ports), which are installed under raidz1 pool /boot/modules, need to be synchronized.
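A minimal way to do that synchronization after an update (assuming the SD-Card boot partition is /dev/da1p2 as in the guide; mind that a plain copy clobbers the SD-Card loader.conf with the pool's copy, so re-check it afterwards):

```shell
mkdir -p /tmp/a
mount /dev/da1p2 /tmp/a
cp -a /boot/. /tmp/a/boot/   # copy the updated kernel, modules and loader files
vi /tmp/a/boot/loader.conf   # re-add currdev="zfs:zroot/ROOT/default:" if overwritten
umount /tmp/a
```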
 
Thank you very much T-Daemon for fast reply.
It really helped.
I have added
Code:
currdev="zfs:zroot/ROOT/default:"
and
Code:
vfs.zfs.min_auto_ashift=12
and now booting goes directly to login prompt.
 
ZFS comes with its specific "ZFS versions" of some of the usual filesystem commands: see zfs-mount(8) and try zfs mount.

I'm not sure at what moment in time you've set vfs.zfs.min_auto_ashift=12 but to be sure check it with zdb -C <poolname> | grep ashift. Once set at creation, ashift is an immutable VDEV property; have a look at Tuning recordsize in OpenZFS.
 
I'm not sure at what moment in time you've set vfs.zfs.min_auto_ashift=12 but to be sure check it with zdb -C <poolname> | grep ashift. Once set at creation, ashift is an immutable VDEV property
Bummer, I forgot about ashift.

Allow me to answer that question, since kazham was following my tutorial.

No ashift property was set during creation of the pools. vfs.zfs.min_auto_ashift=12 was set later in /etc/sysctl.conf, after creation of the pools. The current ashift value is probably 9.

kazham, I'm afraid those two pools need to be recreated with ashift specified. My apologies.
Code:
zpool create -o ashift=12 ...

I've corrected the tutorial. See zpoolprops(7) about ashift.
 
Hi everyone,
for a clean experiment I decided to do the installation again, from the very beginning, because T-Daemon made a couple of fixes in his tutorial.
I just removed this line:
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
This time everything went smooth, without errors.
TWIMC
Scrn-006.png


Scrn-007.png


1678228309379.png


1678228362810.png


Now, I think we can consider this thread closed. As a next step I want to learn how to make snapshots, but for that I need to read a bit.
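For reference, the basic snapshot commands look like this (dataset and snapshot names are just examples):

```shell
zfs snapshot -r zroot@fresh-install            # recursive snapshot of the whole pool
zfs list -t snapshot                           # list existing snapshots
zfs rollback zroot/ROOT/default@fresh-install  # roll a dataset back (discards newer changes!)
zfs destroy zroot/ROOT/default@fresh-install   # delete a snapshot
```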
Thank you very much, greatest community of the world!
 
Hi!
First of all, I am a complete newbie to FreeBSD.
I would like to install FreeBSD on ZFS. The modern installer has everything needed to install FreeBSD on ZFS without any problems, but I have one "Ideafix", if you know what I mean.
I want to remove the Boot and Swap partitions from the ZFS pool (RAID-Z1) and move them to separate drives (Boot - USB or SSD; Swap - a separate small but fast SSD, or remove Swap completely, because I have 64 GB of RAM).
Is it possible? I mean, isn't it against ZFS/FreeBSD ideology?
If this is possible, can someone point me to some article where I can read about this kind of configuration.
I appreciate any help and have a good day/night to everyone!
THX.
🤣 No need to worry about ideology... Just worry about what actually works... After reading through this thread, I realize my remarks may be late to the party, but, I'd still suggest frankly, accepting the defaults of the ZFS-based installation. I somehow always end up with 2 GB of swap, no matter the RAM... and swap is kind of a pain to add afterwards.

/tmp and /var/tmp are actually kind of important to have in the normal course of using FreeBSD. So I'd suggest leaving them alone unless some good troubleshooting of actual problems points in the direction of messing with those datasets.

When I compiled ports with just 8GB of RAM, I did have errors that pointed to swap issues (As mentioned earlier, I have just 2 GB of swap). But then I checked how much swap I have on a different machine that I normally use for Poudriere: also just 2 GB. RAM is the important component when compiling ports, not swap. And with your 64 GB of RAM, OP shouldn't have a hard time compiling anything.

So, OP, just accept the defaults, and follow the Handbook... and I'd suggest not doing unnecessary tweaks unless you run into trouble and your hardware is not cooperating with the defaults.

I know that on these Forums, following the Handbook is strongly encouraged, but it's not an ideology, it's a Best Practice that has sound technical reasoning behind it. :)

Edit: After re-reading the initial post, I gotta add: /boot and swap are best left alone. Swap is not even visible in zfs list, because it's not exactly a dataset. It's not impossible to move the /usr/home dataset, though, and it can be a fun challenge.
 