I've installed FreeBSD into a BSD disklabel partition within an MBR slice, on an SSD in a laptop that uses BIOS/MBR firmware. After some consideration, I've installed the main FreeBSD boot loader to the SSD's first sectors, overwriting anything Windows 10 had installed there.
I've installed zfsboot to the disklabel partition. The first stage of the boot loader runs successfully, then hands off to zfsboot.
Then, due to what seems to be a side effect of the filesystem layout I'd created under the root ZFS filesystem for this installation, zfsboot fails, unable to continue the boot process.
In the zfsboot console, I'm presented with what looks like a prompt for selecting a boot filesystem. If I enter a question mark as the first character at the prompt, followed by a string of filesystem name characters, I can browse the filesystems from zfsboot's perspective. Browsing this way, I see that a 'boot' directory exists under the root filesystem, but what was installed to the boot dataset is not visible there. It looks as though zfsboot has not mounted the boot filesystem.
When I created the root filesystem for this installation, I assumed that zfsboot would do something like a normal 'zpool import' on the pool, automatically mounting any mountable datasets under the pool's root dataset. After looking at this boot failure, however, I'm not certain that it does.
Could it be that I've missed a parameter somewhere that would cause zfsboot to run something like 'zpool import'?
As a workaround, I may reinstall the base system, kernel, and the packages installed so far, but with a substantially different filesystem layout in the ZFS pool. I'm not entirely certain how to put that layout together, compared with the layout I'd tried. I believe I should at least keep /etc and /boot on the root filesystem itself, together with anything needed by the host's init process, at least up until fstab is processed and any automatic mount commands run.
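For comparison, here is a rough sketch of the kind of layout I have in mind, loosely patterned after what bsdinstall's root-on-ZFS mode produces. The pool name, device name, and dataset names here are assumptions for illustration, not commands from my actual install:

```shell
# Sketch only, under stated assumptions: pool 'tpool' on disklabel
# partition ada0s1d, staged under an altroot of /mnt.
zpool create -o altroot=/mnt -O mountpoint=none -O atime=off tpool ada0s1d

# One boot-environment dataset holds /, /etc, and /boot together,
# so the boot loader finds everything on a single filesystem.
zfs create -o mountpoint=none tpool/ROOT
zfs create -o mountpoint=/ tpool/ROOT/freebsd
zpool set bootfs=tpool/ROOT/freebsd tpool

# Container datasets with canmount=off exist only to group children;
# /usr and /var themselves remain plain directories on the root dataset.
zfs create -o canmount=off -o mountpoint=/usr tpool/usr
zfs create tpool/usr/local
zfs create -o canmount=off -o mountpoint=/var tpool/var
zfs create tpool/var/log
zfs create -o mountpoint=/tmp tpool/tmp
```

The point of canmount=off on the container datasets is that nothing the loader or early init needs ever lives on a child dataset that would have to be mounted first.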
The following is the present layout of this ZFS pool; it was created with some hacks in Bash. The pool is mounted under the altroot '/mnt' here:
Code:
[gimbal@tblk ~ ]$ zfs list -t filesystem -o name,canmount,mountpoint -r tpool/ROOT/freebsd
NAME                                        CANMOUNT  MOUNTPOINT
tpool/ROOT/freebsd                          on        /mnt
tpool/ROOT/freebsd/boot.orig                on        /mnt/boot.orig
tpool/ROOT/freebsd/compat                   on        /mnt/compat
tpool/ROOT/freebsd/compat/linux             on        /mnt/compat/linux
tpool/ROOT/freebsd/dev                      on        /mnt/dev
tpool/ROOT/freebsd/etc                      on        /mnt/etc
tpool/ROOT/freebsd/media                    on        /mnt/media
tpool/ROOT/freebsd/mnt                      on        /mnt/mnt
tpool/ROOT/freebsd/opt                      on        /mnt/opt
tpool/ROOT/freebsd/opt/local                on        /mnt/opt/local
tpool/ROOT/freebsd/proc                     on        /mnt/proc
tpool/ROOT/freebsd/tmp                      on        /mnt/tmp
tpool/ROOT/freebsd/usr                      on        /mnt/usr
tpool/ROOT/freebsd/usr/local                on        /mnt/usr/local
tpool/ROOT/freebsd/usr/local/etc            on        /mnt/usr/local/etc
tpool/ROOT/freebsd/var                      on        /mnt/var
tpool/ROOT/freebsd/var/audit                on        /mnt/var/audit
tpool/ROOT/freebsd/var/cache                on        /mnt/var/cache
tpool/ROOT/freebsd/var/cache/ccache         on        /mnt/var/cache/ccache
tpool/ROOT/freebsd/var/cache/squid          on        /mnt/var/cache/squid
tpool/ROOT/freebsd/var/crash                on        /mnt/var/crash
tpool/ROOT/freebsd/var/db                   on        /mnt/var/db
tpool/ROOT/freebsd/var/db/entropy           on        /mnt/var/db/entropy
tpool/ROOT/freebsd/var/db/etcupdate         on        /mnt/var/db/etcupdate
tpool/ROOT/freebsd/var/db/freebsd-update    on        /mnt/var/db/freebsd-update
tpool/ROOT/freebsd/var/db/mysql             on        /mnt/var/db/mysql
tpool/ROOT/freebsd/var/db/pkg               on        /mnt/var/db/pkg
tpool/ROOT/freebsd/var/db/ports             on        /mnt/var/db/ports
tpool/ROOT/freebsd/var/db/samba4            on        /mnt/var/db/samba4
tpool/ROOT/freebsd/var/empty                on        /mnt/var/empty
tpool/ROOT/freebsd/var/log                  on        /mnt/var/log
tpool/ROOT/freebsd/var/mail                 on        /mnt/var/mail
tpool/ROOT/freebsd/var/run                  on        /mnt/var/run
tpool/ROOT/freebsd/var/run/user             on        /mnt/var/run/user
tpool/ROOT/freebsd/var/service              on        /mnt/var/service
tpool/ROOT/freebsd/var/spool                on        /mnt/var/spool
tpool/ROOT/freebsd/var/tmp                  on        /mnt/var/tmp
After seeing how zfsboot handles this filesystem layout, however, I'm afraid it's not a layout I'd recommend.

I believe I've discovered two possible workarounds for this behavior in zfsboot. The first, in steps:
1) move the affected dataset out of the way, e.g. 'zfs rename tpool/ROOT/freebsd/boot tpool/ROOT/freebsd/boot.orig'
2) transfer the files from that dataset into a directory on the root filesystem with the same name as the dataset's original mount point
3) run `ls -lod the_initial_filesystem` to check for any file flags that may need to be reapplied with chflags on the new directory
4) reboot, and repeat until it's clear how many datasets need to be folded into the root filesystem
5) optionally, destroy the dataset for the_initial_filesystem
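Put together, the steps above might look roughly like this from a rescue shell, with the pool imported under an altroot of /mnt; the 'boot' dataset stands in for whichever dataset is affected, and the paths are illustrative:

```shell
# Sketch of the fold-in workaround; dataset and path names are illustrative.
# Assumes the pool is imported with altroot=/mnt from a rescue environment.

# 1) move the affected dataset out of the way
zfs rename tpool/ROOT/freebsd/boot tpool/ROOT/freebsd/boot.orig

# 2) copy its files into a plain directory on the root dataset,
#    preserving permissions and (on FreeBSD) file flags
mkdir /mnt/boot
tar -C /mnt/boot.orig -cf - . | tar -C /mnt/boot -xpf -

# 3) compare flags on the old and new trees; fix up with chflags as needed
ls -lod /mnt/boot.orig /mnt/boot

# 4) reboot and retest; repeat for other datasets as needed
# 5) once satisfied: zfs destroy tpool/ROOT/freebsd/boot.orig
```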
Alternately: reinstall, using a substantially different ZFS filesystem configuration.
If it's possible to retain the root filesystem as installed so far: is there any knob I could toggle that would cause zfsboot to mount more filesystems before it tries to find the kernel?
On another laptop, I've installed FreeBSD as a dedicated OS in a GPT partition. That laptop has EFI firmware and boots with gptzfsboot.
I plan on eventually using this laptop as a dedicated FreeBSD machine too, once the installation can boot. After that, I can work on moving all my user files from the Windows 10 installation into something under a virtual machine.
Personally, I'm at least glad to see that it's possible to boot to zfsboot on this machine. Concerned about whether it would be possible at all to boot ZFS from something other than a primary MBR partition on this BIOS machine, I'd been considering installing Arch Linux alongside FreeBSD and Windows 10, in order to have GRUB 2 available. That would entail a whole lot of additional complexity, however: certain ZFS pool features aren't compatible with GRUB, so it would need separate boot and root ZFS pools, as well as some new configuration tooling to keep the multiboot system manageable under GRUB. Then there are the particular quirks of ZFS on Linux and systemd.
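If I did go the GRUB route, newer OpenZFS (2.1 and later) can at least pin a boot pool to GRUB-readable features via the compatibility pool property, which would cover the separate-boot-pool part. The pool and partition names here are hypothetical:

```shell
# Sketch: a small GRUB-readable boot pool, assuming OpenZFS 2.1+ where the
# 'compatibility' property and its grub2 feature-set file are available.
zpool create -o compatibility=grub2 bpool ada0p3   # hypothetical partition
zpool get compatibility bpool                      # inspect the property
```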
This issue with dataset mounting aside, zfsboot works out of the box. I'm certain it'll work out even nicer once it boots the next stage of the boot process on this machine.