Bootstrap booting from a root on ZFS

I have a PCI SSD which is not recognized by my BIOS. This means I can't select it as a boot device.
But since FreeBSD has no problem using this disk, I can install FreeBSD on it.

So my idea was to just boot from a USB stick and then load my root on ZFS.
Is this possible? Is there any documentation for it?

I have tried:
1. Created /boot/zfs/zpool.cache for my pool on the USB stick (resulted in an endless loop).
2. https://wiki.freebsd.org/RootOnZFS/UFSBoot (I tried to adapt that to FreeBSD 10.2 with root on ZFS), which didn't work.
 
Provided the FreeBSD kernel recognises your SSD, it is possible to boot as you describe. If your computer uses MBR boot, you can use either ZFS or UFS on the USB stick. If your computer uses UEFI boot, you must use UFS on the USB stick (at the time of writing, the UEFI boot loader only supports booting from UFS).

In short, you will need a UFS filesystem or ZFS dataset on the USB stick that contains the /boot directory and its contents, including the kernel. In /boot/loader.conf you need a line telling the loader where to find the root filesystem. If your root filesystem were located on the ZFS pool foo in the dataset bar, the line would be:
Code:
vfs.root.mountfrom="zfs:foo/bar"
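For example, a minimal sketch of the relevant lines in /boot/loader.conf on the USB stick, using the hypothetical foo/bar names above, might be:
Code:
zfs_load="YES"                      # make sure the ZFS module is loaded
vfs.root.mountfrom="zfs:foo/bar"    # dataset to mount as the root filesystem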
You should also consider mounting the UFS filesystem or ZFS dataset on the USB stick during boot and setting up a symbolic link back to the /boot directory, so that the kernel and related modules appear in the expected place in the directory structure when the system is updated.
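For example, assuming the USB stick's filesystem is mounted at /usbboot and already contains a copy of the boot files, the link could be set up roughly like this:
Code:
# keep the original directory around until the new layout is verified
mv /boot /boot.orig
ln -s /usbboot/boot /boot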

For more detail, have a look at the related threads where I have described the technique. You will need to pick through them a bit, as the first contains responses specific to two particular users' issues and the second contains a lot of information that likely isn't relevant to you.
Feel free to come back with any questions and when I have more time available to write a response I'll try to help :)
 
Thank you very much. This post helped me a lot: https://forums.freebsd.org/threads/zfs-boot-from-usb.45880/#post-257321

There is only one problem left where I failed: imported pools don't get auto-mounted. I think it's because they get mounted according to the /boot/zfs/zpool.cache file, which is on the pool on my USB stick. Do you have a solution for that problem as well?

P.S. I believe I found an error in your post (https://forums.freebsd.org/threads/zfs-boot-from-usb.45880/#post-257321):
zfs set bootfs=usbboot/boot usbboot
Should be:
zpool set bootfs=usbboot/boot usbboot
 
I'm pleased you found it helpful, and thank you for the correction. bootfs is indeed a property of the pool, not of a dataset within it.
 
I realised I didn't address your auto-mounting pools issue. Whether datasets in ZFS pools are automatically mounted is a setting on the pool or dataset, or an option during import. What I think you mean, though, is that the pools aren't being imported at all. I had an issue like that with FreeBSD 9.2 but not since.

Do you have lines in /boot/loader.conf that look something like the following?
Code:
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
Did you set up the symbolic link so /boot appears in the correct place (likely linking to /usbboot/boot or similar)?
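A quick way to verify is:
Code:
ls -ld /boot    # should show something like /boot -> /usbboot/boot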

I suspect you need to do something a bit cunning: generate the zpool.cache file with the correct values manually and then put it on the USB memory stick. When a ZFS pool is imported, you can specify a cache file location with the -c flag, and the cachefile property can also be set on an already-imported pool. Try the following (the commands are collected into a single sketch after the list):
  • Boot your system
  • If you have imported the pool on your USB stick (which I've assumed is called usbboot), export it with zpool export usbboot
  • Generate a new cache file for the pool on your SSD (which I've assumed is called ssdroot) with zpool set cachefile=/tmp/zpool.cache.new ssdroot
  • Import the pool on the USB stick, adding its information to the new cache file, with zpool import -c /tmp/zpool.cache.new usbboot
  • Back up the existing cache file just in case (I've assumed you mount the dataset from your USB memory stick at /usbboot) with mv /usbboot/boot/zpool.cache /usbboot/boot/zpool.cache.old
  • Copy your new cache file to the correct place with cp /tmp/zpool.cache.new /usbboot/boot/zpool.cache
  • Put the cachefile property back to normal on the pool (I'm not sure whether this setting persists across reboots) with zpool set cachefile=/boot/zfs/zpool.cache ssdroot. This also assumes you have set up the symbolic link back to /boot.
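Put together as one sequence, the whole thing looks roughly like this (the pool names usbboot and ssdroot and the /usbboot mountpoint are the assumptions used in the steps above; adjust to your own layout):
Code:
# export the pool on the USB stick if it is currently imported
zpool export usbboot
# write a fresh cache file containing the SSD pool's configuration
zpool set cachefile=/tmp/zpool.cache.new ssdroot
# import the USB pool, adding its information to the new cache file
zpool import -c /tmp/zpool.cache.new usbboot
# back up the existing cache file and install the new one
mv /usbboot/boot/zpool.cache /usbboot/boot/zpool.cache.old
cp /tmp/zpool.cache.new /usbboot/boot/zpool.cache
# put the cachefile property back to its usual value
zpool set cachefile=/boot/zfs/zpool.cache ssdroot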
 
I had a thought that your issue might be much simpler. Did you remember to add zfs_enable="YES" to /etc/rc.conf? From the rc.conf(5) man page:
Code:
zfs_enable  (bool)  If set to ``YES'', /etc/rc.d/zfs will attempt to
            automatically mount ZFS file systems and initialize ZFS
            volumes (ZVOLs).
 
First of all, sorry for the late reply.

Here is my /boot/loader.conf
Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:ssdboot/ROOT/default"
vfs.mountroot.timeout="10"
zpool_cache_name="/boot/zfs/zpool.cache"
zpool_cache_type="/boot/zfs/zpool.cache"

Yeah, my /boot is a symlink to /uboot.

My zpool.cache has the right values. I checked that with zdb -C.

And I have
Code:
zfs_enable="YES"
in my /etc/rc.conf.

But still, after boot only the pool on my main disk, where my system is, gets imported. Then I need to manually import usbboot and every other pool, and I'm still a bit clueless how to fix that.
 
Two thoughts for things to try:

Firstly, the man page for zdb(8) says of the -C argument:
Code:
Display information about the configuration. If specified with no
other options, instead display information about the cache file
(/etc/zfs/zpool.cache). To specify the cache file to display, see -U.
Since it explicitly mentions the cache file as being at /etc/zfs/zpool.cache, it might be worth using the -U option to point it explicitly at your cache file.
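For example, something like this should show whether the cache file on the stick contains the pools you expect:
Code:
# point zdb explicitly at the cache file in question
zdb -C -U /boot/zfs/zpool.cache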

Secondly, since the cachefile is currently located in a dataset on your ssdboot pool it may be that it is needed after the ZFS module is loaded but then cannot be found on your root filesystem, since the datasets from the ssdboot pool are not yet mounted. It might be worth moving the cachefile to somewhere like /var/zpool.cache then updating your /boot/loader.conf values to see whether that makes a difference.
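If you try that, the adjusted loader.conf lines might look something like this (assuming /var/zpool.cache as the new location):
Code:
zpool_cache_name="/var/zpool.cache"
zpool_cache_type="/var/zpool.cache"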
 
I guess the man page is wrong. If I find the time, I will dig into the source code and fix the doc if it really is wrong.
But on my system -C uses
Code:
/boot/zfs/zpool.cache
I confirmed it with the -U option.

Code:
$ zdb -C   
cannot open '/boot/zfs/zpool.cache': No such file or directory

I tried moving the zpool.cache to /var/zfs and updating loader.conf, which did nothing. At least nothing I can see.


I checked that my /boot is a symbolic link to /uboot/boot.

And I found this: https://lists.freebsd.org/pipermail/freebsd-hackers/2010-November/033710.html, which basically says that zpool_cache_name & zpool_cache_type are useless.

So I also tried what the author there suggests, using:
Code:
zpool_cache_name="zfs/zpool.cache"
zpool_cache_type="zfs/zpool.cache"
in my loader.conf and symlinking /boot/zfs to /zfs, but without luck.
 
Let's assume your ZFS cache file is fine and not causing the issue. I had problems relating to the cache file on FreeBSD 9.2, where it was needed to mount the root ZFS pool inside a GELI container; that is likely why it was mentioned in the guide you read. However, I have had no issues with the cache file in more recent FreeBSD versions.

An ugly hack would be to add the import commands to /etc/rc.local, but I have a feeling we are missing something obvious here... I will summarise, and you can correct me if I misunderstood anything:
  • You can boot the system successfully from your uboot pool on the USB stick, which holds the contents of /boot
  • The slightly confusingly named ssdboot pool holds your root filesystem
  • All datasets in the ssdboot pool are mounted automatically as expected
  • Datasets in the uboot pool and other ZFS pools are not automatically mounted
  • These datasets are mounted correctly when you run zpool import <poolname> for the relevant pools.
Another thought: Are you manually exporting the pools? I would expect them to then stay exported until a manual import.
 
Are you manually exporting the pools?

Not that I'm aware of unless a reboot does something I don't expect.

Your summary is correct; that is exactly the behaviour. The pool name ssdboot is a typo I made during setup. I should probably rename it to ssdroot or something similar to prevent confusion for myself and others.
 
Not that I'm aware of unless a reboot does something I don't expect.
I would say it's better to get into the habit of using shutdown(8), but I doubt that is causing your issue. I just looked at the source code for the ZFS rc.d script, which essentially runs zfs mount -a. I'm not sure why your pools are being forgotten, so the best I can offer is the ugly workaround of putting the commands to import the pools into /etc/rc.local, sketched below.
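For completeness, a minimal /etc/rc.local for that workaround might look like this (the pool name is just the one assumed in this thread; add a line per pool that needs importing):
Code:
#!/bin/sh
# manually import the pools that are not picked up automatically at boot
zpool import usbboot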
 
Well, instead of an ugly workaround I can just import my pools by hand. It's a NAS/server, so I only reboot for kernel updates anyway, and since sshd starts I can even do it remotely.
So it's good enough for me. If I have too much time at some point I guess I will dig into it, but for now thanks for your help!
 