USB boot to existing root on ZFS

I have an old 2U HP ProLiant DL385p Gen8 server. I used to run ESX on it. However, I got the idea to use it to learn ZFS, as it has ten 900 GB SAS drives.

There's always a catch...

It seems that there's a built-in hardware RAID controller that all of the drives attach to.
The GUI-based config utility for said RAID controller provides no options other than RAID 0, 1, 5, 6, 50, or 60.
There is no JBOD option to allow individual disk access.

I was able to find a Linux-based CLI tool (hpssacli) that allowed me to put the controller in HBA mode.
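For the record, the incantation was roughly the following (slot 0 is an assumption; check the show output for your controller's actual slot):

  # list controllers and their slot numbers
  hpssacli ctrl all show config

  # put the Smart Array controller (assumed to be in slot 0) into HBA mode
  hpssacli ctrl slot=0 modify hbamode=on

  # verify; the logical drives go away and the disks pass straight through to the OS
  hpssacli ctrl slot=0 show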

Voilà, individual disk access.

I installed FreeBSD to a ZFS RAIDZ2 pool and rebooted. No joy!

Apparently, the only way to boot from the disks is when they are configured as a RAID volume on the previously mentioned RAID controller. <sigh>

If one creates a bootable USB stick that indicates root (/) is on the ZFS pool, things work well.

This can be done with either MBR or GPT formatting of the USB stick (teachable moment).

The trick is that the boot isn't ZFS-based, it's UFS. Only when it gets to mounting root (/) does it start using ZFS. (gptzfsboot did not work... YMMV)

This requires a copy of the /boot directory, and the directories under it, on the USB stick. (CAVEAT: this copy MUST be redone every time you apply patches, unless you use a symbolic link.)
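In rough outline, the GPT flavor of the stick looks something like this; da0 as the USB stick and zroot/ROOT/default as the pool/dataset are assumptions, so adjust for your own layout:

  # partition the stick: GPT boot code plus a small UFS filesystem for /boot
  gpart create -s gpt da0
  gpart add -t freebsd-boot -s 512k da0
  gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 da0   # gptboot (UFS), not gptzfsboot
  gpart add -t freebsd-ufs da0
  newfs -U /dev/da0p2

  # copy the system's /boot onto the stick (repeat after every update, per the caveat above)
  mount /dev/da0p2 /mnt
  cp -R /boot /mnt/

  # tell the loader on the stick where root really lives
  echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
  echo 'vfs.root.mountfrom="zfs:zroot/ROOT/default"' >> /mnt/boot/loader.conf

  umount /mnt

The MBR variant is the same idea, just with an MBR slice and the UFS boot blocks in place of the GPT pieces.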

I used several instructive posts to arrive at a final solution.

Regards
 
I have an old 2U HP ProLiant DL385p Gen8 server. I used to run ESX on it. However, I got the idea to use it to learn ZFS, as it has ten 900 GB SAS drives.

There's always a catch...

It seems that there's a built-in hardware RAID controller that all of the drives attach to.
The GUI-based config utility for said RAID controller provides no options other than RAID 0, 1, 5, 6, 50, or 60.
There is no JBOD option to allow individual disk access.

I was able to find a Linux-based CLI tool (hpssacli) that allowed me to put the controller in HBA mode.

I've done that. I think HBA is HP-speak for JBOD. If the controller is looked at in their GUI configuration utility, it says there are no HDs attached (which there aren't, from the point of view of the Smart Array). I'm no closer to getting it to boot properly, though.

If one creates a bootable USB stick that indicates root (/) is on the ZFS pool, things work well.

This can be done with either MBR or GPT formatting of the USB stick (teachable moment).

How did you do that?

The trick is that the boot isn't ZFS-based, it's UFS. Only when it gets to mounting root (/) does it start using ZFS. (gptzfsboot did not work... YMMV)

This requires a copy of the /boot directory, and the directories under it, on the USB stick. (CAVEAT: this copy MUST be redone every time you apply patches, unless you use a symbolic link.)

So, you boot from the USB stick, then ZFS loads(?), and then you were able to copy the stuff under /boot on the USB stick over to /boot on the ZFS array?
The FreeBSD installer sees all the disks; presumably, when it's finished making the (in my case, RAIDZ3) array, it's meant to write that config somewhere.

I used several instructive posts to arrive at a final solution.
Can you post links to them please?
 
I worked around the problem by giving up on root-on-ZFS and instead doing a standard UFS install onto the first disk, which booted up fine. Once the installation has been updated, I'll try to zpool the rest of the disks.
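Probably something along these lines (the pool name, raidz level, and device names are placeholders until I check camcontrol devlist):

  # see what the remaining disks are actually called
  camcontrol devlist

  # build a raidz pool from the other nine drives (names assumed)
  zpool create tank raidz2 da1 da2 da3 da4 da5 da6 da7 da8 da9

  zpool status tank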
 
If you want to kick off FreeBSD installed on HDDs/SSDs/RAIDs from a USB memstick and you can boot using UEFI, using boot1.efi (until it's removed from the tree) could help. Create an ESP-only memstick (or at least one with no FreeBSD bootable partitions) and copy /boot/boot1.efi into the ESP's EFI/BOOT/ as BOOT[arch].EFI; for amd64, BOOTx64.EFI.
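A minimal sketch, assuming the memstick shows up as da0:

  # a single EFI system partition on the stick
  gpart create -s gpt da0
  gpart add -t efi -s 200m da0
  newfs_msdos -F 32 /dev/da0p1

  # drop boot1.efi into the default removable-media path
  mount_msdosfs /dev/da0p1 /mnt
  mkdir -p /mnt/EFI/BOOT
  cp /boot/boot1.efi /mnt/EFI/BOOT/BOOTx64.EFI
  umount /mnt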

boot1.efi looks for a ZFS pool, then for freebsd-ufs partitions containing /boot/loader.efi and configured as bootable, in the order below:
  1. The drive that boot1.efi itself was loaded from.
  2. All other drives, in the order the UEFI firmware found them.
It then chain-loads the first /boot/loader.efi it finds.

Note that this shouldn't work if you're using hardware RAID (via a RAID controller board or onboard) and the UEFI firmware cannot handle it.
 
I worked around the problem by giving up on root-on-ZFS and instead doing a standard UFS install onto the first disk, which booted up fine. Once the installation has been updated, I'll try to zpool the rest of the disks.

This works. Actually, it's preferable, as I consider root-on-ZFS more risky than standard UFS-for-the-OS-disk + ZFS-for-data. The OS disk can always be zeroed if things go horribly wrong.

If you want to kick off FreeBSD installed on HDDs/SSDs/RAIDs from a USB memstick and you can boot using UEFI,
I doubt this hardware can use UEFI, as it's almost a decade old.
 
One thing to mention.
The size of the BIOS (legacy) boot code is quite limited, and adding some features (including ZFS pool features that are not read-compatible) forces something else to be dropped.

The limitation is relaxed for UEFI boot code, but to keep BIOS boots working, adding new features to loader.efi (as the boot code in the ESP) and/or boot1.efi needs special consideration (if the feature is also needed for zfsboot and/or gptzfsboot).
And once Intel actually switches to X86S, brand-new CPUs from Intel will no longer support BIOS boots (it becomes technically impossible).
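On that note, you can at least see which pool features are enabled or active on the boot pool, and new pools can be created with all features held back (zroot and the disk names are assumptions):

  # list feature flags and their state on the boot pool
  zpool get all zroot | grep feature@

  # create a pool with all feature flags disabled, enabling only what you need later
  zpool create -d newpool raidz2 da1 da2 da3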
 