Separating boot from OS

Greetings all,

I have a server running on a Supermicro X9 series board. All the SATA ports are used by a (data) pool, and the OS is installed on an (internal) USB flash drive. I wanted to install some additional applications, but the flash drive was getting rather full. Since there are unoccupied PCIe slots on the board, my initial thought was to boot from an SSD via a PCIe/M.2 adapter. However, a Supermicro engineer advised me that this is not supported by the X9 series.

I was thus wondering whether I could boot from the internal USB flash drive and then somehow hand over to the SSD on which I would install the OS. I have found the wiki page https://wiki.freebsd.org/UEFI, so I think that the structure on the USB flash drive should look like the following:
Code:
# Set boot disk:
DISK="da0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t efi -l efiboot -a 4k -s 100M $DISK

# Format the efi partition to hold the small MS-DOS filesystem for the UEFI bootcode.
# Copy the FreeBSD /boot/boot1.efi bootcode file into the efi filesystem.
echo "Preparing the efi partition"
newfs_msdos /dev/${DISK}p1
mount -t msdosfs /dev/${DISK}p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/boot1.efi /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt
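Afterwards, I would verify the resulting layout with something like this (assuming the USB stick is still da0):
Code:
gpart show da0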

Then I would install the remaining portion of the OS on the SSD:
Code:
#Set installation disk:
DISK="ada0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t freebsd-swap -l swap -a 1m -s 6G $DISK
gpart add -t freebsd-zfs -l zfspool $DISK

echo "Creating pool system"
# Create new ZFS root pool (/mnt, /tmp and /var are writeable)
zpool create -m none -R /mnt -f system /dev/${DISK}p2
zfs set atime=off system
zfs set checksum=fletcher4 system
zfs set compression=lz4 system

echo "Configuring zfs filesystem"
# The parent filesystem for the boot environment.  All filesystems underneath will be tied to a particular boot environment.
zfs create -o mountpoint=none system/BE
zfs create -o mountpoint=/ -o refreservation=2G system/BE/default

# Set bootfs
zpool set bootfs=system/BE/default system

# Datasets excluded from the bootenvironment:
. . .

# Temporary directory on a disk
zfs create -o mountpoint=/tmp -o exec=off -o setuid=off -o quota=6G system/tmp

# Set sticky bit and make /tmp and /var/tmp accessible
chmod 1777 /mnt/tmp
chmod 1777 /mnt/var/tmp

# Set ftp for fetching the installation files
FTPURL="ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/11.1-RELEASE"

# Install the files
echo starting the fetch and install
cd /mnt/tmp
export DESTDIR=/mnt
for file in base.txz kernel.txz
do
  echo fetching ${file}
  fetch ${FTPURL}/${file}
  echo extracting ${file}
  cat ${file} | tar --unlink -xpJf - -C ${DESTDIR:-/}
  rm ${file}
done
echo "finished with fetch and install"

# Create /etc/fstab file with encrypted swap
cat << EOF > /mnt/etc/fstab
# Device            Mountpoint    FSType    Options    Dump    Pass#
/dev/${DISK}p1.eli   none          swap      sw         0       0
EOF
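Before going further, I would probably sanity-check the resulting dataset layout with something like:
Code:
zfs list -r -o name,mountpoint system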

I do not know how to continue from here. As best I understand from reading loader(8), I need to install /boot/loader and configure /boot/loader.conf. Judging by the leading /, /boot/loader is installed under the zpool, thus:
Code:
# Create /boot/loader.conf
cat << EOF >> /mnt/boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"
vfs.zfs.min_auto_ashift=12
EOF
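I assume that I will also need to enable ZFS in rc.conf on the new root, so that the remaining datasets are mounted at boot (my assumption; the script above does not cover this):
Code:
# Enable ZFS at boot on the new root (assumption, not covered above)
cat << EOF >> /mnt/etc/rc.conf
zfs_enable="YES"
EOF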

However, I am unsure how to let the efi bootcode know where to find /boot/loader. loader(8) says, in the section ZFS FEATURES:

If /etc/fstab does not have an entry for the root filesystem and
vfs.root.mountfrom is not set, but currdev refers to a ZFS filesystem,
then loader will instruct kernel to use that filesystem as the root
filesystem.

but I do not remember ever setting either. loader.conf(5) contains the following:

vfs.root.mountfrom
Specify the root partition to mount. For example:

vfs.root.mountfrom="ufs:/dev/da0s1a"

loader(8) automatically calculates the value of this tunable from /etc/fstab from the partition the kernel was loaded from. The calculated value might be calculated incorrectly when /etc/fstab is not available during loader(8) startup (as during diskless booting from NFS), or if a different device is desired by the user. The preferred value can be set in /loader.conf.

The value can also be overridden from the loader(8) command line. This is useful for system recovery when /etc/fstab is damaged, lost, or read from the wrong partition.
Regretfully, I cannot understand how this helps. I can, presumably, set the variable in /boot/loader.conf:
Code:
vfs.root.mountfrom="zfs:system/BE/default"
but I still do not understand how the efi bootcode finds it; furthermore, as noted above, there is no /etc/fstab. Therefore, any help would be appreciated.

Kindest regards,

M
 
However, I am unsure how to let the efi bootcode know where to find the /boot/loader.
No idea about your other questions, but nowadays, boot1.efi is deprecated. You just place the loader (loader.efi) directly on the EFI partition instead.
(use the same name, so, for amd64 efi/boot/bootx64.efi)
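A minimal sketch of what I mean (assuming the ESP is da0p1, as created by your first script):
Code:
# mount the ESP on the USB stick and put loader.efi in the default path
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.EFI
umount /mnt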

I just assume this is all you need if the loader then finds a pool with a bootfs.
 
but I still do not understand how the efi bootcode finds it
The answer is in the uefi(8) man page.
Code:
     The UEFI boot process proceeds as follows:
           1.   UEFI firmware runs at power up and searches for an OS loader
                in the EFI system partition.  The path to the loader may be
                set by an EFI environment variable.  If not set, an
                architecture-specific default is used.

                      Architecture    Default Path
                      amd64           /EFI/BOOT/BOOTX64.EFI
                      arm             /EFI/BOOT/BOOTARM.EFI
                      arm64           /EFI/BOOT/BOOTAA64.EFI

                The default UEFI boot configuration for FreeBSD installs
                loader.efi in the default path.
           2.   loader.efi reads boot configuration from /boot.config or
                /boot/config.
           3.   loader.efi searches partitions of type freebsd-ufs and
                freebsd-zfs for loader.efi.  The search begins with partitions
                on the device from which loader.efi was loaded, and continues
                with other available partitions.  If both freebsd-ufs and
                freebsd-zfs partitions exist on the same device the
                freebsd-zfs partition is preferred.
           4.   loader.efi loads and boots the kernel, as described in
                loader(8).

Don't set vfs.root.mountfrom, it interferes with bectl(8) and beadm(1). Unless you intend to boot from UFS and don't need a BE.
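For a ZFS root, the loader picks the root dataset from the pool's bootfs property, which your script already sets. If you want to double-check it (assuming the pool is called system, as in your script):
Code:
zpool get bootfs system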
 
Hi SirDice,

thank you for the reply. I have read the portion of uefi(8) that you posted, and even re-reading it based on your post, I still do not (fully) understand it.

The uefi(8) page you linked does describe how boot1.efi finds a loader.efi, but my scripts nowhere mention loader.efi, so how does loader.efi get installed?

Furthermore, if I use loader.efi as Zirias proposed, the man page does not make sense.

Kindest regards,

M
 
mefizto, I found the wording in this manpage confusing as well (especially given boot1.efi was used in earlier versions). I also have doubts that step 3 in the manpage is actually correct: why should loader.efi search *for itself* instead of for a kernel it can boot? I guess this might be a leftover from describing boot1.efi.

At least FreeBSD 13 added the manpages boot1.efi(8) and loader.efi(8) (you have to select FreeBSD 13 to find them online until the official release), and they really clarify things. A rework of uefi(8) would still be nice as well.
 
Since there are unoccupied PCI slots on the board, my initial thought was to boot from SSD via PCI/M.2 adapter. However, a Supermicro engineer advised me that such is not supported by the X9 series.
The M.2 to PCIe adapter cards are meant for NVMe or Wifi cards with native PCIe interfaces.
M.2-SATA modules are not PCIe interfaces and will not work on any computer with a PCIe to M.2 adapter.

I suppose a manufacturer could make such a card with an integrated SATA controller chip for M.2 SATA but I have not seen those.
All I have seen are just dummy cards with straight PCIe to M.2 lane passthru meant for NVMe.

Why not just pick up a cheap SATA controller for booting? I don't have long term faith in USB sticks.

@@@EDIT@@@
I found this adapter has a controller and supports dual M.2 SATA plus two more SATA channels.
So they do exist but are uncommon.
 
Hi Zirias,

exactly. I am not sure where SirDice found the reference, because when I followed the link he posted, it does refer to the files you mentioned. I am still a little confused, though: since I am running 12.2-RELEASE and do not plan to update until at least six months after 13.0 is released, is your recommendation re loader.efi valid, or should I still use boot1.efi?

Hi Phishfry,

I am not sure that I follow. As I understand it, there are SSDs with an NVMe interface that will plug into a PCIe-to-NVMe adapter. I have already ordered such a drive (Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB) and am now researching adapters.

Can you please clarify? Also, since you appear to be knowledgeable, can you recommend an adapter? The X9 board BIOS does not enable bifurcation, thus I think that a single-card adapter would be sufficient.

Kindest regards,

M
 
I am still a little confused, though: since I am running 12.2-RELEASE and do not plan to update until at least six months after 13.0 is released, is your recommendation re loader.efi valid, or should I still use boot1.efi?
Putting loader.efi in the ESP was the correct thing to do on 12 as well; it just wasn't clearly documented (and I'm unsure whether the installer might still have used this boot1.efifat, containing boot1, but I think that was only the case on 11).
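If you want to see what is currently on your ESP (a quick check, assuming it is da0p1 as in your first script):
Code:
mount -t msdosfs /dev/da0p1 /mnt
ls -lR /mnt/EFI
umount /mnt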
 
Hi Zirias,

thank you for letting me know. I have 12.2-RELEASE, updated from 12.1-RELEASE, installed via the above-reproduced script using boot1.efi. Looking at /boot, there exist boot1, boot1.efi, and boot1.efifat as well as loader.efi, all having the same date.

Kindest regards,

M
 
Yes, they're all still built, but boot1.efi is deprecated and on 13, boot1.efifat is gone. I guess boot1.efi will be gone on 14.
 
Hi Zirias,

it is rather confusing, is it not? It will be "interesting" to sort through that when my hardware comes.
Kindest regards,
M
 
Actually, UEFI boot got extremely simple. You *just* need the loader (efi version), nothing else, no multiple stages any more. I'd just say the uefi(8) manpage could be improved, because ppl will know the older approach with boot1.efi and the wording is IMHO somewhat misleading.
 
Hi Zirias,

thank you again. Well, the hardware should be here next week, so we'll see.

Kindest regards,

M
 
I am not sure that I follow. As I understand it, there are SSDs with an NVMe interface that will plug into a PCIe-to-NVMe adapter. I have already ordered such a drive (Samsung 980 PRO PCIe 4.0 NVMe SSD 250GB) and am now researching adapters.

Can you please clarify? Also, since you appear to be knowledgeable, can you recommend an adapter? The X9 board BIOS does not enable bifurcation, thus I think that a single-card adapter would be sufficient.
Exactly correct. A single M.2-socket adapter card only. It can be x4 PCIe.
The Supermicro dual M.2 adapter for NVMe does require bifurcation, and some X9 boards have support and some don't.
Even on the LGA2011 boards, some BIOSes were updated with the bifurcation feature and some were not.

If you are doing NVMe to a PCIe slot, then any adapter is fine. They are transparent to the hardware; the lanes are passed straight through.
I thought you were talking SATA M.2, not NVMe.

I will really be curious to see how a PCIe 4.0 NVMe works in a PCIe 3.0 slot. Where is the bottleneck... PCIe or CPU... They are 2x as fast.
You are using an X9 LGA2011 board, I assume? With a V2 Xeon CPU for PCIe 3.0?
 
Hi Phishfry,

thank you for the reply.

According to the manual, my motherboard has PCIe 2.0 x4 slots; at roughly 500 MB/s per lane, that limits the throughput to about 2000 MB/s. Still beats the USB flash drive I used before. The socket is LGA 1155.

Kindest regards,

M
 
Hi 6502,

interesting idea. The question is, what would be the motivation? In the solution I am considering, both the USB drive with the boot part and the SSD running the OS are on the same machine.

Kindest regards,

M
 
Another solution would be a USB-to-SATA adapter plugged into a SATA SSD that you attach wherever you can in the enclosure. It is easy and cheap.
 
Hi blanchet,

I had considered that. It is not much cheaper, as I have to buy an SSD anyway, and the difference in price between a USB-to-SATA adapter and an NVMe-to-PCIe adapter is not significant. On the other hand, the setup would, indeed, be easier.

But the speed difference has swayed me to the latter solution.
Kindest regards,
M
 
The PCIe 4.0 drive gives you future potential.
Are you sure you can't eke PCIe 3.0 out of it? Give me the board model number.
You see, many of the X9 boards shipped with Sandy Bridge CPU support.
But with a firmware upgrade you could run an Ivy Bridge CPU on many X9 boards.
Ivy Bridge brings PCIe 3.0 with it. So it is worth the boost to Ivy Bridge.
(Especially for NVMe we are talking the difference between 700 MB/s on PCIe 2.x and 2000+ MB/s on PCIe 3.x.)
What CPU are you running now?
 
Hi Phishfry,

yes, I understand that there are differences among the X9 boards. Apparently some of them can even boot from the NVMe drive in UEFI mode. But, according to the Supermicro engineer, not mine.

Nevertheless, thank you for the generous offer. The board is an X9SCM-F. The processor is a Xeon E3-1230, 3.2 GHz, 8 MB cache.

The PCIe slots are described in the manual as "PCI-E 2.0 x4 on x8 slot". I take it that a potential change of the processor also cannot change this, so I cannot run two SSDs per adapter, correct?

Regarding the adapter, can you recommend a specific model?

Kindest regards,

M
 
Check out the last line of the product page.
*** BIOS rev. 2.0 or above is needed to support new E3-1200 v2 CPUs, which supports PCI-E 3.0 & DDR3 1600.

So with an Ivy Bridge CPU you could have PCIe 3.0.
Like, for instance, an E3-1230 V2.
Any E3-12xx V2 CPU is an LGA 1155 Ivy Bridge Xeon.

It could give you a performance boost. Not necessary, but it is a good upgrade path. Ivy Bridge Xeons are cheap.
 
so I cannot run two SSDs per adapter, correct?
Correct (no bifurcation on X9 1155 boards), and if you look at the product page, notice that only 2 of the 4 slots are PCIe 3.0.
You could put a single NVMe in each PCIe 3.0 slot.

I don't have any recommendations on slot adapters. I have like 4 different styles.
Some were x16 adapters and I milled off the extra fingers to fit an x4 slot.
None cost more than 10 bucks from China.
 
Hi Phishfry,

thank you for the news. There is absolutely nothing about it in the printed manual that I was referring to, either in the BIOS section or the PCIe setup section.
I looked at the product page, and it also states:
2 (x8) PCI-E 3.0 in x8 slots***
Again, nothing of that sort in the manual. Does this mean that I could buy an adapter holding two NVMe SSDs, each using x4 of the x8 lanes, and they would work? Or does the BIOS need to support bifurcation?

Kindest regards,

M
 