Separating boot from OS

holding two NVMe
No, that won't work at all. The slots are only x4 electrical in an x8 physical slot, so that would not work.

The two PCIe 3.0 slots will only run at PCIe 3.0 with an Ivy Bridge LGA 1155 CPU.
These CPUs carry a V2 suffix to denote the second generation of the 1155 models.
 
Hi Phishfry,

yes, I know, I just downloaded the latest BIOS from Supermicro together with the release notes and upgrade instructions.

Kindest regards,

M
 
SuperMicro is really good about numbering the PCIe expansion slots.
Problem is how do you know which slots are the PCIe 3.0 slots.

If worse comes to worst, you might need to look at pciconf to see what mode each PCI device is running at; it shows the link modes (see the sketch below).
Then shuffle cards around to suit the slots. For example, a video card that only does PCIe 2.0 natively would be a waste in a PCIe 3.0 slot.
But that topic is probably better for another thread.
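Something like this shows the negotiated link width and speed; the device selector is just an example, and the exact output varies by FreeBSD version:
Code:
# List a device together with its capabilities; the PCI-Express capability
# line reports the negotiated link width and speed (vgapci0 is an example).
pciconf -lc vgapci0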
 
If you check out the diagram in the PDF manual, page 13 has the slots labeled. The two slots closest to the CPU are the PCIe 3.0 ones:
slots 6 and 7.
 
Hi Phishfry,

that is actually an excellent point. Currently, I am using the machine as a headless server due to the USB limitation. But, if I can make it work, I might consider buying a video card and promoting it to a workstation.

And yes, in my hardcopy of the manual, the slots are clearly marked on p. 1-5.

Kindest regards,

M
 
Greetings all,

thanks to several people in the other thread: https://forums.freebsd.org/threads/cannot-install-bios-and-or-efi-bootcode.79592/, I successfully installed both the UEFI and the legacy BIOS bootloader on a USB drive /dev/da0.
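For context, the general shape of that layout is roughly the following (a sketch only; partition sizes and indexes are illustrative and not the exact commands from the linked thread):
Code:
# GPT on the USB stick with both an EFI system partition and a
# freebsd-boot partition for legacy booting (illustrative sketch)
gpart create -s gpt da0
gpart add -t efi -s 200M da0
gpart add -t freebsd-boot -s 512k da0
# UEFI side: FAT filesystem holding the EFI loader
newfs_msdos /dev/da0p1
mount -t msdosfs /dev/da0p1 /media
mkdir -p /media/EFI/BOOT
cp /boot/loader.efi /media/EFI/BOOT/BOOTX64.efi
umount /media
# Legacy side: protective MBR plus gptzfsboot in the freebsd-boot partition
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 da0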

The other script, installing the OS on an NVMe drive /dev/nvd0:
Code:
#!/bin/sh
# FreeBSD installation script 03/30/2021, no encryption, Beadm compatible
set -Cefu

# Set installation disk:
DISK="/dev/nvd0"

echo "Destroying old partitions on the destination drive"
gpart destroy -F $DISK

echo "Configuring zfs for ashift=12"
# Force ZFS to use 4k blocks, i.e., ashift=12 before creating the pool
sysctl -i vfs.zfs.min_auto_ashift=12

# Create the gpt structure on the drives.
echo "Partitioning the destination drive using gpt"
gpart create -s gpt $DISK
gpart add -t freebsd-swap -l swap -a4k -s 4G $DISK
gpart add -t freebsd-zfs -l zfspool -a4k $DISK

# Create new ZFS root pool, mount it, and set properties
#(/mnt, /tmp and /var are writeable)
echo "Creating pool system"
zpool create -f -o altroot=/mnt -m none system "/dev/gpt/zfspool"
zfs set atime=off system
zfs set checksum=fletcher4 system
zfs set compression=lz4 system

echo "Configuring zfs filesystem"
# The parent filesystem for the boot environment.
# All filesystems underneath will be tied to a particular boot environment.
zfs create -o mountpoint=none system/BE
zfs create -o mountpoint=/ -o refreservation=2G system/BE/default

# Datasets excluded from the bootenvironment:
.
.
.

# Set the sticky bit and make /var/tmp world-writable
chmod 1777 /mnt/var/tmp

# Configure boot environment bootfs
zpool set bootfs=system/BE/default system

# Configure NIC
.
.
.
# Set ftp for fetching the installation files
FTPURL="ftp://ftp.freebsd.org/pub/FreeBSD/releases/amd64/12.2-RELEASE"

# Install the files
echo "Starting the fetch and install"
cd /mnt
export DESTDIR=/mnt
for file in base.txz kernel.txz
do
  echo "Fetching ${file}"
  /usr/bin/fetch ${FTPURL}/${file}
  echo "Extracting ${file}"
  cat ${file} | tar --unlink -xpJf - -C ${DESTDIR:-/}
  rm ${file}
done
echo "finished with fetch and install"

# Create /etc/fstab file with encrypted swap
echo "Creating /etc/fstab"
cat << EOF > /mnt/etc/fstab
# Device            Mountpoint    FSType    Options    Dump    Pass#
/dev/gpt/swap.eli    none         swap        sw         0    0
EOF

# Create /boot/loader.conf
echo "Creating /boot/loader.conf"
cat << EOF >> /mnt/boot/loader.conf
kern.geom.label.disk_ident.enable="0"
kern.geom.label.gptid.enable="0"
zfs_load="YES"
vfs.zfs.min_auto_ashift=12
EOF

# Define variables
# Hostname:
HOSTNAME=". . ."

# Primary IP address:
IP=". . ."

# the netmask for this server
NETMASK=". . ."

# the default gateway for this server i.e. defaultrouter
GATEWAY=". . ."

# Create basic /etc/rc.conf
cat << EOF >> /mnt/etc/rc.conf
hostname="${HOSTNAME}"
ifconfig_em0="inet ${IP} netmask ${NETMASK}"
defaultrouter="${GATEWAY}"
sshd_enable="YES"
dumpdev="AUTO"
zfs_enable="YES"
EOF

cd
umount -f /mnt
zfs set mountpoint=/system system

echo "Rebooting system"
reboot
executes until the umount -f /mnt, reporting:
Code:
cannot mount '/mnt/system': failed to create mountpoint
property may be set but unable to remount system

The pool system is created, but is mounted under the altroot, i.e., /mnt. First, I tried zpool export system, which works; zpool import shows the pool system, but zpool import system returns no output, and the subsequent zpool list yields:
Code:
internal error: failed to initialize ZFS library
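For anyone following along, that sequence of attempts, as commands (pool name as in the script above):
Code:
zpool export system    # works
zpool import           # lists the pool "system" as importable
zpool import system    # returns no output
zpool list             # then fails with the ZFS library error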
Second, I tried simply rebooting. The machine clearly boots from the USB drive /dev/da0, which reports:
Code:
gptzfsboot: No ZFS pool located, can't boot

There are two issues here. (1) Despite my motherboard having a plurality of boot options (UEFI, legacy BIOS, and combinations prioritizing one over the other: UEFI: Built-in EFI Shell, UEFI: 1100, 1100, CD/DVD, HD, USB), regardless of what I select, e.g., the UEFI-first option, the machine stubbornly tries to boot from the legacy BIOS. (2) The reboot somehow loses the pool system.

The primary issue is to resolve (2); I can revisit (1) later, or just live with booting from legacy BIOS.

Again, any help would be appreciated.

Kindest regards,

M
 
If you look at your manual, it is on page 4-10:
PCI ROM Priority
You want EFI Compatibility ROM for all the choices in that section. Jam all the controls to EFI.
You also need to set the boot option filter to UEFI from the Boot tab in the BIOS (page 4-18 in the manual).
 
Some were x16 adapters and I milled off the extra fingers to fit x4 slot.
Do you know of any good instructions on how to do this with cheap, everyday equipment?
For adapting a video card to x4, I once tried sawing off part of the fingers.
But there was an SMD component near the edge fingers, which popped off and flew off to who knows where.
The card didn't work anymore :'‑(
 
Greetings all,

I lied: the pool system is not lost upon reboot. zpool import shows it, but any attempt to import it with zpool import system just returns a prompt #, and again any ZFS-based command returns the error:
Code:
internal error: failed to initialize ZFS library

zfs.ko is already loaded into the kernel. I was wondering whether I might be trying to mount over an existing mountpoint. zpool import -N system seems to import the pool, because zpool list now shows the pool system not mounted under the altroot. However, despite zpool(8) stating:
-N Import the pool without mounting any file systems.
zfs list shows all the proper mount points. Upon reboot, the machine attempts to boot from the pool storage, which (1) is not mounted and (2) is data storage only. This indicates that the pool system still cannot be found, so I tried to force it with zpool set bootfs=system/BE/default system, but to no avail.
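For reference, the properties involved can be inspected like this (dataset names as in my script; just a sketch of the checks):
Code:
zpool import -N system                                  # import without mounting
zpool get bootfs system                                 # which dataset the loader should boot
zfs get canmount,mountpoint,mounted system/BE/default   # state of the root dataset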

Time to call it a night.

Kindest regards,

M
 
Hi Phishfry,

sorry, I forgot to answer in my frustrating attempt to make the boot work.

Part of the problem is that the new BIOS has new options that are not in the manual for BIOS 1.0; additionally, some of the described options are not there. Furthermore, they are not very well explained. For example, what the heck is the (PCI) OptionROM, and since it affects the boot process, why is it in a section other than Boot? Regardless, I fumbled through it and was able to make UEFI work; nevertheless, the boot still fails, since the bootloader cannot find the pool system, which does not appear to be mounted on startup.

I do not know what else to try, so perhaps it is time to throw in the towel and abandon the idea.

Kindest regards,

M
 
You have to mount that manually, or script it, because your system pool probably has canmount=off.
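Something along these lines, untested, assuming the pool is named system as in your script:
Code:
zfs get canmount system system/BE/default   # check the property
zfs set canmount=on system/BE/default       # if it is off, turn it on
zfs mount system/BE/default                 # then mount the boot environment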
 
Hi snurg,

thank you for the reply. However, the problem is that I cannot import it because of the noted error:
Code:
internal error: failed to initialize ZFS library
I can import it with the -N option, which, of course, does not mount any of the datasets.

I also tried
Code:
# zpool import -N pool
# zfs mount -va

However, instead of the expected response:
Code:
service mountd reload
I get back just the prompt # and the ZFS library error.

I think the basic problem is that the script fails at the umount -f /mnt step, with:
Code:
Cannot unmount /mnt: Device busy

Kindest regards,

M
 
Greetings all.

after several more attempts, I decided to use bsdinstall. I tried the following:
  1. booted the installer, set up the keyboard, etc.
  2. configured the network
  3. invoked the shell for partitioning
  4. ran the script, partitioning only the USB drive and the NVMe (PCIe) drive
  5. exited the shell and continued with the installer
The result is exactly the same as with my script: the boot fails with no system zpool being found; zfs list shows nothing, although zpool import clearly shows the pool system.

So if even bsdinstall fails, there is no hope for me to figure it out.

Kindest regards,

M
 
M.
Your thread title says separating boot from OS. That seems to be appropriate.
I can't help you with ZFS. I use UFS and gmirror on SATA DOMs to boot my two 24-bay ZFS fileservers.
I can rebuild my fileservers' UFS disks in minutes. I seriously doubt I could lose both drives in a gmirror.
I also have a rescue USB stick for my fileservers just as an option. I got burned on a ZFS version update on FreeNAS many years ago and it made me approach ZFS on FreeBSD much more cautiously.
The problem is that ZFS is very complex and you can really get yourself into trouble if you don't understand what you are doing.


Do you know of any good instructions on how to do this with cheap, everyday equipment?
I have bought x1 and x8 video cards for my servers. I have never butchered a video card by cutting.
I have cut the back out of some PCIe slots with a Dremel, though.
The M.2 to PCIe adapters I cut were ridiculous. x16 lanes for a device that can only use x4 lanes.
I should have noticed when I bought them.
I bought like 4 batches from China via ebay when I first started messing with NVMe.
I was looking for low profile adapters for 1U chassis.


Not real proud of cutting any PCB, but sometimes I get in project mode and just do it.
 
mefizto
I'd then check how the USB installer image is configured.
Something seems to be missing, maybe things like 'zfs_enable="YES"' or the like (see the check below).
That would be completely reasonable, as a USB installer image does not really need a full configuration in the first place.
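For example, the usual ZFS knobs to look for (a rough check; /mnt as written by the installation script earlier in the thread):
Code:
grep zfs_load /mnt/boot/loader.conf   # should contain zfs_load="YES"
grep zfs_enable /mnt/etc/rc.conf      # should contain zfs_enable="YES"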

Phishfry
So you used the Dremel to carefully mill off the unwanted part of the slot?
Maybe I should have done it that way. I used a hacksaw. High vibration, flexing... not good.
 
Snurg
A hacksaw would work. Cut long and trim with a file or nail file.
Use a piece of duct tape as a cutting guide.
You really need a good work holder and perhaps a magnifying glass (you definitely need a steady hand).
Maybe you could use the side of the jaw of a pair of vise grips as a cutting guide for the hacksaw blade.

I used a milling machine and some rubber in the vise to keep the PCB and SMD parts from getting damaged.
It did raise hell being so thin, and the Bridgeport only goes to 1500 RPM. You need higher RPMs for PCB work.
 
Hi Phishfry,

thank you again for the reply.

Your thread title says separating boot from OS. That seems to be appropriate.
Yes, I corrected the title to more clearly describe the goal. I did a lot of searching, and it is surprising how often a title does not reflect the subject matter discussed, just like mine before.

I can't help you with ZFS. I use UFS and gmirror on SATA-DOM to boot my two 24 bay ZFS fileservers.
Do I understand it correctly that you have the OS on a UFS file system and the data on ZFS file system?

I cannot run a SATA DOM since, as noted, all my SATA ports are taken; hence my idea with the NVMe drive. My concern with running the OS from the USB drive is that (1) the frequent writing to it will wear it out and (2) it is rather slow, and I want to move additional processing onto the machine.

Since I, or even the installer, cannot make it work, I had another idea: installing only a minimal set of filesystems physically on the USB drive, with the remaining filesystems on the NVMe drive, mounted at the appropriate mount points on the USB drive.

The problem is, I do not know which minimal set of filesystems must physically reside on the USB drive so that I can still attempt to repair the system in case the NVMe drive fails. So I opened another thread, and hopefully someone will help.

Hi Snurg,

thank you for the suggestion, but the USB drive does not appear to be the problem, because, as noted, the messages indicate that the loader is looking for a pool that is not mounted.

Kindest regards,

M
 
Do I understand it correctly that you have the OS on a UFS file system and the data on ZFS file system?
Exactly. I have two Chenbro 24-bay chassis. I use SATA DOMs for the OS with UFS and the 24 bays for the zpool.
I use NVMe for L2ARC and SLOG.
gmirror runs across the two SATA DOMs (see the sketch below).
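(The basic gmirror setup is only a few commands; the device names below are placeholders, not my actual DOMs:)
Code:
gmirror load                                       # load the geom_mirror module
gmirror label -v -b round-robin gm0 da0 da1        # create the mirror from the two DOMs
echo 'geom_mirror_load="YES"' >> /boot/loader.conf # load it at boot
# then newfs and mount /dev/mirror/gm0 as usual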


running the OS from the USB drive
These are all valid concerns. What about a USB DOM in gmirror? Innodisk makes real-deal DOMs.
USB 3, if you have that onboard, is also available as a DOM.

One thing I have found is that many motherboards stuff the connectors very close together, so multiple USB DOMs may not be feasible. DOMs have some bulk, and you must plan accordingly.
What is nice are short extension cables to break the connector out; I did that with a SATA DOM.

I do think booting from NVMe is a good idea. Maybe put UFS there and the pool on your SATA.

The problem with DOMs is the power connectors; they are not standard. SuperMicro has a SATA DOM power socket on many motherboards, but the power connector is different from Innodisk's. So it helps to be handy with a soldering iron.

Some motherboards have a 'power over SATA' connector meant for SATA DOM (not supported by all SATA-DOM).
 
Hi Phishfry,

yes, on my board, the USB type A connector is very close to the SATA connectors. The manual is silent on whether it is USB 2 or 3. However, I found that there are USB DOMs that plug into the 9-pin header.

I would still prefer the USB plus NVMe solution, especially if I could find an SLC USB drive, and I have already bought the NVMe hardware. However, the USB DOM is a potential solution. Where do you buy yours?

Why do you run a mirror? I remember asking about it a while ago, and the people that I consider knowledgeable argued against it. I do not remember why; I will try to see if I still have the notes.

Kindest regards,

M
 
Why do you run a mirror
For redundancy. You can lose a whole drive and you have a hot replacement.

On one backup box I use two drives in a gmirror. I have three drives in total and keep one on a shelf.
I rotate a drive into the gmirror maybe weekly and shelve the removed one.
gmirror handles it all seamlessly; it notices the stale data and refreshes the disk.
That is my off-line backup: maybe a week old, but better than nothing.
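The rotation itself is roughly just a remove and an insert (device names are placeholders):
Code:
gmirror remove gm0 da1    # pull the disk that goes to the shelf
gmirror insert gm0 da2    # add the shelf disk; gmirror rebuilds it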
 
if I could find an SLC USB
I have some of these, and they were not very quick (25 MB/s).

USB 3.0 DOMs have superior throughput.
 