Solved UEFI won't always boot drives larger than 2TB without the right BIOS setting

You most probably already checked this, hhartzer, but can you confirm you're running the latest BIOS version for your board, and have checked whether there are any firmware updates for your 16TB drives?
Just my 2¢.
 
The error you see is from the FreeBSD ZFS loader itself. This means you're already past the firmware (BIOS or UEFI).

The loader uses libsa to find and load the kernel; that's where the issue occurs. A quick search doesn't show any such limits, and the source code doesn't mention them either.
 
There is no magic, though. libsa works with disks through the "platform code", i.e. BIOS, UEFI, etc.
 
hhartzer, many old BIOSes (in legacy or UEFI mode) are known to be unable to access disks beyond 2TB.
That's because the LBA (the sector / block number) would exceed 2^32; with 512-byte sectors, 2^32 blocks is exactly 2 TiB.
In many cases the block number simply wraps around and the firmware reads the wrong data.

That's a well-known problem. I also experienced it firsthand.
If there is no newer BIOS version with a fix, you can work around it by using smaller disks,
or by partitioning the larger disks and creating a separate boot pool at the start of each disk.
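As a rough sketch of that workaround (device name, sizes, and labels below are just examples, not taken from this thread), the idea is to keep everything the firmware has to read within the first 2TB of the disk:
Code:
# Illustrative layout only; adjust device names, sizes, and labels to your setup.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512K ada0              # legacy gptzfsboot stage
gpart add -t efi -s 512M ada0                       # ESP for UEFI boot
gpart add -t freebsd-zfs -s 16G -l bootpool ada0    # small pool holding /boot, well below 2TB
gpart add -t freebsd-zfs -l rootpool ada0           # rest of the 16TB disk
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
With a layout like that the firmware only ever has to read LBAs below the 32-bit limit; once the kernel is running, its own drivers address the full disk.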
 
There is no magic, though. libsa works with disks through the "platform code", i.e. BIOS, UEFI, etc.
I was under the impression that loader.efi calls ExitBootServices() and hence is on its own.
I agree that with BIOS, libsa still relies on BIOS services to get the ball rolling.
 
I presume that you are on the most recent BIOS / BMC / Bundle Firmware for MBD-X9SBAA-F (revision 1.1)?

Since suspicion has already fallen on the firmware and you have multiple 16TB disks available, perhaps try installing a different OS (Windows/Linux). If it is solely a firmware/UEFI issue, you should experience the same problems; if, on the other hand, the boot problems are not present, that would suggest a modification to the FreeBSD loader could make booting >2TB possible.
 
Finally, I wrote over a terabyte of "random" data. I pressed the reset button.

Inspired by this, I've created a VM with FreeBSD 14.0, default install (done by hand in the shell, not by bsdinstall):
Code:
[15:35:17] fbsdforums(/usr)# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
rpool  16.0T  1.07T  14.9T        -         -     0%     6%  1.00x    ONLINE  -
[15:35:20] fbsdforums(/usr)#

[15:35:22] fbsdforums(/usr)# gpart show
=>         40  34359738288  vtbd0  GPT  (16T)
           40         1024      1  freebsd-boot  (512K)
         1064      1048576      2  efi  (512M)
      1049640          984         - free -  (492K)
      1050624  34358685696      3  freebsd-zfs  (16T)
  34359736320         2008         - free -  (1.0M)

=>         40  34359738288  vtbd1  GPT  (16T)
           40         1024      1  freebsd-boot  (512K)
         1064      1048576      2  efi  (512M)
      1049640          984         - free -  (492K)
      1050624  34358685696      3  freebsd-zfs  (16T)
  34359736320         2008         - free -  (1.0M)

[15:35:23] fbsdforums(/usr)#
And I booted it using either firmware, BIOS or UEFI. It went just fine. I wrote over 1TB of data and tested it again; it went without problems.
I've updated the system (I wanted to have a boot environment, to see if maybe that plays a role), but again all tests are OK.

I guess it would be nice to push past the 2TB mark as well, just in case. The copy speed was not that great, about 100 MB/s, so it took some time. But I might test it just out of curiosity.
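For anyone who wants to reproduce a test like this without real 16TB hardware, one way to get 16TB virtual disks on a small host is sparse image files (file names here are arbitrary):
Code:
# Illustrative: create two sparse 16TB backing files and attach them
# to the VM as virtio-blk disks (they show up as vtbd0/vtbd1 in the guest).
truncate -s 16T disk0.img
truncate -s 16T disk1.img
Only the blocks actually written consume space on the host, so the guest sees full-size 16TB disks while the images stay small until data is copied in.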
 
I am quite sure that loader (efi or not) does not implement controller drivers, disk drivers, etc, so it must rely on the firmware.
Exactly.
Because of this, for example, gpart(8) has a lenovofix attribute for irregular firmwares (mostly on Lenovo PCs), though it basically matters for legacy boot, since what it changes is the protective MBR.
The ZFS-related boot code implements support for non-read-compatible feature flags, but that is filesystem driver code, not a device driver, so it is nothing the firmware is aware of.
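For reference, setting that attribute on an affected machine is a one-liner (the device name is just an example):
Code:
# Illustrative: rewrite the protective MBR so affected Lenovo-style firmware will boot the disk
gpart set -a lenovofix ada0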
 
Thank you all for the replies!

I really appreciate the help. This problem has been very odd. Wouldn't take much to feel crazy over this.

UFS is nice because it seems to "fail right away" on the hardware, whereas ZFS seems like it must rearrange some blocks, or something, and become unbootable.

This is the latest BIOS, v1.1. I see no options about CSM/Legacy Boot.

I'll try installing Linux on a larger drive and see what happens, just to confirm.

I did try (with UFS, because it "fails faster" in this regard) making a smaller partition for /, but couldn't get something workable with the larger drive.
 
I installed Debian 12 to a 16TB drive. Rebooted and landed right in the grub prompt. I'm assuming the exact same issue.

Unless I can figure out a workable way to partition the bigger drives, I think I'm just going to roll with 2TB or smaller. This is definitely a UEFI issue specific to this hardware.

I do feel like a snippet about this possibility should be added to the FreeBSD Handbook. I'd be happy to contribute that if there's interest.

PS: The FreeBSD installer is so much nicer and faster than Debian's!
 
I feel like a moron... but pretty sure I got it working.

Under PCIe/PCI/PnP Configuration in the BIOS:

"Launch Storage OpRom Policy"

It was "Legacy Only". I set it to "EFI Compatible."

Debian on 16TB now boots. UFS on 16TB boots. I'm doing some writes to try and confirm it, but am finally feeling pretty confident.

I guess, to be fair, this isn't the most obvious BIOS setting, and it certainly deserves some additional documentation in the BIOS itself and in the manual (which I've pored over quite a bit).

I hope this helps someone else!
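For what it's worth, a minimal sketch of that confirmation step (the path and size below are arbitrary) is simply to push writes well past the 2TB boundary and then reboot:
Code:
# Illustrative: write roughly 3TB so some blocks land beyond the 2TB boundary, then reboot
dd if=/dev/urandom of=/usr/testdata bs=1M count=3000000
shutdown -r now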
 
Are you able to repeat your first tests re ZFS on FreeBSD with a single 16 TB partition?
 
I did a reinstall with a 2x 16TB ZFS mirror. I've put 8TB total on the mirror. It reboots just fine now :).
Glad to hear. :)

Additional note:
CSM stands for Compatibility Support Module, which provides compatibility with legacy BIOS. It's an optional component; some firmwares implement it just to add the legacy BIOS calls, but some apply ALL the limitations of BIOS unnecessarily, even when booted in the new UEFI mode. That would be the unfortunate case here.
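On FreeBSD you can check which path the firmware actually used once the system is up; the machdep.bootmethod sysctl reports it (the output shown is just an example):
Code:
# Reports "UEFI" or "BIOS" depending on how the kernel was booted.
sysctl machdep.bootmethod
machdep.bootmethod: UEFI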
 