[Solved] Max size of boot/root partition?

Hi,

I have a UEFI-booting Dell T610 with 8x3TB SAS drives in a RAID 6. I can boot from the 12.1-RELEASE ISO and install FreeBSD, taking the defaults for all the prompts. On reboot, I get this error:

Code:
Setting currdev to disk0p2:
Loading /boot/defaults/loader.conf
Loading /boot/device.hints
Loading /boot/loader.conf
Loading /boot/loader.conf.local
Startup error in /boot/lua/loader.lua:
LUA ERROR: /boot/lua/menu.lua:37: module 'drawer' not found:
    no field package.preload['drawer']
    no file '/boot/lua/drawer.lua'
    no file '/boot/lua/5.3/lib/drawer.so'
    no file '/boot/lua/5.3/lib/loadall.so'
    no file './drawer.so'.

can't load 'kernel'

Type '?' for a list of commands. 'help' for more detailed help.
OK

I have rebooted via the CD into a shell, mounted the slice, and confirmed that /boot/lua/drawer.lua is there. The other files are present neither on the disk nor on the CD. Using less than the full disk permits FreeBSD to install. The disk is about 17TB in size.
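
For reference, this is roughly what I did from the live CD shell; the mfid0 device name is just a placeholder for the PERC-attached volume (gpart show reports the real name):

Code:
# mfid0p2 is a placeholder for the installed root partition
mount /dev/mfid0p2 /mnt
ls -l /mnt/boot/lua/drawer.lua
umount /mnt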

Are there limits on the boot/root size?
 
  • Did you check the checksum of the downloaded install image?
  • What do you mean by RAID 6? Does that mean you're using a RAID controller?
  • Are you using a ZFS or UFS filesystem? For ZFS, it's better to disable the controller's RAID facilities & give ZFS the raw disks.
  • Even for non-ZFS, many sysadmins prefer software RAID, because in the event of a hardware failure you're tied to your vendor's service. In some cases, you need the exact same controller model & firmware version to access your data...
  • Max. volume size for UFS is ~8 ZB (2^(64+9): 64 bit + 9 bit for blocksize=512), more precisely 8 ZiB (zebibytes). For ZFS it's ~256 quadrillion zebibytes.
  • Off-topic, but you may consider a setup like a 2- or 3-way mirror for the root filesystem + 5 (6) disks in a RAIDZ1 or RAIDZ2 for data (/home etc.); a rough sketch follows below. With disks larger than ~8TB, a 2-way mirror or RAIDZ1 can no longer be considered safe for statistical reasons (likelihood of errors). So if you plan to subsequently replace the disks with larger ones, it's better to choose RAIDZ2 right away.
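
A rough sketch of such a layout, purely as an example with placeholder device names (adjust to whatever your controller exposes):

Code:
# da0..da7 are placeholder device names
# 3-way mirror for the root pool, the remaining 5 disks as RAIDZ2 for data
zpool create zroot mirror da0 da1 da2
zpool create tank raidz2 da3 da4 da5 da6 da7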
 

1. Yes, I verified the SHA256 and SHA512 checksums after the download, then verified the CD after burning (with ImgBurn).
2. Yes, I am using a hardware RAID controller with RAID 6 (striping with dual distributed parity). I have a Dell PowerEdge T/R 610. It has a Dell PERC H700 1GB RAID card with BBU (an LSI/Broadcom-made card), sitting in the chassis' dedicated storage card slot. The drives are Hitachi Ultrastar 7K3000 (nearline enterprise 6Gb/s SAS, 7200RPM, 512-byte sector drives). RAID 6 is N+2.
3. I am using UFS (this is the default AFAICT). The installer detected UEFI (not BIOS). I chose all the defaults during the install; I didn't change any option, just hit the enter key. Well, I typed in my password twice.
4. I am limited to the RAID configurations offered by the H700 and the number of spindles (8) I can fit into the chassis.

I am not a fan of ZFS or JBOD/target, which is why I have what I have. I have always used hardware RAID, and I have plenty of spares.

I think there's a bug in either gpart or the loader, but how do you debug these kinds of issues? Once FreeBSD is running (e.g. a LiveCD boot), I can provision the entire disk (about 17TB) with a GPT partition and UFS (newfs works fine). It's only after a fresh install and a reboot that FreeBSD suddenly can't "read" the UFS filesystem it created during the install.
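
For example, from the live CD, something along these lines works on the whole volume (the device name is illustrative, not my exact one):

Code:
# mfid0 is a placeholder device name
gpart destroy -F mfid0                  # wipe any existing partition table
gpart create -s gpt mfid0               # new GPT
gpart add -t freebsd-ufs -a 1m mfid0    # one big UFS partition
newfs -U /dev/mfid0p1                   # create the filesystem with soft updates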

I am replacing 10K 450GB SAS drives and FreeBSD ran fine with those.
 
ZFS has several advantages:
  • administration offers much more flexibility since the volume manager is integrated
    E.g. boot environments (sysutils/beadm); it works well for jail(8)s & virtualization, and it's much easier to secure your system (read-only datasets)
  • data integrity through checksums
  • optional data & metadata compression (one-liner examples after this list)
  • the next version (OpenZFS) will support native encryption; until then you can use sysutils/pefs-kmod (which of course works on top of any filesystem)
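
E.g., the compression & read-only bits are one-liners (dataset names below are hypothetical):

Code:
# hypothetical dataset names
zfs set compression=lz4 tank/data       # enable lz4 compression
zfs set readonly=on tank/jails/base     # make a dataset read-only
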
  1. Update your system from the live CD (since that boots fine): freebsd-update -b <sysroot> fetch install (see the combined sketch after this list). Note that with ZFS you could create a boot environment (beadm create ...) for secure & instant rollback...
  2. From the live CD, install devcpu-data with pkg -c <sysroot> install devcpu-data to cope with any CPU bugs (-r <sysroot> if that fails), & enter into loader.conf(5):
    Code:
    verbose_loading="YES"
    cpu_microcode_load="YES"
    cpu_microcode_name="/boot/firmware/intel-ucode.bin"
  3. Then reboot without the CD & let's see if that leads to better results.
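
A combined sketch of steps 1 & 2, assuming the installed root is mounted at /mnt (the device name is a placeholder):

Code:
# mfid0p2 is a placeholder for the installed root partition
mount /dev/mfid0p2 /mnt
freebsd-update -b /mnt fetch install
pkg -c /mnt install devcpu-data    # or: pkg -r /mnt install devcpu-data
umount /mnt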
 
Well, I am not getting far enough in the loader for loader.conf options to matter. My issue is that after the install, on reboot, the loader can't read the slice it just finished setting up and installing onto:

I hit "3" at the loader menu prompt.
Code:
OK lsdev
cd devices:
    cd0:        0 blocks (no media)
disk devices:
    disk0:        35156656128 x 512 blocks
      disk0p1:  EFI
      disk0p2:  FreeBSD UFS
      disk0p3:  FreeBSD swap
net devices:
    net0:
    net1:
OK ls disk0p2:
open 'disk0p2:/' failed: no such file or directory
OK

When I boot from the CD or a USB, I hit "3" at the loader menu and type:
Code:
OK set currdev=disk0p2:
OK autoboot

This works, the system boots up fine, and I can ls from the loader OK prompt:
Code:
OK ls disk0p2:
disk0p2:/
 d  .snap
 d  dev
 d  user
     COPYRIGHT
 d  boot
{...}
--more--  <space> page down <enter> line down <q> quit _

I have to figure out why, after rebooting from a successful fresh install, the loader can't "see" the slice it just installed FreeBSD onto. Oddly, the loader can ls disk0p1: (the EFI partition).
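
The EFI partition can be checked the same way from the live CD, e.g. (device name illustrative):

Code:
# mfid0p1 is a placeholder for the EFI partition; the ESP is FAT
mount_msdosfs /dev/mfid0p1 /mnt
ls -R /mnt/efi      # list whatever the installer put on the ESP
umount /mnt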
 
I found my issue. After reading https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234031, specifically comment https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234031#c48, I physically disconnected and disabled the SATA port for my CD-ROM. I also removed/disabled all other bootable devices (e.g. INT13h), leaving only the Dell PERC H700 card and its single volume (17TB RAID 6). This let my Dell T610 reboot without issue.
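
For anyone checking the same thing, the UEFI boot entries the firmware still knows about can be listed from a running FreeBSD system:

Code:
efibootmgr -v    # list the firmware's UEFI boot entries and which one is active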

So there's still something wrong with the loader and BIOS/UEFI boot (INT13h) after the fixes for https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=234031.

I have no idea why it would work with different sizes (e.g. different RAID configurations).
 