Where did everything in /boot go?

Hi!

OK, maybe a stupid question, but where on earth is everything that normally lives in /boot (kernel, loader.conf, etc.) on a ZFS-on-root FreeBSD 10 install? (Perhaps I've spent too much time away from FreeBSD?) On my system there is a directory called /boot that is a link to /bootpool, and /bootpool is empty :-/

My system is FreeBSD 10-BETA2 with gptzfsboot and an encrypted disk. I searched for this here with no luck (or maybe I just don't know what to search for).

Thank you!!
 
If /boot is not a directory then your installation is not a standard one. Most likely the /bootpool directory is used to mount a UFS filesystem that holds the boot files. Without more details this is just guessing, based on my experience with various ZFS-on-root methods.
 
kpa said:
If /boot is not a directory then your installation is not a standard one. Most likely the /bootpool directory is used to mount a UFS filesystem that holds the boot files. Without more details this is just guessing, based on my experience with various ZFS-on-root methods.

You're right, but as I said before, /boot is a link pointing to /bootpool, and the latter is empty. No UFS filesystem is used here.

The installation is a "normal" FreeBSD 10 install using the new ZFS root option. I don't have my laptop with me right now, sorry :-/

Thank you for your interest ;-)
 
Your system is encrypted, so the contents of the /boot directory will be encrypted too. Obviously the location your system boots from must not be encrypted, because the FreeBSD kernel and the GELI modules need to be loaded before decryption can happen.

Thus, your system must have a UFS partition or a separate, unencrypted zpool containing the files needed to boot the system (i.e. the /boot folder), plus your 'main' pool which gets mounted on root (hence why the system runs even though /boot on your main pool is empty).

I'd expect someone around here to have already tried 10.0 with full encryption and to know exactly what the installer does (I've no idea; I haven't used encryption on FreeBSD at all yet)...

The first thing I'd do is run # gpart show {disk} and see whether there are two ZFS partitions, or a ZFS and a UFS partition. I'm sure it shouldn't take too much effort to find the partition FreeBSD is booting from and get into it.
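
For illustration, on an encrypted ZFS-on-root install I'd expect the disk to look roughly like this (a sketch with made-up sizes and my own annotations, not output from your machine): one small unencrypted freebsd-zfs partition for the boot pool, and one large freebsd-zfs partition holding the GELI-encrypted main pool:

Code:
# gpart show ada0
=>        34  312581741  ada0  GPT  (149G)
          34        128     1  freebsd-boot  (64K)
         162    4194304     2  freebsd-zfs   (2.0G)   <- unencrypted boot pool
     4194466    8388608     3  freebsd-swap  (4.0G)
    12583074  299998701     4  freebsd-zfs   (143G)   <- GELI-encrypted main pool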
 
usdmatt said:
Your system is encrypted, so the contents of the /boot directory will be encrypted too. Obviously the location your system boots from must not be encrypted, because the FreeBSD kernel and the GELI modules need to be loaded before decryption can happen.

Thus, your system must have a UFS partition or a separate, unencrypted zpool containing the files needed to boot the system (i.e. the /boot folder), plus your 'main' pool which gets mounted on root (hence why the system runs even though /boot on your main pool is empty).

I'd expect someone around here to have already tried 10.0 with full encryption and to know exactly what the installer does (I've no idea; I haven't used encryption on FreeBSD at all yet)...

The first thing I'd do is run # gpart show {disk} and see whether there are two ZFS partitions, or a ZFS and a UFS partition. I'm sure it shouldn't take too much effort to find the partition FreeBSD is booting from and get into it.

Thank you very much for your answer. Tonight I will test everything you've suggested ;-)
 
If you opt to encrypt your zpool, the 10-BETA2 installer creates two zpools. Please execute # zpool import to get a list of available zpools, and then import the unencrypted boot pool.

Last time I tried, the boot pool would not be present after reboot and had to be re-imported manually.
 
Savagedlight said:
If you opt to encrypt your zpool, the 10-BETA2 installer creates two zpools. Please execute # zpool import to get a list of available zpools, and then import the unencrypted boot pool.

Last time I tried, the boot pool would not be present after reboot and had to be re-imported manually.

Thank you very much. This gives me plenty to work with on my laptop tonight ;-)

Cheers
 
Savagedlight said:
If you opt to encrypt your zpool, the 10-BETA2 installer creates two zpools. Please execute # zpool import to get a list of available zpools, and then import the unencrypted boot pool.

Last time I tried, the boot pool would not be present after reboot and had to be re-imported manually.

Hi!
You're right:

Code:
root@leonsio:~ # zpool import
   pool: bootpool
     id: 16839700242918591603
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	bootpool                                      ONLINE
	  gptid/4ffd8e5b-40dd-11e3-a109-001c23faed62  ONLINE

Then I ran:

# zpool import -f bootpool

Also, I just discovered this information (I hope it can be useful for other people with the same problem):

Please note the following:

If using the ZFS installation option and full-disk encryption is enabled, a few entries will need to be manually added to loader.conf(5) before the 'bootpool' zpool will be available after the system boots. This manual step is expected to be fixed in the next 10.0 release cycle build.

The entries that need to be added are:

zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"

This can be done at the final menu of bsdinstall(8), when prompted to boot into the newly-installed system; alternatively, this can be done post-install, in which case, the following must be run before appending loader.conf(5):

# zpool import -f bootpool
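
In other words, done post-install it boils down to something like this (a sketch; it assumes the installer's usual layout, where /boot resolves into the mounted bootpool):

Code:
# zpool import -f bootpool
# cat >> /boot/loader.conf <<'EOF'
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
EOF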

Thank you very much for your great help ;-)
 
Hello,

To follow up on what has already been said, I also have a weird issue with pools being exported after reboot.
I have an HP ProLiant MicroServer Gen8, which has an SD card slot on the motherboard, and I decided to use it for hosting the OS in order to keep all four drive bays free for storage (raidz2 or striped mirror).

Initially, when I tried to install FreeBSD 11.0-RC1 using the Auto ZFS option (GPT scheme), the system could not boot at all. The issue appears to be HP-related: booting from a GPT-partitioned SD card doesn't work well with the HP MicroServer. So I gave it a try with an MBR/BSD label scheme, which boots fine, BUT besides the bootpool being exported every time the system is rebooted, all other pools except zroot are also exported after reboot!

I managed to reproduce the issue in a VM (FreeBSD 11.0-RC2 in VirtualBox).
Here is what I have:
The VM has two controllers, one IDE and one SATA. There are two virtual drives, one attached to the IDE controller and the other to the SATA controller:

Code:
# camcontrol devlist
<VBOX HARDDISK 1.0>  at scbus0 target 0 lun 0 (pass0,ada0)
<VBOX CD-ROM 1.0>  at scbus1 target 0 lun 0 (cd0,pass1)
<VBOX HARDDISK 1.0>  at scbus2 target 0 lun 0 (pass2,ada1)

On the system drive I have the following (the idea being to reproduce an MBR-partitioned SD card setup):
Code:
# gpart show ada0
=>        63  33554369  ada0  MBR  (16G)
          63         1        - free -  (512B)
          64  33554360     1  freebsd  [active]  (16G)
    33554424         8        - free -  (4.0K)

# gpart show ada0s1
=>         0  33554360  ada0s1  BSD  (16G)
           0   4194304       1  freebsd-zfs   (2.0G)
     4194304   4194304       2  freebsd-swap  (2.0G)
     8388608  25165744       4  freebsd-zfs   (12G)
    33554352         8          - free -  (4.0K)

On the second drive I have a GPT scheme with a single partition:
Code:
# gpart show ada1
=>       40  8388528  ada1  GPT  (4.0G)
         40     2008        - free -  (1.0M)
       2048  8384512     1  freebsd-zfs  (4.0G)
    8386560     2008        - free -  (1.0M)

I created a pool named "ztank" on the ada1 drive, and it disappears after reboot:
Code:
# zpool import
   pool: ztank
     id: 8653646329154612492
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	ztank        ONLINE
	  gpt/test0  ONLINE

   pool: bootpool
     id: 12326811282761698183
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	bootpool   ONLINE
	  ada0s1a  ONLINE
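
For what it's worth, I believe the boot-time import is driven by /boot/zfs/zpool.cache (the file the loader.conf entries mentioned earlier in this thread point at), so one way to investigate would be to check the pools' cachefile property and dump the cache file itself (a sketch using the pool names above; zdb -C -U should print the cached pool configurations):

Code:
# zpool get cachefile zroot ztank
# zdb -C -U /boot/zfs/zpool.cache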

Is anyone else having the same issue with other pools being exported after reboot?
 
Yes. Today I installed FreeBSD 11.0-RELEASE on an IBM x3500. The installer created two pools, bootpool and zroot (my zroot is not encrypted, so I don't know why there is an extra bootpool). After reboot my bootpool is not mounted: boot is OK, but bootpool is always exported, so /boot is not accessible (freebsd-update complains about it, you can't edit /boot/loader.conf without an extra zpool import bootpool, etc.).

One observation: bootpool is automatically mounted on my notebook (GPT partition scheme), but NOT mounted on the IBM x3500 (MBR partition scheme).

Maybe it's related to the MBR partition scheme? Any ideas?
 
Hi!
You're right:

Code:
root@leonsio:~ # zpool import
   pool: bootpool
     id: 16839700242918591603
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

	bootpool                                      ONLINE
	  gptid/4ffd8e5b-40dd-11e3-a109-001c23faed62  ONLINE

Then I ran:

# zpool import -f bootpool

Thank you very much for your great help ;-)
Thank you!

I just finished installing FreeBSD 11.0-RELEASE-p1 with the options ZFS - MBR - No Encryption, and I still had the same problem; fortunately the information you posted made it work just fine.

The box is a GIGABYTE GB-BXi5H-4200 (with the wireless card swapped for a Ralink RT3090, since the included AzureWave RTL8723AE didn't work at all).

Since I have always used GPT, I hadn't noticed the bug until today, when I tried an MBR installation.

I noticed that the bug report is still open; hopefully they fix it in the next releases.

Thank you so much for your information.
 
But you need to run "zpool import -f bootpool" after every boot (in rc.local, for example), right?

That's right, or, as they wrote above, you just add the following to loader.conf:
Code:
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"
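
If you go the rc.local route instead, a minimal sketch (assuming the pool is named bootpool, as in this thread) could be:

Code:
# in /etc/rc.local: import the boot pool if it is not already present
zpool list bootpool >/dev/null 2>&1 || zpool import -f bootpool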
 
Sorry for the necro. I ran into the same bug on 13-CURRENT; it still appears to be unfixed.

Putting anything into /boot/loader.conf is pointless: when the pool has been exported and has not been manually imported yet, the system can't see that file, because the bootpool mount is missing. A vicious circle.

Does anyone have another suggestion?
 
I found myself here via bug 212258, which (glancing at 267843) might have been overcome by events …

zroot/ROOT/default is the pool which is mounted under /boot

Please, are you certain?

For me, the boot environment is mounted at the root / of the file system (not under /boot).

Code:
% bectl list -c creation | tail -n 7
1500025-006-base-ports  -      -          1.78G 2024-10-10 16:32
1500025-007-base        -      -          1.94G 2024-10-11 07:20
1500025-008-base        -      -          1.98G 2024-10-11 16:15
1500025-009-base        -      -          121M  2024-10-12 02:59
1500025-010-ports-force -      -          32.3M 2024-10-12 10:00
1500025-011-base        -      -          1.32G 2024-10-12 18:22
1500025-012-base        NR     /          318G  2024-10-13 03:27
% mount | grep 1500025-012-base
august/ROOT/1500025-012-base on / (zfs, local, noatime, nfsv4acls)
% zfs get mountpoint,canmount august/ROOT/1500025-012-base
NAME                          PROPERTY    VALUE       SOURCE
august/ROOT/1500025-012-base  mountpoint  none        inherited from august
august/ROOT/1500025-012-base  canmount    noauto      local
%
 
Thanks, that's consistent with none for the mountpoint property. However, (for example) zroot/ROOT/default would be mounted at /, not under /boot.
 
Code:
# zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot                810M   119G    96K  /zroot
zroot/ROOT           807M   119G    96K  none
zroot/ROOT/default   807M   119G   807M  /
zroot/home            96K   119G    96K  /home
zroot/tmp            104K   119G   104K  /tmp
zroot/usr            288K   119G    96K  /usr
zroot/usr/ports       96K   119G    96K  /usr/ports
zroot/usr/src         96K   119G    96K  /usr/src
zroot/var            624K   119G    96K  /var
zroot/var/audit       96K   119G    96K  /var/audit
zroot/var/crash       96K   119G    96K  /var/crash
zroot/var/log        144K   119G   144K  /var/log
zroot/var/mail        96K   119G    96K  /var/mail
zroot/var/tmp         96K   119G    96K  /var/tmp
 
Maybe "boot" is being confused with "boot filesystem", "boot environment", and so on in the manpage.

Code:
bectl itself accepts an -r flag specified before the command to
indicate the beroot that should be used as the boot environment root,
or the dataset whose children are all boot environments.  Normally
this information is derived from the bootfs property of the pool that
is mounted at /, but it is useful when the system has not been booted
into a ZFS root or a different pool should be operated on.  For
instance, booting into the recovery media and manually importing a
pool from one of the system's resident disks will require the -r flag
to work.

Code:
activate [-t | -T] beName
	Activate the given beName as the default boot filesystem.  If
	the -t flag is given, this takes effect only for the next
	boot.  Flag -T removes temporary boot once configuration.
	Without temporary configuration, the next boot will use the
	zfs dataset specified in the boot pool bootfs property.
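
So, for instance, activating an environment and then checking which dataset the pool will boot (a sketch; the BE and pool names are borrowed from the posts above):

Code:
# bectl activate 1500025-013-base
# zpool get bootfs august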

Maybe the bootfs pool property is set to zroot/ROOT/default by default, and that dataset is mounted at /.
 
Maybe the bootfs pool property is set to zroot/ROOT/default by default, and that dataset is mounted at /.

True, if the user allows the generic zroot name for the pool.



… derived from the bootfs property of the pool that is mounted at /, …

With a distinctively-named pool and a descriptively-named boot environment (1500025-013-base):

Code:
% zpool get bootfs august
NAME    PROPERTY  VALUE                         SOURCE
august  bootfs    august/ROOT/1500025-013-base  local
%
 