ERROR: cannot open /boot/lua/loader.lua: no such file or directory.

Hello !
I'm using the latest version of FreeBSD, running very smoothly from an NVMe drive in a USB 3.1 enclosure, with absolutely no problems, alongside a Windows 11 installation on the internal NVMe SSD (as stated in my signature).
I was working on FreeBSD, but I had to reboot into Windows for a customer in need (thanks, AnyDesk...). I rebooted smoothly, as usual, without any problem. No hard reset, I didn't upgrade any software, and neither did Windows.

Now I can't boot FreeBSD anymore. At boot, the EFI loader (revision 1.1) shows several lines (command line arguments, image base... then "Ignoring Boot0001: Only one DP found"), tries the ESP, tries PciRoot on disk0p1 and p2, and on disk0p3 it prints:
Setting currdev to zfs:zroot/default:
Failed to find bootable partition
ERROR: cannot open /boot/lua/loader.lua: no such file or directory.


I tested booting from this external drive on another device: same errors. I tried things like load kernel (not found); gpart is also not found.

I don't know what to do next. In the BIOS, I run UEFI with CSM, and I've never had any problem.
Thanks !
 
To begin with, we need the gpart show output for this disk.
I don't understand why you mention CSM, since it seems you boot in UEFI mode.

From what you describe, I suppose the problem isn't in the FreeBSD loader, and probably not in the EFI variables either.
 
Setting currdev to zfs:root/default:
Was this a fairly default install? Because the boot pool typically is called zroot, not root.

I run UEFI with CSM and I never had any problem.
Did you perhaps turn off CSM in order to boot Windows? You don't get to see loader.efi(8) if you CSM boot (it's loaded from the efi partition when UEFI booting).
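To take the guesswork out of the CSM question: a quick way to confirm which firmware path a running FreeBSD system actually used is the machdep.bootmethod sysctl. A minimal check, runnable from the working install or a live CD (FreeBSD-specific commands):

```shell
# Reports "UEFI" or "BIOS" depending on how the kernel was started.
sysctl machdep.bootmethod

# When UEFI-booted, efibootmgr(8) lists the firmware's boot entries,
# which helps explain messages like "Ignoring Boot0001".
efibootmgr -v
```

If this prints BIOS, then loader.efi(8) was never involved and the CSM path is in play.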
 
Hi, thank you for your replies !

Was this a fairly default install? Because the boot pool typically is called zroot, not root.
Oops, bad typo. That's zroot indeed, sorry for this. I've corrected it accordingly.

Did you perhaps turn off CSM in order to boot Windows? You don't get to see loader.efi(8) if you CSM boot (because it's loaded from the efi partition when UEFI booting).

Well, regarding UEFI, this is how I set it up; that's how it is named in the BIOS. Running UEFI only, I get an error (Key invalid). I only tested it "just in case", but I've never had to change it since my first install (was it 13.1 or 13.2? I can't remember).

To begin with, we need the gpart show of this disk.
I don't understand why you speak of CSM since it seems you boot in UEFI mode.

Here is the gpart show output, from my other computer:
Code:
=>        40  1000215136  da1  GPT  (477G)
          40      532480    1  efi  (260M)
      532520        1024    2  freebsd-boot  (512K)
      533544         984       - free -  (492K)
      534528     4194304    3  freebsd-swap  (2.0G)
     4728832   995485696    4  freebsd-zfs  (475G)
  1000214528         648       - free -  (324K)

Everything looks fine...
 
The message "Failed to find bootable partition" seems to come from the FreeBSD EFI loader.
From /usr/src/stand/efi/loader/main.c:

Code:
/*
     * Try and find a good currdev based on the image that was booted.
     * It might be desirable here to have a short pause to allow falling
     * through to the boot loader instead of returning instantly to follow
     * the boot protocol and also allow an escape hatch for users wishing
     * to try something different.
     */
    if (find_currdev(uefi_boot_mgr, is_last, boot_info, bisz) != 0)
        if (uefi_boot_mgr &&
            !interactive_interrupt("Failed to find bootable partition"))
            return (EFI_NOT_FOUND);

I don't have the sufficient knowledge (nor time) to trace the problem (probably in find_currdev()). But maybe someone else will.

For the moment, you should try to import your zpool and verify that the bootfs property is correctly set to zroot/ROOT/default: zpool get bootfs zroot
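A hedged sketch of that check, assuming the pool imports from a live environment (the -N and -R flags avoid mounting anything over the live system):

```shell
# Import without mounting any datasets, under an alternate root as a safety net.
zpool import -N -R /mnt zroot

# bootfs should point at the boot environment, typically zroot/ROOT/default.
zpool get bootfs zroot

# If it is wrong or unset, it could be corrected with:
# zpool set bootfs=zroot/ROOT/default zroot

zpool export zroot
```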
 
Dear perceval:
1. Boot FreeBSD with the FreeBSD DVD ISO.
2. Enter the shell, don't install.
3. Import your zpool somewhere of your choosing (example: zpool import -R /tmp/xxx zroot).
Note: back up all your data at that point.
4. Check what your default FreeBSD boot location is (I think that is your issue).
5. Make sure of your ZFS boot location with the command zpool set bootfs=zroot/system zroot.
Maybe that will help you.
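The steps above could be sketched like this (the altroot path and the zroot/system dataset name are the poster's examples; adjust to your actual layout):

```shell
# 1-2. Boot the FreeBSD install DVD and choose Shell (or Live CD) instead of Install.

# 3. Import the pool under a temporary altroot so nothing shadows the live system.
mkdir -p /tmp/xxx
zpool import -R /tmp/xxx zroot

# 4. Inspect which dataset is expected to boot and what exists in the pool.
zpool get bootfs zroot
zfs list -o name,mountpoint -r zroot

# 5. Point bootfs at the correct root dataset (example value from the post above).
zpool set bootfs=zroot/system zroot
```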
 
I'm using the latest version of FreeBSD, running very fine from a NVMe USB 3.1 with absolutely no problems, against a Windows 11 running from the internal NVMe SSD
I tested booting from this external drive on another device, same errors.
If the system doesn't boot on two different machines, then there might be something wrong with the Z file system.

Try booting an installation medium, enter the "live system", and try zpool import to check whether the pool is available.

If it is available, import the pool without mounting the file systems, then check the pool status and look for data error messages:
Code:
 # zpool import -N zroot

 # zpool status zroot

Eventually run zpool-scrub(8).
 
One possibility for the failure to boot might be that you upgraded the boot pool (zpool-upgrade(8)) and forgot to update the bootcode. Or you updated the freebsd-boot partition, not realizing the system actually boots via UEFI, so you didn't update loader.efi(8).

Get yourself a 14.1-RELEASE install image and boot it. Drop to the shell, mount the efi partition on /boot/efi and copy /boot/loader.efi to /boot/efi/EFI/freebsd/loader.efi. Do not overwrite /boot/efi/EFI/boot/bootx64.efi as that's probably the Windows bootloader.
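A hedged sketch of that procedure from the 14.1 install shell, assuming the external drive shows up as da0 so the efi partition from the gpart show above is da0p1 (adjust the device name as needed):

```shell
# Mount the ESP (a FAT filesystem) from the live environment.
mkdir -p /boot/efi
mount_msdosfs /dev/da0p1 /boot/efi

# Keep a copy of the old loader, then install the one from the install media.
cp /boot/efi/EFI/freebsd/loader.efi /boot/efi/EFI/freebsd/loader.efi.bak
cp /boot/loader.efi /boot/efi/EFI/freebsd/loader.efi

# Leave /boot/efi/EFI/boot/bootx64.efi alone, as cautioned above.
umount /boot/efi
```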
 
Thanks for your help. So...
I've been trying T-Daemon's suggestions (both from the latest FreeBSD 14.1 live disc and from my other FreeBSD computer):
zpool import:
Code:
   pool: zroot
     id: (filled with the ID)
  state: ONLINE
 status: Some supported features are not enabled on the pool.
 action: The pool can be imported using its name or numeric identifier, though
         some features will not be available without an explicit 'zpool upgrade'.
 config:
        zroot   ONLINE
          da0p4 ONLINE
(double-checked with camcontrol devlist, it is the pool from the non-working BSD)

But if I run zpool import -N zroot, it says:
Code:
cannot import 'zroot': one or more devices is currently unavailable
(EDIT: on my other computer, I also tried zpool import -f -N -R /mnt zroot zroot2, but got the same error...)

Regarding SirDice's suggestion, I copied a new version of loader.efi (not the same timestamp and size), but the result is the same. I have to admit I did it "the lazy way" and I hope it's equivalent (when my non-working FreeBSD drive is plugged into my other computer, I can access /efi/ easily from XFCE, so I backed up the original loader.efi and copy/pasted the new one from the FreeBSD disc).
 
Try to add the -f option.
Just added it while you were typing :)
I updated my previous post meanwhile; still stuck on the unavailable device...

EDIT :
I forgot to mention: if I run, for instance, zpool import zroot zrootISDEAD without anything else, it correctly says:
Code:
cannot import 'zroot': pool was previously in use from another system.
Last accessed by (hostid=0) at Tue Sep 3 17:28:51 2024
The pool can be imported, use 'zpool import -f' to import the pool.


Nothing new. To help, here is the result of geom disk list:
Code:
Geom name: da1
Providers:
1. Name: da1
   Mediasize: 512110190592 (477G)
   Sectorsize: 512
   Stripesize: 4096
   Stripeoffset: 0
   Mode: r1w1e2
   descr: SSK
   lunname: SSK DD564198838B8
   lunid: 3044564198838280
   ident: DD564198838B8
   rotationrate: 0
   fwsectors: 63
   fwheads: 255


And zdb -l /dev/da1p4:
Code:
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'zroot'
    state: 0
    txg: 489937
    pool_guid: 8280732139935653737
    errata: 0
    hostname: ''
    top_guid: 15020858812927277298
    guid: 15020858812927277298
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 15020858812927277298
        path: '/dev/da0p4'
        whole_disk: 1
        metaslab_array: 67
        metaslab_shift: 32
        ashift: 12
        asize: 509683957760
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 0 1 2 3

The path could explain (?) why it won't boot from my other computer (that's NOT the goal; the external drive is used only because I couldn't get a free slot in my main laptop...).
 
But if I run zpool import -N zroot, it says
cannot import 'zroot' : one or more devices is currently unavailable
zpool import shows the pool can be imported, but actually trying to import the pool fails.

This looks like the file system got corrupted somehow, making it inaccessible. If there is important data on the disk, I hope you have backups on your other FreeBSD computer.

If no backups exist and there is important data you want to fetch, try to import the pool read-only. Boot installation media or use the other system (in that case, skip /tmp/zfs and use /mnt):
Code:
mkdir /tmp/zfs
zpool  import  -fR  /tmp/zfs  -o  readonly=on  zroot

In case this fails too, there are zpool-import(8) options you could try, but they can be hazardous for the pool (see -F and -X options).
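For reference, a last-resort rewind attempt could be sketched as below. -F discards the last few transactions; -n first does a dry run, so try that before the real thing. -X is even more aggressive, so use it only on a read-only import or after imaging the disk:

```shell
# Dry run: report whether a rewind import would succeed, without changing anything.
zpool import -F -n zroot

# Actual rewind attempt, read-only and under an alternate root for safety.
zpool import -F -R /mnt -o readonly=on zroot
```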

It could be a hardware issue that eventually damaged the file system; I would check the disk's health with sysutils/smartmontools.
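A minimal health check with smartmontools, assuming the external drive shows up as da1 as in the geom output above (some USB bridges need the -d sat option to pass SMART commands through):

```shell
pkg install -y smartmontools

# Overall health summary and attributes; add '-d sat' if the USB bridge hides SMART.
smartctl -a /dev/da1

# Kick off a short self-test, then check the results a few minutes later.
smartctl -t short /dev/da1
smartctl -l selftest /dev/da1
```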

I experienced an unbootable root-on-ZFS on a Samsung 840 Pro SSD twice, a few years ago. After re-installing the system the first time, it failed to boot again after a while. In the end, the disk was no longer recognized by the PC.


The path can explain (?) why it won't boot from my another computer
The path (device name change, da0p4 -> da1p4) doesn't matter when booting root-on-ZFS. Even when device names change, ZFS is designed to recognize the pool by its name, regardless of what device name(s) the vdev(s) had when the pool was created (da0p4 in this case).
 
Thank you very much, I admit I'm a bit lost in this, but that's a positive way to learn new things !
I've been testing your commands from the FreeBSD installer's live mode (so far, I tested from the DVD using single-user mode), and I have almost the same results; zpool import -N zroot still states it's unavailable, with 4 lines:
Code:
pool log replay failure. zpool=zroot

If I try your exact commands:
Code:
mkdir /tmp/zfs
zpool  import  -fR  /tmp/zfs  -o  readonly=on  zroot
I get the same failure lines, ending with:
Code:
Invalid vdev configuration.

A new hint, I'd say!
I've already checked the SMART values with CrystalDiskInfo; everything is fine.

Regarding backups, how should I put it? I'm probably doomed: I was finishing my backup script, and the backup tests I have are of course on the pool; it was not yet ready to back up anywhere else (testing only). I have very little data on it; I was backing up all the conf/system files I had been editing for about a year, and all my latest setup notes are in the pool.
Luckily, I had used a snapshot from this computer to start my fresh install on my Lenovo, so I have a backup from a few months ago, which is not that bad (and all my important data is on a NAS). I've also been using the forum to keep track of what I've done.
But I'll try to keep hope and fix this!!

PS : forgive the poor writing, I'm replying from my smartphone... EDIT : corrected !
 
Tested zdb -l /dev/da0 from the live install:
Code:
failed to unpack label 0
failed to unpack label 1
---
LABEL 2 (Bad label cksum)
---
    version: 5000
    name: 'zroot'
    state: 0
    txg: 489937
    pool_guid: 8280732139935653737
    errata: 0
    hostname: ''
    top_guid: 15020858812927277298
    guid: 15020858812927277298
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 15020858812927277298
        path: '/dev/da0p4'
        whole_disk: 1
        metaslab_array: 67
        metaslab_shift: 32
        ashift: 12
        asize: 50968357760
        is_log: 0
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
    labels = 2
failed to unpack label 3
 
Well, nothing new so far; I guess it's lost, but I'll try some commands on the disk.
Meanwhile, I've set up a fresh FreeBSD install from scratch, this time on an internal SSD (SATA, but I don't mind); it will be much cleaner than running from an external drive (the data I had on that SSD will go on a bigger Windows NVMe drive). My new setup is clean and works as well as before; I recovered some of my stuff from my other laptop and thanks to the notes on the forum.
I'm keeping my "faulted" external drive as is, to try to recover the remaining data if possible. So I'll keep this thread open regarding what happens next... Thanks!
 
Partition 4:

zdb -l /dev/da1p4
------------------------------------
LABEL 0
------------------------------------
version: 5000
name: 'zroot'

From the manual page: "… zdb -l will return 0 if valid label was found, …".



The device (not partition 4):

Tested zdb -l /dev/da0 from live install :

Bad checksum aside, from the presence of label 2 (near the 'end' of the device) alone, I guess that you previously:
  1. gave the entire device to a pool
  2. did not clear the four copies of the label before giving only part of the device to another pool.
zpool-labelclear(8)
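For the record, clearing a stale whole-device label might look like the sketch below. It is destructive, and since the device-level labels at the end of the disk can overlap the current partition's own labels, run it only after imaging the disk or giving up on the data:

```shell
# First inspect what zdb finds at the whole-device level.
zdb -l /dev/da0

# Destructive: wipe ZFS labels at the device level (not the partition!).
# Left commented out on purpose; see the warning above.
# zpool labelclear -f /dev/da0
```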



Compare with four of four failures to unpack at the device level (when, in this case, I do not specify partition 1):

Code:
root@mowa219-gjp4-zbook-freebsd:~ # lsblk /dev/da2
DEVICE         MAJ:MIN SIZE TYPE                                    LABEL MOUNT
da2              0:240 932G GPT                                         - -
  <FREE>         -:-   1.0M -                                           - -
  da2p1          0:235 932G freebsd-zfs                     gpt/Transcend <ZFS>
  <FREE>         -:-   712K -                                           - -
root@mowa219-gjp4-zbook-freebsd:~ # zdb -l /dev/da2p1
------------------------------------
LABEL 0
------------------------------------
    version: 5000
    name: 'Transcend'
    state: 0
    txg: 1736306
    pool_guid: 13095179708734892202
    errata: 0
    hostid: 635545813
    hostname: 'mowa219-gjp4-zbook-freebsd'
    top_guid: 12353927128696550259
    guid: 12353927128696550259
    vdev_children: 1
    vdev_tree:
        type: 'disk'
        id: 0
        guid: 12353927128696550259
        path: '/dev/gpt/Transcend'
        whole_disk: 1
        metaslab_array: 130
        metaslab_shift: 33
        ashift: 12
        asize: 1000198373376
        is_log: 0
        DTL: 2347
        create_txg: 4
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2
    labels = 0 1 2 3
root@mowa219-gjp4-zbook-freebsd:~ #

Code:
root@mowa219-gjp4-zbook-freebsd:~ # zdb -l /dev/da2
failed to unpack label 0
failed to unpack label 1
failed to unpack label 2
failed to unpack label 3
root@mowa219-gjp4-zbook-freebsd:~ #
 