Loader needs to be updated (14-STABLE)

I reinstalled the script from git and ran again, but this time it said the loaders were up to date, of course. Sorry I didn't capture the error message when it occurred.

Working from memory, show-me said the loaders needed updating. It seemed to have something to do with the partition being mounted, but I really don't remember the exact error.
 
There was indeed a bug that prevented the actual update of the efi loaders.

It's fixed in version 1.2.1.
I also finally found that I need a boot loader upgrade (I never did one before) and I will try your
sysutils/loaders-update. Is it enough to just run it and voilà?
gpart show:
Code:
=>       40  500118112  nda0  GPT  (238G)
         40     532480     1  efi  (260M)
     532520  490201088     2  freebsd-ufs  (234G)
  490733608    8388608     3  freebsd-swap  (4.0G)
  499122216     995936        - free -  (486M)
Thank you.
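For reference, a minimal sketch of the two modes the script exposes (names and descriptions taken from its own usage text quoted later in this thread):
Code:
# preview only: shows the commands it would run, changes nothing
loaders-update show-me
# actual update: may update the loader(s), asking for confirmation before each one
loaders-update shoot-me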

I just ran it and got:
Code:
root # loaders-update show-me
loaders-update v1.2.1

One or more efi partition(s) have been found.

Examining nda0p1...
mount -t msdosfs /dev/nda0p1 /mnt
mount_msdosfs: /dev/nda0p1: Device busy
Cannot mount nda0p1, so cannot looking for its loader(s).

-------------------------------
Your current boot method is UEFI.
Boot device: nda0p1 File(\efi\freebsd\loader.efi)
One or more target partition(s) have been found...
But no loader seems to be updatable.
1 error(s) occured during the scan.
-------------------------------


The system is FreeBSD 14.2-RELEASE (amd64)
 
It wasn't able to look into your efi partition, and that's not normal.

Is your partition mounted somewhere? Do you have something related in the output of mount?
Code:
root # mount
/dev/nvd0p2 on / (ufs, local, soft-updates, journaled soft-updates)
devfs on /dev (devfs)
/dev/nvd0p1 on /boot/efi (msdosfs, local)
fdescfs on /dev/fd (fdescfs)
procfs on /proc (procfs, local)
 
What if I use
Code:
cp /boot/loader.efi /boot/efi/efi/freebsd/loader.efi
cp /boot/loader.efi /boot/efi/efi/boot/bootx64.efi
Thank you.
 
Seems to be a naming problem: mount reports the nvd names, but gpart reports nda0.

Try with the -g switch: loaders-update show-me -g
Code:
root # loaders-update show-me -g
loaders-update v1.2.1

One or more efi partition(s) have been found.

Examining nda0p1...
mount -t msdosfs /dev/nda0p1 /mnt
mount_msdosfs: /dev/nda0p1: Device busy
Cannot mount nda0p1, so cannot looking for its loader(s).

-------------------------------
Your current boot method is UEFI.
Boot device: nda0p1 File(\efi\freebsd\loader.efi)
One or more target partition(s) have been found...
But no loader seems to be updatable.
1 error(s) occured during the scan.
-------------------------------
and here is my /etc/fstab:
Code:
# Device        Mountpoint      FStype  Options Dump    Pass#
/dev/nvd0p1     /boot/efi       msdosfs rw      2       2
/dev/nvd0p2     /               ufs     rw      1       1
/dev/nvd0p3.eli none            swap    sw      0       0
fdesc           /dev/fd         fdescfs rw,late 0       0
proc            /proc           procfs  rw,late 0       0
 
Got it, I think. It's because you explicitly mount your partitions with the nvd names instead of labels or the nda names. Your system has these nvd names in /dev, but not in sysctl kern.disks or gpart show.

That said, for the moment I don't see a workaround for this special case.
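For anyone else hitting this, a quick sketch of how the mismatch shows up (the symlink detail is explained a bit further down in this thread):
Code:
# the kernel and gpart only know the nda name...
sysctl kern.disks
gpart show
# ...while /dev also carries nvd* compatibility entries pointing at the nda nodes
ls -l /dev/nda* /dev/nvd*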
 
Got it, I think. It's because you explicitly mount your partitions with the nvd names instead of labels or the nda names. Your system has these nvd names in /dev, but not in sysctl kern.disks or gpart show.

That said, for the moment I don't see a workaround for this special case.
It was a fresh installation on a laptop in 2021 (I haven't updated the loader since then), and the nvd names in fstab were created by the system, not by me.
Do you think it will be a problem if I use cp /boot/...?
Thank you very much.
From /dev:
[attached screenshot: nda0.gif]
 
And what version was installed at that time in 2021? Remember the hellish moments and the train wreck that astyle often tells us about...
 
Looking at nda(4):
hw.nvme.use_nvd
The nvme(4) driver will create nda device nodes for block storage
when set to 0. Create nvd(4) device nodes for block storage when
set to 1. See nvd(4) when set to 1.

kern.cam.nda.nvd_compat
When set to 1, nvd(4) aliases will be created for all nda devices,
including partitions and other geom(4) providers that take their
names from the disk's name. nvd(4) devices will not, however, be
reported in the kern.disks sysctl(8).

And nvme(4):
The nvd(4) driver is used to provide a disk driver to the system by
default. The nda(4) driver can also be used instead. The nvd(4) driver
performs better with smaller transactions and few TRIM commands. It
sends all commands directly to the drive immediately. The nda(4) driver
performs better with larger transactions and also collapses TRIM
commands giving better performance. It can queue commands to the
drive; combine BIO_DELETE commands into a single trip; and use the CAM
I/O scheduler to bias one type of operation over another.
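Both knobs are read-only tunables, so (a quick sketch, assuming they are exported as sysctls on this system, as their man page entries above suggest) the current combination can be checked with:
Code:
sysctl hw.nvme.use_nvd kern.cam.nda.nvd_compat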

So, you probably have kern.cam.nda.nvd_compat=1 and hw.nvme.use_nvd=0. I would suggest modifying your /etc/fstab to use the label names (if you have any), like in this VM:
Code:
$ gpart show -l
=>      40  41942960  nda0  GPT  (20G)
        40    532480     1  efiboot0  (260M)
    532520      1024     2  gptboot0  (512K)
    533544       984        - free -  (492K)
    534528   4194304     3  swap0  (2.0G)
   4728832  37214168     4  zfs0  (18G)

$ cat /etc/fstab
# Device            Mountpoint    FStype    Options        Dump    Pass#
/dev/gpt/efiboot0    /boot/efi    msdosfs    rw            2        2
/dev/gpt/swap0       none        swap    sw            0        0
proc                /proc        procfs    rw
Therefore, no more nda or nvd naming problem (verify that all the labels appear in /dev/gpt).

Besides this, you can switch to the nvd driver with hw.nvme.use_nvd=1 in /boot/loader.conf (the default is 0). That said, I'm unable to tell you which driver is best for your machine.
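A minimal sketch of how the labelling could be done on the disk shown earlier (if gpart show -l prints '(null)' for a partition, it has no label and nothing appears in /dev/gpt). The label names below are simply the ones that end up being used later in this thread:
Code:
# give the three nda0 partitions GPT labels (indices from the gpart output above)
gpart modify -i 1 -l efiboot0 nda0
gpart modify -i 2 -l root0 nda0
gpart modify -i 3 -l swap0 nda0
# the labels should now show up here...
gpart show -l nda0
ls /dev/gpt
# ...and /etc/fstab can then point at /dev/gpt/* instead of /dev/nvd*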
 
Yes, I have kern.cam.nda.nvd_compat=1 and hw.nvme.use_nvd=0, and I do not have /dev/gpt, only /dev/gptid, and
Code:
 gpart show -l
=>       40  500118112  nda0  GPT  (238G)
         40     532480     1  (null)  (260M)
     532520  490201088     2  (null)  (234G)
  490733608    8388608     3  (null)  (4.0G)
  499122216     995936        - free -  (486M)
For all those years I didn't have problems and I do not want to create any with these changes. If I keep the settings I have and update the loaders with just a copy, should I expect problems, please?

Thank you for all your help.
 
You won't have any problem using cp. It's exactly what loaders-update does.
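(If you copy them by hand as you asked earlier, a quick sanity check, just a sketch, is to compare the digests afterwards:)
Code:
# after copying, all three digests should be identical
sha256 /boot/loader.efi /boot/efi/efi/freebsd/loader.efi /boot/efi/efi/boot/bootx64.efi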

But, if I were you, I would label my partitions and modify /etc/fstab accordingly. Or, at least, translate /etc/fstab to the nda naming, because it's the nda driver your system uses, not nvd: /dev/nvd* are just symlinks to /dev/nda*. This confusion might cause other side effects in the future.
 
There is a fundamental difference in how nda(4) and nvd(4) interface with the NVMe device.
Jim Harris & Warner Losch - FreeBSD and NVM EXPRESS - FreeBSD Journal July/August 2018:
In FreeBSD, NVMe controllers and namespaces are enumerated and initialized through the nvme(4)
driver. nvme(4) is also responsible for providing an interface to expose these namespaces as block
devices, but does not register those namespaces with GEOM nor CAM directly. nvd(4) registers each
namespace with GEOM as a block device. Since NVMe has become mainstream since nvd(4) was originally
developed, Netflix added nda(4) as an alternative to nvd(4). The nvd(4) driver is a very thin
layer on top of the NVMe protocol and is designed to operate at high transaction rates. The nda(4)
driver integrates with CAM, including its error recovery and advanced queueing. It also offers traffic
shaping to the drive via the I/O scheduler to improve overall performance.

Emrion, you might find some inspiration as to how to correctly detect these 'nvd' NVMe devices by looking at the source code of bsdinstall(8).
 
Emrion, you might find some inspiration as to how to correctly detect these 'nvd' NVMe devices by looking at the source code of bsdinstall(8).
As I wrote, his system uses the nda driver. The loaders-update detection is correct. I can find a workaround for this peculiar case, but it will add a lot of code. I will see if other people are in the same situation.
 
I will see if other people are in the same situation.
It looks like nda is the default since 14.0, quoted from the 14.0 release notes:
NVMe disks are now nda devices by default, for example nda0; see nda(4). Symbolic links for the previous nvd(4) device names are created in /dev. However, configuration such as fstab(5) should be updated to refer to the new device names. Options to control the use of nda devices and symbolic links are described in nda(4). bdc81eeda05d (Sponsored by Netflix)

the commit says:
nvme: Switch to nda by default
We already run nda by default on all the !x86 architectures. Switch the default to nda. nda created nvd compatibility links by default, so this should be a nop. If this causes problems for your application, set hw.nvme.use_nvd=1 in your loader.conf.

I found a discussion on the mailing list where the transition from nvd to nda raised some questions (but it's mostly related to ZFS, so nothing to do with the OP because he uses UFS), and at some point someone concluded:
My current interpretation is, that the nvd driver reports the wrong
value for maximum performance and reliability. I should make a backup
and re-create the pool.
Maybe we should note in the 14.0 release notes, that the switch to nda
is not a "nop".
The question I can't find an answer to: what does nop mean?

Being on 13.X myself, it seems I will have to deal with this switch too at some point. It's a good thing I've read this thread; duly noted and bookmarked :)
 
It looks like nda is the default since 14.0, quoted from the 14.0 release notes and commit bdc81eeda05d above. The question I can't find an answer to: what does nop mean?

Being on 13.X myself, it seems I will have to deal with this switch too at some point. It's a good thing I've read this thread; duly noted and bookmarked :)
My guess as to why nvd is not fixed to work like nda is that nvd is no longer the default starting from 14.0 and is kept only for backward compatibility. Fixing it would break backward compatibility (a POLA violation during the transition period).

And creating partitions with labels, then using those labels when creating ZFS pools, geli-encrypted partitions and/or /etc/fstab entries, would be a good habit.

In my experience, creating a pool using labels instead of GEOM providers for an NVMe M.2 card mounted in a USB adapter (thus GEOM generates /dev/da* as its provider), then moving the card into the M.2 (NVMe) slot of a brand-new computer (thus GEOM generates /dev/nda*, as stable/14 was installed), just worked.
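For instance (a sketch only; the pool name is hypothetical, and the zfs0 label is the one from the VM example above):
Code:
# create the pool on the GPT label rather than on the raw provider, so it keeps
# working when the provider name changes (da* in a USB adapter, nda* in the M.2 slot)
zpool create tank /dev/gpt/zfs0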
 
You won't have any problem using cp. It's exactly what loaders-update does.

But, if I were you, I would label my partitions and modify /etc/fstab accordingly. Or, at least, translate /etc/fstab to the nda naming, because it's the nda driver your system uses, not nvd: /dev/nvd* are just symlinks to /dev/nda*. This confusion might cause other side effects in the future.
I listened to you and I now have:
Code:
gpart show -l
=>       40  500118112  nda0  GPT  (238G)
         40     532480     1  efiboot0  (260M)
     532520  490201088     2  root0  (234G)
  490733608    8388608     3  swap0  (4.0G)
  499122216     995936        - free -  (486M)
and loaders-update:
Code:
loaders-update show me
loaders-update v1.2.1
Usage: loaders-update mode [-fgr] [-m efi_mount_dir] [-s loaders_source_dir]
mode can be one of:
  show-me: just show the commands to type, change nothing.
  shoot-me: may update the loader(s), but ask for confirmation before each one.
Options:
  -f: won't check the freebsd-boot content for BIOS loaders update.
  -g: force to use 'gpart show' for disk detection.
  -r: won't check the root file system for BIOS loaders update.

Thank you.
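For completeness, a sketch of an /etc/fstab that matches those labels (the .eli suffix on the swap line is carried over from the original nvd0p3.eli entry on the assumption that it behaves the same on a label provider; keep the old device name there if unsure):
Code:
# Device              Mountpoint   FStype   Options  Dump  Pass#
/dev/gpt/efiboot0     /boot/efi    msdosfs  rw       2     2
/dev/gpt/root0        /            ufs      rw       1     1
/dev/gpt/swap0.eli    none         swap     sw       0     0
fdesc                 /dev/fd      fdescfs  rw,late  0     0
proc                  /proc        procfs   rw,late  0     0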
 
One question to "swith to nda" which looks like I am using (symlink - post #36): Is it enogh to use
kern.cam.nda.nvd_compat=0 and hw.nvme.use_nvd=1? Should besettings in /boot/loader.conf or /etc/sysctl.conf, please.
 
Is it enough to use
kern.cam.nda.nvd_compat=0 and hw.nvme.use_nvd=1? Should the settings be in /boot/loader.conf or /etc/sysctl.conf, please?
I have had hw.nvme.use_nvd=0 in my /boot/loader.conf since before nda became the default (to test nda) and I still keep it as is (call it paranoia, though).

And it should be in /boot/loader.conf, as the kernel needs to know it at the moment the loader passes control to the kernel.

On the other hand, if you need to stick with nvd even on 14.0 and later, you'll need hw.nvme.use_nvd=1 and kern.cam.nda.nvd_compat=1 in your /boot/loader.conf.
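Put together, a sketch of the two /boot/loader.conf variants discussed here (before dropping the compatibility links, make sure /etc/fstab no longer refers to /dev/nvd*):
Code:
# stay on nda (the 14.x default) and drop the nvd compatibility symlinks
hw.nvme.use_nvd="0"
kern.cam.nda.nvd_compat="0"

# or, to stick with the old nvd driver and naming instead:
#hw.nvme.use_nvd="1"
#kern.cam.nda.nvd_compat="1"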
 
and loaders-update:
Code:
loaders-update show me
loaders-update v1.2.1
Usage: loaders-update mode [-fgr] [-m efi_mount_dir] [-s loaders_source_dir]
mode can be one of:
  show-me: just show the commands to type, change nothing.
  shoot-me: may update the loader(s), but ask for confirmation before each one.
Options:
  -f: won't check the freebsd-boot content for BIOS loaders update.
  -g: force to use 'gpart show' for disk detection.
  -r: won't check the root file system for BIOS loaders update.
You forgot the '-' between 'show' and 'me': loaders-update show-me
 