Where did the partition nvd0p4 go?

Code:
root@F3ja:/usr/home/luba # ls /dev/nvd0p4
/dev/nvd0p4
root@F3ja:/usr/home/luba #

Previously, it used to display like this:
Code:
# zpool status -v
  pool: zroot
 state: ONLINE
  scan: scrub repaired 0B in 00:01:06 with 0 errors on Thu Mar 30 23:49:59 2023
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          nvd0p4    ONLINE       0     0     0

errors: No known data errors

And now it is displayed like this:
Code:
# zpool status -v
  pool: zroot
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not support
        the features. See zpool-features(7) for details.
config:
        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          gpt/zfs0  ONLINE       0     0     0

errors: No known data errors
Where did the partition nvd0p4 go?
Code:
root@F3ja:/usr/home/luba # df -h
Filesystem            Size    Used   Avail Capacity  Mounted on
zroot/ROOT/default    454G    9,7G    444G     2%    /
devfs                 1,0K      0B    1,0K     0%    /dev
/dev/gpt/efiboot0     260M    1,8M    258M     1%    /boot/efi
procfs                8,0K      0B    8,0K     0%    /proc
fdescfs               1,0K      0B    1,0K     0%    /dev/fd
zroot/tmp             444G    168K    444G     0%    /tmp
zroot/usr/home        447G    2,3G    444G     1%    /usr/home
zroot                 444G     96K    444G     0%    /zroot
zroot/var/log         444G    368K    444G     0%    /var/log
zroot/usr/ports       444G     96K    444G     0%    /usr/ports
zroot/usr/src         444G     96K    444G     0%    /usr/src
zroot/var/crash       444G     96K    444G     0%    /var/crash
zroot/var/mail        444G    184K    444G     0%    /var/mail
zroot/var/tmp         444G     96K    444G     0%    /var/tmp
zroot/var/audit       444G     96K    444G     0%    /var/audit
Code:
root@F3ja:/usr/home/luba # gpart show
=>        40  1000215136  nda0  GPT  (477G)
          40      532480     1  efi  (260M)
      532520        1024     2  freebsd-boot  (512K)
      533544         984        - free -  (492K)
      534528     4194304     3  freebsd-swap  (2.0G)
     4728832   995485696     4  freebsd-zfs  (475G)
  1000214528         648        - free -  (324K)
 
gpart show -l
Code:
root@F3ja:/usr/home/luba # gpart show -l
=>        40  1000215136  nda0  GPT  (477G)
          40      532480     1  efiboot0  (260M)
      532520        1024     2  gptboot0  (512K)
      533544         984        - free -  (492K)
      534528     4194304     3  swap0  (2.0G)
     4728832   995485696     4  zfs0  (475G)
  1000214528         648        - free -  (324K)
 
Where did the partition nvd0p4 go?
It's still there. Probably after upgrading to 14.0-BETA1 ("Some supported and requested features are not enabled on the pool.") it's no longer displaying the device name (nvd0p4) but the GPT label of the partition.

The gpt/zfs0 label was created automatically during installation by the former version 13.x.

Run zpool upgrade <pool name> as the output of zpool status suggests. That should display the pool by its device name.
 
Yes, I updated to 14.0-BETA1. Run zpool upgrade <pool name> - does this need to be done in single-user mode?

What is the pool name - zroot?
 
I don't think so. But if you are concerned about crashing the system, you can execute it from single-user mode, or even boot an installation medium and do it from there.

It would be a good idea to have a backup of important data, just in case.
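If in doubt about the pool name, zpool list prints the names of all imported pools:
Code:
zpool list -H -o name
On this system it should print zroot, matching the zpool status output above.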
 
Code:
root@F3ja:/usr/home/luba # gpart show -l
=>        40  1000215136  nda0  GPT  (477G)
          40      532480     1  efiboot0  (260M)
      532520        1024     2  gptboot0  (512K)
      533544         984        - free -  (492K)
      534528     4194304     3  swap0  (2.0G)
     4728832   995485696     4  zfs0  (475G)
  1000214528         648        - free -  (324K)
See UPDATING [1], entry dated 20230612.
Starting with 14, the nvme driver defaults to nda instead of nvd.
And if you installed the system using bsdinstall and automatic partitioning, bsdinstall probably labeled the partitions zfs0, swap0 and so on, and the old version just didn't show it.

And beware! If you upgrade the pool on a Root-on-ZFS installation, DO NOT FORGET TO UPDATE THE BOOTCODES.

[1] https://cgit.freebsd.org/src/tree/UPDATING?h=releng/14.0
 
See UPDATING [1], entry dated 20230612.
Starting with 14, the nvme driver defaults to nda instead of nvd.
Nice catch.

Easy to miss: gpart show above displays the device name as nda0 instead of nvd0. That should have been a clue.

Elimelech, I'm sorry, my suspicion that it had something to do with zpool features was incorrect. Nevertheless, the pool's features can and should be upgraded.

If you wish the device name to appear in zpool-status(8) instead of the GPT label, boot an installation medium and import the pool with the -d option:
Code:
zpool import -N -d /dev zroot
With the -N option set, the file systems won't be mounted.

With the device name changed to nda0, the swap device in /etc/fstab needs to be corrected from
Code:
/dev/nvd0p3
to
Code:
/dev/nda0p3
or using the GPT label
Code:
/dev/gpt/swap0
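For reference, the full swap entry with the GPT label would then read like this (the option fields shown here are the usual bsdinstall defaults; keep whatever the rest of your existing line already says):
Code:
# Device          Mountpoint  FStype  Options  Dump  Pass#
/dev/gpt/swap0    none        swap    sw       0     0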
 
zpool import -N -d /dev zroot - Is this necessary to do, or is it not a critical error?
It's no error at all, and it's not necessary.

In your case, a one-disk pool, it makes no difference how the pool's drive is named; naming it after the nda(4) driver has more aesthetic than practical value. In a multi-device pool, administrators want the disks labeled so they can identify them exactly in case of a disk failure.
 
What does upgrade pool on Root-on-ZFS installation mean? I just upgraded from FreeBSD 13.2 to FreeBSD 14.0
If you upgrade FreeBSD from source (switch branch and rebuild), ZFS pool features are never upgraded automatically.

If I recall correctly, upgrading FreeBSD itself using freebsd-update would not upgrade ZFS pool features either. So you can ignore this unless you intentionally run the command below.

zpool upgrade zroot

But if you intend to use the not-yet-enabled features, this is needed.
And sometimes it causes trouble, if any of the new features are enabled and activated.
Especially, if any feature is NOT supported by the installed bootcodes (including the UEFI boot program for UEFI boot), FreeBSD will never boot again until you somehow upgrade the bootcodes.

To avoid this, if you intend to upgrade the ZFS pool features and are using a Root-on-ZFS installation (I guess yours is), update the bootcodes using the files under /boot AFTER upgrading FreeBSD and BEFORE upgrading the ZFS pool features.

Usually, the sources of the bootcodes are updated when new pool features affecting them are implemented, but sometimes this is forgotten or delayed (as all of this is done by human volunteers) and bites you.
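Before committing, zpool upgrade run without any arguments is harmless; it only lists the pools that do not have all supported features enabled, together with the names of the disabled features:
Code:
zpool upgrade
You can then look the listed features up in zpool-features(7) before deciding.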
 
It depends on how you boot FreeBSD. As you have both nda0p1 (efiboot0) and nda0p2 (gptboot0), I cannot determine which method you are using.

For UEFI boot:
First, check that /boot/efi is not empty. If it's empty, mount /dev/nda0p1
on /boot/efi as msdosfs.
Copy /boot/loader.efi into /boot/efi/efi/freebsd/ (overwrite).
Usually, if bsdinstall configured the UEFI boot manager properly
AND your UEFI firmware is NOT buggy, it should work.
If it doesn't work, assuming your hardware arch is amd64,
copy /boot/loader.efi or /boot/boot1.efi to /boot/efi/efi/boot/bootx64.efi.
(Before loader.efi became usable as first-stage boot code, boot1.efi
was used, and it kicks off /boot/loader.efi. It is still usable.)

For legacy (BIOS) boot:
/sbin/gpart bootcode -p /boot/gptzfsboot -i 2 nda0

See section 26.3, "Updating Bootcode", in the Handbook [1] and the man pages referenced there for details.

[1] https://docs.freebsd.org/en/books/handbook/cutting-edge/#updating-bootcode
 