I'll add some information (and, ultimately, what I did to correct it) because I ran into a similar issue when I upgraded to 13-STABLE. My root file system is not on ZFS: it's on a small M.2 NVMe drive partition with FFS. I have a separate 12GB ZFS pool that consists of 3 SATA drives. After upgrading to 13-STABLE via source, I discovered that my pool was not being automatically mounted at boot.
I checked /etc/rc.conf for anything I had overlooked, such as missing entries, typos, or a corrupt file. Nothing seemed to be incorrect, missing, or out of place, and I also verified that /etc/zfs/zpool.cache actually existed on my system, which it did.
After the machine was up and running, if I reloaded ZFS manually by running the command service zfs restart, my pool was properly mounted, so I ruled out trouble with my pool or the ZFS versions being wackado.
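For anyone following along, the checks I used after each boot were nothing exotic, just the stock commands:

```shell
zpool list            # is the pool imported at all?
zfs mount             # which datasets are actually mounted?
service zfs restart   # re-run the ZFS rc logic by hand
```

If zpool list shows the pool but zfs mount shows nothing, the import happened but the mounts did not; in my case the pool was not even imported until the manual restart.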
The only zpool-specific messages I could find in my log files were pid 63 (zpool) is attempting to use unsafe AIO requests - not logging anymore. Since I assumed this might be coming from /etc/rc.d/zpool, I briefly turned on debugging for the RC subsystem, but there were no debugging messages other than more of the same, so I turned debugging off.
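In case anyone wants to repeat that step: RC debugging is normally toggled through the rc_debug knob in /etc/rc.conf, e.g.:

```shell
# Enable verbose debugging output from the rc.subr framework
sysrc rc_debug=YES

# ...reboot or re-run the script in question, inspect the console/logs...

# Turn it back off when done
sysrc rc_debug=NO
```

sysrc just edits /etc/rc.conf for you; adding rc_debug="YES" by hand does the same thing.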
At some point in this process, I thought about looking at the source for /etc/rc.d/zpool in the git repository at https://cgit.freebsd.org/, for stable/13 and ALSO for main. The location in the source tree is libexec/rc/rc.d/zpool. The file I am posting is from main (HEAD) at
https://cgit.freebsd.org/src/tree/libexec/rc/rc.d/zpool
Code:
#!/bin/sh
#
# $FreeBSD$
#

# PROVIDE: zpool
# REQUIRE: hostid disks
# BEFORE: mountcritlocal
# KEYWORD: nojail

. /etc/rc.subr

name="zpool"
desc="Import ZPOOLs"
rcvar="zfs_enable"
start_cmd="zpool_start"
required_modules="zfs"

zpool_start()
{
	local cachefile

	for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do
		if [ -r $cachefile ]; then
			zpool import -c $cachefile -a -N
			if [ $? -ne 0 ]; then
				echo "Import of zpool cache ${cachefile} failed," \
				    "will retry after root mount hold release"
				root_hold_wait
				zpool import -c $cachefile -a -N
			fi
			break
		fi
	done
}

load_rc_config $name
run_rc_command "$1"
The section of code that has been added to this file (and is missing in stable/13) is
Code:
			if [ $? -ne 0 ]; then
				echo "Import of zpool cache ${cachefile} failed," \
				    "will retry after root mount hold release"
				root_hold_wait
				zpool import -c $cachefile -a -N
			fi
I believe root_hold_wait takes into account that the root file system may not necessarily be on ZFS: it gives the system time to mount the root file system and release the hold before continuing on to import existing ZFS pools. I pulled the HEAD version of this file, temporarily replaced my existing /etc/rc.d/zpool with it, and rebooted. At that point, my pool was automatically mounted at boot.
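If it helps anyone reproduce this, what I did amounted to roughly the following (the cgit "plain" URL is my reconstruction from the tree link above, so verify it resolves, and keep a backup of the stock script so you can revert after the fix lands in stable/13):

```shell
# Back up the stock 13-STABLE script first
cp /etc/rc.d/zpool /etc/rc.d/zpool.orig

# Fetch the HEAD version of the script (raw "plain" view of the cgit
# tree URL above -- double-check the path before relying on it)
fetch -o /tmp/zpool 'https://cgit.freebsd.org/src/plain/libexec/rc/rc.d/zpool'

# Install it with the ownership/permissions rc.d scripts normally have
install -o root -g wheel -m 555 /tmp/zpool /etc/rc.d/zpool

# Reboot and check whether the pool now comes up mounted
shutdown -r now
```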
This may not be the same problem the OP has reported; however, I suspect it might be similar to the trouble iucoen is having (?), since he specifically said the system is booting from an NVMe device. It's not necessarily the device per se: it's having the root file system not on ZFS that I believe is the issue, since the 13-STABLE version of this file does not have a root_hold_wait.
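To make the race concrete, here is a small portable sh sketch of the retry pattern the new zpool_start() uses. try_import is a hypothetical stand-in for zpool import -c $cachefile -a -N (here it fails on the first call, as if root were not mounted yet, and succeeds on the second); this is an illustration, not the FreeBSD code:

```shell
#!/bin/sh
# Sketch of the retry-after-hold-release pattern from the new zpool_start().

attempts=0
try_import() {
    attempts=$((attempts + 1))
    [ "$attempts" -ge 2 ]    # fail the first time, succeed afterwards
}

if ! try_import; then
    echo "import failed, waiting for root mount hold release"
    # On FreeBSD, root_hold_wait would block here until the root file
    # system is mounted and the hold is released; we simply retry.
    try_import
fi
echo "import succeeded after $attempts attempt(s)"
```

Without the wait-and-retry, a single failed import at boot is simply final, which matches the behavior I saw on stable/13.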
Let me start by saying this is my first post (long-time user of FreeBSD), and I have tried super hard to follow the Formatting Guidelines at
https://forums.freebsd.org/threads/formatting-guidelines.49535/. If I have made any mistakes, I apologize in advance.