[Solved] Non-boot zpool won't import on boot

I have built a 14.2-RELEASE system that boots from a UFS filesystem on an SSD, but also has four other hard disks attached containing a ZFS raidz pool.

I have the required variables set in the right files to start ZFS on boot:
Code:
# grep zfs /boot/loader.conf /etc/rc.conf
/boot/loader.conf:zfs_load="YES"
/etc/rc.conf:zfs_enable="YES"

However, after booting, the raidz pool hasn't imported and the datasets aren't mounted. I can manually run a zpool import and everything then works fine.

I found this comment on an earlier thread stating that a new /etc/rc.d/zpool script was added to resolve this, but it doesn't seem to be working for me. I wondered if the cachefile property on the pool being set to its default value of '-' might be the issue, so I tried changing it to cachefile=/etc/zfs/zpool.cache but it made no difference on the next boot and reverted to '-'.

Can anyone advise how I can fix this, besides adding a zpool import command to /etc/rc.local?
 
Try zpool import -o cachefile=/etc/zfs/zpool.cache poolname; I think the cache file is only updated on import and export.
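
For example (a sketch; the pool name 'pool0' is taken from later in this thread, so substitute your own):
Code:
# import the pool and record it in the cache file in one step
zpool import -o cachefile=/etc/zfs/zpool.cache pool0
# confirm the property took effect
zpool get cachefile pool0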
 
I found this comment on an earlier thread stating that a new /etc/rc.d/zpool script was added to resolve this, but it doesn't seem to be working for me. I wondered if the cachefile property on the pool being set to its default value of '-' might be the issue, so I tried changing it to cachefile=/etc/zfs/zpool.cache but it made no difference on the next boot and reverted to '-'.

This was added when ZFS (Illumos upstream) was replaced by OpenZFS. OpenZFS doesn't do an implicit zpool import (in the kernel) like the old Illumos ZFS did.
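
In practice the import now happens from userland, via /etc/rc.d/zpool early in the boot sequence; the relevant call is roughly the following (a sketch, matching the command quoted later in this thread):
Code:
# import all pools recorded in the cache file, without mounting datasets yet
zpool import -c /etc/zfs/zpool.cache -a -N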

Can anyone advise how I can fix this, besides adding a zpool import command to /etc/rc.local?
Your /etc/zfs/zpool.cache isn't being updated at zpool import.

What kind of fs is rootfs?
 
Your /etc/zfs/zpool.cache isn't being updated at zpool import.
I'm pretty certain it is. After my manual zpool import pool0, running zdb -C -U /etc/zfs/zpool.cache is returning correct information for my pool:
Code:
[root@filer2 ~]# zdb -C -U /etc/zfs/zpool.cache
pool0:
    version: 5000
    name: 'pool0'
    state: 0
    txg: 48987
    pool_guid: 4536089192970494829
    errata: 0
    hostid: 726799574
    hostname: 'filer2'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4536089192970494829
        create_txg: 4
        com.klarasystems:vdev_zap_root: 129
        children[0]:
            type: 'raidz'
            id: 0
            guid: 3438085862275244924
            nparity: 1
            metaslab_array: 256
            metaslab_shift: 34
            ashift: 12
            asize: 8001576501248
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 130
            children[0]:
                type: 'disk'
                id: 0
                guid: 14336507559410368910
                path: '/dev/da2'
                devid: 'ata-SAMSUNG_HD204UI_S2H7J90B309265'
                phys_path: 'pci-0000:06:00.0-sas-phy13-lun-0'
                whole_disk: 1
                DTL: 779
                create_txg: 4
                com.delphix:vdev_zap_leaf: 131
            children[1]:
...
            children[2]:
...
            children[3]:
...

My root filesystem is on a simple 'freebsd-ufs' GPT partition:
Code:
[root@filer2 ~]# gpart show
=>      40  41942960  vtbd0  GPT  (20G)
        40      2008         - free -  (1.0M)
      2048    262144      1  efi  (128M)
    264192   4194304      2  freebsd-swap  (2.0G)
   4458496  37484504      3  freebsd-ufs  (18G)

[root@filer2 ~]# mount
/dev/vtbd0p3 on / (ufs, local, soft-updates)
devfs on /dev (devfs)
/dev/vtbd0p1 on /boot/efi (msdosfs, local)
tmpfs on /tmp (tmpfs, local)

vtbd0 is a virtio block device. The system is a VM running under qemu/kvm, and the four hard disks are passed through from the hypervisor as raw SCSI devices, but are detected fine during boot:
Code:
da0 at vtscsi0 bus 0 scbus2 target 0 lun 0
da0: <QEMU QEMU HARDDISK 2.5+> Fixed Direct Access SPC-3 SCSI device
da0: 300.000MB/s transfers
da0: Command Queueing enabled
da0: 1907729MB (3907029168 512 byte sectors)
da1 at vtscsi0 bus 0 scbus2 target 0 lun 1
da1: <QEMU QEMU HARDDISK 2.5+> Fixed Direct Access SPC-3 SCSI device
da1: 300.000MB/s transfers
da1: Command Queueing enabled
da1: 1907729MB (3907029168 512 byte sectors)
da2 at vtscsi0 bus 0 scbus2 target 0 lun 2
da2: <QEMU QEMU HARDDISK 2.5+> Fixed Direct Access SPC-3 SCSI device
da2: 300.000MB/s transfers
da2: Command Queueing enabled
da2: 1907729MB (3907029168 512 byte sectors)
da3 at vtscsi0 bus 0 scbus2 target 0 lun 3
da3: <QEMU QEMU HARDDISK 2.5+> Fixed Direct Access SPC-3 SCSI device
da3: 300.000MB/s transfers
da3: Command Queueing enabled
da3: 1907729MB (3907029168 512 byte sectors)
 
The system is a VM running under qemu/kvm, and the four hard disks are passed through from the hypervisor as raw SCSI devices, but are detected fine during boot:
If the pool can be imported, then this shouldn't be a factor.

I couldn't reproduce the issue on a test system:
disk1: Root-on-UFS; storage pool: 4-disk raidz2.

It shouldn't matter, but try "cachefile=/boot/zfs/zpool.cache".

Apparently retrieval of the cachefile property is inconsistent:
Code:
# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/etc/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/boot/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE                  SOURCE
tank  cachefile  /boot/zfs/zpool.cache  local
On system reboot the cachefile VALUE is gone, replaced by " - ". The pool is imported automatically, though.

If "cachefile=/boot/zfs/zpool.cache", a copy should be created in /etc/zfs as well.
 
OK, here's something strange. I rebooted the VM, logged in, and ran the following before doing anything else:

Code:
[root@filer2 ~]# zpool list
no pools available

[root@filer2 ~]# zpool import
   pool: pool0
     id: 4536089192970494829
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

    pool0       ONLINE
      raidz1-0  ONLINE
        da2     ONLINE
        da3     ONLINE
        da0     ONLINE
        da1     ONLINE

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 22 22:21 /etc/zfs/zpool.cache

[root@filer2 ~]# zdb -C -U /etc/zfs/zpool.cache
pool0:
    version: 5000
    name: 'pool0'
    state: 0
    txg: 48987
    pool_guid: 4536089192970494829
    errata: 0
    hostid: 726799574
    hostname: 'filer2'
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 4536089192970494829
        create_txg: 4
        com.klarasystems:vdev_zap_root: 129
        children[0]:
            type: 'raidz'
            id: 0
            guid: 3438085862275244924
            nparity: 1
            metaslab_array: 256
            metaslab_shift: 34
            ashift: 12
            asize: 8001576501248
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_top: 130
            children[0]:
                type: 'disk'
                id: 0
                guid: 14336507559410368910
                path: '/dev/da2'
                devid: 'ata-SAMSUNG_HD204UI_S2H7J90B309265'
                phys_path: 'pci-0000:06:00.0-sas-phy13-lun-0'
                whole_disk: 1
                DTL: 779
                create_txg: 4
                com.delphix:vdev_zap_leaf: 131
            children[1]:
                type: 'disk'
                id: 1
                guid: 2246828322968303539
                path: '/dev/da3'
                devid: 'ata-SAMSUNG_HD204UI_S2H7J9CB304047'
                phys_path: 'pci-0000:06:00.0-sas-phy12-lun-0'
                whole_disk: 1
                DTL: 778
                create_txg: 4
                com.delphix:vdev_zap_leaf: 132
            children[2]:
                type: 'disk'
                id: 2
                guid: 8185657086808692798
                path: '/dev/da0'
                devid: 'ata-SAMSUNG_HD204UI_S2H7J90B308992'
                phys_path: 'pci-0000:06:00.0-sas-phy15-lun-0'
                whole_disk: 1
                DTL: 777
                create_txg: 4
                com.delphix:vdev_zap_leaf: 133
            children[3]:
                type: 'disk'
                id: 3
                guid: 4515887524442859984
                path: '/dev/da1'
                devid: 'ata-SAMSUNG_HD204UI_S2H7J90B309221'
                phys_path: 'pci-0000:06:00.0-sas-phy14-lun-0'
                whole_disk: 1
                DTL: 776
                create_txg: 4
                com.delphix:vdev_zap_leaf: 134
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data
        com.klarasystems:vdev_zaps_v2

[root@filer2 ~]# zpool import -c /etc/zfs/zpool.cache -a -N

[root@filer2 ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  7.27T  64.5G  7.20T        -         -     0%     0%  1.00x    ONLINE  -

[root@filer2 ~]# zfs list
NAME                  USED  AVAIL  REFER  MOUNTPOINT
pool0                46.9G  5.11T   140K  /pool0
pool0/media          24.2G  5.11T   151K  /pool0/media
pool0/media/blu-ray  24.2G  5.11T  24.2G  /pool0/media/blu-ray
pool0/media/dvd       140K  5.11T   140K  /pool0/media/dvd
pool0/qemu-images    22.6G  5.11T  22.6G  /pool0/qemu-images

The zpool import -c /etc/zfs/zpool.cache -a -N command is exactly what /etc/rc.d/zpool runs, so I'm confused as to why it worked when I ran it manually, but not when the rc script ran it at boot.
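
A quick sanity check here, sketched: confirm what the installed script would actually run and whether rc(8) considers it enabled.
Code:
# show the import command(s) the rc script contains
grep -n 'zpool import' /etc/rc.d/zpool
# ask the rc framework whether the service is enabled (keys off zfs_enable)
service zpool enabled && echo enabled || echo not enabled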
 
Is there anything in either the zfs history or dmesg that indicates whether the rc.d setup executes as planned or fails?
I added rc_debug="YES" to /etc/rc.conf and rebooted but there was nothing obvious in /var/log/messages afterwards.

However, manually running /etc/rc.d/zpool start after boot works and outputs a few DEBUG lines:
Code:
root@filer2 ~]# zpool list
no pools available

[root@filer2 ~]# /etc/rc.d/zpool start
/etc/rc.d/zpool: DEBUG: checkyesno: zfs_enable is set to YES.
/etc/rc.d/zpool: DEBUG: load_kld: zfs kernel module already loaded.
/etc/rc.d/zpool: DEBUG: run_rc_command: doit:  zpool_start

[root@filer2 ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  7.27T  64.5G  7.20T        -         -     0%     0%  1.00x    ONLINE  -

These /etc/rc.d/zpool: DEBUG: ... lines were not present in the messages from the boot, suggesting that the zpool rc script isn't being run.
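
Another thing worth checking (a sketch): rcorder(8) computes the boot order from the scripts' PROVIDE/REQUIRE lines, so the zpool script should appear in its output:
Code:
# confirm the zpool script is part of the computed boot order
rcorder /etc/rc.d/* 2>/dev/null | grep -n 'rc.d/zpool$'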
 
The timestamp of /etc/zfs/zpool.cache updates when I manually import the pool:
Code:
[root@filer2 ~]# zpool list
no pools available

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 09:46 /etc/zfs/zpool.cache

[root@filer2 ~]# zpool import pool0

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 18:42 /etc/zfs/zpool.cache

Also, the zpool.cache file is completely removed when I export the pool, and reappears when I import the pool:

Code:
[root@filer2 ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  7.27T  64.5G  7.20T        -         -     0%     0%  1.00x    ONLINE  -

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 18:45 /etc/zfs/zpool.cache

[root@filer2 ~]# zpool export pool0

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
ls: /etc/zfs/zpool.cache: No such file or directory

[root@filer2 ~]# zpool import pool0

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 18:47 /etc/zfs/zpool.cache

[root@filer2 ~]# zdb -CU /etc/zfs/zpool.cache | head -10
pool0:
    version: 5000
    name: 'pool0'
    state: 0
    txg: 55652
    pool_guid: 4536089192970494829
    errata: 0
    hostid: 726799574
    hostname: 'filer2'
    com.delphix:has_per_vdev_zaps
 
Don't export the pool. Exporting the pool will delete the cachefile.

To make the cachefile persistent, so /etc/rc.d/zpool can import the pool, reboot/power down the system with the pool imported.
 
Don't export the pool. Exporting the pool will delete the cachefile.

To make the cachefile persistent, so /etc/rc.d/zpool can import the pool, reboot/power down the system with the pool imported.
I don't routinely export the pool. I just did it here to show that exporting and importing did result in the zpool.cache file being updated, that is to say, deleted and recreated with current pool info.

I hadn't exported before all the reboots where it didn't automatically import again afterwards.
 
Yes it is. I've just gathered some more terminal output to illustrate this:
Code:
[root@filer2 ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  7.27T  64.5G  7.20T        -         -     0%     0%  1.00x    ONLINE  -

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 20:47 /etc/zfs/zpool.cache

[root@filer2 ~]# md5sum /etc/zfs/zpool.cache
cd6b6ec6cb106529fba3febd09892ae5  /etc/zfs/zpool.cache

[root@filer2 ~]# shutdown -r now
Shutdown NOW!
shutdown: [pid 2853]
[root@filer2 ~]#                                                                               
*** FINAL System shutdown message from jason@filer2 ***                     

System going down IMMEDIATELY                                                 

System shutdown time has arrived

Connection to filer2 closed by remote host.
Connection to filer2 closed.

jason@framework:~$ ssh filer2
Last login: Thu Jan 23 20:47:18 2025 from 10.0.0.102

[jason@filer2 ~]$ uptime
 8:49PM  up 11 secs, 1 user, load averages: 0.62, 0.14, 0.05

[jason@filer2 ~]$ sudo -i
Password:

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 23 20:47 /etc/zfs/zpool.cache

[root@filer2 ~]# md5sum /etc/zfs/zpool.cache
cd6b6ec6cb106529fba3febd09892ae5  /etc/zfs/zpool.cache

[root@filer2 ~]# zpool list
no pools available

[root@filer2 ~]# zpool import pool0
 
[root@filer2 ~]# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  7.27T  64.5G  7.20T        -         -     0%     0%  1.00x    ONLINE  -

A zpool.cache file exists before the reboot; the reboot is performed without exporting the pool; after the reboot the zpool.cache file still exists with the same timestamp and MD5 checksum, but the pool hasn't been imported automatically.

As I mentioned earlier, it looks very much to me like the /etc/rc.d/zpool script is not being run. The debug output you usually see when rc_debug="YES" is absent from the boot time messages, but shows up fine if you run the script manually after boot. I also inserted an echo statement into the script temporarily and didn't see it during boot.
 
As I mentioned earlier, it looks very much to me like the /etc/rc.d/zpool script is not being run. The debug output you usually see when rc_debug="YES" is absent from the boot time messages, but shows up fine if you run the script manually after boot. I also inserted an echo statement into the script temporarily and didn't see it during boot.

grep zfs_enable /etc/rc.conf
 
As shown in my original post:
Code:
# grep zfs /boot/loader.conf /etc/rc.conf
/boot/loader.conf:zfs_load="YES"
/etc/rc.conf:zfs_enable="YES"
 
Put an echo statement (or a great many of them so they can be noticed at boot) into rc.d/zpool. Reboot and see if the messages are displayed on the console.

Make sure there are many of them so the message isn't missed among the numerous other boot messages. This will tell you if the script is called or not.

Simple diagnostic techniques are immensely better than simple guessing. ;)
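
Something like this near the top of zpool_start() in /etc/rc.d/zpool would do (a throwaway sketch; remove it after testing):
Code:
# temporary boot-time markers, loud enough to spot among the console output
echo '##################################'
echo '### rc.d/zpool: script reached ###'
echo '##################################'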
 
I added rc_debug="YES" to /etc/rc.conf and rebooted but there was nothing obvious in /var/log/messages afterwards.
For context: on my 14.2-RELEASE system with a UFS root and ZFS on a USB stick, I'm getting this in my dmesg after boot:
Rich (BB code):
[0-0] % egrep 'zfs|rc_debug' /boot/loader.conf /etc/rc.conf
/boot/loader.conf:#zfs_load="YES" # not necessary anymore
/etc/rc.conf:rc_debug="YES"
/etc/rc.conf:zfs_enable="YES"
[1-0] % dmesg -a | grep -C 5 'DEBUG: run_rc_command: doit:  zpool_start'
/etc/rc: DEBUG: run_rc_command: start_precmd: [ -n "$(geli_make_list)" -o -n "${geli_groups}" ]
/etc/rc: DEBUG: run_rc_command: doit:  /sbin/swapon -aq
/etc/rc: DEBUG: checkyesno: zfs_enable is set to YES.
ZFS filesystem version: 5
ZFS storage pool version: features support (5000)
/etc/rc: DEBUG: run_rc_command: doit:  zpool_start
/etc/rc: DEBUG: checkyesno: zfskeys_enable is set to NO.
/etc/rc: DEBUG: run_rc_command: doit:  fsck_start
/etc/rc: DEBUG: checkyesno: rc_startmsgs is set to YES.
Starting file system checks:
/etc/rc: DEBUG: checkyesno: background_fsck is set to YES.
[2-0] % zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:00:02 with 0 errors on Thu Jan 23 13:33:04 2025
config:

        NAME         STATE     READ WRITE CKSUM
        tank         ONLINE       0     0     0
          gpt/zsub1  ONLINE       0     0     0

errors: No known data errors
[3-0] %
 
so I tried changing it to cachefile=/etc/zfs/zpool.cache
[...] Apparently retrieval of the cachefile property is inconsistent:
Code:
# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/etc/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/boot/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE                  SOURCE
tank  cachefile  /boot/zfs/zpool.cache  local

According to openzfs.org/wiki/System_Administration (Boot process section), only /boot/zfs/zpool.cache is used on FreeBSD (and not /etc/zfs/zpool.cache):
[...] ZFS makes the following changes to the boot process:
  1. When the rootfs is on ZFS, the pool must be imported before the kernel can mount it. The bootloader on Illumos and FreeBSD will pass the pool information to the kernel for it to import the root pool and mount the rootfs. On Linux, an initramfs must be used until bootloader support for creating the initramfs dynamically is written.
  2. Regardless of whether there is a root pool, imported pools must appear. This is done by reading the list of imported pools from the zpool.cache file, which is at /etc/zfs/zpool.cache on most platforms. It is at /boot/zfs/zpool.cache on FreeBSD. This is stored as a XDR-encoded nvlist and is readable by executing the `zdb` command without arguments.
  3. After the pool(s) are imported, the filesystems must be mounted and any filesystem exports or iSCSI LUNs must be made. If the mountpoint property is set to legacy on a dataset, fstab can be used. Otherwise, the boot scripts will mount the datasets by running `zfs mount -a` after pool import. Similarly, any datasets being shared via NFS or SMB for filesystems and iSCSI for zvols will be exported or shared via `zfs share -a` after the mounts are done. Not all platforms support `zfs share -a` on all share types. Legacy methods may always be used and must be used on platforms that do not support automation via `zfs share -a`.

I'm not sure, but the two occurrences of zpool.cache in libexec/rc/rc.d/zpool might be there only because of this dual/multi-OS use, or for pre-OpenZFS reasons:
Code:
zpool_start()
{
	local cachefile

	for cachefile in /etc/zfs/zpool.cache /boot/zfs/zpool.cache; do


Perhaps it's defined here, in cachefile.cfg:
Rich (BB code):
if [[ $os_name == "FreeBSD" ]]; then
	export CPATH="/boot/zfs/zpool.cache"
else
	export CPATH="/etc/zfs/zpool.cache"
fi



If "cachefile=/boot/zfs/zpool.cache", a copy should be created in /etc/zfs as well.
Also, the [/etc/zfs/]zpool.cache file is completely removed when I export the pool, and reappears when I import the pool:
Therefore, on FreeBSD, /etc/zfs/zpool.cache probably acts like any other replacement cachefile that could be used as an alternative to the default /boot/zfs/zpool.cache.

EDIT: I may have interpreted this backwards: the old FreeBSD ZFS (pre-OpenZFS) used /boot/zfs/zpool.cache (basing this also on the two ZFS books by Michael Lucas & Allan Jude), and with OpenZFS it is /etc/zfs/zpool.cache (as on other, non-FreeBSD platforms). That probably explains:
Apparently retrieval of the cachefile property is inconsistent:
Code:
# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/etc/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE      SOURCE
tank  cachefile  -          default

# zpool set cachefile=/boot/zfs/zpool.cache tank

# zpool get cachefile tank
NAME  PROPERTY   VALUE                  SOURCE
tank  cachefile  /boot/zfs/zpool.cache  local
where zpool set cachefile=/etc/zfs/zpool.cache tank results in "default" as the SOURCE (with the VALUE accordingly shown implicitly as "-"), while zpool set cachefile=/boot/zfs/zpool.cache tank results in "local" as the SOURCE, with the VALUE shown explicitly as "/boot/zfs/zpool.cache".

The referenced documentation would then describe the pre-OpenZFS situation; I can't verify that, because I've moved well past that era.
 
Good morning. Thanks for the replies since my last comment.

Last night I shut down the filer2 VM and the hypervisor host before going to bed. This morning, I started them back up, logged into filer2, and discovered that it had automatically imported pool0 at boot and mounted the datasets:

Code:
[root@filer2 ~]# who -b
                 system boot  Jan 24 10:08

[root@filer2 ~]# zpool history pool0 | grep import | tail -1
2025-01-24.10:07:58 zpool import -c /etc/zfs/zpool.cache -a -N

[root@filer2 ~]# ls -l /etc/zfs/zpool.cache
-rw-r--r--  1 root wheel 3288 Jan 24 10:08 /etc/zfs/zpool.cache

I hadn't changed anything since my previous post.
 
According to openzfs.org/wiki/System_Administration (Boot process section), only /boot/zfs/zpool.cache is used on FreeBSD (and not /etc/zfs/zpool.cache):
2. Regardless of whether there is a root pool, imported pools must appear. This is done by reading the list of imported pools
from the zpool.cache file, which is at /etc/zfs/zpool.cache on most platforms. It is at /boot/zfs/zpool.cache on FreeBSD.
I cannot say exactly what observation this statement is based on; probably it's because, when Root-on-ZFS is installed menu-guided (or via the unattended scripted zfsboot install), the /usr/libexec/bsdinstall/zfsboot script sets the cachefile under /boot/zfs:
Rich (BB code):
1546         # Set cachefile for boot pool so it auto-imports at system start
1547         f_dprintf "$funcname: Configuring zpool.cache for boot pool..."
1548         f_eval_catch $funcname zpool "$ZPOOL_SET" \
1549                      "cachefile=\"$BSDINSTALL_CHROOT/boot/zfs/zpool.cache\"" \
1550                      "$bootpool_name" || return $FAILURE
but the pool is imported by /etc/rc.d/zpool just fine with only /etc/zfs/zpool.cache:
Code:
 # zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool0  63.5G  5.41M  63.5G        -         -     0%     0%  1.00x    ONLINE  -
 
 # ls /boot/zfs  /etc/zfs/zpool.cache
/etc/zfs/zpool.cache

/boot/zfs:
/etc/zfs/zpool.cache was created just by importing a pool, without explicitly specifying a cache file.
 
What does zpool get cachefile pool0 say now?
It's still set to the default value, '-'.

My primary filer host has three pools, one being the boot pool. All three have cachefile defaulted to '-' and the only cachefile that exists is /etc/zfs/zpool.cache. Nothing in /boot/zfs/.

It seems to me that /etc/zfs/zpool.cache is the default location in OpenZFS, and /boot/zfs/zpool.cache was the default location with the OpenSolaris-derived ZFS on FreeBSD. If you build your pools from scratch on a recent version of FreeBSD using OpenZFS, it'll go with the new default.
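
A quick way to confirm which location is in play on a given system (a sketch):
Code:
# whichever cache file exists (and its timestamp) shows the default in use
ls -l /etc/zfs/zpool.cache /boot/zfs/zpool.cache 2>/dev/null
zpool get cachefile pool0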

I'm marking the thread Solved, even though I don't know why it suddenly started behaving as expected.
 