FreeBSD 13-RELEASE: can't import zpool of type='file'

Hi.
This is something I'm used to doing, but I'm not sure what could have changed (or what the problem is here).
If I create a zpool with a type='file' vdev, the pool does not survive reboots or an export/import.
The following sequence describes the problem.
Code:
root@FreeBSD-dev:/usr/local/zpoolfiles # mkfile 100M tank
root@FreeBSD-dev:/usr/local/zpoolfiles # zpool create tank /usr/local/zpoolfiles/tank
root@FreeBSD-dev:/usr/local/zpoolfiles # zfs list
NAME   USED  AVAIL     REFER  MOUNTPOINT
tank   100K  39.9M       24K  /tank
root@FreeBSD-dev:/usr/local/zpoolfiles # zpool export tank
root@FreeBSD-dev:/usr/local/zpoolfiles # zpool import -d /usr/local/zpoolfiles/tank
   pool: tank
     id: 8556822463090427726
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        tank                          ONLINE
          /usr/local/zpoolfiles/tank  ONLINE
root@FreeBSD-dev:/usr/local/zpoolfiles # zpool import tank
cannot import 'tank': no such pool available
root@FreeBSD-dev:/usr/local/zpoolfiles # zpool import 8556822463090427726
cannot import '8556822463090427726': no such pool available
root@FreeBSD-dev:/usr/local/zpoolfiles # zfs list
no datasets available

Any clues how to solve this?
Thanks!
 
Well, you have to specify the directory where the pool file is.
Type: zpool import -d /usr/local/zpoolfiles tank
Note the space between the directory name and the pool name.
This behaviour is nothing new.
 
Thanks. Yes, I can import the pool this way, but it won't persist across a reboot. I'm not sure why, but the same procedure on another machine (an old setup, currently 12.2-RELEASE) keeps the pool across reboots with just zfs_enable="yes" in rc.conf (no additional ZFS-related lines in /boot/loader.conf).

Maybe it has to do with
/etc/zfs/zpool.cache
or
/boot/zfs/zpool.cache
?

Interesting: /etc/zfs/zpool.cache didn't exist after the pool-creation steps in my original post. After 'zpool import -d /usr/local/zpoolfiles tank' the file was created, and I can view the pool info with 'zdb -CU /etc/zfs/zpool.cache'. Despite this cache, the pool won't survive a reboot, and 'zpool import tank' still fails; I need the full command ('zpool import -d /usr/local/zpoolfiles tank') to be able to import it again.
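One thing that might be worth trying (an assumption on my part, not something tested in this thread): explicitly pointing the pool's cachefile property at the cache file read at boot, so the pool's config lands in /boot/zfs/zpool.cache:

```shell
# Hypothetical sketch: set the pool's cachefile property so its config
# is written to the cache the loader reads (path as used in this thread).
zpool import -d /usr/local/zpoolfiles tank
zpool set cachefile=/boot/zfs/zpool.cache tank

# Verify the pool now appears in that cache:
zdb -CU /boot/zfs/zpool.cache | grep "name:"
```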
 
I copied the cache file over from /etc/zfs to /boot/zfs, so this lists the pool OK:
Code:
zdb -CU /boot/zfs/zpool.cache | grep name | grep -v hostname
 
Do you have this in rc.conf?
Code:
zpool_enable="YES"

Inserted this one also but didn't change the result.
Tried to replicate the same situation I have in the other (working) machine, with some jails in the pool that get loaded when booted (as per ezjail_enable="YES") but no. Still need to import pool and start jails manually.

It looks like it's a bug: PR 250816. The PR describes a similar setup, but it seems it's not a priority to fix right now, if at all (Comment 17). The PR reporter claims to have a workaround, though (Comment 16).

Nice find, thank you. And thanks for pointing out the details; I will check them and try the workaround.



Tried copying /etc/zfs/zpool.cache to /boot/zfs/zpool.cache. It lists the pool OK, but the pool still isn't imported at boot, so ezjail can't load from it either.
Not sure about these lines; I just read somewhere about trying to add them to /boot/loader.conf:
Code:
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"

But still the same thing.
For now I guess I will just sit in the "... a bug that affects tiny minority of users who do very unusual things with ZFS" corner and try to automate it with some startup script :)
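To sketch what that automation could look like (a minimal rc.d script with hypothetical names; the pool name and vdev directory are assumptions taken from this thread, not a tested solution):

```shell
#!/bin/sh
# Hypothetical /usr/local/etc/rc.d/zpoolfile: import a file-backed pool
# at boot, after ZFS is up and the filesystems are writable.

# PROVIDE: zpoolfile
# REQUIRE: zfs
# KEYWORD: nojail

. /etc/rc.subr

name="zpoolfile"
rcvar="zpoolfile_enable"
start_cmd="zpoolfile_start"

zpoolfile_start()
{
        # -d takes the directory holding the vdev file, not the file itself
        /sbin/zpool list tank >/dev/null 2>&1 || \
            /sbin/zpool import -d /usr/local/zpoolfiles tank
}

load_rc_config $name
run_rc_command "$1"
```

It would be enabled with zpoolfile_enable="YES" in rc.conf.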
 
You mean the pool id? It doesn't work for me; it's the same thing as in my initial post: "cannot import '8556822463090427726': no such pool available". Here it only works as Emrion explained, with the full path plus the pool name.
 
With 12.0-RELEASE (yes, I know it's outdated), an active zpool like yours survives a reboot.
The file /boot/zfs/zpool.cache keeps the required information for that. I'll try on 13.0-RELEASE.
 
The following script works without errors on 13.0; it imports using the directory where the zpool file is located and the zpool id of that zpool file.
Code:
#!/usr/local/bin/zsh -v
zpool export zfile                              # make sure the pool is exported
zpool create -m /zfile2 -f zfile /zfile/zfile   # (re)create the file-backed pool
zpool export zfile
# extract the numeric pool id from the import listing
export myid=`zpool import -d /zfile | grep "id:" | awk '{print $2}'`
zpool import -d /zfile $myid                    # import by directory + pool id
 
Now this is interesting. The working machine was also on 12.0-RELEASE at the time I set up the file-type zpool and the jails; I haven't upgraded it to 13 yet, it's actually on 12.2. This 'offending' machine was on 12 and I upgraded it to 13.
 
So, in 13.0-RELEASE, as you noticed, this kind of extra zpool isn't imported at boot.
In fact, /boot/zfs/zpool.cache holds no information except about the main zpool.
The information about the main and the other zpools is stored in /etc/zfs/zpool.cache.

As a first solution, you can execute at boot: zpool import -ac /etc/zfs/zpool.cache
For instance, with the help of cron:
/etc/crontab
Code:
@reboot                root    zpool import -ac /etc/zfs/zpool.cache
This will import all zpools that were active before the reboot / poweroff.

But it's just a workaround. If you look in the boot messages (with or without this cron entry), you find:
Code:
# dmesg -a | grep tank
Cannot import 'tank': one or more devices are readonly
Of course, once tank is imported, it isn't readonly. Therefore, there is something to dig into here.
 
Or run it twice: once for the import of the exterior zpool and a second time for the import of the interior zpool.
PS: I tried to create a zpool on a zvol block storage but that does not work.
 

Just upgraded from 12.4 to 13.1, and can confirm the same errors in dmesg (re: a zfs 'file' type volume for iocage's jails), resulting in it not importing on boot:
Code:
dmesg -a | grep -B 1 zjail
Setting hostid: 0xaf15cad4.
cannot import 'zjail': one or more devices is read only
cachefile import failed, retrying
cannot import 'zjail': one or more devices is read only

Despite that, it's listed in the ZFS cache:
Code:
zdb -CU /boot/zfs/zpool.cache
…
zjail:
    version: 5000
    name: 'zjail'
    state: 0
    txg: 4
    pool_guid: 2150185645504109970
    hostid: 2937440980
    hostname: ''
    com.delphix:has_per_vdev_zaps
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 2150185645504109970
        create_txg: 4
        children[0]:
            type: 'file'
            id: 0
            guid: 15379805632568827398
            path: '/zjail-vdev/zjail-vdev.img'
            metaslab_array: 64
            metaslab_shift: 31
            ashift: 12
            asize: 429492011008
            is_log: 0
            create_txg: 4
            com.delphix:vdev_zap_leaf: 129
            com.delphix:vdev_zap_top: 130
    features_for_read:
        com.delphix:hole_birth
        com.delphix:embedded_data

The crontab command that worked here is:
@reboot root /sbin/zpool import -d /zjail-vdev/ zjail
 
Another option: try changing /etc/rc.d/zpool like this
Diff:
 # PROVIDE: zpool
-# REQUIRE: hostid disks
+# REQUIRE: hostid disks root
 # BEFORE: mountcritlocal
 # KEYWORD: nojail
This should ensure that the root filesystem is mounted r/w before importing other pools.
Although it won't help if the vdev files are on a different filesystem.
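A quick way to sanity-check the resulting ordering (just a verification sketch; rcorder is the stock tool FreeBSD's boot sequence uses to order rc.d scripts):

```shell
# After editing /etc/rc.d/zpool, confirm that the 'root' script
# (which remounts / read-write) sorts before zpool:
rcorder /etc/rc.d/* 2>/dev/null | grep -n -E '/(root|zpool|mountcritlocal)$'
```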
 