How to mount a ZFS pool on boot?

Dear all,

what's the best way to automatically import/mount a ZFS pool when booting a system?

Thank you in advance for an answer!

Marek
 
If not already imported/mounted:
Code:
# zpool import poolname

Then add the following to /etc/rc.conf to mount on boot:
Code:
zfs_enable="YES"
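If you prefer not to edit the file by hand, sysrc(8) can set it for you (assuming a version of FreeBSD recent enough to include it):
Code:
# sysrc zfs_enable="YES"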

Edit: You may also need the following in /boot/loader.conf to load the ZFS module, although if you're not booting off ZFS I think it might load the module automatically anyway.
Code:
zfs_load="YES"
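To check whether the module is already loaded, something like this should work:
Code:
# kldstat -m zfs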
 
ZFS filesystems are mounted as soon as the pool is imported. Just enable ZFS and the rest will happen automatically.
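If I remember correctly, what the rc script runs at boot is roughly the equivalent of:
Code:
# zfs mount -a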
 
Thank you for replies!

Just to clarify, I do have
Code:
zfs_load="YES"
in /boot/loader.conf and
Code:
zfs_enable="YES"
in /etc/rc.conf.

I have two pools:
  • zroot, which contains pretty much the whole system apart from /boot and is mounted as /
  • bootfs, which contains /boot and is mounted on /bootfs

Additionally, there's a softlink in zroot: /boot -> /bootfs/boot.
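For reference, the layout can be checked with commands like:
Code:
# zpool list
# zfs list -o name,mountpoint
# ls -l /boot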

After the system boots, bootfs is neither imported nor mounted. I can still do it manually with zpool import bootfs. Do you have any ideas how I could get bootfs mounted during boot?

I'm using FreeBSD 10.1-RC1.

Thank you!
Marek
 
Is this an encrypted ZFS installation? Normally /boot is not a separate filesystem.
 
I believe I've seen similar problems to this one before, involving awkward configuration of the /boot folder.

When a pool is imported, the /boot/zfs/zpool.cache file is updated. As far as I'm aware this file is used on boot to determine which pools should be automatically imported.

Because the /boot/zfs directory doesn't exist on your system until after bootfs is imported and mounted, ZFS is probably failing to update this file, and so it only ever contains a record of your root pool.
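As an aside, if your ZFS version supports the per-pool cachefile property, you can inspect or change which cache file a pool records itself in:
Code:
# zpool get cachefile bootfs
# zpool set cachefile=/boot/zfs/zpool.cache bootfs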

It's difficult to know the best way to fix this without spending some time testing, and it's complicated by the symlink.
You could try something like the following. (Disclaimer: I don't know the exact configuration of your system and have never had to perform a process like the one below. It's entirely possible these commands could stop your system booting.)
Code:
Import bootfs
# zpool import bootfs

Temporarily remove the symlink and create a /boot/zfs directory on the root pool
# rm /boot
# mkdir -p /boot/zfs

Hopefully you'll have a zpool.cache file on the bootfs pool which should contain details of your root pool
Copy this to the encrypted root pool
# cp /bootfs/boot/zfs/zpool.cache /boot/zfs/zpool.cache

Export and import bootfs
This should update /boot/zfs/zpool.cache
# zpool export bootfs
# zpool import bootfs

With any luck /boot/zfs/zpool.cache should now include both pools
Copy it back to the boot pool and put everything back as it was
# cp /boot/zfs/zpool.cache /bootfs/boot/zfs/zpool.cache
# mv /boot /boot.bak
# ln -s /bootfs/boot /boot

During boot, I don't know if ZFS reads the cache file from the partition that it boots from (bootfs), or if it mounts the root filesystem (zroot) and reads it off there. If it's the latter then having that symlink isn't going to work as you have a chicken-and-egg problem. /boot/zfs/zpool.cache doesn't exist until the system imports the bootfs pool, but the system doesn't know bootfs should be imported until it reads /boot/zfs/zpool.cache.

Hopefully it reads the file from the same place it boots from (The fact it lives in the /boot folder suggests this is likely).
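If I remember correctly, the loader preloads the cache file from the filesystem it boots from; /boot/defaults/loader.conf should have entries along these lines (worth double-checking on your version):
Code:
zpool_cache_load="YES"
zpool_cache_type="/boot/zfs/zpool.cache"
zpool_cache_name="/boot/zfs/zpool.cache"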
 
One interesting thing you can try, before any of the above, is to confirm the contents of your cache file:

Code:
# zpool import bootfs
# zdb -CU /bootfs/boot/zfs/zpool.cache

The zdb command above should display the contents of /boot/zfs/zpool.cache. If I'm right, that file will only contain a record of the zroot pool.
 
I've just tried it and zpool.cache has both pools:

Code:
red# zdb -CU /bootfs/boot/zfs/zpool.cache
bootfs:
    version: 5000
    name: 'bootfs'
    state: 0
    txg: 113102
    pool_guid: 291351687511010234
    hostid: 2011763808
    hostname: 'red'
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 291351687511010234
        children[0]:
            type: 'disk'
            id: 0
            guid: 17428454769508522880
            path: '/dev/gpt/boot'
            phys_path: '/dev/gpt/boot'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 24
            ashift: 12
            asize: 2142765056
            is_log: 0
            create_txg: 4
    features_for_read:
zroot:
    version: 5000
    name: 'zroot'
    state: 0
    txg: 141987
    pool_guid: 16999660486353457145
    hostid: 2011763808
    hostname: ''
    vdev_children: 1
    vdev_tree:
        type: 'root'
        id: 0
        guid: 16999660486353457145
        children[0]:
            type: 'disk'
            id: 0
            guid: 11651401711197098028
            path: '/dev/ada0p3.eli'
            phys_path: '/dev/ada0p3.eli'
            whole_disk: 1
            metaslab_array: 33
            metaslab_shift: 32
            ashift: 12
            asize: 497955373056
            is_log: 0
            DTL: 123
            create_txg: 4
    features_for_read:
 
Hmm, I'm a bit stumped then. If bootfs is in the cache file, it should be imported on boot.

The only other possibility I can think of is that it's trying to read that file off your root pool and can't because it's a symlink to a place that doesn't exist until bootfs is imported.
 
Thank you @usdmatt for the hint with zpool.cache.

My solution/workaround (the commands are sketched after the list):
  1. Move zpool.cache to /var/zpool.cache on the zroot pool.

    We want the cache file to be available before bootfs is re-mounted.
  2. Link /boot -> /bootfs/boot
  3. In the zroot pool: link /bootfs/boot/zfs/zpool.cache -> /var/zpool.cache

    This link will be used before the bootfs pool is mounted. Once bootfs is mounted, it is covered by the bootfs mount.
  4. In the bootfs pool: link /bootfs/boot/zfs/zpool.cache -> /var/zpool.cache

    This link will be used after bootfs is mounted.
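
Roughly, the steps above translate into commands like these (a sketch; the paths assume the layout described earlier, and bootfs starts out imported):
Code:
Move the live cache file onto zroot (step 1)
# mv /bootfs/boot/zfs/zpool.cache /var/zpool.cache

Create the link inside the bootfs pool (step 4)
# ln -s /var/zpool.cache /bootfs/boot/zfs/zpool.cache

Export bootfs so the underlying /bootfs directory on zroot becomes visible
# zpool export bootfs

Create the placeholder link on zroot (step 3)
# mkdir -p /bootfs/boot/zfs
# ln -s /var/zpool.cache /bootfs/boot/zfs/zpool.cache

Make sure the /boot symlink exists (step 2, it was already there in my case), then re-import
# zpool import bootfs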

Please tell me what you think about it, and whether you have any tips/ideas.

Thank you!
 