Trying to mount root from zfs:zfs/ROOT failed with error 2

Hi all,

I want to test ZFS with a fresh install of FreeBSD 9.1. I found these two tutorials:
  1. https://wiki.freebsd.org/RootOnZFS/GPTZFSBoot/9.0-RELEASE
  2. http://freebsdwiki.net/index.php/ZFS,_booting_from

They are slightly different:
  1. n°1 uses a traditional, complete BSD filesystem layout recreated as ZFS datasets, as depicted here: http://freebsdwiki.net/index.php/ZFS,_creating_datasets_for_the_FreeBSD_system
  2. n°2 uses the -b option with [cmd=]gpart add[/cmd] to specify the starting block. Is that important? n°1 specifies only the size.
  3. n°1 loads the necessary kernel modules: [cmd=]kldload opensolaris[/cmd], [cmd=]kldload zfs[/cmd]
  4. In [cmd=]zpool create[/cmd], n°1 uses the option -O canmount=off
  5. n°2 does not create any datasets with [cmd=]zfs create[/cmd]
  6. If my understanding is correct, they use different techniques to mount the root (see the quick check after this list):
    • n°1 :
      Code:
      # zfs create -o mountpoint=/ zroot/ROOT
      # echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
      # zpool set bootfs=zroot/ROOT zroot (may be redundant with line 1?)
      # zpool export zroot
    • n°2 :
      Code:
      # configure /boot/loader.conf
      hostname# echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
      hostname# echo 'vfs.root.mountfrom="zfs:zroot"' >> /mnt/boot/loader.conf
      # Install zpool.cache to the ZFS filesystem
      hostname# zpool export zroot
      hostname# zpool import -o altroot=/mnt -o cachefile=/tmp/zpool.cache zroot
      hostname# cp /tmp/zpool.cache /mnt/boot/zfs/
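
Regarding the question in n°1: if I read zpool(8) and zfs(8) correctly, the two settings are not redundant. bootfs is a pool property that tells the loader which dataset holds the root, while mountpoint is a dataset property that tells ZFS where to mount it. Both can be inspected like this (assuming the pool zroot is imported):
Code:
# zpool get bootfs zroot
# zfs get mountpoint zroot/ROOT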

I followed the first one:
Code:
gpart create -s gpt ada0
gpart create -s gpt ada1
gpart add -s 222 -t freebsd-boot -l boot0 ada0
gpart add -s 512M -t freebsd-swap -l swap0 ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gpart add -s 222 -t freebsd-boot -l boot1 ada1
gpart add -s 512M -t freebsd-swap -l swap1 ada1
gpart add -t freebsd-zfs -l disk1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
kldload opensolaris
kldload zfs
zpool create -o altroot=/mnt -O canmount=off zroot mirror /dev/gpt/disk0 /dev/gpt/disk1
zfs set checksum=fletcher4 zroot
zfs create -o mountpoint=/                                        zroot/ROOT
zfs create -o compression=on    -o exec=on     -o setuid=off      zroot/tmp
chmod 1777 /mnt/tmp
zfs create                                                        zroot/usr
zfs create                                                        zroot/usr/local
zfs create                                     -o setuid=off      zroot/home
zfs create -o compression=lzjb                 -o setuid=off      zroot/usr/ports
zfs create -o compression=off   -o exec=off    -o setuid=off      zroot/usr/ports/distfiles
zfs create -o compression=off   -o exec=off    -o setuid=off      zroot/usr/ports/packages
zfs create -o compression=lzjb  -o exec=off    -o setuid=off      zroot/usr/src
zfs create                                                        zroot/usr/obj
zfs create                                                        zroot/var
zfs create -o compression=lzjb  -o exec=off    -o setuid=off      zroot/var/crash
zfs create                      -o exec=off    -o setuid=off      zroot/var/db
zfs create -o compression=lzjb  -o exec=on     -o setuid=off      zroot/var/db/pkg
zfs create                      -o exec=off    -o setuid=off      zroot/var/empty
zfs create -o compression=lzjb  -o exec=off    -o setuid=off      zroot/var/log
zfs create -o compression=gzip  -o exec=off    -o setuid=off      zroot/var/mail
zfs create                      -o exec=off    -o setuid=off      zroot/var/run
zfs create -o compression=lzjb  -o exec=on     -o setuid=off      zroot/var/tmp
chmod 1777 /mnt/var/tmp
exit
# normal installation, then continuing with the LiveCD:
echo 'zfs_enable="YES"' >> /mnt/etc/rc.conf
echo 'zfs_load="YES"' >> /mnt/boot/loader.conf
cat << EOF > /mnt/etc/fstab
# Device                       Mountpoint              FStype  Options         Dump    Pass#
/dev/gpt/swap0                 none                    swap    sw              0       0
/dev/gpt/swap1                 none                    swap    sw              0       0
EOF
zfs unmount -a
zpool set bootfs=zroot/ROOT zroot
zfs set mountpoint=/ zroot/ROOT
zfs set mountpoint=/zroot zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
zfs set mountpoint=/home zroot/home
zfs set readonly=on zroot/var/empty
zpool export zroot
reboot

Then I get:
Code:
"Trying to mount root from zfs:zfs/ROOT" "failed with error 2"

I am a bit lost. Any help please?
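
PS: if I read /usr/include/sys/errno.h correctly, error 2 should be ENOENT ("No such file or directory"), i.e. the kernel cannot find the dataset it was told to mount as root:
Code:
# grep -w ENOENT /usr/include/sys/errno.h
#define ENOENT          2               /* No such file or directory */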
 
I don't see the command to copy the zpool.cache file onto the pool in your list, and I don't think you should export the pool either.

I personally think the method used in the forum post below should be the current recommended method, as it ties in with the boot environments tool (if you ever want to use it) and keeps the root file system (and any clones or copies) neatly under the pool/ROOT dataset.

http://forums.freebsd.org/showthread.php?t=31662

Edit: Depending on your disks (mainly if they're SSD or Advanced Format), you will likely see quite a performance benefit by making sure partitions are aligned with the -a/-b gpart(8) options and/or by using the GNOP trick to make ZFS see the disks with 4k sectors.
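
For example (an illustrative sketch only; the pool name tank and the labels here are placeholders, not your exact layout):
Code:
# gpart add -a 4k -t freebsd-zfs -l disk0 ada0
# gnop create -S 4096 /dev/gpt/disk0
# zpool create tank /dev/gpt/disk0.nop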
 
usdmatt said:
Edit: Depending on your disks (mainly if they're SSD or Advanced Format), you will likely see quite a performance benefit by making sure partitions are aligned with the -a/-b gpart(8) options and/or by using the GNOP trick to make ZFS see the disks with 4k sectors.

That should read: you must use proper alignment and the gnop trick. Using only the gnop trick is "OK" when using whole disks, but if you partition the disk without aligning, all your writes will be misaligned, inflicting a serious performance penalty. Best practice is to take care of both aligned partitioning and IO sizing right off the bat, while you still have the chance. Once ashift=9, always ashift=9.
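
You can check which ashift a pool ended up with using zdb(8), for example (the output line is illustrative; ashift: 12 means 4k sectors, ashift: 9 means 512b):
Code:
# zdb -C sys | grep ashift
            ashift: 12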

/Sebulon
 
Thanks a lot, @Sebulon. It fails at step 16 (in the guide @usdmatt linked):
Code:
16. # cp /tmp/zpool.cache /mnt/boot/zfs/
with:
Code:
cp: /tmp/zpool.cache: No such file or directory
Here is exactly what I did (I adapted it a little):
Code:
gpart create -s gpt ada0 
gpart add -b 40 -s 222 -t freebsd-boot -l boot0 ada0
gpart add -b 264 -s 512M -t freebsd-swap -l swap0 ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gnop create -S 4096 /dev/gpt/disk0
gpart create -s gpt ada1
gpart add -b 40 -s 222 -t freebsd-boot -l boot1 ada1
gpart add -b 264 -s 512M -t freebsd-swap -l swap1 ada1
gpart add -t freebsd-zfs -l disk1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gnop create -S 4096 /dev/gpt/disk1
zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
zpool export sys
gnop destroy /dev/gpt/disk0.nop
gnop destroy /dev/gpt/disk1.nop
zpool import sys
zfs set mountpoint=none sys
zfs set checksum=fletcher4 sys
zfs set atime=off sys
zfs create sys/ROOT
zfs create -o mountpoint=/mnt sys/ROOT/default
zpool set bootfs=sys/ROOT/default sys
cd /usr/freebsd-dist/
for I in base.txz kernel.txz; do 
> tar --unlink -xvpJf ${I} -C /mnt 
> done
cp /tmp/zpool.cache /mnt/boot/zfs/
I definitely pass the option -o cachefile=/tmp/zpool.cache when I create the pool. So why is the file missing?
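
For what it's worth, right after [cmd=]zpool create[/cmd] the setting can be checked like this (sample output as I would expect it; if I understand zpool(8) correctly, the cachefile property does not survive an export/import cycle unless passed again):
Code:
# zpool get cachefile sys
NAME  PROPERTY   VALUE             SOURCE
sys   cachefile  /tmp/zpool.cache  local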
 
Thanks a lot, @kpa. So I assume my pool is now broken. I am going to rebuild it. If you have links for gaining a good understanding of ZFS and pools, they are welcome: not tutorials or howtos, which are usually only procedural; I am looking for deep insight.

EDIT: the export deletes the cache file. As a workaround, I made a copy that I restored after the export; then I could successfully import with:
Code:
zpool import -o cachefile=/tmp/zpool.cache sys
 
No, don't rebuild it; it's just missing the zpool.cache file. Import it again with the -o cachefile=/tmp/zpool.cache and -R /mnt options and copy the /tmp/zpool.cache file to /mnt/boot/zfs.

Edit: The instructions you followed could be more straightforward; they include some really unnecessary steps that could have been avoided if the -R /mnt option had been used after getting rid of the gnop(8) devices.

This is how I would have done it:

# zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
# zpool export sys
(destroy the gnop devices)
# zpool import -R /mnt -o cachefile=/tmp/zpool.cache sys

# zfs set mountpoint=none sys
# zfs set checksum=fletcher4 sys
# zfs set atime=off sys
# zfs create sys/ROOT
# zfs create -o mountpoint=/ sys/ROOT/default

Since the pool is imported with altroot set to /mnt, all paths are relative to it, so the mountpoint of sys/ROOT/default will internally be set to / but, for this session, it will be mounted under /mnt.

# zpool set bootfs=sys/ROOT/default sys

(Install system here as above)

And then the copying of the zpool.cache file:

# cp /tmp/zpool.cache /mnt/boot/zfs/

It should be safe to do # zpool export sys before rebooting since the copy of the cache file is now saved on the pool.
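
As a sanity check before the export, you can verify the copied file actually contains the pool configuration. If I remember right, zdb(8) run without a pool argument dumps the configurations found in the cache file, and -U points it at an alternate one:

# ls -l /mnt/boot/zfs/zpool.cache
# zdb -U /mnt/boot/zfs/zpool.cache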
 
Thanks again, @kpa. The problem I had is that after [cmd=]zpool export sys[/cmd], /tmp/zpool.cache is destroyed. Here is what I finally did, successfully, combining the information above:

Code:
gpart create -s gpt ada0 
gpart add -b 40 -s 222 -t freebsd-boot -l boot0 ada0
gpart add -b 264 -s 512M -t freebsd-swap -l swap0 ada0
gpart add -t freebsd-zfs -l disk0 ada0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0
gnop create -S 4096 /dev/gpt/disk0
gpart create -s gpt ada1
gpart add -b 40 -s 222 -t freebsd-boot -l boot1 ada1
gpart add -b 264 -s 512M -t freebsd-swap -l swap1 ada1
gpart add -t freebsd-zfs -l disk1 ada1
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
gnop create -S 4096 /dev/gpt/disk1
zpool create -f -o cachefile=/tmp/zpool.cache sys mirror /dev/gpt/disk0.nop /dev/gpt/disk1.nop
cp /tmp/zpool.cache /tmp/zpool.cache.save
zpool export sys
gnop destroy /dev/gpt/disk0.nop
gnop destroy /dev/gpt/disk1.nop
cp /tmp/zpool.cache.save /tmp/zpool.cache
zpool import -o cachefile=/tmp/zpool.cache sys
zfs set mountpoint=none sys
zfs set checksum=fletcher4 sys
zfs set atime=off sys
zfs create sys/ROOT
zfs create -o mountpoint=/mnt sys/ROOT/default
zpool set bootfs=sys/ROOT/default sys
cd /usr/freebsd-dist/
for I in base.txz kernel.txz; do 
> tar --unlink -xvpJf ${I} -C /mnt 
> done
cp /tmp/zpool.cache /mnt/boot/zfs/
cat << EOF >> /mnt/boot/loader.conf 
> zfs_load=YES 
> vfs.root.mountfrom="zfs:sys/ROOT/default" 
> EOF
cat << EOF >> /mnt/etc/rc.conf 
> zfs_enable=YES 
> EOF
cat << EOF > /mnt/etc/fstab 
> # Device Mountpoint FStype Options Dump Pass# 
> /dev/gpt/swap0 none swap sw 0 0 
> /dev/gpt/swap1 none swap sw 0 0 
> EOF
zfs umount -a
zfs set mountpoint=legacy sys/ROOT/default
exit
shutdown -h now
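
After booting the new system, a few quick checks can confirm everything came up as intended (illustrative; run as root):
Code:
# zpool status sys
# zpool get bootfs sys
# zfs get mountpoint,mounted sys/ROOT/default
# df /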
 
Yes, it's expected that the cache file is deleted when the pool is exported. You have to copy the file before the pool is exported.

I would strongly advise against using mountpoint=legacy on the root filesystem; set it to / instead. It makes crash/disaster recovery so much easier when you can mount the whole pool in one operation under, let's say, /mnt:

# zpool import -R /mnt sys

You of course have to remember to use the -R /mnt flag every time you import the pool manually, or you will have the filesystems mounted over your recovery system.
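
For contrast, if the root filesystem is left at mountpoint=legacy, the import will not mount it for you; you then have to mount it by hand (a sketch):

# zpool import -R /mnt sys
# mount -t zfs sys/ROOT/default /mnt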
 
Thank you @kpa,

Yes, mountpoint=legacy is in my script, though I did not know why until you explained it.

As for the -R option, it is less obvious to me, as I am a newbie. I will try it later, when it is easier for me.
 