ZFS: How to mount a ZFS partition?

I run a 3 TB and a 1 TB ZFS file system on a machine with 3 GB of RAM, and have never had a memory problem. This is on a 32-bit machine. There are a few kernel parameters that I set: vm.kmem_size and vm.kmem_size_max (both 512M), and a few ZFS ones: vfs.zfs.arc_max=64M and vfs.zfs.vdev.cache.size=8M. Don't ask me why; that would require a half-hour archaeology session. I've never done ZFS performance tuning on this machine, since it is already faster than I need.
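For reference, on FreeBSD these are boot-time loader tunables, so a setup like the one described above would go in /boot/loader.conf along these lines (just a sketch using the values from this post, not a general recommendation; tune for your own hardware):
Code:
# /boot/loader.conf -- example values from the post above, adjust for your own box
vm.kmem_size="512M"
vm.kmem_size_max="512M"
vfs.zfs.arc_max="64M"
vfs.zfs.vdev.cache.size="8M"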
 
You should be able to get to your ZFS root this way:

# Boot into the Live CD.
# Run zpool import to get the name of the zpool (probably zroot):
zpool import
# Create a mountpoint for the zpool:
mkdir -p /tmp/zroot
# Import the zpool:
zpool import -fR /tmp/zroot zroot
# Create a mountpoint for the ZFS /:
mkdir /tmp/root
# Mount /:
mount -t zfs zroot/ROOT/default /tmp/root

# The directories will now be available under /tmp/root - make changes or save your stuff as needed.
# Export the zpool:
zpool export zroot
# Boot normally.
Thanks!
 
I had a very similar problem. I searched how to do it and solved it as follows; it's easy:
Code:
root@rescue-bsd:~ # mkdir /root/a
root@rescue-bsd:~ # mkdir /root/b
/root/a is the directory where we want to mount the ZFS / filesystem
/root/b is for the /home dataset
root@rescue-bsd:~ # zpool import -d /dev
   pool: zroot
     id: 2714116618066408679
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

    zroot       ONLINE
      ada0p4    ONLINE
root@rescue-bsd:~ # zpool import -f zroot
cannot mount '/home': failed to create mountpoint
cannot mount '/zroot': failed to create mountpoint

root@rescue-bsd:~ # zfs list
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               3.84G  1.75T    96K  /zroot
zroot/ROOT          3.55G  1.75T    96K  none
zroot/ROOT/default  3.55G   192G  3.55G  /
zroot/home           294M  1.63T   294M  /home

root@rescue-bsd:~ # zfs set mountpoint=/root/a zroot/ROOT/default
root@rescue-bsd:~ # zfs set mountpoint=/root/b zroot/home
root@rescue-bsd:~ # zfs mount zroot/ROOT/default
root@rescue-bsd:~ # zfs mount zroot/home

root@rescue-bsd:~ # cd /root/a
root@rescue-bsd:~/a # ls
.cshrc        boot        home        net        sys
.profile    boot.config    lib        proc        tmp
.rnd        dev        libexec        rescue        usr
COPYRIGHT    entropy        media        root        var
bin        etc        mnt        sbin        zroot

and we have our filesystem mounted. After making changes:
Code:
root@rescue-bsd:~ # zpool export zroot

Then restore the original mountpoints:
Code:
root@rescue-bsd:~ # zpool import -f zroot
root@rescue-bsd:~ # zfs set mountpoint=/ zroot/ROOT/default
root@rescue-bsd:~ # zfs set mountpoint=/home zroot/home
Restart and boot from the hard drive.
 
(Solution below)

I'm in the same situation as the OP: booted into an OVH rescue image, which isn't playing nice. I expected a simple rescue system, but df shows a strange Frankenstein of the temporary rescue system (e.g. /etc) and what appears to be the server's file system (e.g. /var)... but the server's /etc is what I need to access. (It seems strange that the server's /usr/local and /var directories would be mounted, but not /etc?)

Code:
root@rescue-bsd:~ # df
Filesystem                                       1K-blocks       Used      Avail Capacity  Mounted on
91.121.XXX.XXX:/home/pub/freebsd11-amd64-rescue 1848410796  347933072 1406560796    20%    /
devfs                                                    1          1          0   100%    /dev
/dev/md0                                             29596       2996      24236    11%    /etc
/dev/md1                                              7132          8       6556     0%    /mnt
/dev/md2                                            239516        296     220060     0%    /opt
/dev/md3                                              7132         60       6504     1%    /root
procfs                                                   4          4          0   100%    /proc
<above>:/opt/local                              1848650312 1848411092     220060   100%    /usr/local
<above>:/opt/var                                1848650312 1848411092     220060   100%    /var
/dev/md4                                             63004         28      57936     0%    /tmp
/var/empty                                      1848650312 1848411092     220060   100%    /opt/ovh

I tried importing zroot to an alternate directory, and it appears to succeed, but the directory contents at the mount point are empty.

Code:
root@rescue-bsd:~ # zpool import
   pool: zroot
     id: 3515742166604024554
  state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        zroot       ONLINE
          ada0p4    ONLINE

root@rescue-bsd:~ # mkdir /tmp/mnt
root@rescue-bsd:~ # zpool import -R /tmp/mnt 3515742166604024554
root@rescue-bsd:~ # ls /tmp/mnt/
zroot
root@rescue-bsd:~ # ls /tmp/mnt/zroot/            <--- *** EMPTY ***
root@rescue-bsd:~ # zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot  1.80T  48.8G  1.75T         -     1%     2%  1.00x  ONLINE  /tmp/mnt

Then, after trying some other things, this starts happening:

Code:
root@rescue-bsd:/tmp # zpool list
internal error: failed to initialize ZFS library
root@rescue-bsd:/tmp # zpool status
internal error: failed to initialize ZFS library
root@rescue-bsd:/tmp # w
w: invalid core

I suspect that it's because OVH is using a rather old FreeBSD version, which may be buggy when accessing a pool created with a more recent version (I'm running 11.2-RELEASE).

FreeBSD rescue-bsd.ovh.net 11.0-RELEASE-p1 FreeBSD 11.0-RELEASE-p1 #0 r306420: Thu Sep 29 01:43:23 UTC 2016 root@releng2.nyi.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64

I have no idea how to get access to /etc on this machine, so it's currently unbootable. Very frustrating!!!


Solution: I managed to get access by doing the following with the 10.3 rescue image:

Code:
# zpool import -R /tmp/mnt 3515742166604024554
# umount /tmp/mnt
# mount -t zfs zroot/ROOT/default /tmp/mnt

There's probably a cleaner way to do it, but it worked for me.
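A slightly cleaner variant (just a sketch, assuming the pool is still named zroot) is to pass -N so nothing gets mounted during the import, which avoids the umount step:
Code:
# Import under an alternate root without mounting any datasets
zpool import -N -R /tmp/mnt 3515742166604024554
# Mount only the root dataset where we want it
mount -t zfs zroot/ROOT/default /tmp/mnt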
 
Because this thread comes up high in the search results, here are some more hints on how to mount an external ZFS pool in case of a system crash or other data recovery.

If your ZFS volume is encrypted, you need to geli attach /dev/daxxyy first.
Then you will be able to import the ZFS pool with zpool import -af -R /mnt/zfs, where /mnt/zfs is the local mount point for your pool. If you do not want the datasets mounted, use the -N switch.

You can mount and unmount all pool datasets with zfs mount -a / zfs umount -a. You can mount a given dataset with zfs mount pool/dataset.

If you are using the default layout created by the FreeBSD installer, you can change the mount point of zroot/ROOT/default with zfs set mountpoint=/mnt/zfs/ROOT zroot/ROOT/default, and then mount that dataset by hand with zfs mount zroot/ROOT/default; because its parent has its mountpoint set to none, it will not be mounted with the rest of the datasets.
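Putting these hints together with the earlier posts, a rescue session might look roughly like this. This is only a sketch: it assumes the default installer layout, a pool named zroot, /mnt/zfs as the temporary mount point, and a hypothetical GELI provider da0p4 (skip the geli step if the disk is not encrypted).
Code:
# Only if the disk is GELI-encrypted; da0p4 is a placeholder device name
geli attach /dev/da0p4
# Create the temporary mount point and import the pool under it;
# -N skips mounting any datasets for now
mkdir -p /mnt/zfs
zpool import -f -N -R /mnt/zfs zroot
# The root dataset does not mount automatically, so mount it by hand
mount -t zfs zroot/ROOT/default /mnt/zfs
# Mount the remaining datasets; their mountpoints land below /mnt/zfs
zfs mount -a
# ... make your changes, then export, which unmounts everything in the pool
zpool export zroot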
 

Thanks for this post. zfs mount zroot/ROOT/default did print a warning about POSIX {user}, but I ignored it and it worked for me!
 