Solved How to mount a zfs partition?

balanga

Son of Beastie

Reaction score: 218
Messages: 4,028

Before anyone jumps on me I did read Thread how-to-mount-a-zfs-partition.61112 but it did not shed any light on my situation...

I have a disk with a single ZFS partition which was in a system which ran pfSense. Now I've attached it to a system running FreeBSD 11.1 and I can't work out how to mount it. How do I find out what zpool exists on the disk?
 

ralphbsz

Son of Beastie

Reaction score: 2,401
Messages: 3,280

zpool import. In most cases this will immediately import and mount it. If not, follow it with zfs mount. Read the man pages for those commands.
 
OP

balanga

Son of Beastie

Reaction score: 218
Messages: 4,028

Neither command does anything... Presumably I need a pool name, but I don't have one. I just have a disk with a 1TB freebsd-zfs partition and can't figure out how to mount it.
 

ShelLuser

Son of Beastie

Reaction score: 2,111
Messages: 3,792

Well, I can't be bothered to read up on that other thread, but ZFS happens to be my favorite filesystem. So here goes...

Just using zpool import (without any arguments) will make the system check the currently attached storage media, and if it finds a valid ZFS signature the pool name will be listed. This command does not automatically import or mount anything; all it does is detect pools.
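For example, discovery alone might look something like this (a sketch; the pool name zroot and the disk device are placeholders for whatever your system actually reports):

```shell
# zpool import
   pool: zroot
     id: 1234567890123456789
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        zroot       ONLINE
          da0p1     ONLINE
```

At this point nothing has been imported or mounted; you have only learned the name.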

After you've got a name you can proceed to the actual import. Let's assume our pool is called zroot (a common default name). There are two important things to keep in mind. First: a pool always contains a filesystem; in other words, the pool is itself a filesystem, so it will need a mount point. Second: ZFS is an intelligent filesystem that keeps track of its history. Therefore you may need to force the import, because it will otherwise detect that the pool was last used in a different environment.

Because /mnt is a commonly used mount point, this leads us to: # zpool import -fR /mnt zroot.
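So, step by step (still assuming the pool is called zroot):

```shell
# zpool import                   # discover the pool name first
# zpool import -fR /mnt zroot    # force-import under the temporary root /mnt
# zpool get altroot zroot        # should now report /mnt
```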

After that you should be able to access your filesystem(s) in /mnt. However... there is more to this story:
Code:
peter@zefiris:/home/peter $ zfs get mountpoint,canmount zroot
NAME   PROPERTY    VALUE       SOURCE
zroot  mountpoint  /           local
zroot  canmount    on          default
A normal filesystem doesn't know where in the hierarchy it should be mounted; that is determined by /etc/fstab (or an individual mount command). ZFS, on the other hand, doesn't use /etc/fstab at all: each filesystem keeps track of its proper place on its own, through the properties listed above. That is the main reason we used -R in the zpool import command: to make sure the filesystem(s) become accessible under a different path than their normal one.

Next stop, very important when you have a ZFS pool that was set up automatically by the installer, is the canmount property. The FreeBSD installer sets this to off for your root filesystems, and as a result they will not automatically mount after you import the pool. The reason for doing that is to cater to sysutils/beadm, a decision which I personally consider ridiculous and a showcase of bad design. But that's off-topic here.

But as a result it is possible that you won't be able to access your filesystem(s) after successfully importing a pool. You can check that they're still available with zfs list, which should show all the filesystems in the currently imported pool(s).
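zfs list can show the relevant properties in one go; something like this (column selection per zfs(8), pool name assumed):

```shell
# zfs list -o name,canmount,mountpoint -r zroot
```

Any filesystem showing canmount off is one you'll have to mount by hand.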

If you need to mount those "hidden" filesystems just use # zfs mount zroot (for example), or, to give a proper example of a default setup, # zfs mount zroot/ROOT/default. Just list the available filesystems and you'll soon see what you should use.

You don't have to worry about specifying a mount point, because all filesystems will be mounted under the virtual root (which we set to /mnt in the example above).

And that's how you mount a ZFS filesystem.

It is noteworthy that you can also use the normal mount command, but I would recommend against it. You'd need to know the ZFS filesystem name before you can do so, meaning you'd be using the zfs command anyway; it seems pointless to then suddenly switch to mount.
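For completeness, the mount(8) route would look roughly like this (the filesystem name zroot/ROOT/default is an assumption; you'd have to find the real one with zfs list first, which is exactly my point):

```shell
# mount -t zfs zroot/ROOT/default /mnt
```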

Hope this can help.
 
OP

balanga

Son of Beastie

Reaction score: 218
Messages: 4,028

Unfortunately not...
and if it finds a valid ZFS signature then the pool name will be listed.
and what if it doesn't?

I guess I'll put it back in my pfSense box and see if it is accessible there. It's been there for about three years and I can't remember what it was used for...
 

ShelLuser

Son of Beastie

Reaction score: 2,111
Messages: 3,792

and what if it doesn't?
Then my conclusion would be that it doesn't contain any ZFS pools, or that the pool got corrupted somehow.

What does # file -s /dev/<device> tell you?

Example:
Code:
root@zefiris:/home/peter # gpart show ada0
=>       40  312450656  ada0  GPT  (149G)
         40        256     1  freebsd-boot  (128K)
        296  312450400     2  freebsd-zfs  (149G)

root@zefiris:/home/peter # file -s /dev/ada0p2
/dev/ada0p2: data
Although file doesn't recognize the ZFS filesystem it does recognize others, and this also tells us that something is on there. Just because your partition type is freebsd-zfs doesn't mean it actually contains a ZFS pool. It should, but I could just as easily run newfs on it.

So I'd also try this trick to see what it says.

(edit)

pfSense? Then my other theory is that the filesystem could also be encrypted. That would explain why no valid ZFS pools are found, and it would also explain why you might get the same result as mine above: file mentioning data.
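If encryption is a suspect, geli(8) can check for its metadata directly (device name assumed):

```shell
# geli dump /dev/da0p1     # prints GELI metadata if the partition is an
                           # encrypted provider, errors out otherwise
```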
 
OP

balanga

Son of Beastie

Reaction score: 218
Messages: 4,028

Then my conclusion would be that it doesn't contain any ZFS pools, or that the pool got corrupted somehow.

What does # file -s /dev/<device> tell you?

Code:
/dev/da0p1: Unix Fast File system [v2] (little-endian) last mounted on /mnt/data, last written at Mon Mar 20 09:00:40 2017, clean flag 1, readonly flag 0, number of blocks 244190636, number of data blocks 236521838, number of cylinder groups 1524, block size 32768, fragment size 4096, average file size 16384, average number of files in dir 64, pending blocks to free 0, pending inodes to free 0, system-wide uuid 0, minimum percentage of free blocks 8, TIME optimization
 

ShelLuser

Son of Beastie

Reaction score: 2,111
Messages: 3,792

Well, there we have it. It's not ZFS but UFS. Solution: # mount /dev/da0p1 /mnt and you should be able to access your data easily.
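If you want to be careful with a disk you haven't touched in years, mounting read-only first costs nothing:

```shell
# mount -r /dev/da0p1 /mnt   # read-only; remount read-write once you've had a look
```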
 
OP

balanga

Son of Beastie

Reaction score: 218
Messages: 4,028

Well done! That worked. I would never have figured that out myself.
 