ZFS How to mount a zfs partition?

Hello,

I am in rescue mode on an OVH server, meaning my server's partition is not mounted, but it is supposed to be available to be mounted.

root@rescue:~# lsblk -f
Code:
NAME   FSTYPE     LABEL UUID                                 MOUNTPOINT
sda                                                         
├─sda4 zfs_member zroot 15046027900747129393                 
├─sda2                                                       
├─sda3 ext4       /     9a65df4f-d111-568a-a219-71a5c03edc15
└─sda1 vfat       EFI   5B01-1112

Of course,
# mount /dev/sda4 /mymount
doesn't work. I get "mount: unknown filesystem type 'zfs_member'".

How do I mount a ZFS partition so that I can read and modify some conf files? I locked myself out of this server by editing my /etc/pf.conf too quickly, and I need to fix it before I can reboot the server.

Thanks for your help.
 
If sda4 is a device that is a member (or the only member) of a zpool, it can't just be mounted. You have to import the pool, and then you can mount any filesystem that wasn't mounted automatically. But the easy way is just to import the pool with a temporary/alternate root:
Code:
zpool import -R /some/folder zroot
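
If the import succeeds, the pool's filesystems should show up under that alternate root. A quick sanity check could look like this (same example path assumed):
Code:
ls /some/folder       # the pool's filesystems should appear here
mount | grep zroot    # shows which datasets are actually mounted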
 
Why use the FreeBSD forum for questions about non-FreeBSD systems? sda is not a device name used by FreeBSD, and lsblk doesn't exist here. It might be better to use the support forums for whatever OS you're using instead.

At the very least use the Offtopic forum.
 
This is apparently a Linux system, as ShelLuser implied. Linux's ZFS is somewhat different and advice received here might not be applicable.

Therefore, you are much better off asking on a forum or mailing list for whichever flavor of Linux you're using.
 
The device in question is the hard drive of a FreeBSD server.
I hadn't realized the RESCUE mode of OVH used a Linux system. But the partition /dev/sda4 corresponds to my FreeBSD server.
 
If this were a FreeBSD system (which it isn't), then the normal answer would not be mount -t zfs ..., but zfs mount .... Huge difference. In particular, the arguments are different: the mount command wants two arguments, the block device name and the mount point (a directory). The zfs command wants the name of the filesystem; ZFS has its own mechanism for locating the block device, and the filesystem has stored in it where it wants to be mounted.
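
To illustrate the difference (the device and dataset names here are only examples):
Code:
# mount(8): two arguments, a block device and a mount point
mount /dev/ada0p2 /mnt
# zfs(8): one argument, the dataset name; the mountpoint comes from the dataset itself
zfs mount zroot/ROOT/default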

Clearly, you are now running on a Linux machine. Before you assume that a FreeBSD ZFS block device is actually readable on Linux, please learn about the compatibility of ZFS between operating systems; for example, ZFS is not always compatible between FreeBSD and Solaris. And in general, if you are moving a ZFS device between systems, learn about the import and export commands of ZFS.
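
In sketch form (the pool name zroot is only an example):
Code:
zpool export zroot   # on the old system, before moving the disk
zpool import         # on the new system: scan for and list importable pools
zpool import zroot   # then import the pool by name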

I have no idea how ZFS works on Linux. I don't even know whether ZFS is supported in all Linux distributions, nor do I know which Linux distribution you are using (just looked it up: OVH is a hosting provider, and they offer roughly a dozen different Linux distributions). You have to remember: Linux is really just a kernel; it is then configured, and packaged with userland utilities (such as the mount or lsblk commands) by various distributors, and those distributions can have remarkably little in common.

The suggestion made above is very sensible: Find out what Linux distribution you are running (say for example "Slackware release 12"). Then find a Slackware mailing list or discussion forum, where you can get Slackware-specific help.
 
I managed to load the FreeBSD rescue mode (it didn't work earlier because of a problem at OVH, but they fixed it). I now have a rescue FreeBSD server running, and somewhere the drive of my old FreeBSD server must be accessible. I think it corresponds to one of the

/dev/ada0 /dev/ada0p1 /dev/ada0p2 /dev/ada0p3 /dev/ada0p4

files, but I don't know how to mount them. Actually, I'm not even sure these are the real names of the files; sometimes the names appear with a % at the end.
I'm going to search more about that on the internet.
Just wanted to let you know that this thread is now 100% FreeBSD.
 
All the required hints / instructions on how to mount a ZFS filesystem have already been given above by the way. And no, not what getopt said; it would help if people would only answer if they actually knew the solution to a problem.

Maybe also useful: sysctl kern.disks to easily identify your storage devices (not required here, I guess), as well as gpart show ada0, which shows the slices on disk ada0. That said, you don't strictly need these commands when handling ZFS.
 
A: Where did you get that list of block devices from? Why do some have a % at the end? What do you mean by "not the real names"? These are definitely real names of typical block devices: The raw disk /dev/ada0, and four partitions on that disk, named ada0p1 through ...p4.

Do you understand the distinction between a real disk, the block device that represents that real disk, partitions on that disk, the file names of the block devices for the disk and its partitions, the file system stored on those partitions, the mount point of that file system, and the files in that file system? Please think through these concepts; otherwise we can't help you when you say "the real name", because we don't know what you are actually referring to.

B: Which one of those partitions is your ZFS partition? My hunch is ada0p4 (just a guess from the Linux numbering: sda4 -> ada0p4). Verify that with gpart. This would be a really good time to read "man gpart".

C: Did you or did you not export the ZFS pool when running it on Linux? Have you verified whether Linux and FreeBSD ZFS pools and file systems are compatible?

D: If you get past these questions, then zpool import should work, followed by zfs mount. I would propose that you first read the man pages for the zfs and zpool commands (they are long, but full of very useful information that explains concepts, and well written).
 
A: Where did you get that list of block devices from? Why do some have a % at the end?
When typing `mount /dev/` and pressing the Tab key to obtain filename suggestions, this is what appears:

Code:
acpi%       atkbd0%     cuau0%      devstat%    hpet0%      log@        mdctl%      pci%        stdout@     ttyv0%      ttyv8%      ugen3.1@    zfs%
ada0%       audit%      cuau0.init% diskid/     io%         lpt0%       mem%        ppi0%       sysmouse%   ttyv1%      ttyv9%      ugen4.1@
ada0p1%     auditpipe%  cuau0.lock% fd/         kbd0@       lpt0.ctl%   midistat%   pts/        ttyu0%      ttyv2%      ttyva%      ugen4.2@
ada0p2%     bpf%        cuau1%      fido%       kbd1@       md0%        msdosfs/    random%     ttyu0.init% ttyv3%      ttyvb%      urandom@
ada0p3%     bpf0@       cuau1.init% full%       kbdmux0%    md1%        netmap%     reroot/     ttyu0.lock% ttyv4%      ufssuspend% usb/  
ada0p4%     console%    cuau1.lock% geom.ctl%   klog%       md2%        nfslock%    sndstat%    ttyu1%      ttyv5%      ugen0.1@    usbctl%
apm%        consolectl% devctl%     gpt/        kmem%       md3%        null%       stderr@     ttyu1.init% ttyv6%      ugen1.1@    xpt0%  
apmctl%     ctty%       devctl2%    gptid/      led/        md4%        pass0%      stdin@      ttyu1.lock% ttyv7%      ugen2.1@    zero%


What do you mean by "not the real names"?

Well, I thought something must be wrong with these names (/dev/ada0, /dev/ada0p1, etc.), because when I run

# mount -t zfs /dev/ada0 /actually/existing/directory/
I get:

Code:
mount: /dev/ada0: No such file or directory

and running
# mount -t zfs /dev/ada0p1 /actually/existing/directory/
leads to

Code:
mount: /dev/ada0p1: No such file or directory

This is pretty disconcerting, and it led me to think that maybe /dev/ada0 is not the "real name" of the file, and that maybe I must add the % at the end.
But actually I get the same result when adding the %.
I now think the % is just decoration added by the tab completion. I'm still clueless as to why `mount` thinks there's no such file as /dev/ada0p1, when there actually is.


B: Which one of those partitions is your ZFS partition? My hunch is ada0p4 (just a guess from the Linux numbering: sda4 -> ada0p4). Verify that with gpart. This would be a really good time to read "man gpart".

Well, I don't know why I talked about ZFS partitions; it's an entire hard drive that I'm trying to access. The OVH installer installed FreeBSD on this drive, which has four partitions (boot, root, swap and home). I know this server uses ZFS: when I installed the server at OVH, you had the choice between ZFS and something else, and I chose ZFS. I probably shouldn't have chosen ZFS, since I didn't know anything about it at the time. I thought I'd have some time to learn about it, but I never found the time until today.

# gpart show /dev/ada0p4
Code:
gpart: No such geom: /dev/ada0p4.

# gpart show /dev/ada0
Code:
=>       40  976773088  ada0  GPT  (466G)
         40       1600     1  efi  (800K)
       1640       1024     2  freebsd-boot  (512K)
       2664       1432        - free -  (716K)
       4096    8388608     3  freebsd-swap  (4.0G)
    8392704  968380416     4  freebsd-zfs  (462G)
  976773120          8        - free -  (4.0K)

C: Did you or did you not export the ZFS pool when running it on Linux? Have you verified whether Linux and FreeBSD ZFS pools and file systems are compatible?
Nothing worked on Linux. I'm now using a FreeBSD rescue system.

D: If you get past these questions, then zpool import should work, followed by zfs mount. I would propose that you first read the man pages for the zfs and zpool commands (they are long, but full of very useful information that explains concepts, and well written).

I have read about ZFS now, especially the pages in the FreeBSD manual. This doesn't help, as it's entirely about how to create pools, datasets, etc. There is nothing about how to read the content of a hard drive that uses ZFS and probably already has all these things created.

Following the advice at this link: https://unix.stackexchange.com/ques...thout-clobbering-altering-current-or-ext?rq=1
I ran:

# zpool import -o readonly=on -d /dev -f -R ~/mydir 10543172897146326321
(trying with the readonly option seemed a good idea to avoid destroying anything at first)

This led to a partial result. ~/mydir got populated with:

1) a home directory corresponding to the old home directory of the server I'm trying to repair

2) a zroot directory which is empty

This gave me hope, but it was not what I wanted, since I need to access my old /etc/ directory to modify the pf.conf file.

So I'm stuck now.
Any help appreciated.
 
# mount -t zfs /dev/ada0 /actually/existing/directory/
I get:
That's not how you mount a ZFS filesystem, as I already mentioned above. I also told you that the real procedure had already been mentioned; look at the post from krawall.

So basically: first import the ZFS pool and provide a valid mountpoint (normally /mnt should do fine) using zpool(8); this should normally make the whole structure accessible. If needed, you can then use the zfs(8) command to mount / access the individual filesystems.

So: use zfs list to look for the overview of available filesystems and mount the one you need accordingly.
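
Roughly like this, assuming the pool and dataset names of a typical install (adjust to what zfs list actually shows):
Code:
zpool import -R /mnt zroot
zfs list
zfs mount zroot/ROOT/default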
 
I had tried that earlier but it didn't work.
Now, when I run:

zpool import -R /dev zroot

I get:
Code:
cannot import 'zroot': a pool with that name is already created/imported,
and no additional pools with that name were found

presumably because of the command I had run earlier:

zpool import -o readonly=on -d /dev -f -R ~/mydir 10543172897146326321
 
# zfs list
Code:
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               2.70G   443G    96K  /root/mydir/zroot
zroot/ROOT          2.43G   443G    96K  none
zroot/ROOT/default  2.43G   103G  2.43G  /root/mydir
zroot/home           277M   356G   277M  /root/mydir/home
 
How do I mount the zroot/ROOT dataset (is that what you call a dataset?) onto a directory?

I tried the following:


# mkdir /root/dir2
# zfs set mountpoint=/root/dir2 zroot/ROOT

Code:
internal error: out of memory
 
zroot/ROOT is an empty filesystem. You're looking for zroot/ROOT/default; this is the actual root filesystem. The reason why it's not mounted automatically is because of specific settings (the canmount property), but I'd advise you not to touch that.

Instead run: # zfs mount -v zroot/ROOT/default and see what happens. However... it might be safer to create a mountpoint first: # mkdir /mnt/temp, then mount the dataset on it directly: # mount -t zfs zroot/ROOT/default /mnt/temp.

The reason I'd prefer /mnt over /root is because you're most likely working on a readonly filesystem (boot cd?).

That error message looks nasty, but try rebooting and then start over. I can't say whether this error is avoidable, because I also don't know what kind of commands you have been using. In situations like these, it's best not to simply try stuff to see what happens (that is, if you value your data), but instead read up on the commands you can use, make a decision, and then use them.
 
I had tried that earlier but it didn't work.
Now, when I run:

zpool import -R /dev zroot

I get:
Code:
cannot import 'zroot': a pool with that name is already created/imported,
and no additional pools with that name were found

presumably because of the command I had run earlier:

zpool import -o readonly=on -d /dev -f -R ~/mydir 10543172897146326321

That should mean that the zpool is already imported at /root/mydir. Have you checked whether the filesystem is there?

Also, why do you use all these options? A plain zpool import will automatically find available pools and list them, so there is no need for -d /dev. You should never use force (-f) except when you know what you are doing; this is obviously not the case here, and it can lead to data loss very easily. I would advise you to just reboot, run zpool import, and if your pool is shown, try a simple zpool import -R /some/empty/dir poolname and look for your files in the specified path. Don't use /dev as a mount path, nor any other important system folder.
And stop copying random commands from the internet that you don't understand. In most cases (as in this) that will just complicate things and annoy those who are trying to help you.
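
A clean sequence could look like this (/mnt/recovery is only an example path):
Code:
zpool export zroot                    # get rid of the earlier read-only import first
mkdir -p /mnt/recovery
zpool import -R /mnt/recovery zroot
ls /mnt/recovery                      # your files should be here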
 
# zfs list
Code:
NAME                 USED  AVAIL  REFER  MOUNTPOINT
zroot               2.70G   443G    96K  /root/mydir/zroot
zroot/ROOT          2.43G   443G    96K  none
zroot/ROOT/default  2.43G   103G  2.43G  /root/mydir
zroot/home           277M   356G   277M  /root/mydir/home

This shows that you have already imported the "zroot" pool, and that all the datasets are already mounted under /root/mydir.

IOW, you're done. Just cd into /root/mydir and make whatever changes you need to make. I'm not seeing what the issue is at this point.
 
This shows that you have already imported the "zroot" pool, and that all the datasets are already mounted under /root/mydir.
Unfortunately not. It shows that the filesystems are available and that there is a designated mountpoint, but it does not prove that the filesystems have actually been mounted. They could have been, but the output doesn't show this.

See this thread for a more in-depth example of this.

The only way to determine this is by checking the mounted property, yet that isn't shown above.
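
For example, something like this would show the actual state (pool name zroot assumed):
Code:
zfs list -r -o name,canmount,mounted,mountpoint zroot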
 
It's very rare for a pool to be imported without the child datasets being mounted. Checking the output of mount will show whether they are mounted or not. If they aren't, then a simple zfs mount -a will mount them.

In fact, you have to specify -N in the import command to have it not mount the ZFS datasets automatically as part of the import process.
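
In other words, roughly (pool name and path are examples):
Code:
zpool import -N -R /mnt zroot   # import the pool, but don't mount any datasets
zfs mount -a                    # then mount everything that is mountable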
 
It's very rare for a pool to be imported without the child datasets being mounted.
Sorry to be 'that guy', Phoenix (honest), but it's not rare. If you install FreeBSD on ZFS through the installer, you'll end up with what I'd call the chaos you see above. I can see why they did this (sysutils/beadm comes to mind), but in my opinion things went the wrong way: instead of the tool following best practices, things got reversed.

My point being: zroot/ROOT as well as zroot/ROOT/default will have the canmount property set to off (or noauto) by default. This is why plenty of people end up utterly confused about why they can't access anything after using # zpool import, not even when specifying a valid mountpoint. You can check several threads on the forum for proof of that. And it is that confusion which also fuels my dislike for this whole default setup, because I think it makes no sense and only causes more confusion and problems.

This behavior is not rare: if you use the FreeBSD installer, it is the default, for reasons way beyond me.

Apologies for the smirky comments. I know... But I honestly think this setup was a grave mistake and causes more problems than comfort. Sometimes I vent a wee bit ;)
 
Sorry to be 'that guy', Phoenix (honest), but it's not rare. If you install FreeBSD on ZFS through the installer, you'll end up with what I'd call the chaos you see above. I can see why they did this (sysutils/beadm comes to mind), but in my opinion things went the wrong way: instead of the tool following best practices, things got reversed.

Ah, I did not know that. I didn't use the installer for my ZFS-on-root setups, so canmount is either set to noauto or on for all my datasets, and everything mounts correctly at import time.

The only dataset I have with canmount=off is my 1G "do-not-delete" dataset that's used for recovering from a "pool is 100% full" situation. It has a 1 GB reservation that can be shrunk to free space in the pool, thus allowing "zfs destroy" to work when the pool is at 100%. :D
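
For anyone curious, such a rescue dataset can be sketched like this (the names are examples):
Code:
zfs create -o canmount=off -o reservation=1G zroot/do-not-delete
# when the pool hits 100%, shrink the reservation to free some space:
zfs set reservation=none zroot/do-not-delete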
 
You should be able to get to your ZFS root this way:

Code:
# boot into the live CD / rescue system
# run zpool import (without arguments) to get the name of the zpool (probably zroot)
zpool import
# create a mountpoint for the zpool
mkdir -p /tmp/zroot
# import the zpool
zpool import -fR /tmp/zroot zroot
# create a mountpoint for the ZFS root filesystem
mkdir /tmp/root
# mount it
mount -t zfs zroot/ROOT/default /tmp/root

# the directories will now be available in /tmp/root; make changes or save your stuff as needed
# export the zpool when done
zpool export zroot
# boot normally
 
Thanks to everybody for trying to help. Unfortunately, this memory error seemed irremediable, so I just reinstalled the server. Fortunately, I have quick deployment scripts and backups. I guess using ZFS on a 500 GB HDD with only 2 GB of RAM is a bad idea? Unfortunately, I had to reinstall using ZFS again, because it's now the only available option for a FreeBSD 11 server at OVH.
If anybody has an idea on how to make sure I don't run into memory errors again due to my small amount of RAM, I'd be interested in reading your advice.
Thanks again to everybody.
 