Change zroot from legacy to /

I set up my new FreeBSD 9.2 amd64 server with ZFS root. At the end of the guide I followed, I ran zfs set mountpoint=legacy zroot

So when I run zfs list I get:
Code:
NAME              MOUNTPOINT
zroot               legacy

The problem I have is that I want to use zfs send/recv to back up my root pool to a backup pool. Unfortunately, this is what happens when I run:
Code:
zfs snapshot -r zroot@TEST
zfs create backup/test
zfs send -R zroot@TEST | zfs receive -Fduv backup/test

It replicates the ZFS root pool to the backup pool BUT it doesn't automount the new file system that is created. Even trying to mount it with a legacy mount point doesn't seem to do what I want. I basically want to access the ENTIRE ZFS root pool under backup/test.

So the questions I have are:
  1. How do I back up the ENTIRE ZFS root pool when it is in legacy mode?
  2. Is it possible (or advisable) to run zroot as /? i.e. zfs set mountpoint=/ zroot
    Will I need to rebuild the server in non-legacy mode, or can I switch from legacy to non-legacy mode?
 
Hello, hope you all had a great Xmas!

I decided to rebuild my server with a non-legacy mountpoint for root on ZFS. My ZFS filesystems look as follows:
Code:
NAME                       MOUNTPOINT
bootdir                    /bootdir
zroot                      /
zroot/tmp                  /tmp
zroot/usr                  /usr
zroot/usr/home             /usr/home
zroot/usr/ports            /usr/ports
zroot/usr/ports/distfiles  /usr/ports/distfiles
zroot/usr/ports/packages   /usr/ports/packages
zroot/var                  /var
zroot/var/crash            /var/crash
zroot/var/db               /var/db
zroot/var/db/pkg           /var/db/pkg
zroot/var/empty            /var/empty
zroot/var/log              /var/log
zroot/var/mail             /var/mail
zroot/var/run              /var/run
zroot/var/tmp              /var/tmp
zstore                     /zstore
zstore/backups             /zstore/backups

bootdir = bootable USB pool
zroot = root on ZFS pool
zstore = backup pool

I have the most bizarre issue when trying to back up my ENTIRE zroot pool to the backup pool. I am using the following commands:
Code:
zfs snapshot -r zroot@TODAY

zfs send -R zroot@TODAY | zfs receive -Fduv zstore/backups

When I do this the zfs send/recv command runs just fine. But when I run zfs list I get this error:
Code:
internal error: failed to initialize ZFS library

None of the ZFS commands work after this. To get things working again I have to boot into the Live CD option off the FreeBSD CD and destroy the ZFS backup pool named zstore.

What I think is happening is that as soon as the zroot pool is received into the backup pool zstore, it mounts all the received datasets over the live ones and messes up all the zroot datasets!

I did read this thread:

http://forums.freebsd.org/viewtopic.php?t=36844

and tried to set:
Code:
zfs set canmount=noauto zstore/backups

but this didn't help. The annoying thing is that the canmount=noauto option does not get inherited, so when zfs send/recv creates a new file system it automounts all the datasets it is receiving! Catch-22!

i.e. zfs send -R zroot@TODAY | zfs receive -Fduv zstore/NEW

I have tried various combinations for zfs recv:
-Fduv
-duv
-uv

but haven't had any luck!

So the $64000 question is:

How do you make a COMPLETE backup of the root on ZFS pool (and all its child datasets) to a backup pool successfully? I cannot use SSH for this; it must be done on the local machine. What I want is to be able to access any files backed up from zroot in /zstore/backups.

Can this be done? The funny thing is, if I do a zfs send/recv of ONLY zroot/usr@TEST to the backup pool, it's fine. But as soon as I try to send zroot@TEST (and all child datasets) I get the
internal error: failed to initialize ZFS library
error.

Any ideas? I have tried so many options that I am running out of them. And yes, I have looked through the man page and read many forum posts/blogs/etc.

How do people back up their root on ZFS pool successfully to another backup pool AND have it mounted at a different mountpoint so it is accessible? :q

Also, I'd like to avoid a legacy mountpoint!

Thank you so much and sorry for the long post!
 
The problem is that when the backup dataset receives the backup stream it retains all the properties of the source, including the mountpoint property. This causes the backup datasets to be mounted over the existing system directory hierarchy, and that is what causes the "internal error: failed to initialize ZFS library" error. One way of working around this problem is to always import the backup pool with the altroot property set to a directory, for example /mnt.

Code:
zpool import -R /mnt zstore

This will force all mountpoints of the zstore pool to be rooted under /mnt preventing them from overwriting the directories on the running system.
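As a sketch of what this looks like in practice (the exact output formatting depends on your ZFS version, so treat the comments below as illustrative, not exact):

```shell
# Import the backup pool with a temporary alternate root.
zpool import -R /mnt zstore

# The altroot setting is reflected in the reported mountpoints:
zpool get altroot zstore
# NAME    PROPERTY  VALUE  SOURCE
# zstore  altroot   /mnt   local

zfs list -o name,mountpoint -r zstore
# NAME            MOUNTPOINT
# zstore          /mnt/zstore
# zstore/backups  /mnt/zstore/backups
```

Because every mountpoint in the pool is now prefixed with /mnt, a received zroot stream whose datasets carry mountpoints like / and /usr ends up under /mnt instead of shadowing the running system.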
 
Genius @kpa! Thank you so much, that worked for me!

I did:
Code:
mkdir /backups
zpool export zstore
zpool import -R /backups zstore

Now when I run zfs send/recv of the entire zroot pool I can see the contents in /backups

Thank you SO much :beergrin (been battling with this all day)
 
No problem, I got bitten by the same thing when I was a ZFS user and it took me a while to figure that out myself.
 
The next question I have is: how do you make it so that when you reboot the machine it remembers to mount zstore under /backups?
 
Is one approach to create the pool as follows:
Code:
zpool create -R /backups zstore

Would this always mount the zstore pool in the /backups directory? I should mention that the backup pool is always on and connected to my machine (it's an internal drive), so I'll never export the pool or remove the drive from the server.
 
The altroot property is only a temporary property; it gets reset to empty on the next import. There is no automatic way to do what you want, so you have to use a script, for example, that passes the -R /path option to zpool import. Changing the mountpoints is not an option because they would be overwritten when you send the next backup to the backup pool.
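A minimal sketch of such an import script, assuming the zstore pool name and the /backups directory used earlier in this thread (adjust both to your setup):

```shell
#!/bin/sh
# /etc/rc.local -- runs at the end of multi-user startup on FreeBSD.
# Import the backup pool with an alternate root so its datasets
# mount under /backups instead of over the live system hierarchy.
/sbin/zpool import -R /backups zstore
exit 0
```

Since altroot is not persisted, this script re-applies it on every boot; the pool must have been exported beforehand for the import to succeed.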
 
That's a shame!

Can you have a script that does this when you reboot? i.e. so it's fully automated.

Otherwise I'll just have to remember to export the pool before I reboot the server!
 
I have the commands:
zpool export zxray in /etc/rc.shutdown.local and
zpool import zxray in /etc/rc.local

where zxray is my zpool name, so that my external zpool on a USB drive is exported at shutdown and imported at startup.

Don't know if there is a better way to do it, but this approach has worked for me through a month's worth of daily system shutdown/startup sequences, including during source upgrades to FreeBSD-10.0-RC2 and just today for FreeBSD-10.0-RC3.
 
Thanks @trh411, that sounds like it may solve my problem.

I had a look on my server but I couldn't find the files you mentioned in your post. Do you have to create them manually before using them?

In the shutdown script I would want to run:
Code:
zpool export zstore

But when the server boots up it must import the pool as follows:
Code:
zpool import -R /backups zstore
 
xy16644 said:
I had a look on my server but I couldn't find the files you mentioned in your post? Do you have to create them manually before using them?
Yes, you have to create them as they are not created by default during the installation. Here is my /etc/rc.local:
Code:
#!/bin/sh
# script to run at system startup

# import the zxray pool located on external USB drive da0
/sbin/zpool import zxray
exit 0
My /etc/rc.shutdown.local is identical except it does an export rather than an import of the zpool. As you stated, your import would differ based on your local needs.
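For completeness, a sketch of that export counterpart (using the zxray pool name from the posts above):

```shell
#!/bin/sh
# script to run at system shutdown

# export the zxray pool located on external USB drive da0
/sbin/zpool export zxray
exit 0
```

Exporting at shutdown also means the next boot-time import (with or without -R) starts from a cleanly exported pool.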
 