Clone ZFS-only FS with raidz1 as virtual machine

I am trying to write a wiki article about recovering or cloning a full ZFS snapshot onto a virtual machine over sshd. So I want to clone a physical production server as a virtual machine (to use as a development server). But I cannot boot the destination server, probably because the IDE drives are named differently from what the legacy part of the zpool snapshot expects. How can I fix that?

Setup.

The physical server runs root on ZFS v14 (FreeBSD 8.1).
It is fully ZFS based, with a three-disk raidz1 pool.

Code:
#gpart show
=>       34  976773101  ad4  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)

=>       34  976773101  ad5  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)

=>       34  976773101  ad6  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)

Note: 3x IDE drives, named ad4, ad5 and ad6.

The zpool has a legacy-mounted root and these child datasets:

Code:
#zfs list

NAME                        USED  AVAIL  REFER  MOUNTPOINT
zroot                      12.3G   885G   521M  legacy
zroot/tmp                  4.23G   885G  4.23G  /tmp
zroot/usr                  5.66G   885G  1.56G  /usr
zroot/usr/home             2.52G   885G  2.51G  /usr/home
zroot/usr/ports            1.27G   885G  1.04G  /usr/ports
zroot/usr/ports/distfiles   241M   885G   241M  /usr/ports/distfiles
zroot/usr/ports/packages   24.0K   885G  24.0K  /usr/ports/packages
zroot/usr/src               315M   885G   315M  /usr/src
zroot/var                  1.91G   885G   235K  /var
zroot/var/crash            26.0K   885G  26.0K  /var/crash
zroot/var/db               1.91G   885G  1.91G  /var/db
zroot/var/db/pkg           2.43M   885G  2.43M  /var/db/pkg
zroot/var/empty            24.0K   885G  24.0K  /var/empty
zroot/var/log               278K   885G   226K  /var/log
zroot/var/mail             99.2K   885G  69.3K  /var/mail
zroot/var/run               114K   885G  79.3K  /var/run
zroot/var/tmp              33.3K   885G  33.3K  /var/tmp

So I made a snapshot of the source system:
Code:
zfs snapshot -r zroot@20110319
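
To double-check that the recursive snapshot really exists for every dataset before sending it, a listing like this helps (optional):
Code:
zfs list -t snapshot -r zroot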

On the destination machine, I set up a virtual machine with 3 IDE disks; unfortunately they come up as ad0, ad1 and ad3 (!).
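
For reference, a quick way to see what the guest kernel actually calls the disks (from the Fixit shell, once it is up) is:
Code:
sysctl kern.disks
ls /dev/ad*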

I booted from the FreeBSD DVD but exited to the loader prompt (option 6) and ran these commands:

Code:
load ahci 
load geom_mirror 
load zfs 
boot

Now the kernel is aware of ZFS. Then I entered Fixit mode from sysinstall.

I partitioned the destination system:

Code:
#gpart show 
=>       34 976852925 da0 GPT (466G) 
         34       128   1 freebsd-boot (64K) 
        162  16777216   2 freebsd-swap (8.0G) 
   16777378 970075581   3 freebsd-zfs  (458G)
...
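
For completeness, the partitioning behind that layout would look roughly like this (a sketch only; the GPT labels disk0, disk1 and disk2 are assumptions chosen to match the zpool create command below):
Code:
gpart create -s gpt da0
gpart add -b 34 -s 128 -t freebsd-boot da0
gpart add -s 16777216 -t freebsd-swap da0
gpart add -t freebsd-zfs -l disk0 da0
And the same again for da1 (label disk1) and da3 (label disk2).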

I wrote the boot code:

Code:
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da0
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da1
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da3

And I set up the raidz1 pool:

Code:
mkdir /boot/zfs 
zpool create zroot raidz1 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2 
zpool set bootfs=zroot zroot
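
At this point a quick check that the pool came up on the right vdevs and that bootfs is set does not hurt:
Code:
zpool status zroot
zpool get bootfs zroot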

Afterwards, I set up sshd in Fixit mode, and it ran.
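
Roughly, that means configuring the network, creating host keys, allowing root logins and starting the daemon. As a sketch only; the interface name em0 and the exact paths are assumptions and depend on the Fixit environment:
Code:
Fixit# ifconfig em0 inet 192.168.1.8 netmask 255.255.255.0 up
Fixit# mkdir -p /etc/ssh
Fixit# ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ''
Fixit# echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
Fixit# passwd root
Fixit# /usr/sbin/sshd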

From the source server, I sent the snapshot to the destination server (192.168.1.8) with:

Code:
zfs send -R zroot@20110319 | ssh root@192.168.1.8 'zfs recv -vFd zroot'

Note: I could also dump the snapshot to a file and copy that over instead, but this way it is a single step.
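
If you prefer the file-based variant, it would look something like this (the path is just an example):
Code:
zfs send -R zroot@20110319 > /backup/zroot-20110319.zfs
scp /backup/zroot-20110319.zfs root@192.168.1.8:/backup/
zfs recv -vFd zroot < /backup/zroot-20110319.zfs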

Good so far. Everything transferred, and I can see the zpool on the destination server.

I ran a few commands to prepare the destination server for booting:

Code:
zfs unmount -a
zfs set mountpoint=legacy zroot 
zfs set mountpoint=/tmp zroot/tmp 
zfs set mountpoint=/usr zroot/usr 
zfs set mountpoint=/var zroot/var

Then I rebooted the destination system.

The system seems to boot: the kernel loads, ZFS and GEOM come up, but then it halts with a ROOT MOUNT ERROR.

The loader variable is:
Code:
vfs.root.mountfrom=zfs:zroot

Using ? at the mountroot prompt I do see the GEOM-managed disks: ad0, ad1, ad3 (ad0p1, ad0p2, ad0p3, ...).
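
For reference, on a root-on-ZFS install that variable normally comes from /boot/loader.conf on the pool, next to the line that loads the ZFS module:
Code:
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"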

So what did I miss in my setup to make it boot?

Do I need to tweak the setup for the new hardware, or do I need to change the legacy part?

Thanks.
 
I think that you may have to recreate your zpool.cache. Boot into Fixit mode and:
Code:
Fixit# mkdir /boot/zfs
Fixit# zpool import -f zroot
Fixit# zpool export -f zroot
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
Then reboot and see if this works.
 
The zpool.cache was the problem

Indeed, gkontos, the problem was zpool.cache.

Your answer put me on the road to the final solution.

Your commands do not work as-is, though, because the root filesystem is stored in legacy, and I got some
Code:
/libexec/ld-elf.so.1: Shared object ... not found
errors.

I did this to solve it:

Code:
Fixit# export LD_LIBRARY_PATH=/dist/lib

This gets rid of the 'shared object not found' errors. Why isn't that exported by default in Fixit?

Code:
Fixit# mkdir /boot/zfs

This gives the NEW zpool.cache file a place to be written when the pool is imported.

Code:
Fixit# mkdir -p /mnt3/zroot

I create a new mount point to keep a copy of the new zpool.cache file, because it would be gone after the zpool export command.

I will store that file as /mnt3/zpool.cache; the subdirectory zroot is what I use to mount the legacy root from the snapshot.

Code:
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /mnt3

So I store the zpool.cache in my new location.

Code:
Fixit# mount -t zfs zroot /mnt3/zroot

Because /, /etc and /boot live in the legacy dataset, I need to mount it the 'old' way.

Code:
Fixit# cp /mnt3/zpool.cache /mnt3/zroot/boot/zfs

This overwrites my old zpool.cache with the new one.

Code:
Fixit# umount /mnt3/zroot
Fixit# zpool export -f zroot

Unmount those things to be sure (maybe not needed).

Code:
Fixit# zpool import -f zroot

Import the zpool again, and now fix the mountpoints again! Otherwise there will be a 'no zpools available' error after reboot.

Code:
Fixit# zfs set mountpoint=legacy zroot 
Fixit# zfs set mountpoint=/tmp zroot/tmp 
Fixit# zfs set mountpoint=/usr zroot/usr 
Fixit# zfs set mountpoint=/var zroot/var

The loader is happy now. We unmount the datasets.

Code:
Fixit# zfs unmount -a

Exit Fixit.

It works now.

Well, that zpool.cache file is pretty dangerous if you need to recover a snapshot onto different hardware!
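
A quick way to see what the cache file actually recorded for the pool (and therefore which device paths it expects) is zdb; purely an optional check:
Code:
zdb -C zroot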

Thanks for the help.
 
I can see the issue with the "legacy" mount point. However, something like this should work:
Code:
Fixit# kldload /mnt2/boot/kernel/opensolaris.ko
Fixit# kldload /mnt2/boot/kernel/zfs.ko
Fixit# mkdir /boot/zfs
Fixit# export LD_LIBRARY_PATH=/mnt2/lib
Fixit# zpool import -fR /zroot zroot
Fixit# zfs set mountpoint=/zroot zroot
Fixit# zpool export -f zroot
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
Fixit# zfs unmount -a
Fixit# zfs set mountpoint=legacy zroot
Fixit# zfs set mountpoint=/tmp zroot/tmp
Fixit# zfs set mountpoint=/usr zroot/usr 
Fixit# zfs set mountpoint=/var zroot/var
 
gkontos said:
I can see the issue with the "legacy" mount point. However, something like this should work:
Code:
Fixit# kldload /mnt2/boot/kernel/opensolaris.ko
Fixit# kldload /mnt2/boot/kernel/zfs.ko
Fixit# mkdir /boot/zfs
Fixit# export LD_LIBRARY_PATH=/mnt2/lib
Fixit# zpool import -fR /zroot zroot
Fixit# zfs set mountpoint=/zroot zroot
Fixit# zpool export -f zroot
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
Fixit# zfs unmount -a
Fixit# zfs set mountpoint=legacy zroot
Fixit# zfs set mountpoint=/tmp zroot/tmp
Fixit# zfs set mountpoint=/usr zroot/usr 
Fixit# zfs set mountpoint=/var zroot/var

As I said in my previous post, this does not work because there is no /zroot directory.

I don't use the kldload lines, because I load those modules at the loader prompt (option 6):
Code:
load zfs

To access the legacy root you need to use the mount command, not a ZFS command like zfs mount.

Code:
mount -t zfs zroot /mnt3/zroot

My previous post was the complete solution to this issue. But you did put me on the right track. Thanks.
 
@Jurgen, I am sorry if my answer offended you. I didn't want to play smart, believe me.
However, because cloning a full ZFS system is something that interests me a lot, I recreated the scenario, trying to find an easier and faster way. Here is what I did, and it worked like a charm:
Download the mfsBSD bootable ISO from Martin Matuska's site. Boot from it and create the ZFS pool as usual:
Code:
gpart create -s gpt ad0
gpart create -s gpt ad1
gpart add -b 34 -s 64k -t freebsd-boot ad0
gpart add -t freebsd-zfs -l disk0 ad0
gpart add -b 34 -s 64k -t freebsd-boot ad1
gpart add -t freebsd-zfs -l disk1 ad1
In this scenario swap is located within the ZFS pool, which is why I did not create a swap partition.
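If you do want swap inside the pool, the usual approach from the root-on-ZFS guides of that era is a swap zvol, something like this (the 4G size is arbitrary):
Code:
zfs create -V 4G zroot/swap
zfs set org.freebsd:swap=on zroot/swap
zfs set checksum=off zroot/swap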
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad1
zpool create zroot mirror /dev/gpt/disk0 /dev/gpt/disk1
zpool set bootfs=zroot zroot
Now send the complete snapshot from the source machine:
Code:
zfs send -R zroot@bck | ssh root@10.10.10.141 zfs recv -Fdv zroot
That should take a while. After that, the last steps on the target machine:
Code:
zfs destroy -r zroot@bck
zfs set mountpoint=/zroot zroot
zpool export -f zroot
zpool import -f zroot
cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
zfs umount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr 
zfs set mountpoint=/var zroot/var
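Before rebooting the target it is worth a last look at the bootfs property and the mountpoints:
Code:
zpool get bootfs zroot
zfs get -r mountpoint zroot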
That's what I did and it worked.
Regards,
 
Jurgen said:
Indeed, gkontos, the problem was zpool.cache.

Your answer put me on the road to the final solution.

Your commands do not work as-is, though, because the root filesystem is stored in legacy, and I got some
Code:
/libexec/ld-elf.so.1: Shared object ... not found
errors.
There is a very quick and easy way to fix these errors: exit back to the sysinstall menu with
Code:
# exit
and then go back into Fixit.
 