Clone ZFS-only FS with raidz1 as virtual machine


Postby Jurgen » 24 Mar 2011, 21:41

I am trying to write a wiki article about recovering or cloning a full ZFS snapshot onto a virtual machine over SSH. In other words, I want to clone a physical production server as a virtual machine (to use as a development server). But I cannot boot the destination machine, probably because the IDE drives are named differently from the names recorded in the legacy part of the zpool snapshot. How can I fix that?

Setup.

The physical server runs root-on-ZFS v14 (FreeBSD 8.1). It is fully ZFS-based, with a three-disk raidz1 pool.

Code: Select all
#gpart show
=>       34  976773101  ad4  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)

=>       34  976773101  ad5  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)

=>       34  976773101  ad6  GPT  (466G)
         34        128    1  freebsd-boot  (64K)
        162   16777216    2  freebsd-swap  (8.0G)
   16777378  959995757    3  freebsd-zfs  (458G)


Note: 3 IDE drives, named ad4, ad5 and ad6.

The zpool contains a legacy-mounted root and child filesystems:

Code: Select all
#zfs list

NAME                        USED  AVAIL  REFER  MOUNTPOINT
zroot                      12.3G   885G   521M  legacy
zroot/tmp                  4.23G   885G  4.23G  /tmp
zroot/usr                  5.66G   885G  1.56G  /usr
zroot/usr/home             2.52G   885G  2.51G  /usr/home
zroot/usr/ports            1.27G   885G  1.04G  /usr/ports
zroot/usr/ports/distfiles   241M   885G   241M  /usr/ports/distfiles
zroot/usr/ports/packages   24.0K   885G  24.0K  /usr/ports/packages
zroot/usr/src               315M   885G   315M  /usr/src
zroot/var                  1.91G   885G   235K  /var
zroot/var/crash            26.0K   885G  26.0K  /var/crash
zroot/var/db               1.91G   885G  1.91G  /var/db
zroot/var/db/pkg           2.43M   885G  2.43M  /var/db/pkg
zroot/var/empty            24.0K   885G  24.0K  /var/empty
zroot/var/log               278K   885G   226K  /var/log
zroot/var/mail             99.2K   885G  69.3K  /var/mail
zroot/var/run               114K   885G  79.3K  /var/run
zroot/var/tmp              33.3K   885G  33.3K  /var/tmp


So I made a snapshot of the source system:
Code: Select all
zfs snapshot -r zroot@20110319
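
To verify that the recursive snapshot was created on every dataset:
Code: Select all
zfs list -t snapshot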


On the destination machine, I set up a virtual machine with 3 IDE disks. Unfortunately, they are named ad0, ad1 and ad3 (!)

I booted from the FreeBSD DVD, escaped to the loader prompt (option 6), and ran:

Code: Select all
load ahci
load geom_mirror
load zfs
boot


Now the kernel is aware of ZFS. I then entered Fixit mode from sysinstall.

I partitioned the destination disks:

Code: Select all
#gpart show
=>       34 976852925 da0 GPT (466G)
         34       128   1 freebsd-boot (64K)
        162  16777216   2 freebsd-swap (8.0G)
   16777378 970075581   3 freebsd-zfs  (458G)
...
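
For reference, a layout like that would be created with commands along these lines (the GPT labels disk0/disk1/disk2 are my assumption, chosen to match the zpool create below):

Code: Select all
gpart create -s gpt da0
gpart add -b 34 -s 128 -t freebsd-boot da0
gpart add -s 16777216 -t freebsd-swap da0
gpart add -t freebsd-zfs -l disk0 da0
# repeat for da1 (label disk1) and da3 (label disk2)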


Then I wrote the boot code:

Code: Select all
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da0
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da1
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da3


And I set up the raidz1 pool:

Code: Select all
mkdir /boot/zfs
zpool create zroot raidz1 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
zpool set bootfs=zroot zroot


Afterwards, I set up sshd in Fixit mode, and it ran.
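
(Roughly, setting up sshd in Fixit involves bringing up the network, generating a host key, allowing root logins and starting the daemon. This is only a sketch; the interface name em0 is my assumption, and paths inside the Fixit environment may differ.)

Code: Select all
ifconfig em0 inet 192.168.1.8 netmask 255.255.255.0
mkdir -p /etc/ssh
ssh-keygen -t rsa -f /etc/ssh/ssh_host_rsa_key -N ""
echo "PermitRootLogin yes" >> /etc/ssh/sshd_config
/usr/sbin/sshd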

From the source server, I sent the snapshot to the destination server (192.168.1.8) with:

Code: Select all
zfs send -R zroot@20110319 | ssh root@192.168.1.8 'zfs recv -vFd zroot'


Note: I could also have dumped the snapshot to a file and copied that over, but this way it is a single step.
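
(The file-based alternative would look something like this; the path /backup/zroot.zfs is just an example.)

Code: Select all
zfs send -R zroot@20110319 > /backup/zroot.zfs
# copy the file over, then on the destination:
zfs recv -vFd zroot < /backup/zroot.zfs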

So far so good. Everything transferred, and I can see the zpool on the destination server.

I ran a few commands to prepare the destination server for booting:

Code: Select all
zfs unmount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var


And I rebooted the destination system.

The system seems to boot (the kernel loads, along with zfs and geom), but it halts with a ROOT MOUNT ERROR.

The loader variable is:
Code: Select all
vfs.root.mountfrom=zfs:zroot
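
(That variable normally comes from /boot/loader.conf on the cloned system; a typical root-on-ZFS configuration of that era looks like this.)

Code: Select all
zfs_load="YES"
vfs.root.mountfrom="zfs:zroot"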


Using ? at the mountroot prompt, I do see the GEOM-managed disks: ad0, ad1 and ad3 (with partitions ad0p1, ad0p2, ad0p3, ...).

So what did I miss in my setup? Do I need to tweak the setup for the new hardware, or do I need to change the legacy part?

Thanks.
Jurgen

Postby gkontos » 24 Mar 2011, 23:54

I think that you may have to recreate your [FILE]zpool.cache[/FILE]. Boot into Fixit mode and:
Code: Select all
Fixit# mkdir /boot/zfs
Fixit# zpool import -f zroot
Fixit# zpool export -f zroot
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache

Then reboot and see if this works.

The zpool.cache was the problem

Postby Jurgen » 25 Mar 2011, 01:49

Indeed, gkontos, the problem was [FILE]zpool.cache[/FILE].

Your solution put me on the road to the final one.

Your commands do not work as given, because the root filesystem is stored in legacy. I also got some
Code: Select all
/libexec/ld-elf.so.1: Shared object ... not found
errors.

I did this to solve the shared object errors:

Code: Select all
Fixit# export LD_LIBRARY_PATH=/dist/lib


That gets rid of the 'shared object not found' errors. Why isn't that exported by default in Fixit?

Code: Select all
Fixit# mkdir /boot/zfs


This directory is where the new [FILE]zpool.cache[/FILE] file will be written.

Code: Select all
Fixit# mkdir -p /mnt3/zroot


I create a new mountpoint to keep a copy of the new [FILE]zpool.cache[/FILE] file, because it will be gone after the [FILE]zpool export[/FILE] command. I store the file as [FILE]/mnt3/zpool.cache[/FILE]; the subdirectory [FILE]zroot[/FILE] I use to mount the legacy root of the snapshot.

Code: Select all
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /mnt3


This stores the [FILE]zpool.cache[/FILE] in my new location.

Code: Select all
Fixit# mount -t zfs zroot /mnt3/zroot


Because [FILE]/[/FILE], [FILE]/etc[/FILE] and [FILE]/boot[/FILE] are in the legacy filesystem, I need to mount it the 'old' way.

Code: Select all
Fixit# cp /mnt3/zpool.cache /mnt3/zroot/boot/zfs


This overwrites my old [FILE]zpool.cache[/FILE].

Code: Select all
Fixit# umount /mnt3/zroot
Fixit# zpool export -f zroot


Unmount and export everything to be sure (maybe not needed).

Code: Select all
Fixit# zpool import -f zroot


Import the zpool again, and now tweak the mountpoints again! Otherwise there will be a '[FILE]no zpools available[/FILE]' error after reboot.

Code: Select all
Fixit# zfs set mountpoint=legacy zroot
Fixit# zfs set mountpoint=/tmp zroot/tmp
Fixit# zfs set mountpoint=/usr zroot/usr
Fixit# zfs set mountpoint=/var zroot/var


The loader is happy now. We unmount the zpool:

Code: Select all
Fixit# zfs unmount -a


Exit Fixit.

It works now.

Well, that [FILE]zpool.cache[/FILE] file is pretty dangerous when you need to recover a snapshot onto different hardware!
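(As far as I know, you can inspect which pool configuration a cache file records with zdb; something like the following, reading the default cache location, should dump the cached config for the pool.)

Code: Select all
zdb -C zroot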

Thanks for help.
Jurgen

Postby gkontos » 25 Mar 2011, 12:50

I can see the issue with the "legacy" mountpoint. However, something like this should work:
Code: Select all
Fixit# kldload /mnt2/boot/kernel/opensolaris.ko
Fixit# kldload /mnt2/boot/kernel/zfs.ko
Fixit# mkdir /boot/zfs
Fixit# export LD_LIBRARY_PATH=/mnt2/lib
Fixit# zpool import -fR /zroot zroot
Fixit# zfs set mountpoint=/zroot zroot
Fixit# zpool export -f zroot
Fixit# zpool import -f zroot
Fixit# cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
Fixit# zfs unmount -a
Fixit# zfs set mountpoint=legacy zroot
Fixit# zfs set mountpoint=/tmp zroot/tmp
Fixit# zfs set mountpoint=/usr zroot/usr
Fixit# zfs set mountpoint=/var zroot/var

Postby Jurgen » 25 Mar 2011, 19:05

gkontos wrote:I can see the issue with the "legacy" mountpoint. However, something like this should work: [...]

As I said in my previous post, this does not work, because there is no [FILE]/zroot[/FILE] directory.

The [FILE]kldload[/FILE] lines I don't use, because I load those modules at the loader prompt (using option 6):
Code: Select all
load zfs


To access the legacy filesystem, you need to use the [FILE]mount[/FILE] command, not ZFS commands like [FILE]zpool import[/FILE]:

Code: Select all
mount -t zfs zroot /mnt3/zroot


My previous answer was the complete solution to this issue, but you put me on track. Thanks.
Jurgen

Postby gkontos » 26 Mar 2011, 11:07

@Jurgen, I am sorry if my answer offended you; I didn't mean to play smart, believe me.
However, because cloning a full ZFS system is something that interests me a lot, I recreated the scenario, trying to find an easier and faster way. Here is what I did, and it worked like a charm.
Download the mfsBSD bootable ISO from Martin Matuska's site. Boot from it and create the ZFS pool as usual:
Code: Select all
gpart create -s gpt ad0
gpart create -s gpt ad1
gpart add -b 34 -s 64k -t freebsd-boot ad0
gpart add -t freebsd-zfs -l disk0 ad0
gpart add -b 34 -s 64k -t freebsd-boot ad1
gpart add -t freebsd-zfs -l disk1 ad1

In this scenario swap is located within the ZFS pool, which is why I did not create a swap partition (see the sketch after the next code block).
Code: Select all
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad0
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ad1
zpool create zroot mirror /dev/gpt/disk0 /dev/gpt/disk1
zpool set bootfs=zroot zroot
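
(Creating swap inside the pool would look something like this; the 4G size and the zroot/swap name are just examples of the usual FreeBSD swap-on-zvol setup.)

Code: Select all
zfs create -V 4G zroot/swap
zfs set org.freebsd:swap=on zroot/swap
zfs set checksum=off zroot/swap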

Now send the complete snapshot from the source machine:
Code: Select all
zfs send -R zroot@bck | ssh root@10.10.10.141 zfs recv -Fdv zroot

That will take a while. After that, the last steps on the target machine:
Code: Select all
zfs destroy -r zroot@bck
zfs set mountpoint=/zroot zroot
zpool export -f zroot
zpool import -f zroot
cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
zfs umount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var

That's what I did and it worked.
Regards,
gkontos

Postby carlton_draught » 01 May 2011, 10:43

Jurgen wrote:[...] I got some
Code: Select all
/libexec/ld-elf.so.1: Shared object ... not found
errors.

There is a very quick and easy way to fix these errors: exit back to the sysinstall menu with
[CMD="#"]exit[/CMD]
and then go back into Fixit.

