I am trying to write a wiki article about recovering/cloning a full ZFS snapshot onto a virtual machine over SSH. So I want to clone a physical production server to a virtual machine (to use as a development server). But I cannot boot the destination server, probably because the IDE drives are named differently in the legacy part of the zpool snapshot. How can I fix that?
Setup:
The physical server runs root on ZFS v14 (FreeBSD 8.1).
It is fully ZFS-based, with a 3-disk raidz1 pool.
Code:
# gpart show
=>        34  976773101  ad4  GPT  (466G)
          34        128    1  freebsd-boot  (64K)
         162   16777216    2  freebsd-swap  (8.0G)
    16777378  959995757    3  freebsd-zfs   (458G)

=>        34  976773101  ad5  GPT  (466G)
          34        128    1  freebsd-boot  (64K)
         162   16777216    2  freebsd-swap  (8.0G)
    16777378  959995757    3  freebsd-zfs   (458G)

=>        34  976773101  ad6  GPT  (466G)
          34        128    1  freebsd-boot  (64K)
         162   16777216    2  freebsd-swap  (8.0G)
    16777378  959995757    3  freebsd-zfs   (458G)
Note: 3 IDE drives, named ad4, ad5, and ad6.
The zpool has a legacy-mounted root and child datasets:
Code:
# zfs list
NAME                       USED   AVAIL  REFER  MOUNTPOINT
zroot                      12.3G  885G   521M   legacy
zroot/tmp                  4.23G  885G   4.23G  /tmp
zroot/usr                  5.66G  885G   1.56G  /usr
zroot/usr/home             2.52G  885G   2.51G  /usr/home
zroot/usr/ports            1.27G  885G   1.04G  /usr/ports
zroot/usr/ports/distfiles  241M   885G   241M   /usr/ports/distfiles
zroot/usr/ports/packages   24.0K  885G   24.0K  /usr/ports/packages
zroot/usr/src              315M   885G   315M   /usr/src
zroot/var                  1.91G  885G   235K   /var
zroot/var/crash            26.0K  885G   26.0K  /var/crash
zroot/var/db               1.91G  885G   1.91G  /var/db
zroot/var/db/pkg           2.43M  885G   2.43M  /var/db/pkg
zroot/var/empty            24.0K  885G   24.0K  /var/empty
zroot/var/log              278K   885G   226K   /var/log
zroot/var/mail             99.2K  885G   69.3K  /var/mail
zroot/var/run              114K   885G   79.3K  /var/run
zroot/var/tmp              33.3K  885G   33.3K  /var/tmp
So I made a recursive snapshot of the source system:
Code:
zfs snapshot -r zroot@20110319
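A quick way to confirm the recursive snapshot covered all child datasets (this check is my addition, not from the original session):

```shell
# Every dataset from the zfs list above should show a @20110319 snapshot.
zfs list -t snapshot
```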
On the destination machine, I set up a virtual machine with 3 IDE disks; unfortunately they are named ad0, ad1 and ad3 (!).
I booted from the FreeBSD DVD, but exited to the loader (option 6) and used these commands:
Code:
load ahci
load geom_mirror
load zfs
boot
Now the kernel is aware of the ZFS system, and I entered Fixit mode from sysinstall.
I partitioned the destination system:
Code:
# gpart show
=>        34  976852925  da0  GPT  (466G)
          34        128    1  freebsd-boot  (64K)
         162   16777216    2  freebsd-swap  (8.0G)
    16777378  960075581    3  freebsd-zfs   (458G)
...
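For completeness, the partitioning itself can be sketched roughly like this (a sketch only; the GPT labels disk0–disk2 are my assumption, chosen to match the /dev/gpt/* names used in the zpool create step, and the device name may differ depending on the driver):

```shell
# Repeat for each of the three disks (da0/da1/da3), bumping the label number.
gpart create -s gpt da0
gpart add -b 34 -s 128 -t freebsd-boot da0          # boot code partition
gpart add -s 16777216 -t freebsd-swap da0           # 8 GB swap
gpart add -t freebsd-zfs -l disk0 da0               # rest of disk; label -> /dev/gpt/disk0
```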
I wrote the boot code to each disk:
Code:
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da0
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da1
gpart bootcode -b /mnt2/boot/pmbr -p /mnt2/boot/gptzfsboot -i 1 da3
Then I set up the raidz1 pool:
Code:
mkdir /boot/zfs
zpool create zroot raidz1 /dev/gpt/disk0 /dev/gpt/disk1 /dev/gpt/disk2
zpool set bootfs=zroot zroot
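Root-on-ZFS guides for FreeBSD 8.x also copy the pool cache file into the new pool so the kernel can locate the pool at boot; a hedged sketch of that step in Fixit (the /zroot mount point is my assumption for wherever the new pool is temporarily mounted):

```shell
# Sketch: make the freshly created pool's cache available to its own /boot.
mkdir -p /zroot/boot/zfs
cp /boot/zfs/zpool.cache /zroot/boot/zfs/zpool.cache
```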
Afterwards, I set up sshd in Fixit mode; it ran fine.
From the source server, I sent the snapshot to the destination server (192.168.1.8) with:
Code:
zfs send -R zroot@20110319 | ssh root@192.168.1.8 'zfs recv -vFd zroot'
Note: I could have exported the snapshot to a file and sent that over instead, but this way it was a single step.
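The file-based alternative mentioned above would look roughly like this (a sketch; the file name and path are illustrative):

```shell
# On the source server: dump the recursive snapshot stream to a file.
zfs send -R zroot@20110319 > /tmp/zroot.snap

# Copy it over, then replay it into the destination pool.
scp /tmp/zroot.snap root@192.168.1.8:/tmp/
ssh root@192.168.1.8 'zfs recv -vFd zroot < /tmp/zroot.snap'
```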
So far so good: everything transferred, and I can see the zpool on the destination server.
I ran some commands to prepare the destination server for booting:
Code:
zfs unmount -a
zfs set mountpoint=legacy zroot
zfs set mountpoint=/tmp zroot/tmp
zfs set mountpoint=/usr zroot/usr
zfs set mountpoint=/var zroot/var
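Since the received pool carries the source server's /boot/loader.conf, it can be double-checked by hand while still in Fixit (a sketch; the expected entries are the standard ones from root-on-ZFS setups, not a capture from my system):

```shell
# With the root dataset set to mountpoint=legacy, it can be mounted manually.
mount -t zfs zroot /mnt
cat /mnt/boot/loader.conf   # expect zfs_load="YES" and vfs.root.mountfrom="zfs:zroot"
umount /mnt
```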
Then I rebooted the destination system.
The system seems to boot: the kernel loads, along with zfs and geom,
but it halts with a ROOT MOUNT ERROR.
The loader variables are:
Code:
vfs.root.mountfrom=zfs:zroot
Using ? at the mountroot prompt, I do see some GEOM-managed disks:
ad0, ad1, ad3 (ad0p1, ad0p2, ad0p3, ...).
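For reference, the interaction at the failing prompt looks roughly like this (a sketch, not a verbatim capture of my console):

```
mountroot> ?            <- lists the GEOM-managed devices (ad0p1 ... ad3p3)
mountroot> zfs:zroot    <- retry mounting root from the pool by hand
```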
So what did I miss in my setup to make it boot?
Do I need to tweak the setup for the new hardware,
or do I need to change the legacy part?
Thanks.