Solved Help converting a ZFS-based FreeBSD installation to a UFS image for virtualization

Hello, I have a physical FreeBSD server I installed using ZFS on root. To ease maintenance I'd like to move it into a virtual machine (VMware ESXi), but since virtualizing ZFS is not recommended, I would like to convert it to UFS. Here's my plan so far; can anyone confirm whether I am on the right track?

1. Create a new UFS disk image (a sparse-file alternative is sketched after step 3):
dd if=/dev/zero of=ufsroot bs=1g count=50
mdconfig -f ufsroot -u 0
bsdlabel -w md0 auto
newfs -U md0a
mount /dev/md0a /mnt


2. Create a ZFS snapshot of / and copy to the UFS image:
zfs snapshot zroot/ROOT/default@mysnapshot
(cd /.zfs/snapshot/mysnapshot && tar cf - .) | (cd /mnt && tar xvpf -)


3. Convert the raw image to VMDK:
umount /mnt
qemu-img convert -f raw -O vmdk ufsroot ufsroot.vmdk
# then import it into a new machine in VMware
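
As an aside on step 1: I believe a sparse file would avoid zero-filling 50 GB up front and should work just as well as a backing store for mdconfig(8). A sketch:

# Sketch: create a 50 GB sparse image instead of writing zeros with dd.
# The backing file only consumes disk space as blocks are actually written.
truncate -s 50G ufsroot
mdconfig -a -t vnode -f ufsroot -u 0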


The advantage of this approach is that I will (hopefully) be able to use the exact (custom) kernel and userland versions of FreeBSD that I have on the physical machine, rather than reinstalling from scratch and copying over the data. Basically, I want the virtual system to be as identical as possible to the physical machine, except on UFS instead of ZFS. I only need the root filesystem, so that I can boot it and tinker virtually before deploying. Sounds reasonable, no? Except I'm running into a few problems I don't understand:

To size the UFS image for the ZFS root, I first looked at df -h /, which reported about 10 GB, so I initially made the image about that big. However, it quickly filled up. I learned from this post: Discrepancy in expected vs. actual capacity of ZFS filesystem that "df could return a value smaller or larger than zfs list since it does not understand how to interpret compression, snapshots, etc.", so I checked the zfs list size instead; it was a few gigabytes larger, about 15 GB. Not too bad, but...

When I actually copy the data using the commands above, it takes up much more space. (It also copies very slowly, presumably because I'm storing the image on the same physical disks I'm copying from, so everything has to be read and then written back through the disk image layer.) As I write this, the copy is still running but has already reached 24 GB and is growing. Can this discrepancy be attributed to additional "slack" in the UFS filesystem versus ZFS? Is it possible to get a better estimate of how big an image I should allocate? Or am I copying the data incorrectly somehow?
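
Thinking about it, compression on the dataset could account for some of the difference; zfs(8) lists logicalreferenced and compressratio properties that should give a better idea of the uncompressed size the data will occupy on UFS, something like:

# Sketch: compare on-disk (compressed) size with the logical (uncompressed) size.
# logicalreferenced is roughly what the data should occupy once copied to UFS.
zfs get used,referenced,logicalreferenced,compressratio,compression zroot/ROOT/default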

Secondly, piping tar with the -p flag preserves permissions, but it seems to choke on sockets and other special files:

tar: var/run/devd.seqpacket.pipe: tar format cannot archive socket


Is there a more accurate tool for copying files from ZFS to UFS? My first choice would be dump(8)/restore(8), but of course those only work on UFS. cp similarly refuses to copy sockets ("cp: /var/run/devd.seqpacket.pipe is a socket (not copied)."). Fortunately, with devfs I don't have to worry about copying device nodes under /dev, but there are about 80 socket files that are not being archived. Is this a problem, or can I just let the applications that use them recreate them? It looks like I can manually create a socket with nc -lkU /tmp/sock, but that won't preserve the ownership or permissions. Are there other filesystem nodes that the UFS dump/restore tools would back up but tar/cp would not?

Finally, does what I'm trying to do make sense? I searched around for "convert ZFS to UFS", but all the results seem to be about people going in the opposite direction, from UFS to ZFS. Thanks in advance for any advice and suggestions, and I hope this is the appropriate section.
 
I'm partial to rsync. :) We use it for anything involving moving large numbers of files across filesystems, drives, systems, etc. Handles ownership, permissions, special files, devices, etc. Works across local mount points, remote filesystems, and anything you can connect to via SSH. And it handles interrupted transfers.
 
Thank you phoenix. I originally passed on rsync because it isn't in the base system, but looking at it more closely, it appears to be a good tool for this task. With the -a (archive) flag it even copies sockets, very nice:

tmp $ nc -lkU sock
tmp $ ls -l sock
srwxr-xr-x 1 admin wheel 0 Jun 17 17:08 sock
tmp $ rsync sock foo
skipping non-regular file "sock"
tmp $ rsync -a sock foo
tmp $ ls -l foo/
srwxr-xr-x 1 admin wheel 0 Jun 17 17:08 sock


Although the manpage notes the caveat that --archive doesn't preserve hard links, the --hard-links flag can take care of those. I'll have to try it out.
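
For the actual copy I'm thinking of something along these lines (just a sketch; the flags are from the rsync manpage and the snapshot path matches the one created above):

# Sketch: copy the snapshot contents onto the mounted UFS image.
# -a  archive mode (permissions, ownership, times, symlinks, devices, sockets)
# -H  preserve hard links (important for the FreeBSD base system)
# -x  don't cross filesystem boundaries
# --numeric-ids  keep raw uid/gid values instead of mapping by user/group name
rsync -aHx --numeric-ids /.zfs/snapshot/mysnapshot/ /mnt/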


An additional piece I was missing in step 3 above, in case anyone else is trying to do something similar: installing the bootcode on the disk image. This is easy enough to do with bsdlabel -B (it looks like boot0cfg can do it too):

bsdlabel -B /dev/md0
umount /mnt
mdconfig -d -u 0
file ufsroot

ufsroot: DOS/MBR boot sector, BSD disklabel


Then, after converting to a VMDK with qemu-img, I can successfully create a new custom virtual machine in VMware using this existing disk image, and it boots! At least as far as the kernel. Of course it can't find my root filesystem, which is not surprising since it is no longer ZFS, but it also can't find any disks at all:

Loader variables:

Manual root filesystem specification:
  <fstype>:<device> [options]
      Mount <device> using filesystem <fstype>
      and with the specified (optional) option list.

    eg. ufs:/dev/da0s1a
        zfs:tank
        cd9660:/dev/acd0 ro
          (which is equivalent to: mount -t cd9660 -o ro /dev/acd0 /)

  ?               List valid disk boot devices
  .               Yield 1 second (for background tasks)
  <empty line>    Abort manual input

mountroot> ?
List of GEOM managed disk devices:

Looking more closely at the dmesg output, I see pci0: <mass storage, SCSI> at device 16.0 (no driver attached). VMware defaulted to the SCSI bus type, but if I change it to SATA, ada0 is detected as a 51200MB disk. However, ? at the loader prompt still only shows cd0 as a valid boot device. I guess I'm missing the next stage of the bootstrap?

FreeBSD Architecture Handbook: Chapter 1. Bootstrapping and Kernel Initialization: boot1 Stage explains:

Strictly speaking, unlike boot0, boot1 is not part of the boot blocks [3]. Instead, a single, full-blown file, boot (/boot/boot), is what ultimately is written to disk. This file is a combination of boot1, boot2 and the Boot Extender (or BTX). This single file is greater in size than a single sector (greater than 512 bytes). Fortunately, boot1 occupies exactly the first 512 bytes of this single file, so when boot0 loads the first sector of the FreeBSD slice (512 bytes), it is actually loading boot1 and transferring control to it.

[3] There is a file /boot/boot1, but it is not what is written to the beginning of the FreeBSD slice. Instead, it is concatenated with boot2 to form boot, which is written to the beginning of the FreeBSD slice and read at boot time.

It seems what I'm missing is the partition table and the second (or third) boot stage...


$ boot0cfg -B /dev/md0
$ gpart show /dev/md0
=>        0  104857600  md0  BSD  (50G)
          0         16       - free -  (8.0K)
         16  104857584    1  !0  (50G)


I found this helpful reference for gpart bootcode: Disk Setup On FreeBSD by Warren Block. I'm going to try these commands and then recopy the root filesystem data, this time with a proper partition table (md0s1a instead of md0a). I'm using MBR rather than GPT here for testing, though GPT may ultimately be the better option:

gpart destroy -F md0                  # wipe the old bare bsdlabel
gpart create -s mbr md0               # MBR partition table
gpart bootcode -b /boot/mbr md0       # stage 0: MBR boot block
gpart add -t freebsd md0              # creates slice md0s1
gpart set -a active -i 1 md0          # mark the slice active/bootable
gpart create -s bsd md0s1             # BSD label inside the slice
gpart bootcode -b /boot/boot md0s1    # boot1+boot2 (/boot/boot)
gpart add -t freebsd-ufs md0s1        # creates partition md0s1a
newfs -U /dev/md0s1a                  # UFS with soft updates
mount /dev/md0s1a /mnt
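
Before recopying the data, I'll sanity-check the layout; gpart show should now report the MBR slice and the nested BSD label:

# Sketch: verify the new layout before copying the root filesystem again.
gpart show md0
gpart show md0s1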



Update: these commands worked (creating the partition table and making it bootable). After editing /etc/fstab accordingly, I was able to boot and log in! I still have to copy over the remainder of my data, but this problem of creating a UFS-based image from a physical ZFS installation for use in a VMware virtual machine can be considered solved.
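
For reference, the root line in the image's /etc/fstab now points at the UFS partition instead of the old ZFS dataset; assuming the VM disk shows up as ada0 on the SATA controller, it looks roughly like this (adjust the device name to match your VM's controller):

# /etc/fstab in the image: root is now a UFS partition, not a ZFS dataset.
# Device        Mountpoint  FStype  Options  Dump  Pass#
/dev/ada0s1a    /           ufs     rw       1     1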
 
Although the manpage notes the caveat that --archive doesn't preserve hard links, the --hard-links flag can take care of those. I'll have to try it out.

Correct. And, the --hard-links option NEEDS to be used when copying a FreeBSD system. Hard links are used all over the base system. A sub-5 GB install will balloon out to 20+ GB if you rsync it without --hard-links! We found that out the hard way. :D
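
A quick way to see this: on a stock install, the statically linked tools in /rescue are all hard links to one crunched binary, so every name shares a single inode:

# Each name in /rescue refers to the same inode; note the identical inode
# number and the large link count in the first two columns of ls -li.
ls -li /rescue/ls /rescue/tar
# Copy that tree without --hard-links and every name becomes an independent
# full-size file, which is exactly where the space balloons.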

All of our nightly backups are done via rsync from a FreeBSD storage server using ZFS. All our off-site replication is done via ZFS send/receive to another FreeBSD storage server using ZFS. And, all our server imaging / restore-from-backup is done using rsync. Works great! (If you search the forums for rsbackup, you'll find the scripts we use.)
 