Disk migration under ZFS

Hi,

I'm currently managing a rented server from a provider (OVH) that does not natively support installing FreeBSD. It is still possible, however, so I installed FreeBSD 14.1 from the official sources (the BASIC-CLOUDINIT-zfs version).

My server has 4 identical hard drives (ada0, ada1, ada2 and ada3). On the fresh install the system is on ada0, and the other disks are currently unused.

What I would like to do is add the other 3 disks to the pool. I could just run a simple "zpool add zroot ada1 ada2 ada3" after raising the default minimum ashift from 9 to 12 (sysctl vfs.zfs.min_auto_ashift=12). This works fine. The thing is, the only rescue system I can use in case of problems (and I do a lot of stupid things sometimes) is a Debian 10 based rescue image, which doesn't support all the features of the zroot pool. So what I'm trying to do is migrate the pool from ada0 to ada1 (or 2 or 3, it doesn't matter) by creating a pool that the rescue OS supports. Here is what I have been trying to do:

Code:
gpart create -s gpt ada1
gpart add -a 4k -s 500 -t freebsd-boot -l bootz ada1
gpart add -a 1M -s 40M -t efi -l efiz ada1
gpart add -a 1M -s 2G -t freebsd-swap -l swapz ada1
gpart add -a 1M -t freebsd-zfs -l zfsz ada1

So the gpart show output is:
Code:
=>        34  3907029094  ada0  GPT  (1.8T)
          34         345     1  freebsd-boot  (173K)
         379       66584     2  efi  (33M)
       66963     2097152     3  freebsd-swap  (1.0G)
     2164115        2048     4  freebsd  (1.0M)
     2166163    10485760     5  freebsd-zfs  (5.0G)
    12651923  3894377205        - free -  (1.8T)

=>        40  3907029088  ada2  GPT  (1.8T)
          40  3907029088        - free -  (1.8T)

=>        40  3907029088  ada3  GPT  (1.8T)
          40  3907029088        - free -  (1.8T)

=>        40  3907029088  ada1  GPT  (1.8T)
          40         496     1  freebsd-boot  (248K)
         536        1512        - free -  (756K)
        2048       81920     2  efi  (40M)
       83968     4194304     3  freebsd-swap  (2.0G)
     4278272  3902750720     4  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

Then I create the pool with support for Debian 10 and copy the data:
Code:
zpool create -o feature@vdev_zaps_v2=disabled -o feature@head_errlog=disabled -o feature@zilsaxattr=disabled master ada1p4
zfs snapshot -r zroot@migration
zfs send -R zroot@migration | zfs recv -F master
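
One thing worth checking at this point is the new pool's ashift, since the sysctl mentioned earlier only sets the minimum for newly created vdevs. A small sketch of the arithmetic (the zpool command in the comment would be run as root on the live system):

```shell
# ashift is the base-2 logarithm of the sector size ZFS uses for a vdev:
# the default minimum of 9 means 2^9 = 512-byte sectors, while 12 means
# 2^12 = 4096 bytes (4 KiB), matching modern disks.
ashift=12
sector=$((1 << ashift))
echo "ashift=$ashift -> ${sector}-byte sectors"
# To confirm what the new pool actually got (run as root on the server):
#   zpool get ashift master
```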

Then I install the bootloader on disk ada1:
Code:
newfs_msdos /dev/gpt/efiz
mount -t msdosfs /dev/gpt/efiz /mnt
mkdir -p /mnt/efi/boot
cp /boot/loader.efi /mnt/efi/boot/bootx64.efi
umount /mnt

Then I configure the pool "master" as the boot root:
Code:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 2 ada1
zpool set bootfs=master/ROOT/default master
echo 'vfs.root.mountfrom="zfs:master/ROOT/default"' >> /boot/loader.conf

Finally I edit /etc/fstab so it looks like this:
Code:
# Custom /etc/fstab for FreeBSD VM images
# ZFS datasets
master/ROOT/default  /       zfs     rw      0       0
master/home          /home   zfs     rw      0       0
master/tmp           /tmp    zfs     rw      0       0
master/usr           /usr    zfs     rw      0       0
master/var           /var    zfs     rw      0       0

# Swap and EFI partitions
/dev/gpt/swapz  none    swap    sw      0       0
/dev/gpt/efiz   /boot/efi       msdosfs     rw      2       2

# Older conf was :
#/dev/gpt/swapfs  none    swap    sw      0       0
#/dev/gpt/efiesp /boot/efi msdosfs rw 2 2

But when I try to reboot, nothing happens. The server doesn't come back up. What am I doing wrong? I suspect ada1 is not set as the boot device in the BIOS, but if that's the case, then I can't access the BIOS to change it...
 
First, you should figure out whether your server boots using UEFI or the Legacy BIOS method. Some commands you executed apply to UEFI and some to Legacy BIOS. For UEFI the man page efibootmgr(8) might be useful.

Second, do you have console access to the machine and can you interact with the FreeBSD loader before it loads the kernel?

Third, what you are trying to do on the server I would first try to do on a virtual machine running on my own computer. Only when the steps are validated to achieve the desired result, I would perform them on the server.

Fourth, if the problem at hand is that the rescue OS doesn't understand the latest ZFS features, then how about creating your own rescue partition with FreeBSD on UFS? In case of need, you would use the Debian rescue merely to activate your own rescue partition.
 
Yes, absolutely. This is the output of zfs list -r master

Code:
NAME                  USED  AVAIL  REFER  MOUNTPOINT
master               3.59G  1.75T    68K  none
master/ROOT          3.59G  1.75T    68K  none
master/ROOT/default  3.59G  1.75T  3.59G  /
master/home            68K  1.75T    68K  /home
master/tmp             68K  1.75T    68K  /tmp
master/usr            272K  1.75T    68K  /usr
master/usr/obj         68K  1.75T    68K  /usr/obj
master/usr/ports       68K  1.75T    68K  /usr/ports
master/usr/src         68K  1.75T    68K  /usr/src
master/var           3.28M  1.75T    68K  /var
master/var/audit       68K  1.75T    68K  /var/audit
master/var/crash       72K  1.75T    72K  /var/crash
master/var/log       1.19M  1.75T  1.19M  /var/log
master/var/mail      1.82M  1.75T  1.82M  /var/mail
master/var/tmp         68K  1.75T    68K  /var/tmp
 
They must not only be imported but also mounted.
Am I not doing that through the /etc/fstab file?

First, you should figure out whether your server boots using UEFI or the Legacy BIOS method
This is the output of the command:

Code:
Boot to FW : false
BootCurrent: 0005
Timeout    : 1 seconds
BootOrder  : 0005, 0006, 0007, 0004, 0001
+Boot0005* UEFI: IP4 Intel(R) Ethernet Connection X552/X557-AT 10GBASE-T
 Boot0006* UEFI: IP4 Intel(R) Ethernet Connection X552/X557-AT 10GBASE-T
 Boot0007* UEFI OS
 Boot0004  UEFI: Built-in EFI Shell
 Boot0001  Hard Drive

Second, do you have console access to the machine
Nope

Third, what you are trying to do on the server I would first try to do on a virtual machine
I agree, but I'm not able to virtualize the same machine with 2 or 4 disks atm.

Fourth, if the problem at hand is that the rescue OS doesn't understand the latest ZFS features, then how about creating your own rescue partition with FreeBSD on UFS?
That seems very interesting, but I do not know how to do that. Is there somewhere I can start learning? Thanks for the answers.
 
The efibootmgr command has a -v flag which might help make the boot process clearer. I wouldn't touch the first two boot entries (Boot0005 and Boot0006) without consulting the hosting provider's support first. Your FreeBSD system was somehow booted from Boot0005. I would try to research how that works because it seems to be important.
I agree, but I'm not able to virtualize the same machine with 2 or 4 disks atm.
For learning about the boot process and validating procedures all you need is a virtual machine with two 5GB virtual disk drives. You don't need to completely replicate your server environment.
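
As a sketch (the hypervisor choice, memory size, and ISO file name are assumptions), two sparse files can back the test VM's disks:

```shell
# Create two sparse 5 GB files to serve as virtual disks for a test VM.
truncate -s 5G disk0.img
truncate -s 5G disk1.img
ls -lh disk0.img disk1.img
# A possible QEMU invocation (any hypervisor that attaches two disks works):
#   qemu-system-x86_64 -m 2G \
#     -drive file=disk0.img,format=raw \
#     -drive file=disk1.img,format=raw \
#     -cdrom FreeBSD-14.1-RELEASE-amd64-disc1.iso
```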

That seems very interesting, but I do not know how to do that. Is there somewhere I can start learning? Thanks for the answers.
Maybe the hosting provider would be willing to allow you to use a rescue image that you supply instead of their Debian-based one. You should talk to them and save yourself some trouble.

To create your own FreeBSD rescue partition on your server you would need to:
  1. Create the FreeBSD rescue image
  2. Transfer the image to a partition on the server
  3. Learn how to boot into it when needed
To create the FreeBSD rescue image you need to:
  1. Install FreeBSD on a virtual machine. Use the old and reliable UFS filesystem and the GPT partitioning scheme. The partition for the UFS filesystem would be 2GB in size, for example.
  2. Verify that all settings are what they need to be, including the root filesystem specification in /etc/fstab and the sshd configuration for remote access.
To transfer the rescue partition to the server you would:
  1. Shut down the virtual machine
  2. Boot the VM from the FreeBSD install media in rescue mode (or boot into single-user mode) and transfer the partition containing your new FreeBSD rescue system to a file. You would use the dd(8) command for that. The file may be written to another filesystem in the VM or you could transfer it over the network if you are more creative. Some virtualization solutions may allow you to read from the VM's disk when the VM is not running.
  3. Transfer the file containing the rescue image to your server
  4. Write the file to a new partition on any of the disks. The partition must be of type 'freebsd-ufs' and have the same size as the image (2GB in our example). The dd(8) command would be used again.
If you consider yourself either a very advanced user or extremely lucky, you can create the rescue image directly on the server.
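
The dd round trip described above can be rehearsed safely with plain files standing in for the partitions (the file names here are made up for the illustration):

```shell
# Toy version of the image transfer: rescue.img plays the role of the UFS
# partition being captured, target.img the partition it is written back to.
dd if=/dev/urandom of=rescue.img bs=1M count=4 2>/dev/null
dd if=rescue.img of=target.img bs=1M 2>/dev/null
# The copy must be bit-identical; compare checksums of both files.
cksum < rescue.img
cksum < target.img
```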

To be able to boot into your FreeBSD rescue partition you need to:
  1. Mount the rescue partition read-only to /mnt
  2. Create a new boot method with efibootmgr(8). The invocation would look like this: "efibootmgr -c -l /boot/efi/efi/freebsd/loader.efi -k /mnt/boot/kernel -L MyRescue". The -k option basically tells it where to look for the root filesystem.
  3. Unmount the rescue partition
  4. Learn the Debian Linux equivalent of efibootmgr and how to use it. (Debian ships its own efibootmgr(8), with somewhat different option syntax.)
Now, when the need for rescue arises:
  1. Boot from the Debian-based rescue image that the hosting provider offers.
  2. (maybe not necessary) Change the type of all 'freebsd-zfs' partitions to something else in order to hide them from loader.efi.
  3. Change the order of the boot methods or set BootNext so that you will boot from the FreeBSD rescue system.
  4. Reboot. There is a chance that requesting not to boot from the Debian-based rescue image will mess up the boot methods list.
  5. Now you are in your FreeBSD rescue system. Fix all problems and prepare the server for booting from the normal production system.

As long as the Debian-based rescue works, you can make mistakes and still recover, including by transferring a brand new version of your FreeBSD rescue system to the server over the network.
 