How to move my live freebsd system to another larger drive

I'm running FreeBSD 14-p5, but I want to change the drive to a faster and larger one. The new drive comes up as ada1. I created three partitions: an EFI boot partition (and copied the boot file to it), a swap partition, and a freebsd-zfs partition. Now I'm stuck, because I don't know how to move my zroot onto the freebsd-zfs partition, which is ada1p3. I have looked at zfs send/receive and zpool import, but the more I read the more confused I get.

Running zpool list I get this:

Code:
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
zroot  27.5G  11.4G  16.1G        -         -     8%    41%  1.00x    ONLINE  -

and running zfs list I get this:

Code:
NAME                                           USED  AVAIL  REFER  MOUNTPOINT
zroot                                         11.4G  15.2G    96K  /zroot
zroot/ROOT                                    9.92G  15.2G    96K  none
zroot/ROOT/14.0-RELEASE-p2_2023-12-29_153155     8K  15.2G  5.22G  /
zroot/ROOT/14.0-RELEASE-p4_2024-02-25_145909     8K  15.2G  6.70G  /
zroot/ROOT/14.0-RELEASE_2023-12-07_153215        8K  15.2G  4.62G  /
zroot/ROOT/default                            9.92G  15.2G  6.71G  /
zroot/home                                     718M  15.2G   718M  /home
zroot/tmp                                      216K  15.2G   216K  /tmp
zroot/usr                                      816M  15.2G    96K  /usr
zroot/usr/ports                                 96K  15.2G    96K  /usr/ports
zroot/usr/src                                  816M  15.2G   816M  /usr/src
zroot/var                                      924K  15.2G    96K  /var
zroot/var/audit                                 96K  15.2G    96K  /var/audit
zroot/var/crash                                 96K  15.2G    96K  /var/crash
zroot/var/log                                  396K  15.2G   396K  /var/log
zroot/var/mail                                 144K  15.2G   144K  /var/mail
zroot/var/tmp                                   96K  15.2G    96K  /var/tmp

What's easiest to get all of this into my new ada1p3 (freebsd-zfs) partition?
 
Do you have some extra ports available? You could set up a mirror with the new drive. Let it resilver and when it's done break the mirror so you can remove the old drive.
 
The idea of a mirror will work when we move the system to another partition of the same size. Recently, I needed to move the OS to a smaller ZFS pool. First I used zfs send+receive. Then I disconnected the source disk, which was no longer needed. I booted the computer from an installation disc and entered the shell. Finally, I renamed the new pool to the correct name (rpool).
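A rough sketch of those send+receive steps, since they were only described in prose. The pool names here (rpool for the old pool, newpool for the target) are assumptions for illustration; substitute your own:

```shell
# on the running system: snapshot everything recursively,
# then replicate the whole tree to the new pool
zfs snapshot -r rpool@move
zfs send -R rpool@move | zfs receive -Fu newpool

# power off, disconnect the old disk, boot the installation
# disc and drop to a shell, then import the pool under its
# final name (this is the rename step):
zpool import -f newpool rpool
```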
 
The idea of mirror will work when we move the system to another partition of same size
The thread starter mentioned a bigger and faster drive. But yeah, the mirror trick will only work if the new drive is the same size or bigger. It will not work if the new drive is smaller, because you won't be able to create a mirror with it.
 
The idea of a mirror will work when we move the system to another partition of the same size. Recently, I needed to move the OS to a smaller ZFS pool. First I used zfs send+receive. Then I disconnected the source disk, which was no longer needed. I booted the computer from an installation disc and entered the shell. Finally, I renamed the new pool to the correct name (rpool).
Can you post the steps on how to do that with send and receive? I'm still new to zfs and the man pages are actually very confusing to me.
 
Could you dd the smaller drive over to the larger drive and then expand the ZFS pool on the new disk? This is a general question I have, not a suggestion.
 
You can take this as an example.

We have a 1 GB md0 disk, which plays the role of the old disk I want to migrate, and a 2 GB md1 disk, which is the new one.

Code:
md0
        512             # sectorsize
        1073741824      # mediasize in bytes (1.0G)
        2097152         # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        MD-DEV11686338411606443580-INO3 # Disk ident.
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM

md1
        512             # sectorsize
        2147483648      # mediasize in bytes (2.0G)
        4194304         # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        MD-DEV11686338411606443580-INO3 # Disk ident.
        Yes             # TRIM/UNMAP support
        Unknown         # Rotation rate in RPM

md0 is the one that contains the pool and is partitioned as follows:

Code:
root@xv0:~ # gpart show md0
=>     40  2097072  md0  GPT  (1.0G)
       40     1024    1  freebsd-boot  (512K)
     1064      984       - free -  (492K)
     2048   204800    2  freebsd-swap  (100M)
   206848  1843200    3  freebsd-zfs  (900M)
  2050048    47064       - free -  (23M)

root@xv0:~ # zpool status z99
  pool: z99
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        z99         ONLINE       0     0     0
          md0p3     ONLINE       0     0     0

errors: No known data errors

Now we are going to create the same partition scheme on the new md1 disk:

Code:
gpart show md0
=>     40  2097072  md0  GPT  (1.0G)
       40     1024    1  freebsd-boot  (512K)
     1064      984       - free -  (492K)
     2048   204800    2  freebsd-swap  (100M)
   206848  1843200    3  freebsd-zfs  (900M)
  2050048    47064       - free -  (23M)

root@xv0:~ # gpart create -s gpt md1
md1 created
root@xv0:~ # gpart add -t freebsd-boot -s 512k -l gptboot1 md1
md1p1 added
root@xv0:~ # gpart add -t freebsd-swap -b2048 -s 100m -l swap1 md1
md1p2 added
root@xv0:~ # gpart add -t freebsd-zfs -s 900m -l zfs1 md1
md1p3 added
root@xv0:~ # gpart show md1
=>     40  4194224  md1  GPT  (2.0G)
       40     1024    1  freebsd-boot  (512K)
     1064      984       - free -  (492K)
     2048   204800    2  freebsd-swap  (100M)
   206848  1843200    3  freebsd-zfs  (900M)
  2050048  2144216       - free -  (1.0G)

root@xv0:~ # gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 md1
partcode written to md1p1
bootcode written to md1

Please note the labels. Now attach the new partition to the pool as a mirror:

Code:
# zpool attach z99 md0p3 md1p3
root@xv0:~ # zpool status z99
  pool: z99
 state: ONLINE
  scan: resilvered 660K in 00:00:02 with 0 errors on Sat Mar  2 09:30:31 2024
config:

        NAME        STATE     READ WRITE CKSUM
        z99         ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            md0p3   ONLINE       0     0     0
            md1p3   ONLINE       0     0     0

errors: No known data errors

Once the resilver is finished, you can detach the old partition from the pool.

Code:
root@xv0:~ # zpool detach z99 md0p3
# zpool status z99
  pool: z99
 state: ONLINE
  scan: resilvered 984K in 00:00:02 with 0 errors on Sat Mar  2 09:32:39 2024
config:

        NAME        STATE     READ WRITE CKSUM
        z99         ONLINE       0     0     0
          md1p3     ONLINE       0     0     0

errors: No known data errors

You would also have to modify /etc/fstab for the swap partition.

Now you simply have to boot into a live environment, since you cannot increase the size of a partition that is in use:

Code:
gpart resize -i 3 -s NEW_SIZE md1

Once you are back on the system and the zpool is running, simply expand it. If you do not have the autoexpand option set, you will have to do it manually:

Code:
zpool online -e z99 md1p3
 
How would you resize so it take the rest of the free space on the disk?
 
How would you resize so it take the rest of the free space on the disk?

Just delete the partition and create a new one with the same beginning. Reboot to make the new partition table active. Set the autoexpand flag in your Zpool and it will do the rest.

Obviously you might want to have a backup when doing this.
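As a sketch of the resize-in-place alternative: gpart resize grows a partition into the free space that follows it, and omitting the -s option makes it take all of that space. Device and pool names here reuse the md1/z99 example from earlier in the thread:

```shell
# grow partition 3 into all remaining free space (no -s given)
gpart resize -i 3 md1

# let ZFS pick up the larger vdev
zpool set autoexpand=on z99
zpool online -e z99 md1p3
```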
 
To avoid creating the partitions manually, you can use gpart backup and restore. It hadn't occurred to me before. Example:

Code:
# gpart backup md0 > schema.md0
# gpart restore -F md1 < schema.md0
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 md1
# gpart modify -i 3 -l zfs1 md1
.................

In my previous post I was wrong about the bootcode: you have to install gptzfsboot, as you can see in the example. I would also have to create new labels for the partitions, since they are not carried over; you can still add them afterwards.

But this process is simpler.
 
I decided to test the mirror in VirtualBox first. Everything went fine until detaching: I successfully created the mirror, waited for resilvering to finish, and detached the old disk. I shut down, changed the new drive to be the primary boot device, and got this:
Code:
gptboot: no UFS partition was found

Then I tried to boot from the old drive and got this:
(screenshot attachment: Old-system.jpg)
 
To avoid creating partitions manually you can use backup and restore.
For the time being it's better to create the partition scheme and table manually, instead of using gpart backup <device> | gpart restore <device>.

There is a regression in 14.0 with this kind of cloning. I observed it myself and was thinking of filing a bug report, but someone else did it first. The clone is not an exact image of the original.

Bug 277358 problem with gpart backup & restore
 
If you copied the bootcode command from my first post: I mistakenly put gptboot, which cannot find the ZFS boot. Apologies.

You have to use gptzfsboot as I said above.

On the other hand, I see two disks in your boot menu; are they the ones from the old mirror?

You will not be able to boot from the old disk, because it no longer belongs to any pool. If your system boots from the old disk, it will have that problem: it still has a boot partition, but no pool in its ZFS partition 3. Delete the old disk from your VM. As I said before, the problem is that gptboot searches for UFS partitions; you have to change it to gptzfsboot to solve that.
 
OK, I was able to fix that, but my freebsd-zfs partition is not growing to full size.
gpart show -p ada0 shows that the freebsd-zfs partition is 60G:
Code:
=>       40  134217648    ada0  GPT  (64G)
         40       1024  ada0p1  freebsd-boot  (512K)
       1064        984          - free -  (492K)
       2048    8388608  ada0p2  freebsd-swap  (4.0G)
    8390656  125827032  ada0p3  freebsd-zfs  (60G)

but running zfs list sys shows the same amount as the old drive:
Code:
NAME   USED  AVAIL  REFER  MOUNTPOINT
sys   11.4G  15.2G    96K  /zroot

I did run this command:
zpool set autoexpand=on sys
 
When you list the pool:

zpool list sys

What value does the EXPANDSZ column show?

Remember to run:

zpool online -e sys ada0p3
 
I did forget to run that command, and it seems fixed now. Before running it, that column said 32G; after running it, zpool list says:
Code:
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
sys   59.5G  11.4G  48.1G        -         -     3%    19%  1.00x    ONLINE  -
 
Great. It's good that you followed the recommendations of all the users participating in the thread and tested this in a VM before doing it on your machine. Still, try to make a backup of your data before carrying out the process, just for safety.

On the other hand, you could consider trying to do this in your virtual environment:

zpool-split(8)

Instead of detaching the vdev from the pool, you can split off a vdev to create a new pool; that is, the two disks will contain the same data, just under different pool names.

But do not import both at the same time on the same machine unless you use an altroot, since they share the same mount points. Just another point of view before proceeding. Good luck.
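A minimal sketch of what zpool-split(8) does, using the example pool from earlier posts (z99 mirrored on md0p3/md1p3); the new pool name z99new is an assumption:

```shell
# detach the last member of each mirror into a brand-new pool
# named z99new; -R imports it under an altroot so the mount
# points do not collide with the running pool
zpool split -R /mnt z99 z99new

# both pools now hold the same data under different names
zpool status z99
zpool status z99new
```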
 
I just tried to do it again with VirtualBox, but this time I used an EFI-compatible machine, which the auto ZFS install partitioned into 4 partitions: efi, freebsd-boot, swap and freebsd-zfs. I followed everything like before, but this time it failed and I got this:
(screenshot attachment: 2024-03-03 19_03_23-BSDflux [Running] - Oracle VM VirtualBox.jpg)
 
... this time I used a efi compatible machine ...I followed everything like before but this time it failed and I got this:
The cause is most likely that the UEFI boot manager entry for FreeBSD (created automatically during the installation process) doesn't point to the right disk (disk partition UUID) where the FreeBSD EFI loader lives. A new entry must be created.

Assuming you have copied the FreeBSD efi boot loader to the new disk
Code:
dd if=/dev/ada0p1 of=/dev/ada1p1 bs=1m

Optional on UEFI systems, copy BIOS gptzfsboot loader:

dd if=/dev/ada0p2 of=/dev/ada1p2

At the UEFI Shell> prompt enter exit, browse to Boot Manager, select the device the system is installed on.

Boot system, execute as root

efibootmgr -v

This will show the UEFI boot manager entries and their device paths.

Delete the FreeBSD entry

efibootmgr -B -b 4

-b 4 here refers to boot entry Boot0004. In case the boot entry number differs, change it to the correct one.

Create new UEFI boot entry:

efibootmgr -c -a -L FreeBSD -l /boot/efi/efi/freebsd/loader.efi

For command details see efibootmgr(8).

The above steps can be done before rebooting the system, after detaching the old disk from the pool.




Don't forget to edit /etc/fstab to point the efi partition and swap to the correct devices.

Furthermore, in fstab, instead of device names, GPT labels can be used. This makes it resilient to device name changes.

For example the efi boot loader entry looks like:
Code:
# Device              Mountpoint    FStype     Options    Dump    Pass#
/dev/gpt/efiboot0     /boot/efi     msdosfs    rw         2       2

When creating the partitions on the new disk give them labels.

List the labels of the old disk (ada0):
Code:
gpart show -l ada0

Create labels for the new disk (ada1):
Code:
gpart add -t efi -s 260m -l efiboot1 ada1
gpart add -t freebsd-swap -a 1m -s Ng -l swap1 ada1

Or add labels after partitions are created:
Code:
gpart modify -i 1 -l efiboot1 ada1
gpart modify -i 3 -l swap1 ada1

Set GPT labels in fstab:
Code:
# Device              Mountpoint    FStype     Options    Dump    Pass#
/dev/gpt/efiboot1     /boot/efi     msdosfs    rw         2       2
/dev/gpt/swap1        none          swap       sw         0       0
 
Thanks for the advice, it worked. At first it told me the path was wrong, but soon I was able to find the real path to the EFI boot file, which was:

/boot/efi/EFI/BOOT/BOOTX64.efi
 
For ZFS I've always used zfs send -RLec foobar@snap | zfs receive -Fu newpool

For UFS, cd /new && dump -0af - / | restore -xf -

It's something I used with Solaris and Tru64, and now FreeBSD. The method works with Linux too.

Then swap your drives.

Remember to write out boot blocks.
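To expand that one-liner into a rough sketch: a recursive snapshot has to exist before you can send, and on FreeBSD the new pool also needs its bootfs property set before it will boot. The pool, dataset, and device names here (zroot, newpool, ada1) are assumptions taken from earlier in the thread:

```shell
# take a recursive snapshot of the whole pool first
zfs snapshot -r zroot@migrate

# -R sends the full dataset tree with properties; -u keeps the
# received datasets unmounted so they don't shadow the running system
zfs send -RLec zroot@migrate | zfs receive -Fu newpool

# tell the loader which dataset to boot from on the new pool
zpool set bootfs=newpool/ROOT/default newpool

# write out the boot blocks on the new disk (BIOS/gptzfsboot case)
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada1
```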
 