ZFS Moving entire zroot to a new system

Hi all,

So, I had originally installed my first BSD system on a Dell T140 with 4x12TB drives in a stripe of mirrors:

Code:
  pool: zroot
 state: ONLINE
config:

    NAME        STATE     READ WRITE CKSUM
    zroot       ONLINE       0     0     0
      mirror-0  ONLINE       0     0     0
        da0p4   ONLINE       0     0     0
        da1p4   ONLINE       0     0     0
      mirror-1  ONLINE       0     0     0
        da2p2   ONLINE       0     0     0
        da3p2   ONLINE       0     0     0

This pool contained the OS and all the data I was sharing over SMB. I've since outgrown my storage and picked up a new server (a 45Homelab HL15) so I can add more disks, but now I'm running into a problem: how do I move this pool to the new machine without destroying my data? I only have two more 12 TB disks, so I can't just back up my data, destroy the pool, create a new pool, and restore the data to it. So I was thinking of doing one of two things:

  1. Install FreeBSD on the SSD of the HL15, try to import and mount the zroot from my previous system on the new machine, and then add my two new disks to the pool.
  2. Or: move the disks over and see if I can boot the original zroot pool on the new system. (My rough first step for either option is sketched below.)
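
Before I touch anything, I figure I should write down the old pool's GUID and which disks it lives on, so I can always tell the two zroots apart later. Something like this on the old T140 is what I have in mind (untested, just from the man pages):

Code:
# record the pool's unique GUID so the two identically-named pools can be told apart later
zpool get guid zroot
# double-check which physical disks/partitions make up the pool before pulling them
zpool status -v zroot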

I thought option 1 was safest, so I installed FreeBSD on the SSD, forgot to rename the default pool from zroot, and then realized I had a few problems:

  1. I now have two pools named zroot. (Telling them apart is what the sketch below is about.)
  2. Can I export the zroot on the original machine even though it's the boot pool and the OS is still running?
  3. Can I then import the original zroot on the new machine and mount it without issue?
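
From what I've read in the zpool import man page, the duplicate name shouldn't be fatal, because a pool can be selected by its numeric ID and given a new name at import time, and -R keeps its mountpoints from colliding with the running system. This is the kind of thing I had in mind on the new machine (untested; the ID is a placeholder, and zold and /mnt/oldserver are just names I picked):

Code:
# list pools that are available for import, along with their numeric IDs
zpool import
# import the *old* data pool by ID, under a new name, without mounting anything,
# and with an altroot so its mountpoints land under /mnt/oldserver instead of /
# (add -f if the pool was never cleanly exported on the old machine)
zpool import -N -R /mnt/oldserver <numeric-id-of-old-pool> zold
# then mount just the dataset I care about
zfs mount zold/srv/smb/public
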
The data on the original pool was structured like this:

Code:
NAME                   USED  AVAIL     REFER  MOUNTPOINT
zroot                 20.6T  1.07T       96K  /zroot
zroot/ROOT            2.04G  1.07T       96K  none
zroot/ROOT/default    2.04G  1.07T     2.04G  /
zroot/srv             20.6T  1.07T       96K  /srv
zroot/srv/smb         20.6T  1.07T     12.0G  /srv/smb
zroot/srv/smb/public  20.6T  1.07T     20.6T  /srv/smb/public
zroot/tmp              128K  1.07T      128K  /tmp
zroot/usr             62.2M  1.07T       96K  /usr
zroot/usr/home        61.9M  1.07T     61.9M  /usr/home
zroot/usr/ports         96K  1.07T       96K  /usr/ports
zroot/usr/src           96K  1.07T       96K  /usr/src
zroot/var             57.6M  1.07T       96K  /var
zroot/var/audit         96K  1.07T       96K  /var/audit
zroot/var/crash         96K  1.07T       96K  /var/crash
zroot/var/log         3.46M  1.07T     3.46M  /var/log
zroot/var/mail        53.8M  1.07T     53.8M  /var/mail
zroot/var/tmp           96K  1.07T       96K  /var/tmp

And I was planning to mount the original zroot/srv/smb/public at the same location (/srv/smb/public) on the new machine, copy over my Samba config, and be back in business. Or mount the original zroot somewhere like /mnt/oldserver/ and share it over SMB from there until I can get some more disks and do everything properly. Either way, I'm a bit out of my depth and I really can't afford to lose this data.
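
For the second variant, my understanding is that the Samba side is just a matter of pointing the share at wherever the dataset ends up mounted; roughly this in smb4.conf (the share name is a guess at my old config, and the path assumes the /mnt/oldserver altroot idea above):

Code:
[public]
    path = /mnt/oldserver/srv/smb/public
    browseable = yes
    read only = no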

I'm literally living that joke right now:

Server has crashed > Where is backup? > On the server.

Any guidance would be greatly appreciated.

(note: that terabyte of free space corresponds to data that has been backed up to the cloud and will need to be restored soon)

(other notes: the previous system was FreeBSD 13.2-RELEASE; I installed 15.0-RELEASE on the HL15)
 
We would disconnect the real pool, leave the SSD in, and reinstall onto the SSD with another pool name. There's probably some trick you can do with zpool export and zpool import $UID newpool, but it's much safer to take some downtime and reinstall.
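
For the record, that trick would look roughly like the following, run from the installer's live shell so that neither pool is in use (the pool name zsys and the numeric ID are placeholders). Any leftover references to the old pool name, e.g. in /boot/loader.conf or /etc/fstab if present, would still have to be fixed by hand afterwards, which is exactly why a clean reinstall is the safer path.

Code:
# from the installation media's live shell, nothing is imported yet
zpool import                              # shows both zroots with their numeric IDs
zpool import -f -N <id-of-ssd-pool> zsys  # rename the SSD's pool on import
zpool set bootfs=zsys/ROOT/default zsys   # re-point the loader at the renamed boot dataset
zpool export zsys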
 