ZFS (Very likely FAQ but cannot find in FAQs) Move entire system

Understood, I've had the same complaint as Mikkol when trying to rescue a system from install media, and was just hoping to help should the need to alter the root filesystem arise in a future rescue situation.

As an aside, you can tell the design of ZFS is brilliant because it is simple and obvious in retrospect. I struggled to remember the difference between VGs and PVs in Linux LVM, and always had to go read the docs again to figure out what to do. I resorted to keeping text files with the commands I'd run in various situations as a primer for a future me dealing with the latest LVM disaster. Yes, just about every "upgrade" was a disaster.
 
LVM. Ugh. At the time, it was so nice to have flexibility compared to traditional partitions, but in retrospect (and next to ZFS), it's just sooooo hard to do what you want. (In fairness, a large part of that is because it was still layering whatever filesystem you chose on top of a "partition" manager, whereas ZFS says "just give me the backing store and I'll do everything.")
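
Roughly what I mean, from memory (a sketch only; the device, volume group, and pool names are made up):

  # LVM: build up physical volume -> volume group -> logical volume,
  # then still create a filesystem on top and mount it yourself
  pvcreate /dev/sdb1
  vgcreate vg0 /dev/sdb1
  lvcreate -n data -L 100G vg0
  mkfs.ext4 /dev/vg0/data
  mount /dev/vg0/data /srv/data

  # ZFS: hand it the backing store; pooling, the filesystem, and
  # mounting are all handled in one layer
  zpool create tank /dev/sdb
  zfs create tank/data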
 
Eric A. Borisch Thank you. I was under the impression that /tmp should be writable by default. I don't know what the hiccup is, but if I map the installation media, FreeBSD-12.1-RELEASE-amd64-dvd1.iso, as a virtual CD via Dell's iDRAC7 and then boot from it, /tmp is not writable in single user mode, while it is writable in live CD mode. To make matters worse, on my first attempts I was under the false impression that it would effectively work as a big ramdisk and, seeing as the server has 96 GB of RAM, I was surprised to see it fill up after 20 MB of writing.

In fact, booting with the installation media as the boot source, the media in the virtual CD drive, and choosing single user mode gets me to a completely useless environment where I cannot mount absolutely anything because even /tmp is read-only and even remounting it rw does not help.
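
For reference, the usual workaround for a read-only /tmp is to put a memory-backed filesystem over it; I have not verified that this works in the 12.1 installer's single-user environment, so take it as a sketch:

  # mount a small memory filesystem over /tmp (size is arbitrary here)
  mdmfs -s 256m md /tmp
  # or, equivalently, with tmpfs
  mount -t tmpfs tmpfs /tmp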

I will next try recreating the test one-disk-one-vdev pool with an altroot and, for good measure, leave it unmounted, and receive the snapshot into it. I will keep this thread updated.
 
Just for clarification, when I booted from the install media, I had physically removed the disks that made up the vdev that made up the original zroot pool. They were not present in the computer. Only an NVMe disk that contained the snapshot as it was sent to a gz file and a disk that was to receive the snapshot. So no, I could not remount the root file system read-write because, as far as I understood, the only root filesystem was the one on the install media.

As a curiosity, I am intentionally not referring to any Linux distributions here because I am faithfully trying to keep this request for help FreeBSD-only (I've read the intro and the frequently provided answers and, as someone earlier so elegantly put it, the FM).
 
Solved. Steps (a condensed command sketch follows the list):

  1. Read the manual and curse its ambiguity (don't call it the FM because that's just rude). zfs receive does not take as its argument what you want to receive but where you want it received, which effectively removes the need to mount anything. (As for importing, it can be done: the -t switch for zpool import says that the name given for the imported pool is only temporary, e.g. zpool import -t fast temppool.)
  2. Grab a disk large enough to make a backup of the system.
  3. Create partitioning on the disk consistent with the partitioning of your current zroot vdev disks. gpart show da0 in my case to show the existing partitioning (can repeat with da1 to da4 to verify that all have identical partitioning), then gpart destroy -F da6 to wipe the disk to which I will receive the copy of zroot, then gpart create -s gpt da6, then gpart add -t efi -s 200M da6, then gpart add -t freebsd-swap -s 2G da6, then gpart add -t freebsd-zfs da6.
  4. Create a new zpool with a vdev large enough to hold the source system, using a temporary name and alternate mount point. zpool create -fR /mnt -t temppool zroot da6p3.
  5. Snapshot running system. zfs snapshot -r zroot@backup.
  6. Send snapshot to da6. zfs send -R zroot@backup | pv -br | zfs recv -uF temppool.
  7. Set boot file system on receiving vdev. zpool set bootfs=temppool/ROOT/default temppool.
  8. Format the new EFI partition. newfs_msdos /dev/da6p1.
  9. Copy EFI files from any original vdev to da6. mkdir /mnt/efiold /mnt/efinew. mount -t msdosfs -o rw /dev/da6p1 /mnt/efinew/. mount -t msdosfs -o rw /dev/da0p1 /mnt/efiold/. cp -va /mnt/efiold/* /mnt/efinew/.
  10. Umount the EFI partitions. umount /mnt/efiold /mnt/efinew.
  11. Export receiving zpool. zpool export temppool.
  12. Power off the system and remove da0 to da4. Then power on and enjoy a booting system. poweroff. Pull handles on da0 to da4. Press power button.
  13. Repeat steps 1 thru 12, adapting where necessary, to create a 6-vdev RAID-Z2 zpool and receive the snapshot from the previous iteration there.
  14. Write instructions for the other guy that has been wondering how to do this.
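
For that other guy: here is roughly the same thing as one paste-able sequence. The device names (da0 for an original vdev member, da6 for the target) and the pool names are from my setup, I have not run it in exactly this form, and I left out the pv in the pipe (it is only a progress meter from sysutils/pv), so adapt before use.

  # Partition the target disk to match the existing zroot disks (step 3)
  gpart destroy -F da6
  gpart create -s gpt da6
  gpart add -t efi -s 200M da6
  gpart add -t freebsd-swap -s 2G da6
  gpart add -t freebsd-zfs da6

  # Create the receiving pool under a temporary name, with an altroot (step 4)
  zpool create -f -R /mnt -t temppool zroot da6p3

  # Snapshot the running system and send it over, leaving it unmounted (steps 5-6)
  zfs snapshot -r zroot@backup
  zfs send -R zroot@backup | zfs recv -uF temppool

  # Tell the new pool which dataset to boot (step 7)
  zpool set bootfs=temppool/ROOT/default temppool

  # Recreate and populate the EFI system partition (steps 8-10)
  newfs_msdos /dev/da6p1
  mkdir -p /mnt/efiold /mnt/efinew
  mount -t msdosfs /dev/da0p1 /mnt/efiold
  mount -t msdosfs /dev/da6p1 /mnt/efinew
  cp -va /mnt/efiold/* /mnt/efinew/
  umount /mnt/efiold /mnt/efinew

  # Export the copy; it will come back up under its permanent name, zroot (step 11)
  zpool export temppool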
 
I want to do a disk migration: 500 GB HDD to 120 GB SSD. All the tutorials I've found assume that the new disk must be equal to or larger than the old one.
Can someone give me a hint?
 
The above instructions should work, assuming you are using ZFS and there is enough space on the target drive for the data. The partitions do not need to be the same size.

EDIT: I assume that you are changing all of your disks or recreating the pool. What I did above was recreating a pool, not expanding an existing one. As a temporary backup, I used a considerably smaller hard drive.
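
If you want to check up front whether the data will fit on the 120 GB disk, compare what the pool actually has allocated with the new disk's size, for example:

  # space actually in use on the source pool
  zpool list -o name,size,allocated,free zroot
  # or broken down per dataset
  zfs list -o name,used,refer -r zroot

(zroot is just the usual default pool name here; substitute your own.)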
 