ZFS (Very likely FAQ but cannot find in FAQs) Move entire system

I have a working setup (12.1 RELEASE) with 5 disks configured as a single zroot pool with RAID-Z2. I changed the computer around it and have more slots available for disks. I understand that the optimal number of disks for RAID-Z2 is 6. I understand that I cannot really add a new vdev to the pool and expect the capacity to increase. I understand that I am supposed to be able to snapshot zroot, zfs send zroot to a file and then zfs receive zroot from said file.


I have spent a good two weeks looking at all kinds of instructions on moving a (running or offline, doesn't matter) FreeBSD system from one computer to another. The vast majority of the instructions are from an era of 4:3 TVs and MBR as the industry standard. The best I found was https://dan.langille.org/2018/12/31/adding-a-zroot-pool-to-an-existing-system/.


What I have done is create a new pool (fast) in the live system, zfs snapshot -r zroot@backup, zfs send -R zroot@backup | pigz > a file in the other pool, then check with gpart show da0 (through da4) that the partitioning on all five disks is identical (save for the labels), then gpart destroy -F da5 (which is a new, blank drive with enough space), gpart create -s gpt da5, export the pool fast and reboot into the live environment. There I imported the pool fast with its mountpoint under /tmp/fast and created a new zroot pool with the third partition of the target drive as the target (it has partition type freebsd-zfs).
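In command form, roughly (the backup file name is just an example, assuming the pool fast is mounted at /fast):

  zfs snapshot -r zroot@backup
  zfs send -R zroot@backup | pigz > /fast/zroot-backup.gz
  gpart show da0          # repeated through da4 to confirm the partitioning matches (labels aside)
  gpart destroy -F da5
  gpart create -s gpt da5
  zpool export fast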


I am currently piping the snapshot file through gunzip into zfs recv -F zroot.
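That is, roughly (the file name is a placeholder for whatever I called it):

  cat /tmp/fast/zroot-backup.gz | gunzip | zfs recv -F zroot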


Am I doing this right? Is there a much simpler way that I cannot find because I am overcomplicating things? Should I dd the EFI boot partition and the swap partition to the target drive?

This forum post talks about the same issue, as far as I can see, but it talks about zfs sending a partition, not a pool, and that, again, confuses me.
 
Some points are not completely clear:
  • your new computer does not boot from the zroot pool, but has some internal boot device?
  • Do you want to transfer the data from the old pool to another one ( zfs send/receive), or do you want to connect the old pool to your new machine ( zpool import)?
  • Usually you can easily import a zpool from another system: first you import it with an altroot set (zpool import -R /prevsys poolname), optionally adjust some mountpoints, and finally import it permanently. A rough sketch follows this list.
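Roughly (pool and dataset names are placeholders):

  zpool import -R /prevsys mypool          # first: import with an altroot
  zfs set mountpoint=/data mypool/data     # optionally adjust some mountpoints
  zpool export mypool
  zpool import mypool                      # finally, import it permanently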
 
mjollnir Thanks for your reply. The new computer is the same old computer. What I need to do is recreate the zpool with 6 drives instead of 5.

  • It will boot from a zroot pool. I have, for testing, made a pool consisting of exactly one disk. I am attempting to make the system boot from that now. If I succeed, I will make a pool of six disks, repeat what I have done, and be happy with it.
  • I want to destroy the old pool because it is my understanding that I cannot just slap in a sixth disk and expect ZFS to slightly increase the capacity of the single pool. If I am wrong, I will slap in the sixth disk and be happy with that.
  • I cannot import the existing pool because it is on five disks, all of which will be used for the new pool. I will use the disks of the old pool plus one more identical disk and recreate a newer, better pool. In the mean time, the pool will sit on one disk.
 
  • You can add a new disk to an existing pool easily. RTFM zpool(8): zpool add. attach is for mirrors, don't get confused. The pool will make use of the new space.
  • You can not change the pool type from e.g. raidz2 to raidz3, this requires backup/restore.
  • Before doing anything, back up your data and always use the dry-run flag '-n' first (a dry-run sketch follows).
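For illustration only (pool and device names made up):

  zpool add -n tank da6   # dry run: prints the resulting layout, changes nothing
  zpool add -f tank da6   # the real add; -f is required when the new vdev's redundancy does not match the existing raidz2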
 
I thought vdev expansion was not a thing yet? Are you proposing he create a new vdev with just one drive in it?
 
Hopefully I do not get confused myself with what I used to do on OpenSolaris many times (I had free electricity from wind power ;) )...
IMHO if you have a raidzX pool with 5 disks, you can simply zpool add a 6th disk, and the pool expands w/o further manual intervention. Am I wrong?
 
  • You can add a new disk to an existing pool easily. RTFM zpool(8): zpool add. attach is for mirrors, don't get confused. The pool will make use of the new space.
  • You can not change the pool type from e.g. raidz2 to raidz3, this requires backup/restore.
  • Before doing anything, back up your data and always use the dry-run flag '-n' first.
Yes, I know that I can add a new disk to the pool. I have RTFMd. However, it is my understanding that this will not increase the size of the pool. From all the FMs that I have Rd, the message is and has been: destroy and recreate.

I do not recall writing anything about attaching anything (nor adding, for that matter), so don't worry, I'm not being confused there.
 
You're going to need temporary storage for the data in your current pool.
Thank you. Yes, this is what that one-disk pool is doing right now. And not only that, it's also an exercise in moving a system that boots from a pool to another pool and having said system boot from that.
 
Thank you. Yes, this is what that one-disk pool is doing right now. And not only that, it's also an exercise in moving a system that boots from a pool to another pool and having said system boot from that.
I'm confused. Is this the disk you want to add to the existing 5-disk vdev? If so, you're going to need a seventh disk for temporary storage. I suggest you start specifying "vdev" or "pool". They are different things. A pool is made up of one or more vdevs. There's a great diagram in the link I posted above: https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
 
I'm confused. Is this the disk you want to add to the existing 5-disk vdev? If so, you're going to need a seventh disk for temporary storage. I suggest you start specifying "vdev" or "pool". They are different things. A pool is made up of one or more vdevs. There's a great diagram in the link I posted above: https://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html
Sorry, my bad terminology (just as I was cursing gpart's manpage's terminology).

The existing pool consists of 5 physical disks that form a single vdev. There is another pool, consisting of a single vdev that is a single disk. To that other pool, I have copied the output of zfs send of the snapshot of the 5-disk pool. As I understand, I cannot add a sixth disk to the vdev in the original pool.

I have a shelf full of disks of varying sizes, and one of them is identical to the 5 physical ones in the original 1-vdev pool. I want to use that one as the sixth disk in the pool. The zfs snapshot has been copied to the seventh disk.

What I am now trying to do is restore (zfs term: receive) the snapshot from the seventh disk to an eighth disk to show myself that yes, it will work and it will boot. Once it does so, I will wipe the six identical disks and restore (receive) to them the copy that's on the seventh disk.
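Roughly, what I mean (da7 stands in for the eighth disk, assuming it is partitioned like the originals with efi, swap and freebsd-zfs; the pool name is just for the test):

  zpool create -R /mnt testroot da7p3
  zcat /tmp/fast/zroot-backup.gz | zfs receive -u -F testroot
  zpool set bootfs=testroot/ROOT/default testroot   # assuming the usual ROOT/default layout arrived with the stream
  # the efi partition still needs the loader copied onto it before the disk will boot
  zpool export testroot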

I hope this clarifies things.
 
I could not find any info on the FreeBSD status of vdev expansion.

If this is what you mean: some time ago I migrated a live system from a smaller ZFS mirror to a bigger one with a procedure like this:
  1. removed one drive from the mirror;
  2. replaced the removed drive with a new drive with a larger partition;
  3. allowed the system to resilver;
  4. replaced the other drive with a new and bigger one;
  5. resilvered again and voila! I had doubled my storage.
I assume the same technique will also work with a raidz configuration. Just replace the drives one by one with bigger ones and in the end the storage is upgraded. Also, the removed drives remain operational and can be kept in a drawer as a backup.
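In commands, something like this (pool and device names invented); note that the autoexpand property usually needs to be on for the pool to grow by itself once all members are bigger:

  zpool set autoexpand=on tank
  zpool replace tank da0 da6    # swap in the bigger drive, then let the resilver finish
  zpool status tank             # check progress, then repeat for each remaining drive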
 
If this is what you mean: some time ago I migrated a live system from a smaller ZFS mirror to a bigger one with a procedure like this:
  1. removed one drive from the mirror;
  2. replaced the removed drive with a new drive with a larger partition;
  3. allowed the system to resilver;
  4. replaced the other drive with a new and bigger one;
  5. resilvered again and voila! I had doubled my storage.
I assume the same technique will also work with a raidz configuration. Just replace the drives one by one with bigger ones and in the end the storage is upgraded. Also, the removed drives remain operational and can be kept in a drawer as a backup.
Thank you. I am not trying to add bigger disks. I'm trying to add one more disk.
 
Mirror vdevs are different from RAIDZ vdevs, so no, that won't work.....

I'm taking note of the pain encountered in mikkol's system expansion. Maybe some FreeBSD folk could agree with the rules I've evolved:
1. Invest in a cheap backup server that can hold a complete backup of everything from the main workstation(s). Once you have a file server you have infinite flexibility in re-installing and re-arranging your workstations.
2. Alternatively, use one SATA port for a 5-1/4" hot-swap bay. Then you can tar each filesystem onto a spare hard drive, to be restored by tar with no particular filesystem required on the drives, so no assumptions about the target operating system except that it supports tar (a rough example follows this list).
3. Use only mirror vdevs for zfs pools.
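A rough example of rule 2, assuming the swap-bay disk is mounted at /mnt/spare (names are only illustrative):

  tar -cf /mnt/spare/usr-home.tar -C / usr/home   # archive one filesystem
  tar -xf /mnt/spare/usr-home.tar -C /            # restore it later with any tar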
 
I'm taking note of the pain encountered in mikkol's system expansion. Maybe some FreeBSD folk could agree with the rules I've evolved:
1. Invest in a cheap backup server that can hold a complete backup of everything from the main workstation(s). Once you have a file server you have infinite flexibility in re-installing and re-arranging your workstations.
2. Alternatively, use one SATA port for a 5-1/4" hot-swap bay. Then you can tar each filesystem onto a spare hard drive, to be restored by tar with no particular filesystem required on the drives, so no assumptions about the target operating system except that it supports tar.
3. Use only mirror vdevs for zfs pools.
I was with you until number 3. That one is a matter of hot debate from what I've found online. I've only just now set up my first ever ZFS system, and I chose to go with 6 HDs in a RAIDZ2 vdev. That system also has two SSDs in a geom mirror, and I'm using that geom mirror for my ZIL. It works pretty well so far, but if I had to do it again, I probably would've set up a ZFS mirror vdev and pool with the two SSDs for my root filesystem.
 
I was with you until number 3. That one is a matter of hot debate from what I've found online. I've only just now set up my first ever ZFS system, and I chose to go with 6 HDs in a RAIDZ2 vdev.....
That's a lot of redundancy. Sweet. I'd be interested in seeing a benchmark of that raidz2 with 6 drives vs. 3 x 2-disk mirrors with only single disk failure protection. It would be a LOT faster (?) and my protection against loss of more than a single drive at a time comes from the backup server. In other words, invest in at least a JBOD backup server before investing too much in fancy redundancy on the workstation itself. But I may regret my approach eventually (so far I've muddled through all sorts of self-inflicted mayhem).
 
What I am now trying to do is restore (zfs term: receive) the snapshot from the seventh disk to an eighth disk to show myself that yes, it will work and it will boot. Once it does so, I will wipe the six identical disks and restore (receive) to them the copy that's on the seventh disk.

I hope this clarifies things.
It does. I think zfs send/receive should do exactly what you want. Trying it from disk 7 to disk 8 first is a great idea. I like people who test their backups by restoring from them.
 
It does. I think zfs send/receive should do exactly what you want. Trying it from disk 7 to disk 8 first is a great idea. I like people who test their backups by restoring from them.
Thank you. During my initial post here, I was struggling to restore the snapshot to the test environment. I ran into what felt like a Kafkaesque situation where every move I made limited my options: I was utterly unable to zfs receive the zroot snapshot on anything live that had a zroot pool to begin with; the complaint was about not being able to unmount /. Booting into the live environment or the single-user mode offered by the installation media got me either to a situation where the only place I could mount anything was /tmp, or one where I could not mount anything anywhere because the whole filesystem was read-only (what is the purpose of single-user mode if you cannot fiddle with an existing filesystem at all?). I could not zpool import a pool anywhere, even with the -R switch, because the -R switch, while taking a directory as an argument, insists on making a subdirectory in that place, and that place is read-only.

Given the long history of FreeBSD and its popularity among those who want reliability, I have strong faith that receiving zroot is possible and has been done before, even in the post-MBR era. Surely someone could, and perhaps would, tell me where my thinking goes wrong here?
 
You have two options for receiving a pool containing a "/" mountpoint:

1) Create/import the destination pool with an altroot set (something like /tmp/newpool while in the live cd). Setting altroot is not persistent with the pool, just for the current session, and it forces all mounts to be relative to the altroot. See zpool(8).
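For example (pool and partition names invented):

  zpool create -R /tmp/newpool newzroot da5p3   # new destination pool with an altroot
  zpool import -R /tmp/newpool newzroot         # or: import an existing destination pool with an altroot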

2) Receive the pool with -u (unmounted). This does not change the on-disk mountpoint/canmount settings; it just says don't actually mount the filesystems while receiving. A filesystem doesn't need to be mounted for ZFS to receive / update it. See zfs(8).
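Something like (stream file and pool name invented):

  zcat /path/to/zroot-backup.gz | zfs receive -u -F newzroot   # datasets are created/updated, nothing gets mounted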

Edit: There is a bug in the manual page source causing wrong formatting of the 11.3+ zfs manual on the web; updated the zfs(8) link to the last correctly formatted one.
 
...I could not mount anything anywhere because the whole filesystem was read-only (what is the purpose of single-user mode if you cannot fiddle with an existing filesystem at all?)...
You can remount the root filesystem read-write with mount -o rw /. I seem to recall this also needed -o remount on some systems, but I of course cannot remember on which systems.
 
You can remount the root filesystem read-write with mount -o rw /. I seem to recall this also needed -o remount on some systems, but I of course cannot remember on which systems.

Indeed; or on the livecd, the /tmp filesystem is writable by default.

But as I mentioned above, you don't need to mount a ZFS filesystem at all to create/modify/receive it. This feels strange if you are used to traditional formatted partitions with filesystems, where a tool (like rsync) operating through the POSIX filesystem layer (via open(2), write(2), and close(2)) can't modify an unmounted or read-only filesystem; but since the zpool/zfs commands control the "whole stack", from the byte layout on the drives/partitions to the individual files, it is possible to create/modify a dataset (a filesystem here) that isn't currently mounted.

In fact, you can have a zfs filesystem mounted read-only, and still zfs receive incremental snapshots into it. This is how I run my backups, as I don't want — can't have, actually, as the destination needs to stay in lock-step with the source — the backups modified via the filesystem layer on the backup side, but I want to be able to update (modify) the filesystem via a zfs recv.
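A minimal sketch of that setup, with invented dataset names:

  zfs set readonly=on backup/ws1                                        # backups cannot be touched through the filesystem layer
  zfs send -i @monday tank/home@tuesday | zfs receive backup/ws1/home   # but an incremental receive still lands (backup/ws1/home was seeded with a full send earlier)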
 