ZFS "cloning" a zfs boot disk on a usb drive

I have FreeBSD 12.2 on a Lenovo PC that has only one 3.5'' drive bay. The drive is a 500GB Seagate.
According to smartctl, it is in danger of failing. I'm taking this warning at face value, and I want to
replace the drive with an identical new one. The procedure I've thought of is:

1. Connect the new drive (I have an enclosure for it) to one of the USB 3.0 ports.
It becomes known as /dev/da0.
2. Copy the GPT partition table of the old drive to the new one:
gpart backup ada0 | gpart restore -F da0
3. Use ZFS's replace facility:
zpool replace zroot ada0p3 da0p3
4. Add boot block:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
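Between steps 3 and 4 the replace has to finish resilvering before the USB drive can be detached; a minimal sketch of checking on it, assuming the pool and device names used above:

```shell
# Watch the resilver started by `zpool replace zroot ada0p3 da0p3`.
# While it runs, zpool status shows a "replacing" vdev containing
# both ada0p3 and da0p3; when it completes, only da0p3 remains.
zpool status zroot

# Sanity-check that the partition table really made it onto the new disk:
gpart show da0
```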

My question is: if I now unmount and remove the USB drive, shut down the system, and replace the old internal Seagate with the new one, will the system boot? Will there be a problem having to do with "ada0" vs. "da0"?
(Or any other issue that you can think of?)
Thanks for any help.
 
I have FreeBSD 12.2 on a Lenovo PC that has only one 3.5'' drive bay. The drive is a 500GB Seagate.
According to smartctl, it is in danger of failing. I'm taking this warning at face value, and I want to
replace the drive with an identical new one. The procedure I've thought of is:

1. Connect the new drive (I have an enclosure for it) to one of the USB 3.0 ports.
It becomes known as /dev/da0.
2. Copy the GPT partition table of the old drive to the new one:
gpart backup ada0 | gpart restore -F da0
3. Use ZFS's replace facility:
zpool replace zroot ada0p3 da0p3
4. Add boot block:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
I have done this, but with a slightly different procedure:

1. Connected the USB drive to laptop
2. Manually partitioned that drive with gpart (I had a UEFI boot partition and a ZFS partition). The new ZFS partition was also bigger.
3. Copied the /boot/loader.efi boot loader to the EFI partition as described in the manuals.
4. Attached the new ZFS partition to the existing pool as a mirror.
5. Let it resilver.
6. Removed the old drive and booted the system from the new drive. It was now a perfect copy, with only the old drive missing from the mirror.
7. Removed old drive from the ZFS pool.
8. Done, with the storage capacity upgraded as a bonus. The old drive remained in the drawer as a cold backup.
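In command form, the mirror-based steps above might look roughly like this; a sketch only, with the pool name (zroot) and partition names (ada0p3 old, da0p2 new) as illustrative assumptions:

```shell
# Step 4: attach the new ZFS partition as a mirror of the old one.
# Note: attach, NOT add -- `zpool add` would stripe a new vdev instead.
zpool attach zroot ada0p3 da0p2

# Step 5: wait until zpool status reports the resilver done and state ONLINE.
zpool status zroot

# Step 7 (after booting from the new drive): drop the old device.
zpool detach zroot ada0p3
```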
 
That looks like a recipe for disaster to me. You should read up some more on ZFS data management. For example: how do you imagine your data will get copied in this scenario?

(edit): this works well when you replace a drive within a mirrored setup, but from what I can tell your ZFS pool consists of one drive. However... I'm not too sure if my information is fully up to date on this part.

Still, don't "just" copy your partition table; create a new one instead. That ensures nothing weird can happen: if your new disk isn't exactly the same size, you're taking some risks with the replacement.
 
I have done this, but with a slightly different procedure:

1. Connected the USB drive to laptop
2. Manually partitioned that drive with gpart (I had a UEFI boot partition and a ZFS partition). The new ZFS partition was also bigger.
3. Copied the /boot/loader.efi boot loader to the EFI partition as described in the manuals.
4. Attached the new ZFS partition to the existing pool as a mirror.
5. Let it resilver.
6. Removed the old drive and booted the system from the new drive. It was now a perfect copy, with only the old drive missing from the mirror.
7. Removed old drive from the ZFS pool.
8. Done, with the storage capacity upgraded as a bonus. The old drive remained in the drawer as a cold backup.
Thanks. I believe your step 5 is equivalent to my "zpool replace".
But in step 6, was the new drive still connected to USB? And in step 7 was the old drive inside the
laptop?
 
That looks like a recipe for disaster to me. You should read up some more on ZFS data management. For example: how do you imagine your data will get copied in this scenario?

(edit): this works well when you replace a drive within a mirrored setup, but from what I can tell your ZFS pool consists of one drive. However... I'm not too sure if my information is fully up to date on this part.

Still, don't "just" copy your partition table; create a new one instead. That ensures nothing weird can happen: if your new disk isn't exactly the same size, you're taking some risks with the replacement.
Except for step 1, the procedure I described is taken verbatim from the manual, section 20.3.5.
Am I missing something? (And the new drive is identical to the old one.)
 
Thanks. I believe your step 5 is equivalent to my "zpool replace".
But in step 6, was the new drive still connected to USB? And in step 7 was the old drive inside the
laptop?
Actually I used a cheap USB-to-SATA adapter for that. I don't remember exactly, but I think I did not try to boot from USB; I just replaced the drive in the laptop after the resilver. Later I made another working clone of that drive with the zfs send procedure.

I am not sure if zpool replace is exactly the same. Probably not. My procedure felt completely safe. Even now I can take that old drive from the drawer and boot it up.
 
The procedure I've thought of is:

1. Connect the new drive (I have an enclosure for it) to one of the USB 3.0 ports.
It becomes known as /dev/da0.
2. Copy the GPT partition table of the old drive to the new one:
gpart backup ada0 | gpart restore -F da0
3. Use ZFS's replace facility:
zpool replace zroot ada0p3 da0p3
4. Add boot block:
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0
Why so complicated? Just plug in the new disk [1], boot a FreeBSD installation image, and dd(1) from the old disk to the new one.

[1] If the PC has only one drive bay, open the case and plug the new disk into a free SATA port on the motherboard. No need to involve a USB enclosure.
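For the record, a plausible dd invocation for this (device names are assumptions: old disk ada0, new disk ada1 on the spare SATA port), run from the installation image so neither disk is in use:

```shell
# Whole-disk copy; conv=noerror,sync keeps going past read errors,
# padding unreadable blocks with zeros -- which silently loses whatever
# data sat on a bad block of the failing source disk.
dd if=/dev/ada0 of=/dev/ada1 bs=1m conv=noerror,sync status=progress
```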
 
Why so complicated? Just plug in the new disk [1], boot a FreeBSD installation image, and dd(1) from the old disk to the new one.

[1] If the PC has only one drive bay, open the case and plug the new disk into a free SATA port on the motherboard. No need to involve a USB enclosure.
I do not recommend dd in this case, especially when there are bad blocks already. Using ZFS is much smarter, probably faster, and gives a better result (it can resize the pool).
 
Actually I used a cheap USB-to-SATA adapter for that. I don't remember exactly, but I think I did not try to boot from USB; I just replaced the drive in the laptop after the resilver. Later I made another working clone of that drive with the zfs send procedure.

I am not sure if zpool replace is exactly the same. Probably not. My procedure felt completely safe. Even now I can take that old drive from the drawer and boot it up.
I am also using a USB-to-SATA adapter/enclosure with the new drive inside it.

What is different about "zpool replace" is that the new drive doesn't have to belong to the pool,
as it did in your case. (At least that's how I read sec. 20.3.5 of the manual.) Otherwise, the two procedures look equivalent.
 
Why so complicated? Just plug in the new disk [1], boot a FreeBSD installation image, dd(1) from old to new disk.

[1] If the PC has only one drive bay, open the enclosure, plug in the new disk to a free disk interface port. No need to involve a USB enclosure.
[1] is a good idea. There are free SATA connectors on the motherboard. But I need a cable ...
 
What is different about "zpool replace" is that the new drive doesn't have to belong to the pool,
as it did in your case. (At least that's how I read sec. 20.3.5 of the manual.) Otherwise, the two procedures look equivalent.
I am not sure here. It feels risky, but I have not tried it. zpool replace is for replacing failed drives in a pool. If your data is important, I wouldn't experiment with that.

The by-the-book method, which I have also used several times is:

1. Partition a new disk and create a new pool on it (with a different name)
2. Create recursive snapshots of your datasets on the original pool
3. Using zfs send and zfs receive move the snapshots to the new pool
4. Set the new pool bootable

P.S. I have described it in detail somewhere in my posts.
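As a sketch, those four steps could look like this; the pool and dataset names (newpool, zroot/ROOT/default, da0p3) are assumptions based on FreeBSD's default layout, not taken from the posts:

```shell
# 1. Partition the new disk, then create a pool on it under a new name:
zpool create newpool /dev/da0p3

# 2. Take a recursive snapshot of everything on the original pool:
zfs snapshot -r zroot@migrate

# 3. Replicate the whole hierarchy, snapshots and properties included:
zfs send -R zroot@migrate | zfs receive -F newpool

# 4. Make the new pool bootable by pointing bootfs at the root dataset:
zpool set bootfs=newpool/ROOT/default newpool
```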
 
I am not sure here. It feels risky, but I have not tried it. zpool replace is for replacing failed drives in a pool. If your data is important, I wouldn't experiment with that.

The by-the-book method, which I have also used several times is:

1. Partition a new disk and create a new pool on it (with a different name)
2. Create recursive snapshots of your datasets on the original pool
3. Using zfs send and zfs receive move the snapshots to the new pool
4. Set the new pool bootable

P.S. I have described it in detail somewhere in my posts.
I understand what you're saying, but "zpool replace" is specifically meant to replace a *functioning* device. Take a look at section 20.3.5 of the manual.
 
I understand what you're saying, but "zpool replace" is specifically meant to replace a *functioning* device. Take a look at section 20.3.5 of the manual.
OK then. I have no experience with this. zfs send is probably the safest way to do it.
 
OK then. I have no experience with this. zfs send is probably the safest way to do it.
The doubt remaining in my mind: after the "replace" (or after your procedure too, I believe), the pool is associated with a different device name, that of the disk on the USB adapter. When I then shut down and swap the old internal disk for the one from the USB adapter,
will ZFS at boot still look for the pool under the USB device name, and thus fail to boot?
 
The doubt remaining in my mind: after the "replace" (or after your procedure too, I believe), the pool is associated with a different device name, that of the disk on the USB adapter. When I then shut down and swap the old internal disk for the one from the USB adapter,
will ZFS at boot still look for the pool under the USB device name, and thus fail to boot?
No. The boot loader scans the partition tables and finds all bootable ZFS partitions, then boots one of them. Booting is no problem; you can move ZFS pools between interfaces.

Just personally I have not tried this replace. Consult somebody who has first-hand experience, or try it on a test machine. I can only vouch for the mirror (zpool attach) and zfs send methods; I have used them both and feel comfortable with them. Be sure you do not mistype add instead of attach, or you are in trouble.

EDIT: Here is my post from last year.
 