Solved: send/receive snapshots when «path» of source changes

Hello,

I’m in the process of synchronizing some data between a remote server (13.x-RELEASE) and a local server (14.0-RELEASE). I’ll then ship the local server’s SSD to the remote server (full project here).

Currently I’m running something like this:

Bash:
# Take new snapshots on the remote side
ssh ${OPT} ${TARGET} zfs snapshot sas/compat_devuan01@${D}
ssh ${OPT} ${TARGET} zfs snapshot -r sas/vm@${D}
# Incremental sends from the remote pool, rate-limited with pv, received into the local ssd pool
ssh ${OPT} ${TARGET} "zfs send -v -i @$PREV sas/compat_devuan01@${D}|pv -qL 15M" | zfs receive ssd/compat_devuan01
ssh ${OPT} ${TARGET} "zfs send -vR -i @$PREV sas/vm@${D}|pv -qL 15M" | zfs receive ssd/vm

For the last sync, the SSD will act as the boot drive in the remote server, and the remote server’s native zpool «sas» will be mounted at /mnt/sas.

I just want to make sure I can run a last send/receive between the old source zpool (sas) and the new destination zpool (ssd), even though the source zpool is now hooked to a different system. And I don’t want a last-minute surprise :)
Something like this, maybe?

Bash:
zfs snapshot sas/compat_devuan01@last
zfs snapshot -r sas/vm@last
zfs send -v -i @$PREV sas/compat_devuan01@last | zfs receive ssd/compat_devuan01
zfs send -vR -i @$PREV sas/vm@last | zfs receive ssd/vm
 
I think the old pool needs to be imported with altroot so that it doesn't mount over the new disk's mountpoints, unless you use fstab entries to control mounting. I didn't think mounting was a requirement to send/recv it, but I haven't tested. `zfs recv` likely needs the -F flag, as the destination is not read-only and booting likely altered it. Keeping it read-only until after the final transfer would also work, but then you likely wouldn't be booting from that dataset.
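Roughly, that could look like this (a minimal sketch using the pool names from this thread, not something I've tested here):

Bash:
# Import the old pool under an altroot so its datasets mount below /mnt/sas
# instead of shadowing the mountpoints of the new boot pool
zpool import -o altroot=/mnt/sas sas

# -F rolls the destination back to its most recent snapshot before receiving,
# discarding any changes made after booting from it
zfs send -v -i @$PREV sas/compat_devuan01@last | zfs receive -F ssd/compat_devuan01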

Depending on how booting is set up, you would want to make sure your bootable drive is referenced properly (fstab vs. ZFS boot properties, driver-dependent /dev entry vs. GPT/BSD label, etc.).
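For example, a few sanity checks before relying on the new drive as the boot device (illustrative only; adapt to how your system actually boots):

Bash:
# Dataset the loader will boot from on the new pool
zpool get bootfs ssd

# Partition layout and labels, to confirm what fstab and the boot blocks point at
gpart show -l

# Loader and fstab entries on the new root should reference the ssd pool
# or its labels, not the old sas pool's devices
grep -i root /boot/loader.conf /etc/fstab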

Though it would likely work, it would be preferable not to boot from either medium during the transfer. Booting from the old media means the filesystem will be live, so make sure services are not running; booting from the new media means you are replacing the filesystem while running from it. That is more of a concern for program crashes and data inconsistencies in running software during the transfer than for ZFS inconsistencies. If the datasets being read/written are not the datasets used for booting, then this should not matter.
 
Maybe not necessary in this case, but I once made the mistake of not using --replicate for a replication stream package, when doing so would have made things much simpler for me.
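(For later readers: -R in the commands above is the short form of --replicate; it packages descendant datasets, their snapshots and properties into one stream, e.g.:)

Bash:
# Replication stream: carries descendant datasets, snapshots and properties
zfs send -R -i @$PREV sas/vm@last | zfs receive -F ssd/vm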
 
Hi,

Thank you for your remarks.
A few days ago I was able to test my theory and it just worked great, so the first post includes the answer :)
I can safely import the source storage with «altroot» and boot from the destination storage, because none of the filesystems I intend to send/receive are «active».
And IIRC mounting is not compulsory, but I’ll do it anyway because some particular data need to be rsync-ed.
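For completeness, the final pass amounts to something like this (same dataset names as above; the -F on receive only matters if the destination was written to after booting from it):

Bash:
# Old pool imported under an altroot so it doesn't shadow the new root
zpool import -o altroot=/mnt/sas sas

# Last incremental pass from the old pool to the new boot pool
zfs snapshot sas/compat_devuan01@last
zfs snapshot -r sas/vm@last
zfs send -v -i @$PREV sas/compat_devuan01@last | zfs receive -F ssd/compat_devuan01
zfs send -vR -i @$PREV sas/vm@last | zfs receive -F ssd/vm

# ...followed by an rsync of the extra data mentioned above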
 