The original title was going to be "Does anybody back up their ZFS server?" but user Eric A. Borisch claims to do just that in this post.
The issue is that a full send/receive using zfs send -R causes the mountpoints of the source pool to be included in the datastream. On FreeBSD 12 and older versions of OpenZFS, it is not possible to override the mountpoint property with the zfs receive command. The overall effect is that the receiving system becomes unusable, to the point of needing a rescue disk, if the datastream includes vital mount points already in use. This has been at least partially fixed in OpenZFS.

The following example in Section 20.4.7.2, Sending Encrypted Backups over SSH, of the handbook will make the receiving system unusable if the source includes /lib with an older version of ZFS:
Code:
% zfs snapshot -r mypool/home@monday
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup
This has been reported as Bug 210222 against the handbook. I noticed that the other examples have had the -R option removed, so maybe that example was overlooked; a sketch of the -R-less form follows.
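To illustrate (this is not from my notes): a plain zfs send without -R sends only the named snapshot and, to my understanding, carries no dataset properties, so the receiver's mountpoints are left alone. A minimal sketch using the handbook's names:

Code:
% zfs snapshot -r mypool/home@monday
% zfs send mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup

The trade-off is that child datasets are no longer replicated automatically; each would need its own send.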
What I have tried (skipping a previous 2018 attempt where I had the same problem and ended up not upgrading):

I have been testing with my bootpool dataset (/boot from the old machine; the exported mountpoint is /bootpool) because it is smaller and, by happenstance, does not conflict with the directory tree of the new (FreeBSD 12.2) system.
On old machine:
Code:
# zfs snapshot -r bootpool@2021-04-19.boot
# mount -t ext2fs /dev/da0p1 /mnt
# zfs send -R -D -v bootpool@2021-04-19.boot > /mnt/granny.boot.2021-04-19.zfs
# umount /mnt
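As an aside, if I am reading the man page right, the base system ships zstreamdump(8), which reads a replication stream on standard input and prints its records; it should be able to sanity-check the file on the carry disk before anything else is done. A sketch:

Code:
# zstreamdump < /mnt/granny.boot.2021-04-19.zfs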
On new machine:
Code:
# mount -t ext2fs /dev/ada1p1 /mnt/
# zfs receive zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
This wrote /bootpool in the root directory. If I had run that command with my old zroot, the system would have become unusable. I omitted two previous attempts using the zfs receive -d and zfs receive -e options. By my reading of zfs(8), those options have no effect on the mountpoint property anyway (they change only how the received dataset is named). Because I did not clobber the ZFS library, I am able to clean up failed attempts with:
Code:
# zfs destroy -r zroot/granny.boot
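Two checks that would have made these experiments safer (again my reading of zfs(8), not something I ran at the time): zfs receive has a -n flag that, combined with -v, prints what would be created without writing anything, and zfs get shows the mountpoint that actually arrived in the stream (run before the destroy above, of course). A sketch with the names from above:

Code:
# zfs receive -n -v zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
# zfs get -r mountpoint zroot/granny.boot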
An old forum thread suggested a possible work-around of using the -u option to prevent immediate mounting. I found this did not work (Update: there was a transcription error here: I had actually used the zpool set command, not zfs set, having missed the distinction in the man page; the transcript below shows the command as I actually ran it):
Code:
# zfs receive -u zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
# zfs list
# zpool set mountpoint=zroot/granny.boot zroot/granny.boot
cannot open 'zroot/granny.boot': invalid character '/' in pool name
# zfs destroy -r zroot/granny.boot
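For completeness, my understanding of what the forum thread actually intended: zfs set (which operates on datasets) rather than zpool set (which operates on pools and rejects any name containing '/'), with the mountpoint given as an absolute path. A sketch I have not yet run on these machines:

Code:
# zfs receive -u zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
# zfs set mountpoint=/zroot/granny.boot zroot/granny.boot
# zfs mount zroot/granny.boot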