Solved Best practice for backing up/restoring data with FreeBSD ZFS?

The original title was going to be "Does anybody back up their ZFS server?" but user Eric A. Borisch claims to do just that in this post.

The issue is that a full send/receive using zfs send with the -R option causes the mountpoints of the source pool to be included in the datastream. On FreeBSD 12 and older versions of OpenZFS, it is not possible to override the mountpoint property with the zfs receive command. The overall effect is that the receiving system becomes unusable to the point of needing a rescue disk if the datastream includes vital mount points already in use. This has been at least partially fixed in OpenZFS.

The following example in Section 20.4.7.2, Sending Encrypted Backups over SSH, of the handbook will make the receiving system unusable if the source includes /lib with an older version of ZFS:

Code:
% zfs snapshot -r mypool/home@monday
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -dvu recvpool/backup

This has been reported as Bug 210222 against the handbook. I noticed that the other examples have had the -R option removed, so maybe that example was overlooked.
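Before receiving such a replication stream on a live system, it can help to preview what it will do. A minimal sketch, reusing the pool names from the handbook example above (zfs recv -n is a dry run, so nothing is actually written):

Code:
% zfs get -r -o name,value mountpoint mypool/home
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -nduv recvpool/backup

The first command lists the mountpoints that the -R stream will embed; the second shows which datasets would be created on the receiving side.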

What I have tried (skipping a previous 2018 attempt where I had the same problem and ended up not upgrading):

I have been testing with my bootpool dataset (/boot from the old machine, exported mountpoint is /bootpool) because it is smaller, and by happenstance does not conflict with the directory tree of the new (FreeBSD 12.2) system.

On old machine:
Code:
# zfs snapshot -r bootpool@2021-04-19.boot
# mount -t ext2fs /dev/da0p1 /mnt
# zfs send -R -D -v bootpool@2021-04-19.boot > /mnt/granny.boot.2021-04-19.zfs
# umount /mnt

On new machine:
Code:
# mount -t ext2fs /dev/ada1p1 /mnt/
# zfs receive zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs

This wrote /bootpool in the root directory. If I had run that command with my old zroot, the system would have become unusable. I omitted two previous attempts using the zfs receive -d and zfs receive -e options. By my reading of zfs(8), those options have no impact on the mountpoint property anyway (they change only the name of the received dataset). Because I did not clobber the ZFS libraries (in /lib), I am still able to clean up failed attempts with:
Code:
# zfs destroy -r zroot/granny.boot

An old forum thread suggested a possible work-around of using the -u option to prevent immediate mounting. I found this did not work (Update: there is a transcription error here: I actually used zpool set, not zfs set (I misread the man page)):
Code:
# zfs receive -u zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
# zfs list
#zfs set moutpoint=zroot/granny.boot zroot/granny.boot
cannot open 'zroot/granny.boot': invalid character '/' in pool name
# zfs destroy -r zroot/granny.boot
 
Disclaimer: it's late over here, we're having a fun evening so... not 100% sober.

On FreeBSD 12 and older versions of OpenZFS, it is not possible to override the mountpoint property with the zfs receive command. The overall effect is that the receiving system becomes unusable to the point of needing a rescue disk if the datastream includes vital mount points already in use.
To my knowledge versions 12.2 and prior didn't use OpenZFS by default, and as a result... I don't recognize myself in this scenario and I have restored a few servers through external backups over ZFS this way.

The following example in Section 20.4.7.2, Sending Encrypted Backups over SSH, of the handbook will make the receiving system unusable if the source includes /lib with an older version of ZFS:
Wait... are you backing up a whole system using only one dataset? And then blaming ZFS for mishaps during restoration? Sorry, but that would be an example of mismanagement IMO.
 
Here's the post you refer to. (Link broken in your post.)

To restore a root pool on a new system (off the top of my head; I think I've got everything; a command sketch follows the list):
  1. Bring up new system with live cd / usb
  2. Partition the new boot drive and make sure you've got the appropriate loader installed on the boot disk (I'm considering that outside of the 'how to zfs' scope here; you would need to do this for any filesystem restore onto a new system.)
  3. Create the new (empty) pool with zpool create -R /altroot/path <newpool> [...]. This imports the pool with an altroot value set, so created filesystems (with, for example, mountpoint=/) cannot interfere with your normal mounts. (Alternatively import your already-created pool with zpool import -R /altroot/path <newpool>)
  4. Make your zfs send -R <originalpool> bytestream available in some fashion. (Can be saved on a disk, streamed directly from other drives in the system, streamed over SSH, etc.)
  5. Pipe the stream into zfs recv <newpool>. If it wasn't created / imported with altroot, use '-u' to prevent mounting.
  6. Make sure the pool's bootfs property is set. It needs to be poolname/root/dataset/path.
  7. Adjust mountpoints if needed on the new pool (you shouldn't need to unless you're intentionally changing things from your original system's setup). A setting of mountpoint=/var will be (temporarily, during this import) mounted at /altroot/path/var, but at /var "for real" on the next import without an altroot setting.
  8. Reboot and enjoy.
The key point here is that if you are receiving filesystems with mountpoints that are already existing (like /) on your system, you're best off with an altroot (if you can, which you can here on a new system running off live media), or unmounted with zfs recv -u.
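A minimal sketch of steps 3 through 6 as run from the live environment; the pool name, disk devices, stream file, and boot-environment dataset below are placeholders, and -F on the recv is only safe here because the pool was just created empty:

Code:
# zpool create -R /tmp/restore newpool mirror ada0p3 ada1p3
# zfs recv -Fduv newpool < /mnt/backup/oldpool.zfs
# zpool set bootfs=newpool/ROOT/default newpool
# zfs list -o name,mountpoint -r newpool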

Setting your mountpoint ( zfs set moutpoint=zroot/granny.boot zroot/granny.boot) didn't work since you need an absolute path (starts with '/') when setting a mountpoint.
 
Wait... are you backing up a whole system using only one dataset? And then blaming ZFS for mishaps during restoration? Sorry, but that would be an example of mismanagement IMO.

That worked just fine when I was ingesting old hard-drives into ZFS using dump and restore. I even had a valid reason for copying everything in at least one case. The problem is that the ZFS stream includes mount point information that the older tools do not.

Since the mountpoint override option appears to have been fixed in OpenZFS, I may install FreeBSD 13 instead.
 
The problem is that the ZFS stream includes mount point information that the older tools do not.
Again, you can receive into a pool with altroot set and adjust the mount points before export / re-import. Or receive with -u. (Your command above failed because you need an absolute path for a mountpoint.)
 
I've never really got on with -R. It does a bunch of stuff I don't want it to. I could probably use it if I sat down and read the man page but I've never bothered.

Generally most of my systems have a single dataset for the system that will be something like pool/ROOT/system. For a specific server I usually see little benefit in a dozen different datasets for parts of the os that will only contain half a dozen files, but will usually have something like pool/data for actual data storage (websites/mail/etc). This also makes a backup system with 20 different servers on it a lot easier to manage.

My backups are generally just some sort of simple custom script that sends the necessary datasets off one by one to my backup system. In most cases this is controlled by a custom property on the dataset (e.g. net.company:backup). I have an on-site ZFS backup system, and this pretty much just replicates all datasets to a second ZFS system off-site.
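As a rough sketch, such a property-driven pass might look something like this; the pool name, backup host, and target dataset layout are placeholders, and incremental handling is left out:

Code:
#!/bin/sh
# Send every dataset tagged net.company:backup=on to the backup host.
SNAP=$(date +%Y-%m-%d)
zfs list -H -o name,net.company:backup -r pool |
awk '$2 == "on" { print $1 }' |
while read fs; do
    zfs snapshot "${fs}@${SNAP}"
    zfs send "${fs}@${SNAP}" | \
        ssh backup@backuphost zfs recv -u "backup/myserver/${fs#pool/}"
done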

For restore I usually just follow the "ZFS madness" instructions (a thread on here somewhere that gave some good info on creating a ZFS system by hand, which I usually use for new systems as well). This requires booting into a live cd, partitioning the disks how I want and creating my pool, but then I simply pull the datasets from backup instead of creating them and extracting FreeBSD. I can do this simply by running nc -l 1234 |zfs recv pool/ROOT/system on the live cd, then zfs send backup/machine/system |nc live-system-ip 1234 on the backup system. At this point I can pretty much just boot back into the fully restored system (after setting bootfs and double checking I put bootcode on the disks).

I find this fairly straightforward for myself and I know exactly what's going on. There's no issue with recursive send removing snapshots on the destination, no alt-root confusion, no random stuff getting mounted on my backup system in places I don't want it, etc.
 
I had started my own thread on zfs and backup, but then, seeing zero feedback, I stopped.
Replication, whether local (another BSD), remote (another BSD), or to iSCSI (NAS), is just one of the strategies I adopt.
 
I've never really got on with -R. It does a bunch of stuff I don't want it to. I could probably use it if I sat down and read the man page but I've never bothered.

[...]

I find this fairly straightforward for myself and I know exactly what's going on. There's no issue with recursive send removing snapshots on the destination, no alt-root confusion, no random stuff getting mounted on my backup system in places I don't want it, etc.
It's not zfs send -R that's removing snapshots, it's using that combined with zfs recv -F. It's the recv -F that I really try to avoid, as you're more likely to lose what you wanted to preserve. (If a filesystem was deleted on the source in error, and a scheduled backup with zfs send -R | zfs recv -F ran before the error was noticed, you've now lost the deleted filesystem on both source and backup.)

zfs send -R can be very nice if you want to back up a tree of filesystems; or if you're using a number of properties that you want carried over; but if you've configured down to one filesystem to back up, it certainly isn't as beneficial. (But then, you're likely not leveraging boot environments, one of the best features of root-on-zfs.)

Yes, you do need to pay attention to what happens on the receive side with mountpoints; using altroot when doing things like a system restore, where the system is already not in a normal operating mode, is tailor made for making this easier to do. FreeBSD 13 adds zfs-receive(8) features of -o property=value and -x property that can help reduce the complexity of this by overwriting or ignoring certain properties embedded in the stream.
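For example, something like this (a sketch based on the handbook command quoted earlier; only the receive-side flags are new):

Code:
% zfs send -R mypool/home@monday | ssh someuser@backuphost zfs recv -duv -x mountpoint -o readonly=on recvpool/backup

Here -x mountpoint drops the mountpoints carried by the stream, and -o readonly=on sets the received copies read-only.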
 
I've never really got on with -R. It does a bunch of stuff I don't want it to. I could probably use it if I sat down and read the man page but I've never bothered.

The man page is very terse on this subject, and does not explicitly warn that mountpoint information is included in the datastream:

-R, --replicate
    Generate a replication stream package, which will replicate the specified filesystem, and all descendent file systems, up to the named snapshot. When received, all properties, snapshots, descendent file systems, and clones are preserved.

    If the -i or -I flags are used in conjunction with the -R flag, an incremental replication stream is generated. The current values of properties, and current snapshot and file system names are set when the stream is received. If the -F flag is specified when this stream is received, snapshots and file systems that do not exist on the sending side are destroyed.

Again, you can receive into a pool with altroot set and adjust the mount points before export / re-import. Or receive with -u. (Your command above failed because you need an absolute path for a mountpoint.)

The altroot property is described in the zpool(8) man page and applies on a per-pool, not per-dataset, basis, so it is of no use if the system has only one pool.

No, my command still failed even after using an absolute path for the mountpoint, because the zpool set command also operates on a per-pool basis (note: I had transcribed the command incorrectly above: zfs instead of zpool):
Code:
root@janis:~ # zpool set mountpoint=/zroot/granny.boot zroot/granny.boot
cannot open 'zroot/granny.boot': invalid character '/' in pool name
root@janis:~ # zpool set mountpoint=/zroot/granny.boot /zroot/granny.boot
cannot open '/zroot/granny.boot': invalid character '/' in pool name


I had started my own thread on zfs and backup, but then, seeing zero feedback, I stopped.
Replication, whether local (another BSD), remote (another BSD), or to iSCSI (NAS), is just one of the strategies I adopt.

I looked into using a binary diff utility when I had plans to back up all of the computers in one household (many running Windows, which does not support dump). Plans kind of fell through when my brother got a gaming computer with more storage than my fileserver. I also had issues with what I believe were too many buffer underruns when trying to run dump directly to DVD+R disks.

My current backup server plans are to use zfs send to an offsite, offline, encrypted backup server (using a portable hard drive): If I can figure out how to properly use zfs receive. I have also avoided upgrading my "desktop" hard drives over the years, to force me to keep the amount of data manageable. The fileserver I am setting up has only 500GB drives, in part so that incremental backups will fit comfortably in (doubly redundant -- 5 copies total, taking single redundancy on the fileserver into account) 2TB on the backup server. Bulk data from my Internet downloading hobby (or potential video editing) has to be handled separately (and does not require the same redundancy).

It occurred to me that I can try OpenZFS on my backup server in parallel to see if that will resolve my issues.
 
I looked into using a binary diff utility ...
So try zpaq or zpaqfranz :)

My current backup server plans are to use zfs send to an offsite, offline, encrypted backup server (using a portable hard drive)
For local hard disk: zpaqfranz with encryption

For WAN (copying to "something" via ssh) I use sanoid / syncoid, every hour, for servers and VirtualBox machines (note: FreeBSD <13, not OpenZFS):

Code:
if ping -q -c 1 -W 1 test.francocorbelli.com >/dev/null; then
    /bin/date +"%R ---------- PING => replica"

    /usr/local/bin/syncoid -r --sshkey=/root/script/root_backup --identifier=antoz2 zroot/interna root@test.francocorbelli.com:zroot/copia_rambo_interna
    /usr/local/bin/syncoid -r --sshkey=/root/script/root_backup --identifier=bakrem tank/d root@test.francocorbelli.com:zroot/copia_rambo
    /bin/date +"%R ---------- REPLICA locale: fine replica su backup"
else
    /bin/date +"%R backup server kaputt!"
fi

It's not a good way to keep "forever to forever" copies; in fact I use it in conjunction with zpaqfranz file replication (which keeps versions forever, without purging snapshots).
 
You're correct, in the one pool case, altroot won't help you if you only want to impact some filesystems. In the system restore case, it works well; not so much for a backup server / backup copy on the same server.

No, my command still failed even after using an absolute path for the mountpoint, because the zpool set command also operates on a per-pool basis (note: I had transcribed the command incorrectly above: zfs instead of zpool):
Code:
root@janis:~ # zpool set mountpoint=/zroot/granny.boot zroot/granny.boot
cannot open 'zroot/granny.boot': invalid character '/' in pool name
root@janis:~ # zpool set mountpoint=/zroot/granny.boot /zroot/granny.boot
cannot open '/zroot/granny.boot': invalid character '/' in pool name

Aha. The transcription error hid the actual error: the command needs to be zfs set mountpoint=[...], not zpool set mountpoint=[...].
 
My current backup server plans are to use zfs send to an offsite, offline, encrypted backup server (using a portable hard drive): If I can figure out how to properly use zfs receive
I do this exact thing, and it works very well once you've got it up and running. In this case, it is a separate pool that you can zpool import with an altroot.
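A minimal sketch of that workflow, assuming the pool on the portable drive is called backuppool and the snapshot and target names are placeholders:

Code:
# zpool import -R /mnt/backup backuppool
# zfs send -R zroot@2021-04-19 | zfs recv -duv backuppool/granny
# zpool export backuppool

The altroot keeps any mountpoints carried by the stream confined under /mnt/backup until the pool is exported again.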
 
Corrected version of the -u work-around (zfs list recommended after the first two commands to see what is going on):
Code:
# zfs receive -u zroot/granny.boot < /mnt/granny.boot.2021-04-19.zfs
# zfs set mountpoint=/zroot/granny.boot zroot/granny.boot
# zfs mount zroot/granny.boot

Seems simple enough, but when I did it for the complete backup, I found out why everybody was talking about scripting:
Code:
# zfs receive -u zroot/granny < /mnt/granny.2021-04-19.zfs
# zfs list
... Output omitted; but it brought in a bunch of datasets with mountpoints in /* instead of /zroot/*
# zfs set mountpoint=/zroot/granny/zroot zroot/granny
# zfs set mountpoint=/zroot/granny/ zroot/granny/ROOT/default
# zfs set mountpoint=/zroot/granny/tmp zroot/granny/tmp
# zfs set mountpoint=/zroot/granny/usr zroot/granny/usr
# zfs set mountpoint=/zroot/granny/usr/home zroot/granny/usr/home
# zfs set mountpoint=/zroot/granny/usr/ports zroot/granny/usr/ports
# zfs set mountpoint=/zroot/granny/usr/src zroot/granny/usr/src
# zfs set mountpoint=/zroot/granny/var zroot/granny/var
# zfs set mountpoint=/zroot/granny/var/audit zroot/granny/var/audit
# zfs set mountpoint=/zroot/granny/var/crash zroot/granny/var/crash
# zfs set mountpoint=/zroot/granny/var/log zroot/granny/var/log
# zfs set mountpoint=/zroot/granny/var/mail zroot/granny/var/mail
# zfs set mountpoint=/zroot/granny/var/tmp zroot/granny/var/tmp

Omitting the -R option on the sending side would also require a similar amount of scripting (on the sending side), since the top-level dataset is a trivial dataset containing all of the other datasets (but not your actual data).
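That sending-side scripting might look roughly like this (a sketch; the pool name and snapshot label are the ones from earlier in this thread, and a plain zfs send without -R or -p carries no properties, so no mountpoints come along):

Code:
#!/bin/sh
# Send each dataset under zroot as its own stream, one file per dataset.
SNAP=2021-04-19
for fs in $(zfs list -H -o name -r zroot); do
    zfs send "${fs}@${SNAP}" > "/mnt/$(echo "${fs}" | tr / _).${SNAP}.zfs"
done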

It's not zfs send -R that's removing snapshots, it's using that combined with zfs recv -F. It's the recv -F that I really try to avoid, as you're more likely to lose what you wanted to preserve. (If a filesystem was deleted on the source in error, and a scheduled backup with zfs send -R | zfs recv -F ran before the error was noticed, you've now lost the deleted filesystem on both source and backup.)
Point well taken. I can see how using the zfs recv -F command can defeat the whole point of offline backups (other than the power savings of course). In my reading, a typical reason for needing the -F option for incremental backups is access times invalidating the previous snapshot on the backup server. So your options are to zfs rollback to the previous snapshot; or simply prohibit the backup server from modifying the dataset with either zfs set atime=off or even zfs set readonly=on.

Code:
# zfs set readonly=on zroot/granny
# zfs set readonly=on zroot/granny/ROOT/default
# zfs set readonly=on zroot/granny/tmp
# zfs set readonly=on zroot/granny/usr
# zfs set readonly=on zroot/granny/usr/home
# zfs set readonly=on zroot/granny/usr/ports
# zfs set readonly=on zroot/granny/usr/src
# zfs set readonly=on zroot/granny/var
# zfs set readonly=on zroot/granny/var/audit
# zfs set readonly=on zroot/granny/var/crash
# zfs set readonly=on zroot/granny/var/log
# zfs set readonly=on zroot/granny/var/mail
# zfs set readonly=on zroot/granny/var/tmp
# zfs set readonly=on zroot/granny/chester
# zfs set readonly=on zroot/granny/dusty0
# zfs set readonly=on zroot/granny/dusty1
# zfs set readonly=on zroot/granny/moonbeam
# zfs set readonly=on zroot/granny/torchlight
# zfs set readonly=on zroot/granny/ubuntu
# zfs set readonly=on zroot/granny/workhorse

(The later ones are just the hard-drive images I had ingested with dump and restore.)
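Under that plan, a later incremental update could look something like this (a sketch reusing the bootpool example from earlier; the new snapshot name is made up, and no -F should be needed as long as the copy on the new machine stays unmodified):

On old machine:
Code:
# zfs snapshot -r bootpool@2021-05-19.boot
# zfs send -R -i @2021-04-19.boot bootpool@2021-05-19.boot > /mnt/granny.boot.2021-05-19.zfs

On new machine:
Code:
# zfs receive -u zroot/granny.boot < /mnt/granny.boot.2021-05-19.zfs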
 
Your set of mountpoint settings could be replaced with one zfs inherit -r mountpoint zroot/granny; likewise, your set of readonly commands should only need the first (top-level) one, as it is an inherited property. Note you can still receive snapshot updates into a “readonly” filesystem, as the “readonly” applies to changes through the (mounted / POSIX) filesystem layer.

Incidentally, setting readonly on the backup is a best practice in my view, for precisely the reasons you suggest (avoiding -F due to access times / accidental modifications; avoiding rollbacks likewise.)
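For the datasets above, that two-command version might look like this (a sketch; note the inherited mountpoints simply follow the dataset hierarchy under /zroot/granny, so zroot/granny/ROOT/default would land at /zroot/granny/ROOT/default rather than at a custom location):

Code:
# zfs inherit -r mountpoint zroot/granny
# zfs set readonly=on zroot/granny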
 
Also, of note, you’ll only need to do this after the initial backup; subsequent incremental updates will only transfer newly assigned properties.
 
Update: When I finally went to retrieve my data, I did not find it where I expected. There are messages on boot that some of my imported backups did not mount properly. Current plan is to unmount and fiddle with things. For some reason mc refuses to run under a limited user account (complaining that the pty console is unimplemented) -- not sure if related. Will fix mounting first. Narrator: Try this thread to solve the mc problem (it does not like the default shell).

Sorry for the delay. I was stalled for a month making the second drive bootable. I finally read the gpart(8) page to figure out what I had to change from this line in my notes from the last time I did something similar (previous command included for context):

Code:
# gpart backup /dev/ada0 | gpart restore /dev/ada2
# gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada2

In my case I only needed to change the destination drive ada2 to ada1 because I was going from 3 drives to 2.

I was able to confirm that those cryptic filenames were in /boot with the correct file size with this command:

$ ls -l /boot/

Edit: I was able to confirm the partition index with this command:

gpart show

Code:
=>       40  976773088  ada0  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   25165824     2  freebsd-swap  (12G)
   25167872  951605248     3  freebsd-zfs  (454G)
  976773120          8        - free -  (4.0K)

=>       40  976773088  ada1  GPT  (466G)
         40       1024     1  freebsd-boot  (512K)
       1064        984        - free -  (492K)
       2048   25165824     2  freebsd-swap  (12G)
   25167872  951605248     3  freebsd-zfs  (454G)
  976773120          8        - free -  (4.0K)

After I did that, I then had to work out how to mirror my first drive properly without compromising the encryption. I very nearly did the WRONG thing (doing a similar backup/restore of the geli(8) metadata). IIRC, if you do that, any write differences between mirrors would allow trivial decryption, since the ciphertext can then act as a decryption key (the only uncertainty being the underlying data, which will often simply be all zeros; the 'salt' in the metadata prevents that attack).

Luckily for me, somebody already figured out how to do what I wanted, and documented the process. The answer:
"Since the root partition was GELI-encrypted during the installation, check the installer log to find out what options were used for the geli init and geli attach commands."
# grep geli /var/log/bsdinstall_log

(You can read that document for next steps. Archive link)
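In rough outline, the rest of that process looks like this (a sketch only: the partition numbers follow the gpart show output above, and the geli init options must be the exact ones the installer log reports, not the placeholder shown here):

Code:
# grep geli /var/log/bsdinstall_log
# geli init <options copied from the installer log> /dev/ada1p3
# geli attach /dev/ada1p3
# zpool attach zroot ada0p3.eli ada1p3.eli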
 
Messages on start-up that did not appear in /var/log/messages or /var/run/dmesg.boot (checked with grep mount /var/log/messages and similar):

Code:
Mounting local filesystems:
cannot mount 'zroot/granny/dusty1': mount failed
cannot mount 'zroot/granny/chester': No such file or directory
cannot mount 'zroot/granny/chester': No such file or directory
cannot mount 'zroot/granny/chester': mount failed
cannot mount 'zroot/granny/chester': mount failed
cannot mount 'zroot/granny/chester': mount failed
cannot mount 'zroot/granny/chester': mount failed

Output of mount:

Code:
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/granny/var/tmp on /zroot/granny/usr/var/tmp (zfs, local, noatime, nosuid, read-only, nfsv4acls)
zroot/granny/var/crash on /zroot/granny/usr/var/crash (zfs, local, noatime, noexec, nosuid, read-only, nfsv4acls)
zroot/granny/var/audit on /zroot/granny/usr/var/audit (zfs, local, noatime, noexec, nosuid, read-only, nfsv4acls)
zroot/granny/tmp on /zroot/granny/tmp (zfs, local, noatime, nosuid, read-only, nfsv4acls)
zroot/granny/var/log on /zroot/granny/usr/var/log (zfs, local, noatime, noexec, nosuid, read-only, nfsv4acls)
zroot/granny.boot on /zroot/granny.boot (zfs, local, noatime, nfsv4acls)
zroot/granny/var/mail on /zroot/granny/usr/var/mail (zfs, local, read-only, nfsv4acls)
zroot/granny on /zroot/granny/zroot (zfs, local, noatime, read-only, nfsv4acls)
zroot/granny/ROOT/default on /zroot/granny (zfs, local, noatime, read-only, nfsv4acls)
zroot/granny/usr/src on /zroot/granny/usr/src (zfs, local, noatime, read-only, nfsv4acls)
zroot/granny/usr/ports on /zroot/granny/usr/ports (zfs, local, noatime, nosuid, read-only, nfsv4acls)
zroot/granny/usr/home on /zroot/granny/usr/home (zfs, local, noatime, read-only, nfsv4acls)

What I suspect is happening is that mounting the parent dataset read-only prevents its children from mounting properly underneath it.

The files I am looking for should be in /zroot/granny/usr/home, but the directory is empty: despite the mount output saying it is mounted.

Next steps are to unmount the datasets in question, remove the read-only attribute, and try the recursive command suggested by Eric A. Borisch (applying the read-only attribute). Not sure that will change anything; I may need to make do with only 'noatime'.
 
OK, I may have fixed it, but I need to do a WindowsTM-style restart to clean up, because the data I want is stuck in a half-mounted, half-unmounted state. I decided that since I won't be doing further incremental backups from my old fileserver, I can safely mount the dataset as read-write. Going forward, I will avoid the use of nested datasets (replication streams) for data backups. Replication streams appear to be literally designed for having a spare backup server that you can quickly bring up in the case of hardware failure (with identical settings).

Code:
root@janis:~ # zfs inherit -r readonly zroot/granny
root@janis:~ # zfs mount -a
root@janis:~ # mc

root@janis:~ # zfs mount zroot/granny/usr/home
cannot mount 'zroot/granny/usr/home': filesystem already mounted
(used mc to look for my data).

The other datasets that failed to mount previously mounted successfully (output of mount):

Code:
zroot/ROOT/default on / (zfs, local, noatime, nfsv4acls)
devfs on /dev (devfs, local, multilabel)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/granny/var/tmp on /zroot/granny/usr/var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/granny/var/crash on /zroot/granny/usr/var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/granny/var/audit on /zroot/granny/usr/var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/granny/tmp on /zroot/granny/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/granny/var/log on /zroot/granny/usr/var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/granny.boot on /zroot/granny.boot (zfs, local, noatime, nfsv4acls)
zroot/granny/var/mail on /zroot/granny/usr/var/mail (zfs, local, nfsv4acls)
zroot/granny on /zroot/granny/zroot (zfs, local, noatime, nfsv4acls)
zroot/granny/ROOT/default on /zroot/granny (zfs, local, noatime, nfsv4acls)
zroot/granny/usr/src on /zroot/granny/usr/src (zfs, local, noatime, nfsv4acls)
zroot/granny/usr/ports on /zroot/granny/usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/granny/usr/home on /zroot/granny/usr/home (zfs, local, noatime, nfsv4acls)
zroot/granny/workhorse on /zroot/granny/zroot/workhorse (zfs, local, noatime, nfsv4acls)
zroot/granny/moonbeam on /zroot/granny/zroot/moonbeam (zfs, local, noatime, nfsv4acls)
zroot/granny/dusty0 on /zroot/granny/zroot/dusty0 (zfs, local, noatime, nfsv4acls)
zroot/granny/chester on /zroot/granny/zroot/chester (zfs, local, noatime, nfsv4acls)
zroot/granny/dusty1 on /zroot/granny/zroot/dusty1 (zfs, local, noatime, nfsv4acls)
zroot/granny/ubuntu on /zroot/granny/zroot/ubuntu (zfs, local, noatime, nfsv4acls)
zroot/granny/torchlight on /zroot/granny/zroot/torchlight (zfs, local, noatime, nfsv4acls)

Was able to confirm my data is there by unmounting the dataset before trying to remount it (still restarting to 'fix' things):
Code:
root@janis:~ # zfs unmount zroot/granny/usr/home
root@janis:~ # zfs mount zroot/granny/usr/home
root@janis:~ # mc
 
I use incremental backups on another drive, same PC. Works fine. Just only mount it when you need to do a restore.

How does that setup handle retrieving a file you accidentally deleted?

My current backup server plans are to use zfs send to an offsite, offline, encrypted backup server (using a portable hard drive): If I can figure out how to properly use zfs receive.

I recommend some off-site backups in case your place burns down. Edit: mounting also serves to verify the backups. If I had not tried mounting, I would not have known that mounting does not work properly for a read-only, nested dataset. I was not even sure my data was present, other than the disk space it takes up (if zfs send completes immediately, that is a sign you just backed up a "trivial" (parent) dataset and not your data).

Edit for a tangent: because the data will be encrypted, I can continue to have the machine moonlight [factoring large prime candidates] (during times of low power demand) with Linux booting off a USB key. I suppose there is a risk of DoS (drive deletion) if Linux is compromised. (I will want to spin down the drives while Linux is running.)
 
For instance, for today I can retrieve, since boot, all versions of all files in my /usr/home directory at a granularity of 30 minutes. Say my computer has been running 5 hours; I can easily pick a version of a file from, let's say, 2.5 hours ago.
The incremental backup is just a way to minimise data transfer. Under the hood, ZFS behaves as if it had been a full backup at that moment in time.
You just mount the snapshot of that time:
Code:
mount -t zfs  MYZPOOL/myzfsremotedataset@7hours30minutes /mnt/myrecoverydirectory
Then all files of that moment in time of the dataset are available in /mnt/myrecoverydirectory.
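If the dataset itself is already mounted, its snapshots can also be browsed without a separate mount, through the hidden .zfs/snapshot directory at the dataset's mountpoint (a sketch; the path is a placeholder):

Code:
ls /mnt/myzfsremotedataset/.zfs/snapshot/7hours30minutes/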
 
OK, my old home directories still don't mount properly on boot, but the work-around I used to avoid rebooting still works (unmounting, then remounting the dataset).

Will have to read through this when I have time: zfs - two pools with nested mountpoints and conflicting mount order [Server Fault].

Edit: bug report on issue:
Bug 237397 - 'zfs mount -a' mounts filesystems in incorrect order

Edit: despite being listed as a "new" bug, this is the last comment:
Jim Long said:
Based on my testing, it appears that this bug has been fixed. If no one has objections, this PR can be closed.
 
If you have the backup in a separate pool, and they are full copies of each other (all zfs filesystems A/* map to B/*), I do this:

Bash:
#!/bin/sh

# Bail on any non-zero return.
set -e

SRC=newsys
DST=sysbackup

POOLS="${SRC} ${DST}"

cleanup() {
    zpool export ${DST}
}

# Always make sure ${DST} is exported when exiting.
trap cleanup EXIT

# Don't re-import if already imported for some reason.
zpool list ${DST} >/dev/null 2>&1 || zpool import -N -R /${DST} ${DST}

# Error out if either pool is not imported
for x in ${POOLS}; do zpool status ${x} > /dev/null; done

zfs snapshot -r ${SRC}@SNAPUP
zfs send -RI @SNAP ${SRC}@SNAPUP | zfs recv -Fv ${DST}

# Rename new snaps from @SNAPUP to @SNAP
for x in ${POOLS}; do
    zfs destroy -r ${x}@SNAP
    zfs rename -r ${x}@SNAPUP @SNAP
done

# zpool export called via trap

The whole thing takes less than three seconds to run when there have been no changes, and is essentially IO-bound for any significant changes. I break my own suggestion of avoiding zfs recv -F since this is a backup of the system volume with boot environments and auto-snapshots, and it doesn't actually have anything on it that can't be regenerated (no actual user data files / pictures / etc.), so this is really a quick disaster-recovery backup rather than an "I want to preserve this data" backup.

Note with the set -e the code will bail (intentionally, and triggering an e-mail via cron) on any failures, so there isn't any other error checking or recovery in the script.

This imports the backup pool with altroot (so no conflict on mountpoints), and then exports it (imported with -N to make sure there isn't something that has decided to walk into the directories preventing the export) after completing the backup so it isn't imported/mounted (which would cause conflicts, as the mountpoints are set the same) on reboot.
 