Solved: Frequent incremental ZFS backup problems.

Here I share the scripts I use. Maybe someone sees a problem.

On boot,
cat once_usr_home
Code:
export       source="ZT/usr/home"
export         dest="ZHD/backup_usr_home"
export           mp="/mnt/snap_usr_home_hourly"
export       mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export      current=${source}@${mydate}
/sbin/zfs list -t snap ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk '{print $1}' | /usr/bin/xargs -I {} /sbin/zfs destroy -v {}
/sbin/zfs destroy -r -f -v ${dest}
/sbin/zfs create -v ${dest}
/sbin/zfs snapshot ${current}
echo "SRC:" ${current}
echo "DST:" ${dest}
/sbin/zfs send ${current} | /sbin/zfs receive -o snapdir=visible -o checksum=skein -o compression=lz4 -o atime=off -o relatime=off -o canmount=off -o mountpoint=${mp} -v -u -F ${dest}

Every 30 minutes,
cat increment_usr_home
Code:
export       source="ZT/usr/home"
export         dest="ZHD/backup_usr_home"
export       mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export      current=${source}@${mydate}
export previous=` /sbin/zfs list -t snap -r ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk 'END{print}' | /usr/bin/awk '{print $1}'`
/sbin/zfs snapshot             ${current}
echo "SRC:" ${previous} ${current}
echo "DST:" ${dest}
/sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -F -v -u ${dest}
Taking the snapshot works fine, but zfs send/receive frequently fails because a previous zfs send/receive failed ... and the destination is not up-to-date.
Manually it seems to work fine, but not when run automatically by fcron, which is weird behavior.

Did I miss an option/flag? Or otherwise, not using scripts, is there a tool to do these incremental backups?
 
Where is your error handling in the script? We can mostly assume that sed, awk and grep will work. I think it's excusable to not have any error checking on them.

But what happens if the initial "zfs list" or "zfs destroy" (in line 5) fails? In particular, if the "zfs list" fails, are you 100% sure you will not run "zfs destroy" with invalid arguments?

Same argument with "zfs snapshot". If that fails, do not proceed.

And as you already pointed out: If the "zfs send/receive" pair has ever failed in the past, then you must not run this script again, until something (presumably a human) fixes things.

Serious suggestion: Add error checking and handling for every (interesting) step in the script, where "interesting" means: Not trivial commands such as sed/awk/grep. Note that everything related to ZFS can and will fail, because it uses disk hardware underneath, and disks do misbehave. Decide what to do for every error case; one example might be: Save the output of the command (with error messages) in a tmp file, and on error, e-mail that tmp file to some sort of human administrator. This will make the script look much longer, but if done carefully, it will not make it more complex.

The big problem is how to prevent this script from running again if the previous one fails. Here is how I handle it in my (home-brew) backup program: When it starts, it creates a file that indicates that it is currently running. When it finishes successfully, it deletes that file. Before it actually starts, it checks that the file exists; if it already does, it immediately aborts and sends mail to the sys admin that the previous run had failed. But: Adding this mechanism is pretty heavyweight (dozens of lines of script), and it has its own failure mode: If the backup is interrupted (for example by a system crash caused by power outage), it will not run again until a human cleans things up. So I'm not seriously proposing this mechanism, as the "cure might be worse than the disease" for your situation.
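The lock-file mechanism described above can be sketched in a few lines of sh (the lock path is hypothetical, and the real version would add the e-mail notification):

```shell
#!/bin/sh
# Sketch of the lock-file guard described above (path is hypothetical).
LOCK="${LOCK:-/var/run/backup_usr_home.lock}"

if [ -e "$LOCK" ]; then
    # Previous run never cleaned up: it failed or is still running.
    echo "previous backup run failed or still running; aborting" >&2
    exit 1
fi
: > "$LOCK"        # mark this run as in progress

# ... snapshot / send / receive steps would go here ...

rm -f "$LOCK"      # reached only on success
```

As noted, this has the failure mode that a crash mid-run leaves the lock behind until a human removes it.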
 
Good ideas,
Instead of ,
Code:
/sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -F -v -u ${dest}
I now do
Code:
if /sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -F -v -u ${dest}; then
    /usr/bin/logger "zfs-send-receive succeeded" ${previous} ${current}
else
    /usr/bin/logger "zfs-send-receive failed" ${previous} ${current}
fi

This log info should guide me to a solution.
[The "zfs list" I do is indeed dangerous, but error handling is not simple]
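For anyone chaining `||` and `&&` like this: sh parses `A || B && C` as `(A || B) && C`, so the success branch also runs after a failure, because the failure logger itself exits 0. A quick demonstration:

```shell
# Left-associative: parsed as (false || echo "failed") && echo "succeeded"
# This prints BOTH lines, since echo "failed" exits 0.
false || echo "failed" && echo "succeeded"
```

An explicit if/else (or grouping with `{ ...; }`) avoids the surprise.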
 
Defence is the best medicine. Just about any shell script I write will start with:
Code:
PATH="/usr/local/bin:/bin:/usr/bin:/sbin:/usr/sbin"; export PATH
PROG=$(basename $0)

LOGTAG="$PROG"
LOGPRI='local0.notice'
#LOGGING='FALSE'
LOGGING='TRUE'

Say()
{
        echo "$PROG: $*"
}

Barf()
{
        Say "$@" 1>&2
        [ "$LOGGING" = "TRUE" ] && logger -p "$LOGPRI" -t "$LOGTAG" "$@"
}

XBarf()
{
        Barf "fatal: $*"
        exit 1
}
Note that XBarf exits, with non-zero status. So verifying the exit status of every command before you proceed is easy, e.g.
Code:
/sbin/zfs destroy -r -f -v ${dest} || XBarf "can't destroy ${dest}"
/sbin/zfs create -v ${dest} || XBarf "can't create ${dest}"
/sbin/zfs snapshot ${current} || XBarf "can't snapshot ${current}"
 
Others have commented on the need for some error handling; I’ll provide two quick tips that I find useful for send/recv backups:

You should never need recv -F unless you’re trying to prune snapshots via a replication (send -R), and even then, I would recommend against it. It’s just too easy to “back up” (especially when automated) a mistake like removing a filesystem, only to have it go poof on your backup, too.

Set your destination filesystem tree to readonly, as well. (zfs set readonly…) You can still receive updates into it, but it will keep it from being accidentally modified, and then refusing to update until a rollback is performed.
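A minimal sketch of that, reusing the dataset names from earlier in the thread (the snapshot names are placeholders):

```shell
# readonly=on blocks accidental POSIX-level writes, but received streams
# still apply, so incremental receives keep working.
zfs set readonly=on ZHD/backup_usr_home
zfs send -i ZT/usr/home@prev ZT/usr/home@now | zfs receive -u ZHD/backup_usr_home
```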
 
Eric makes a good point. There are two purposes of backup: One to guard against destruction of the original file system (for example due to software bugs, or to hardware problems that exceed the fault tolerance of the RAID layer). The other to guard against user errors. He mentions one user error, namely removing a file system completely.

But there is a much more common user error: deleting a file, and then (an hour or a day later) discovering that you still need it. For that reason, backups should retain deleted files for a while, perhaps forever. I don't know how to implement that most efficiently with zfs send/receive. One idea is to keep about a dozen daily snapshots, and back those up; like that you always have a dozen days to find files in the snapshots. My solution at home is more radical: I never delete any files (nor older versions of files) from the backup. Given that my file system at home is mostly used for archiving and documents, this does not actually create a space crisis.
 
My backup strategy is the following:
After reboot I remove all backups & immediately take a full backup.
Then every 30 minutes I take an incremental backup.
[As long as I do not reboot I can revert to any 30-minute version of a file]
Maybe there are better strategies. But this one does not take a lot of space.
This is my current "increment",
Code:
if /sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -o readonly=on -v -u ${dest}; then
    /usr/bin/logger "zfs-send-receive succeeded" ${previous} ${current} ${dest}
else
    /usr/bin/logger "zfs-send-receive failed" ${previous} ${current} ${dest}
fi
 
My backup strategy is the following:
After reboot I remove all backups & immediately take a full backup.
Why?

Then every 30 minutes I take an incremental backup.
[As long as I do not reboot I can revert to any 30-minute version of a file]
Except if your backup just ran? I don't see you keeping any older snapshots...

This doesn't really sound like a backup strategy but more like a complicated way to achieve some redundancy. ZFS offers better mechanisms for that.

For backups, ask yourself why exactly you are doing them. One important motivation is to recover from user errors. For that, you should keep some snapshots. I keep 3 of them, while doing backups every 2 weeks. You might want to keep more snapshots and do backups more often, depending on your own risk assessment. Another typical motivation for backups is to guard against catastrophic events actually destroying hardware. For that, you don't want your backup media anywhere near the machine. I personally use an external USB drive and store it two floors above (not perfect, but still better chances).
 
To be honest, your scripts and configurations seem like serious overkill for just making a backup. I personally use rsync and have it back up my home directory. I also have a file where I list the directories that rsync should not back up. In principle I could create an sh file and run this command as a cron job. But I currently do it manually about every two weeks, or very rarely after a day when I've put a lot of work into something. The rsync never takes long, almost always less than 2 minutes, so rsync is fast in my experience. I suspect that ZFS snapshots are even faster.

My current method seems much simpler than what you do, and it works fine.
 
Why?

Except if your backup just ran? I don't see you keeping any older snapshots...
Here are snapshots of my home directory, every 30 minutes:
Code:
ZT/usr/home                                   19.9G   169G  19.9G  /usr/home
ZT/usr/home@2022_11_24__18_53_02              2.89M      -  19.9G  -
ZT/usr/home@2022_11_24__19_00_18              3.52M      -  19.9G  -
ZT/usr/home@2022_11_24__19_30_18              5.41M      -  19.9G  -

Which I send incrementally to another drive:
Code:
ZHD/backup_usr_home                           19.9G   196G  19.9G  /mnt/snap_usr_home_hourly
ZHD/backup_usr_home@2022_11_24__18_53_02      2.79M      -  19.9G  -
ZHD/backup_usr_home@2022_11_24__19_00_18      3.44M      -  19.9G  -
ZHD/backup_usr_home@2022_11_24__19_30_18         0B      -  19.9G  -
The increment takes 3 MB, which is almost nothing.
 
To be honest, your scripts and configurations seem like serious overkill for just making a backup. I personally use rsync...

My current method seems much simpler than what you do, and it works fine.
I also use "clone" instead of "rsync".
 
Still problems. I share my simple scripts,
On boot,
Code:
export       source="ZT/usr/home"
export         dest="ZHD/backup_usr_home"
export           mp="/mnt/snap_usr_home_hourly"
export       mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export      current=${source}@${mydate}
/sbin/zfs list -t snap ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk '{print $1}' | /usr/bin/xargs -I {} /sbin/zfs destroy -v {}
/sbin/zfs destroy -r -f -v ${dest}
/sbin/zfs create -v ${dest}
/sbin/zfs snapshot ${current}
echo "SRC:" ${current}
echo "DST:" ${dest}
if /sbin/zfs send ${current} | /sbin/zfs receive -o readonly=on -o snapdir=hidden -o checksum=skein -o compression=lz4 -o atime=off -o relatime=off -o canmount=off -o mountpoint=${mp} -v -u -F ${dest}; then
    /usr/bin/logger "zfs-send-receive-once succeeded" ${current} ${dest}
else
    /usr/bin/logger "zfs-send-receive-once failed" ${current} ${dest}
fi

Every 30 minutes,
Code:
export       source="ZT/usr/home"
export         dest="ZHD/backup_usr_home"
export       mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export      current=${source}@${mydate}
export previous=` /sbin/zfs list -t snap -r ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk 'END{print}' | /usr/bin/awk '{print $1}'`
/sbin/zfs snapshot             ${current}
echo "SRC:" ${previous} ${current}
echo "DST:" ${dest}
if /sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -o readonly=on -v -u ${dest}; then
    /usr/bin/logger "zfs-send-receive succeeded" ${previous} ${current} ${dest}
else
    /usr/bin/logger "zfs-send-receive failed" ${previous} ${current} ${dest}
fi
 
I'll share mine, maybe it helps.
Bash:
#!/bin/sh

TIMESTAMP=$(date +%Y%m%d)
BACKUPDIR=/BSD_common/backups
RETVAL=

if ! [ -f %%ENVDIR%%/etc/backup.conf ]; then
    echo "Configuration file not found" >&2
    exit 1
fi

. %%ENVDIR%%/etc/backup.conf || exit 1

if [ -f ${BACKUPDIR}/backup.tar ]; then
    tar -uPf ${BACKUPDIR}/backup.tar $FILES
else
    tar -cPf ${BACKUPDIR}/backup.tar $FILES
fi
RETVAL=$?

for fs in $ZFSINC; do
    case $fs in
    */home/*)    ds=$(echo ${fs##*home/} | sed 's,/,-,g') ;;
    *)        ds=${fs##*/} ;;
    esac

    if [ $(zfs list -H -t snapshot $fs | wc -l) -eq 5 ]; then
        snapname=$(zfs list -H -t snapshot -o name $fs | head -1)
        zfs destroy $snapname && \
            zfs destroy ${BACKUPDIR#/}/$ds@${snapname##*@}
        [ $? -ne 0 ] && RETVAL=1
    fi

    lastsnap=$(zfs list -H -t snapshot -o name $fs | tail -1)
    if [ "${lastsnap%_[0-9]}" = "$fs@$TIMESTAMP" ]; then
        inc="${lastsnap##*_}"
        inc=$((inc + 1))
    else
        inc=0
    fi
    snapname="${TIMESTAMP}_${inc}"

    if zfs snapshot $fs@$snapname; then
        if [ "$lastsnap" ] ; then
            zfs send -i $lastsnap $fs@$snapname | \
                zfs receive ${BACKUPDIR#/}/$ds
        else
            zfs send $fs@$snapname | \
                zfs receive ${BACKUPDIR#/}/$ds
        fi
    fi

    [ $? -ne 0 ] && RETVAL=1
done

exit $RETVAL
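The snapshot-naming logic above (a `_N` suffix per day) can be exercised on its own; the dataset and snapshot names below are made up:

```shell
# Given the most recent snapshot name, compute the next per-day suffix.
TIMESTAMP=20221124
fs=tank/home
lastsnap=tank/home@20221124_2

if [ "${lastsnap%_[0-9]}" = "$fs@$TIMESTAMP" ]; then
    inc="${lastsnap##*_}"     # strip everything up to the last "_"
    inc=$((inc + 1))          # same day: bump the counter
else
    inc=0                     # new day: start at 0
fi
echo "${TIMESTAMP}_${inc}"    # -> 20221124_3
```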
 
So we're sharing backup scripts here? Yet another one, partially stolen somewhere, but changed/tuned a lot:
Bash:
#!/bin/sh -e

KEEPOLD=3
PREFIX=backup

MASTERPOOL=${1:-zroot}
BACKUPPOOL=${2:-backup}

echo Backup from $MASTERPOOL to $BACKUPPOOL.

zpool import -N $BACKUPPOOL

recentBSnap=$(zfs list -rt snap -H -o name $BACKUPPOOL/$MASTERPOOL \
    | grep "$BACKUPPOOL/$MASTERPOOL@${PREFIX}-" | tail -1 | cut -d@ -f2)
NEWSNAP=$MASTERPOOL@$PREFIX-$(date '+%Y%m%d-%H%M%S')

if test -z "$recentBSnap"; then
    zfs snapshot -r $NEWSNAP
    zfs send -Rcv $NEWSNAP | zfs recv -Fuv $BACKUPPOOL/$MASTERPOOL
else
    origBSnap=$(zfs list -rt snap -H -o name $MASTERPOOL \
        | grep $recentBSnap | head -n1 | cut -d@ -f2)

    if test "$recentBSnap" != "$origBSnap"; then
        echo Error, snapshot $recentBSnap does not exist in $MASTERPOOL.
        zpool export $BACKUPPOOL
        exit 1
    fi

    zfs snapshot -r $NEWSNAP
    zfs send -RcvI @$recentBSnap $NEWSNAP \
        | zfs recv -Fuv $BACKUPPOOL/$MASTERPOOL

    zfs list -rt snap -H -o name $BACKUPPOOL/$MASTERPOOL \
        | grep "$BACKUPPOOL/$MASTERPOOL@${PREFIX}-" | tail -r \
        | tail +$(($KEEPOLD + 1)) | xargs -n 1 zfs destroy -r
    zfs list -rt snap -H -o name $MASTERPOOL \
        | grep "$MASTERPOOL@${PREFIX}-" | tail -r \
        | tail +$(($KEEPOLD + 1)) | xargs -n 1 zfs destroy -r
fi

zpool export $BACKUPPOOL

And yes, this will remove deleted datasets in the backup as well. It's a conscious decision...
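The pruning pipeline in that script relies on BSD `tail -r` to reverse the list. Since the snapshot names embed a sortable timestamp, the same "keep the newest KEEPOLD, destroy the rest" idea can be sketched portably with `sort -r` (the names below are made up):

```shell
KEEPOLD=3
# Five snapshot names, oldest first; print everything past the newest $KEEPOLD.
printf '%s\n' backup-20221120 backup-20221121 backup-20221122 backup-20221123 backup-20221124 \
    | sort -r | tail -n +$((KEEPOLD + 1))
# prints the two oldest names - these are what the script feeds to "zfs destroy -r"
```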
 
I guess I'm just too lazy, as I use zfs-auto-snapshot for, well, what the name says 😅, zxfer to pull backups to an offsite machine, and rclone for cloud backup. All is handled by cron, so it's pretty neat and straightforward.:cool:
 
I made some modifications; this is the latest.
cat once_usr_home:
Code:
export source="ZT/usr/home"
export mp="/mnt/snap_usr_home_hourly"
export mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export destsmall="ZHD/backup_usr_home"
export dest=${destsmall}@${mydate}
export current=${source}@${mydate}
/sbin/zfs destroy -r -f -v ${destsmall}
/sbin/zfs list -t snap ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk '{print $1}' | /usr/bin/xargs -I {} /sbin/zfs destroy -v {}
/sbin/zfs snapshot ${current}
echo "SRC:" ${current}
echo "DST:" ${dest}
( /sbin/zfs send ${current} | /sbin/zfs receive -o readonly=on -o snapdir=hidden -o checksum=skein -o compression=lz4 -o atime=off -o relatime=off -o canmount=off -o mountpoint=${mp} -v -u ${dest} ) || /usr/bin/logger "zfs-send-receive-once failed" ${current} ${dest}

cat increment_usr_home:
Code:
export source="ZT/usr/home"
export mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export current=${source}@${mydate}
export dest="ZHD/backup_usr_home"@${mydate}
export previous=` /sbin/zfs list -t snap -r ${source} | /usr/bin/grep ${source}@  | /usr/bin/awk 'END{print}' | /usr/bin/awk '{print $1}'`
/sbin/zfs snapshot             ${current}
echo "SRC:" ${previous} ${current}
echo "DST:" ${dest}
( /sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -o readonly=on -v -u ${dest} ) || ( /usr/bin/logger "zfs-send-receive failed" ${previous} ${current} ${dest} ; /root/Root/backup/once_usr_home )
 
DISCLAIMER: I am the developer of zpaqfranz, so I'm mean and steal candy from children
DISCLAIMER2: It is an opensource software, I gain nothing
DISCLAIMER3: the main credits go to the original developer, Matt Mahoney

Solution: do not use zfs backups for home.
Totally nonsense
Use, instead, zpaq (or zpaqfranz,if you don't have candy to get stolen)

0) delete a snapshot of home/tank/whatever (just in case, for a highlander)
1) make a snapshot of home/tank/whatever
2) zpaq/zpaqfranz the snapshot
3) delete the snapshot

That's all

Very reliable
Simple
Almost no problem in restoring (much, much, MUCH easier vs zfs backup)
You can always see "what's inside" (MUCH blablabla)
Much easier to encrypt
(and a LOT more)

Code:
zfs destroy  tank/d@franco
zfs snapshot tank/d@franco
/usr/local/bin/zpaqfranz a /monta/mynewbackup.zpaq /tank/d/.zfs/snapshot/franco
zfs destroy  tank/d@franco
 
If you really, really, really want to use zfs' backups you can do something like that
Code:
# make the /flusso/c/ folders inside
backupfolder=/somewhere/

if [ -f ${backupfolder}/flusso/c/partc.zfs ]; then
  /bin/date +"%R----------LOCALE: partc.zfs exists (do not rebuild)"
  zfs destroy -f  tank/d@differenza
  zfs snapshot -r tank/d@differenza
  /bin/date +"%R----------LOCALE: start differential send zfs"
  NOW=`/bin/date +"%Y%m%d-%H%M%S"`
  zfs send -R -i tank/d@franco tank/d@differenza | pv > ${backupfolder}/flusso/differenza_$NOW.zfs
  ls -tp ${backupfolder}/flusso/differenza*.zfs | grep -v '/$' | tail -n +101 | tr '\n' '\0' | xargs -0 rm --
  /bin/date +"%R----------LOCALE: end   differential send zfs"
 else
    /bin/date +"%R----------LOCALE: partc.zfs does not exist"
    zfs destroy  -f tank/d@franco
    zfs snapshot -r tank/d@franco
    /bin/date +"%R----------LOCALE: sending partc.zfs"
    /sbin/zfs send -R tank/d@franco |pv >${backupfolder}/flusso/c/partc.zfs
    /bin/date +"%R----------LOCALE: end     partc.zfs"
fi
 
Alain De Vos I'd like to know what you consider the "requirements" for your backup strategy.
The way I understand it, you want snapshots every 30 mins between reboots of the system.
Are you rebooting every day, so effectively you want "24 hours of 30 minute snapshots so you lose at maximum 30 minutes of work?"
Or do you reboot infrequently so you have more than one day of 30 minute snapshots?
Do you want to be able to go back to any 30 minute snapshot and retrieve data from it?

Anyway, on the "zfs send" take a look at the "-I" option instead of "-i".
It's a difference between "incremental" and "differential" replication.

Incremental Replication:
zfs send snapshot1
zfs send -i snapshot1 snapshot2
zfs send -i snapshot2 snapshot3 <<<<< assume this fails
zfs send -i snapshot3 snapshot4 <<<<< then this will fail

Differential Replication:
zfs send snapshot1
zfs send -I snapshot1 snapshot2
zfs send -I snapshot1 snapshot3 <<<<< assume this fails
zfs send -I snapshot1 snapshot4 <<<<< then this will send snapshot3 and snapshot4

I think doing that would also simplify the logic in your scripts:
reboot, delete all snapshots, take the "initial" snapshot
every 30 minutes snapshot, do differential send from the initial snapshot to the current snapshot

That way, even if sending one of the 30-minute snapshots fails, a subsequent one will send everything not there yet.
 
I think doing that would also simplify the logic in your scripts...
If you really, really, really want to use zfs' backups you can do something like that...
 
Alain De Vos I'd like to know what you consider the "requirements" for your backup strategy...
I currently catch the error when the incremental fails and do a fresh full backup from scratch via the script "once_usr_home". So far it seems good.
Code:
( /sbin/zfs send -i ${previous} ${current} | /sbin/zfs receive -o readonly=on -v -u ${dest} ) || ( /usr/bin/logger "zfs-send-receive failed" ${previous} ${current} ${dest} ; /root/Root/backup/once_usr_home )
But differential replication is indeed a good idea.
 
Or otherwise, not using scripts , is there a tool to do these incremental backups ?

Yes, there is. It's called sanoid (freshports), and it is excellent. You write a config file to specify your snapshot schedule, and it handles the snapshots and pruning automatically.

It also comes with a replication tool called syncoid. It figures out the common snapshots between a source and destination, and sends new snapshots to the destination.

I really would not bother with a custom script for this. There's a lot of nuance and error handling that needs to take place, and sanoid/syncoid cover it all.

Here's the script I use to snapshot and sync, which runs via cron:

Bash:
#!/bin/sh
set -e
sudo sanoid --cron --quiet
sudo syncoid --quiet -r --no-privilege-elevation --no-sync-snap --sendoptions="w" --recvoptions="u" \
    --sshkey=/usr/home/patmaddox/.ssh/nas-rsyncnet zdata/crypt/istudo nas-user@myhost.rsync.net:zsync/snaps/istudo

My host uses an encrypted dataset. syncoid sends the raw blocks to rsync.net, so they are fully encrypted on the remote backup server. To restore, I just pull the snapshots using syncoid, load the encryption key on my local host, and away I go.

Just configure /usr/local/etc/sanoid/sanoid.conf for your snapshot schedule.
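For reference, a minimal sanoid.conf along those lines might look like this (the dataset name and retention values are purely illustrative):

```ini
[zdata/crypt/istudo]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
```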
 
How can you restore a single file from snapshots?
Suppose you want pippo.txt from a backup.

Syncoid (from sanoid) is great for replication over ssh; I use it all the time, for "filesystem" AND backup-backups

But for data backups (plural) snapshots are not really good
Even hb is much better

Of course the best is... zpaq :)
 
In my above post, by "restore" I meant "disaster recovery" where my local disk caught on fire and I need to restore it completely.

ZFS snapshots are without a doubt the best backup format that I've ever used. covacat already showed how to restore a file. It couldn't be easier.
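For example (the snapshot name and paths are made up), pulling a single file back is just a copy out of the hidden snapshot directory:

```shell
# Every snapshot is browsable read-only under <mountpoint>/.zfs/snapshot/<name>
cp /usr/home/.zfs/snapshot/2022_11_24__19_30_18/pippo.txt /usr/home/pippo.txt
```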

That's the thing about ZFS - it's just a file system. It's the exact files that you put on the file system. It's not some weird mashup of hard/sym-links, nested directories per day, moving / renaming things. A tool doesn't need to churn through files looking at mod times or checksums to see what's changed, read bits from the middle of the file to do incremental backups, etc.

Backing up is a three-step process:

1. Save your file to disk.
2. Snapshot it (backup).
3. zfs send | zfs receive (remote backup)

You have a cryptographically verified, self-healing (with mirrors or raidz), bit-perfect backup. Backups are incremental because of copy-on-write, and replication is incremental because it just sends the changed blocks.

Sanoid and syncoid do all the dirty work. All you have to do is lay out your file system how you want, and configure the backup plan.

Yeah, I am a total fanboy. I consider my files temporary until they are on two geographically separated ZFS systems.
 