Solved ZFS Backup server

Hi all,

Last week I lost my backup server, and I am now building a new one.
Hopefully this time I will get it right...

I have 3 servers to back up, all running FreeBSD 11.1.
I use a script to do a differential backup and send the snapshots to an external server...

Once I have installed FreeBSD from scratch on the new backup server, what do I need to do in order to receive all the ZFS pools from the other servers?
Are there specific precautions to take?

I have the following naming convention for my ZFS pools (each pool has 6 disks in raidz2 [1 vdev]):
Server 1: zroot
Server 2: zprod

The backup server's ZFS pool will be called zback and has 4 disks, also in raidz2.

My plan was to run the following once FreeBSD is installed on the backup server:
zfs create zback/zroot
zfs create zback/zprod

then send the initial snapshots with:
zfs send -R zprod@-2018-04-26 | mbuffer -q -v 0 -s 128k -m 1G | ssh root@62.30.xxx.xxx "mbuffer -s 128k -m 1G | zfs receive -Fduv zback/zprod"
zfs send -R zroot@-2018-04-26 | mbuffer -q -v 0 -s 128k -m 1G | ssh root@62.30.xxx.xxx "mbuffer -s 128k -m 1G | zfs receive -Fduv zback/zroot"
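One precaution I am thinking about (not sure if it is really needed) is making sure the received filesystems can never be mounted over the backup server's own ones. The -u in zfs receive -Fduv should already keep them unmounted at receive time, and on top of that I could mark the backup copies read-only, something like:
Code:
# readonly=on is inherited by all child datasets received below these
zfs set readonly=on zback/zroot
zfs set readonly=on zback/zprod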


Also, is there any tool that can automate ZFS backups, or am I better off doing it this way with a nightly cron job?

Thank you in advance

Code:
#!/bin/sh
export PATH=/bin:/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin
pool="zprod"
destination="zback/zprod"
host="62.30.xxx.xxx"

if [ -f /tmp/backupscript.lock ]; then
        logger -p local5.notice "Backup did not complete yesterday FAILED"
    echo  "Backup did not complete yesterday FAILED" | /usr/bin/mail -s "Backup Report" root
        exit 1
else
        touch /tmp/backupscript.lock
fi

today=`date +"$type-%Y-%m-%d"`
yesterday=`date -v -1d +"$type-%Y-%m-%d"`
day=`date -v -30d +"$type-%Y-%m-%d"`

# create today snapshot
snapshot_today="$pool@$today"

# look for a snapshot with this name
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_today$" > /dev/null; then
        logger -p local5.notice "snapshot, $snapshot_today, already exists skipping"
else
        logger -p local5.notice "Taking todays snapshot, $snapshot_today"
        zfs snapshot -r $snapshot_today
fi

# look for yesterday snapshot
snapshot_yesterday="$pool@$yesterday"
if zfs list -H -o name -t snapshot | sort | grep "$snapshot_yesterday$" > /dev/null; then

        # send the incremental stream; the if tests the exit status of the pipeline
        if zfs send -R -i $snapshot_yesterday $snapshot_today | mbuffer -q -v 0 -s 128k -m 1G | ssh root@$host "mbuffer -s 128k -m 1G | zfs receive -Fdu $destination"; then
                logger -p local5.notice "Backup OK"
                echo "Backup OK" | /usr/bin/mail -s "Backup Report" root
        else
                logger -p local5.error "Backup FAILED"
                echo "Backup FAILED" | /usr/bin/mail -s "Backup Report" root
                exit 1
        fi
        rm /tmp/backupscript.lock
        # prune the local snapshot from 30 days ago
        zfs destroy -r $pool@$day
        exit 0
else
        logger -p local5.error "missing yesterday snapshot Backup FAILED"
    exit 1
fi
 
There really isn't a "one size fits all" kind of solution here, but for what it's worth I wouldn't try to actually "install" the backups on your backup server like that. Unless of course we're talking about a spare fallback which should be able to replace any of the other servers instantaneously.

I see 2 problems with the current scenario. The first is disk space. Single backup images will take up less space than having them fully installed. So instead of running zfs recv I'd use dd of=/opt/backups/image.zfs. Optionally you could even compress them.

The second problem is the restoration process. It will take more time for your backup server to create a new image and send that over (which in turn would then need to be processed again as well) than simply grabbing the image from the backup server and then processing it. Just as it will take less time to send over one single image than to copy dozens of different files.

So my advice would be to only store image files and not bother yourself with unpacking them. The final reason why I feel this way: even if you need to restore a single file, you can already do so on the server which keeps the snapshots itself. For example: if you make periodic snapshots of /home then you'll have /home/.zfs/snapshot at your disposal, in which you can access all the individual snapshots and their files.
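To give a rough idea of what I mean (the IP, paths and snapshot names are only taken from the earlier posts as placeholders), something along these lines would do:
Code:
# store the replication stream as a compressed image on the backup box
zfs send -R zprod@-2018-04-26 | ssh root@62.30.xxx.xxx "gzip > /opt/backups/zprod-2018-04-26.zfs.gz"

# restoring later: feed the stored stream back into zfs receive on the rebuilt server
ssh root@62.30.xxx.xxx "zcat /opt/backups/zprod-2018-04-26.zfs.gz" | zfs receive -Fdu zprod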

Hope this can help a bit.

I can't comment on scripts and such because I always set up my own customized backup schemes with equally customized scripts. For example, this is what I use to sort out my snapshots:

Code:
#!/bin/sh

## Snapshot.ZFS v1.0
##
## A script which will manage ZFS snapshots on the
## filesystem(s) of your choosing.

### Configuration section.

# ZFS pool to use.
POOL="zroot";

# Filesystem(s) to use.
FS="/home /local /var /opt/spigot"

# Retention; how many snapshots should be kept?
RETENTION=7

# Recursive; process a filesystem and all its children?
RECURSE=yes;

## System settings 
PATH=$PATH:/sbin:/usr/sbin

### Script definitions <-> ** Don't change anything below this line! **

CURDAT=$(date "+%d%m%y");
PRVDAT=$(date -v-${RETENTION}d "+%d%m%y");
PROG=$(basename $0);

if [ "${RECURSE}" = "yes" ]; then
        OPTS="-r";
fi

### Script starts here ### 
if [ "$1/" == "/" ]; then
        echo "Error: no command specified."             > /dev/stderr;
        echo                                            > /dev/stderr;
        echo "Usage:"                                   > /dev/stderr;
        echo "${PROG} y : Manage snapshots."            > /dev/stderr;
        echo                                            > /dev/stderr;
        exit 1;
fi

if [ "$1" == "y" ]; then

        # Make & clean snapshot(s)
        for a in $FS; do
                ZFS=$(zfs list -r ${POOL} | grep -e "${a}$" | cut -d ' ' -f1);
                if [ "$ZFS/" == "/" ]; then
                        echo "${PROG}: Can't process ${a}: not a ZFS filesystem." >/dev/stderr;
                else 
                        zfs snapshot ${OPTS} ${ZFS}@${CURDAT} > /dev/null 2>&1 || echo "${PROG}: Error creating snapshot ${ZFS}@${CURDAT}" > /dev/stderr
                        zfs destroy ${OPTS} ${ZFS}@${PRVDAT} > /dev/null 2>&1 || echo "${PROG}: Error destroying snapshot ${ZFS}@${PRVDAT}" > /dev/stderr
                fi
        done;
else
        echo "Error: wrong parameter used."             > /dev/stderr;
        echo                                            > /dev/stderr;
        echo "Usage:"                                   > /dev/stderr;
        echo "${PROG} y : Manage snapshots."            > /dev/stderr;
        echo                                            > /dev/stderr;
        exit 1;
fi
In short: specify the pool, then specify the filesystem(s) you wish to process and the retention. The script will do the rest and create new snapshots and remove old ones. It's a rather old script and I could probably rewrite a few segments (at the very least use functions) but alas... it does the job ;)
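If it helps, I just run it from cron; a line like this would do (assuming the script is saved as /usr/local/sbin/snapshot.zfs, adjust the path and schedule to taste):
Code:
# take today's snapshots and clean up the old ones, every night at 02:00
0       2       *       *       *       /usr/local/sbin/snapshot.zfs y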

Maybe this could be useful to you, dunno :)
 
I have a system with fairly simple scripts that send my snapshots to a zpool on an external drive on a desktop running FreeBSD. I export the pool on the external drive and swap it out weekly for an offsite copy. My shell scripts are not worth posting, as what I see here is better. A couple of observations though:
1. I used to back up to files instead of live ZFS filesystems, but I saw people saying that if said files get corrupted, they are more likely to be unusable than ZFS filesystems, which have error-handling capability.
2. I also used to automatically snapshot quite often. The number of snapshots got unwieldy. I now snapshot when I run the backup script, usually after doing something I wouldn't want to lose.
3. When I first send (without -I<last snapshot backed up>) to get things set up, the filesystems on the external drive on the desktop machine that has the backup disk mount themselves at the real mount points. I think there is a command-line option to zfs recv that will stop this, but I didn't get around to finding out what it is. I would recommend finding that out before doing something like this, as the system had some problems when it was in that state.
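If it is the flag I am thinking of, then keeping -u on the receive side should leave the received filesystems unmounted; roughly like this (the pool names are just placeholders, and I haven't gone back to verify this on my own setup):
Code:
# -u tells zfs receive not to mount the filesystems it creates
zfs send -R mypool@snap | zfs receive -Fdu backuppool/mypool
Looking at the commands earlier in the thread, the -Fduv used there already includes it.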
 
There really isn't a "one size fits all" kind of solution here
+1

I have been using sysutils/zfsnap2 over the years to take snapshots and prune the old ones. It is as easy as:
Code:
# minute        hour    mday    month   wday    command
# taking snapshots

30      9-16/2  *       *       1-5     /usr/local/sbin/zfsnap snapshot -s -S -a 2w storage/project
0       19-22/3 *       *       1-5     /usr/local/sbin/zfsnap snapshot -s -S -a 4w storage/data

# destroy expired snapshots
0       3       *       *       *       /usr/local/sbin/zfsnap destroy -r storage/project
0       4       *       *       *       /usr/local/sbin/zfsnap destroy -r storage/data

Those snapshots can be easily and incrementally replicated to the remote machine with sysutils/zxfer. You don't need to install anything on the remote server; sysutils/zxfer will delete stale snapshots for you on the remote machine.

Code:
# Incremental remote replication to backup1
0       5       *       *       *       /usr/local/sbin/zxfer -dFkP -o compression=lzjb -T root@backup1 -R storage/project
0       6       *       *       *       /usr/local/sbin/zxfer -dFkP -o compression=lzjb -T root@backup1 -R storage/data
 
Unless of course we're talking about a spare fallback which should be able to replace any of the other servers instantaneously.
When I only had 1 server, the point of the above solution was to use daily differential snapshots so that in case of a disaster I could just plug this server into the DC, as I live less than 5 minutes away from it :).
To boot the backup server straight from the received zprod dataset, the change in /boot/loader.conf is:
Code:
- vfs.root.mountfrom="zfs:zback"
+ vfs.root.mountfrom="zfs:zback/zprod"
zpool set bootfs=zback/zprod zback
 