ZFS: What's an easy/understandable way to back up my zpool across machines?

I apologize in advance if this is inartfully asked...

I have a zpool on one machine that I want to back up to another - both are running FreeBSD 13 with OpenZFS. I would like to run the backup every night, automatically. Currently, I run the following script:

~/backup-scm.sh
#!/usr/local/bin/bash -x
# snapshot the source dataset
sudo zfs snapshot zfs/scm@snap1
# stream it over SSH into the backup dataset on the destination
sudo zfs send zfs/scm@snap1 | ssh destination sudo zfs recv -F zroot/backup/scm
# clean up the snapshot on both machines
ssh destination sudo zfs destroy zroot/backup/scm@snap1
sudo zfs destroy zfs/scm@snap1

I run the script as my normal user cuz it works :). But I am looking to automate the process and run it either as a periodic(8) job or from a crontab. By way of background, every time I try to put it in a crontab, I run into permission issues, path issues, or whatnot. I'm curious what the current best practice is and how it works.
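On the path issues specifically: cron runs jobs with a minimal environment, so commands that resolve fine in an interactive shell may not be found. A minimal sketch of a crontab that sets PATH explicitly (the schedule and script location are placeholders, not the poster's actual setup):

# give cron jobs a full PATH so zfs, ssh, etc. are found
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin
# run the backup nightly at 03:00, appending all output to a log
0 3 * * * /home/user/backup-scm.sh >> /home/user/backup-scm.log 2>&1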
 
You could do:
su
crontab -e
Add a line that runs a script you write yourself.
For instance, take a snapshot each day and delete the snapshot from the same day the week before.
And take an additional snapshot each month, which you don't delete.
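A minimal sketch of that rotation, assuming the dataset from the original post (zfs/scm) and a run from root's crontab (the snapshot names and retention are just illustrative):

#!/usr/local/bin/bash
# daily snapshot named after the weekday; reuses last week's slot
day=$(date +%a)                                 # Mon, Tue, ...
zfs destroy zfs/scm@daily-"$day" 2>/dev/null    # drop the same weekday from a week ago, if present
zfs snapshot zfs/scm@daily-"$day"
# on the 1st of each month, take a monthly snapshot that is never auto-deleted
if [ "$(date +%d)" = "01" ]; then
    zfs snapshot zfs/scm@monthly-"$(date +%Y-%m)"
fi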
 
Do it in root's crontab, and make sure whatever account you're SSHing into can run sudo zfs without a password (or just enable direct root login).
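For the passwordless part, a minimal sudoers sketch on the destination machine, assuming a backup account called backupuser (edit with visudo; on FreeBSD the zfs binary lives in /sbin):

# let backupuser run zfs as root without a password prompt
backupuser ALL=(root) NOPASSWD: /sbin/zfs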
 
Crontab can run processes as root, so a password is not a problem.
Read his command again. He's SSHing into another machine and immediately zfs recv'ing the snapshot. This requires either root privileges on the destination machine (the method he's using, via sudo), suid abuse, or explicitly granting the privilege to a non-root user. I'll admit that the last option is probably the best way to do it, but at least his method doesn't abuse suid.
 
He's transferring the backup over the network to another machine. SSH is one way to do this, and not a bad one.
 
Yes, I see. Note that with SSH you can configure automatic (key-based) login.
You can either log in over SSH as root, or log in as a regular user who belongs to the wheel group or the sudo users.
 
Why would you want to replicate like that anyway? It only gobbles up extra storage space.

I always save my snapshots as files instead, piping the send stream through dd on the backup server. That saves resources and the hassle of requiring root access there.
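A sketch of that file-based approach, reusing the dataset names from the original post (the file path is hypothetical); the receiving account only needs write access to the target file, not any zfs privileges:

# stream the snapshot into a plain file on the backup server
zfs send zfs/scm@snap1 | ssh user@destination "dd of=/backups/scm-snap1.zfs"
# restoring later streams the file back into zfs recv
ssh user@destination "dd if=/backups/scm-snap1.zfs" | zfs recv -F zroot/restore/scm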
 
You could also use a tool that does it for you, like sysutils/zrepl, so that you can easily use incremental replication,
define how many snapshots you want to keep, the frequency, etc.
If you are worried about space on the sender, you can set a retention grid that keeps only one hour of snapshots on the sender; see the sketch below.
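For reference, a zrepl push job is configured along these lines. This is only a sketch based on the shape of the examples in zrepl's documentation; verify the exact keys and grid syntax against the docs for your version:

jobs:
- name: push_scm
  type: push
  connect:
    type: tcp
    address: "destination.example.com:8888"
  filesystems:
    "zfs/scm": true
  snapshotting:
    type: periodic
    prefix: zrepl_
    interval: 10m
  pruning:
    keep_sender:
    # keep only the last hour of snapshots on the sender
    - type: grid
      grid: 1x1h(keep=all)
      regex: "^zrepl_"
    keep_receiver:
    # an hour of everything, then hourly for a day, daily for a month
    - type: grid
      grid: 1x1h(keep=all) | 24x1h | 30x1d
      regex: "^zrepl_"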
 
Read his command again. He's SSHing into another machine and immediately zfs recv'ing the snapshot. This requires either root privileges on the destination machine (the method he's using, via sudo), suid abuse, or explicitly granting the privilege to a non-root user. I'll admit that the last option is probably the best way to do it, but at least his method doesn't abuse suid.
What privileges would those be? I would prefer adding the user to a zfs group, if such a group exists. I have been adding users to wheel, which is far too broad a privilege.
 
That's not enough. The user will still hit a permission denied error when running the zfs send/receive commands that transfer the snapshots.
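Those privileges can be delegated per-dataset with zfs allow instead of touching wheel or sudo. A minimal sketch, assuming a backup account called backupuser; the exact permission list you need can vary, and on FreeBSD non-root mounting additionally requires the vfs.usermount sysctl:

# on the sender: let backupuser snapshot and send the source dataset
zfs allow backupuser snapshot,send,destroy,mount zfs/scm
# on the destination: let backupuser receive into the backup hierarchy
zfs allow backupuser receive,create,mount zroot/backup
# FreeBSD only: permit non-root users to mount filesystems
sysctl vfs.usermount=1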
 
Do it in root's crontab, and make sure whatever account you're SSHing into can run sudo zfs without a password (or just enable direct root login)

Riffing off of this, I did:

# login as root, create an ssh key
su -
ssh-keygen
ssh-copy-id -i /root/.ssh/id_rsa.pub user@destination

# create a script directory and script
mkdir /root/crontab-scripts

vi /root/crontab-scripts/backup-scm.sh
#!/usr/local/bin/bash -x
# snapshot, send to the backup machine, then clean up both snapshots
zfs snapshot zfs/scm@snap1
zfs send zfs/scm@snap1 | ssh user@destination sudo zfs recv -F zroot/backup/scm
ssh user@destination sudo zfs destroy zroot/backup/scm@snap1
zfs destroy zfs/scm@snap1

# make the script executable so cron can run it
chmod +x /root/crontab-scripts/backup-scm.sh

# create a log directory, then test the script
mkdir /root/logs
/root/crontab-scripts/backup-scm.sh 1>>/root/logs/backup-scm.out.txt 2>>/root/logs/backup-scm.err.txt

# check for issues/success, then install the script in the crontab; to test, set a time in the next minute and wait
crontab -e
55 3,13 * * * /root/crontab-scripts/backup-scm.sh 1>>/root/logs/backup-scm.out.txt 2>>/root/logs/backup-scm.err.txt

# tweak the time as desired

#todo - pretty up the script and add timestamps
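One possible shape for that todo, sketched with hypothetical names (the incremental logic and the last-snap state file are assumptions, not what the script above does): timestamped snapshots, with an incremental send whenever a previous snapshot exists.

#!/usr/local/bin/bash
set -e
ds=zfs/scm
dest=zroot/backup/scm
now=$(date +%Y-%m-%d-%H%M)
state=/root/crontab-scripts/last-snap

zfs snapshot "$ds@$now"
prev=$(cat "$state" 2>/dev/null || true)
if [ -n "$prev" ]; then
    # incremental send relative to the previous run; -F rolls the target back first
    zfs send -i "$ds@$prev" "$ds@$now" | ssh user@destination sudo zfs recv -F "$dest"
    zfs destroy "$ds@$prev"    # the old snapshot is no longer needed on the sender
else
    # first run: full send
    zfs send "$ds@$now" | ssh user@destination sudo zfs recv -F "$dest"
fi
echo "$now" > "$state"
echo "$(date): sent $ds@$now"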
 
Don't forget to set up the remote backup machine to accept SSH connections from your cron machine. A fresh SSH install usually ships with sensible defaults you can leave alone, but it's worth checking /etc/ssh/sshd_config just in case.
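The lines worth checking there might look like this (a sketch; PermitRootLogin only matters if you took the direct-root-login route above, and remember to restart sshd after editing):

# /etc/ssh/sshd_config on the backup machine
PubkeyAuthentication yes            # accept the key installed via ssh-copy-id
PasswordAuthentication no           # optional hardening once key login works
#PermitRootLogin prohibit-password  # uncomment only for direct root logins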
 