What is a decent ZFS backup strategy?

Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

I'm planning to edit crontab to do some ZFS backups.
I'm thinking of something like incremental backups every 3 hours,
a full backup each day,
a full backup each week,
and a full backup each month.
What's your idea?
The computer is not continuously powered, but mostly on when I surf the internet or do stuff.
So the backup should run not at a specific hour, but after some time of power-on?
fcron is flexible.
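Since fcron can schedule by elapsed runtime rather than wall-clock time, an fcrontab along these lines might fit. This is a hedged sketch: the script paths under /root/bin are hypothetical placeholders, and the `@first`/`@` syntax should be checked against fcrontab(5):

```
# Full backup once per day of accumulated runtime, first run 15 min after start
@first(15) 1d  /root/bin/zfs_backup_full
# Incremental backup every 3 hours of runtime
@ 3h           /root/bin/zfs_backup_incremental
```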
 

Phishfry

Beastie's Twin

Reaction score: 2,572
Messages: 5,457

I have been pondering the same.
A big SATA disk as a "ZFS replication disk" in a separate pool.
Would be nice if I could electronically power it up (GPIO?) for the weekly backup and power it down when done.
 

Menelkir

Active Member

Reaction score: 193
Messages: 183

I have been pondering the same.
A big SATA disk as a "ZFS replication disk" in a separate pool.
Would be nice if I could power it up for the backup and power it down when done.
USB-C has this feature AFAIK; maybe there's a USB-C enclosure capable of doing that?
 

Phishfry

Beastie's Twin

Reaction score: 2,572
Messages: 5,457

I was thinking of a smart inline 4-pin Molex adapter to the drive for power control,
controlled by GPIO or a crude timer circuit.
None of my servers has USB-C. My Gigabyte MX31 has OTG. That's the highest-tech USB 3 I've got.
 
OP
Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

I'm still looking for a good way to do incremental backups.
A full backup is more or less straightforward.
 

Menelkir

Active Member

Reaction score: 193
Messages: 183

I was thinking of a smart inline 4-pin Molex adapter to the drive for power control,
controlled by GPIO or a crude timer circuit.
None of my servers has USB-C. My Gigabyte MX31 has OTG. That's the highest-tech USB 3 I've got.
You can also disable a USB port entirely after the backup, but I don't think that's a clean solution, at least not one to be automated. With a Molex adapter with power control, you still need to make sure the drive interface supports hot-plugging; AFAIK, not all SATA interfaces do.
 

mer

Well-Known Member

Reaction score: 189
Messages: 329

Snapshots are useful for ZFS backups. There are ports that can be configured to take snapshots at whatever interval you want; once you do that, zfs send/receive the snapshots to an external USB drive that has been configured as a zpool.
Power control is common as long as the device is port-powered.
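A minimal sketch of that snapshot-then-replicate flow, assuming placeholder pool and dataset names (tank/home and usbbackup are illustrations, not names from this thread); the command is only printed here, so nothing is executed:

```shell
#!/bin/sh
# Hedged sketch: snapshot the dataset, then replicate it to a pool on the
# external drive. Names are placeholders; the command is printed, not run.
SRC="tank/home"                      # dataset to back up
DST="usbbackup/home"                 # dataset on the USB-backed pool
SNAP="auto-$(date -u +%Y%m%d%H%M)"   # timestamped snapshot name

full_cmd="zfs snapshot ${SRC}@${SNAP} && zfs send ${SRC}@${SNAP} | zfs recv -u ${DST}"
echo "$full_cmd"
```

On later runs, adding `-i` with the previous snapshot would keep the transfer incremental, as discussed further down the thread.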
 

Zirias

Daemon

Reaction score: 1,341
Messages: 2,362

I'm planning to edit crontab to do some ZFS backups.
IMHO, a backup should be kept in a different (physical) place. So, cron would only be an option if you do it over the network.

ZFS snapshots already support "incremental" backups. There are lots of tools for that purpose around, or you just use some script based on zfs send/zfs recv.

I personally do these around once per month. For immediate protection, you'd have some redundancy in your pool. How often you do it depends on how much data you're prepared to lose in the event of a "catastrophic" failure (like two bad disks at the same time in a raid-z pool, or, of course, a "really stupid"™ command issued as e.g. root).
 
OP
Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

I've got something:
A full backup 15 minutes after booting.
An incremental backup 3 hours later, then 6 hours later, then 6 hours after that.
Should be safe.
 

Neubert

Member

Reaction score: 27
Messages: 49

I've been using large removable drives as ZFS backup destinations for a year or so and it works pretty well. Swapping the drives takes about 15 seconds and the backup process automatically restarts when the next drive is inserted.

devd rules detect when the drive is inserted and call a shell script that loops to run backups until 3AM the next day. Snapshots on the source machines are done with zfs-auto-snapshot and the backup script calls zxfer to copy the latest snapshots to a backup zpool on the removable drive.

After the last backup finishes after 3AM, the script unmounts the removable drive and communicates the results of the backup with LED flashes on a blink(1) plugged into the backup server. I posted the shell script and devd rules here in case it helps. I have since added a --cancel option to interrupt a running backup and an --init option to integrate new backup drives into the rotation.
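As a rough illustration, a devd(8) rule of the sort described might look like this; the script path and device pattern are hypothetical placeholders, not the poster's actual rules:

```
notify 100 {
    match "system"    "DEVFS";
    match "subsystem" "CDEV";
    match "type"      "CREATE";
    match "cdev"      "da[0-9]+";
    # Hypothetical script: check for the backup zpool and kick off the run
    action "/usr/local/sbin/backup-on-attach.sh $cdev";
};
```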
 

Phishfry

Beastie's Twin

Reaction score: 2,572
Messages: 5,457

Right now, on one buildbox machine, I use a third disk for gmirror and rotate in a disk weekly via a tray.
My problem with this method is insertion cycles on the connectors.
In and out all the time with the trays will lead to diminished results.
Plus all that plastic. I had some stuff in storage: an unused six-bay Thermaltake 5.25" dock for 2.5" SATA drives.
Went to check it out and eject a tray, and the handle snapped right off like a chicken bone.
Cheesy plastic didn't handle storage well.

So I am looking for near-line single-disk storage for a ZFS array of SSDs.
Not looking for USB; I want something faster, like an HGST 12TB spinner. But essentially a backup.
I am tempted to use a simple on/off switch mounted on the back.
Unmount the drive and use camcontrol eject, then axe the power.

I bought one of these for the purpose but haven't used it yet.
Seemed overkill but I like all the wiring provided.
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

I'm planning to edit crontab to do some ZFS backups.
I'm thinking of something like incremental backups every 3 hours,
a full backup each day,
a full backup each week,
and a full backup each month.
What's your idea?
The computer is not continuously powered, but mostly on when I surf the internet or do stuff.
So the backup should run not at a specific hour, but after some time of power-on?
fcron is flexible.
You can make a full backup each hour, forever, if you do not change the data a lot.
Just take zpaqfranz :)

Typically you will use a secondary ZFS spinning drive: that's the simplest approach, just a single crontab line.
Two lines, with a periodic scrub.

---
For data that changes often you can integrate with a ZFS (syncoid) replica, but it has the side effect of requiring you to change the default shell (which is why I only use it with ssh on a remote server).

If you can afford it (about 120 euros/year) you can rent a FreeBSD machine with 2TB of space on which to put both a replica and an rsync of the zpaq file.
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

I made an initial post a while ago about BSD backups, but got zero feedback.
The short version: imagine you have a sort of 7z or RAR that takes "snapshots" of folders, keeping them forever.

Since almost all my servers run BSD, it is not difficult to compile (it only takes a few seconds):
Code:
g++ -O3 -march=native -Dunix zpaqfranz.cpp -pthread -o zpaqfranz -static-libstdc++ -static-libgcc


Seeing is believing.

For an old version
Code:
mkdir /tmp/testme
cd /tmp/testme
wget http://www.francocorbelli.it/zpaqfranz/ports-51.10.tar.gz
tar -xvf ports-51.10.tar.gz
make install clean

Then you will do something like
Code:
zpaqfranz a /copy/mybackup.zpaq /home/whatever /etc /usr /root
 
OP
Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

You can make a full backup each hour, forever, if you do not change the data a lot.
Just take zpaqfranz :)

Typically you will use a secondary ZFS spinning drive: that's the simplest approach, just a single crontab line.
Two lines, with a periodic scrub.

---
For data that changes often you can integrate with a ZFS (syncoid) replica, but it has the side effect of requiring you to change the default shell (which is why I only use it with ssh on a remote server).

If you can afford it (about 120 euros/year) you can rent a FreeBSD machine with 2TB of space on which to put both a replica and an rsync of the zpaq file.

My root shell is zsh. Works fine.
I have toor with oksh just in case.
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

My root shell is zsh. Works fine.
I have toor with oksh just in case.
Syncoid/sanoid (the scripts I use for zfs replica)
Code:
Syncoid assumes a bourne style shell on remote hosts. Using (t)csh (the default for root under FreeBSD) will cause syncoid to fail cryptically due to 2>&1 output redirects.

To use syncoid successfully with FreeBSD targets, you must use the chsh command to change the root shell:
root@bsd:~# chsh -s /bin/sh

BUT the question is: have you tried zpaqfranz?
:)
 
OP
Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

I'm sharing my current solution, although it's a bit ugly.
rc.local:
Code:
/usr/bin/nohup /usr/local/bin/zsh -c "sleep   500;logger clone_daily        ;ls /mnt/ZUSB2_x ;/root/bin/clone_daily"              &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep    60;logger update_ports_source;ls /mnt/ZUSB2_x ;/usr/home/x/poudriere/update_ports_source" &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep   180;logger snapshot_daily     ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_daily"  &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep 10800;logger snapshot3h         ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_3h"     &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep 32400;logger snapshot9h         ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_9h"     &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep 54000;logger snapshot15h        ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_15h"    &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep 75600;logger snapshot21h        ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_21h"    &
export weekday=`/bin/date +%w`
echo $weekday
if [ "x${weekday}" = "x0" ]
then
/usr/bin/nohup /usr/local/bin/zsh -c "sleep  1000;logger clone_weekly       ;ls /mnt/ZUSB2_x ;/root/bin/clone_weekly"             &
/usr/bin/nohup /usr/local/bin/zsh -c "sleep  2000;logger snapshot_weekly    ;ls /mnt/ZUSB2_x ;/root/bin/snapshot_usr_home_weekly" &
fi
The purpose of the "ls" is to wake up a sleeping USB drive.
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

For /home you can easily do a
Code:
zpaqfranz a /mnt/ZUSB2_x/mycopyofthehome.zpaq /home

For the entire /usr (so including /usr/local/etc)
Code:
zpaqfranz a /mnt/ZUSB2_x/mycopyofusr.zpaq /usr

With a crontab entry every hour or whatever.
Probably a 3-5 minute run.
That's all.

If you want to do something better (for any open files) you can create an ad hoc snapshot (say @franco), run the zpaqfranz backup from there, and then destroy it.
Indeed, in general, the opposite order is even better:
- destroy the backup snapshot
- create it
- back up

That way you have one more snapshot available.
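The destroy/create/back-up cycle could be sketched like this; the dataset name and archive path are placeholders rather than the poster's actual setup, and the commands are printed rather than executed:

```shell
#!/bin/sh
# Hedged sketch of the "destroy, recreate, back up" snapshot cycle.
# Dataset and archive paths are placeholders, not the author's setup;
# commands are printed, not executed.
DS="zroot/usr/home"
SNAP="franco"

echo "zfs destroy ${DS}@${SNAP}"     # drop the previous run's snapshot
echo "zfs snapshot ${DS}@${SNAP}"    # take a fresh, consistent snapshot
# Reading from the snapshot directory sidesteps open-file problems:
backup_cmd="zpaqfranz a /copy/mybackup.zpaq /usr/home/.zfs/snapshot/${SNAP}"
echo "$backup_cmd"
```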
 

rootbert

Well-Known Member

Reaction score: 148
Messages: 400

I have not tried zpaqfranz; it is not in ports and I do not want to fiddle with external software if I can avoid it.

At my clients' infrastructures, two of the most convenient ZFS solutions are:
*) a cron job that does an hourly backup via zfsnap and immediately sends it offsite via zxfer; in the evening, run zfsnap and delete snapshots older than X days.
*) zrepl is also nice; I use it for backing up jails (only system data, no user data). You define how many snapshots to keep locally and how many to keep on the backup site (usually roughly 4x the number of local snapshots, because the local snapshots sit on the expensive application servers with SSDs while the backup servers use cheap spindle disks). Make a snapshot with base-system tools, run a "zrepl wakeup job_sync", and zrepl syncs to the backup server and automatically deletes the old snapshots. You can use zrepl inside jails, with an unprivileged user.
 
OP
Alain De Vos

Daemon

Reaction score: 546
Messages: 1,890

The scripts I use are rather simple.
clone works very nicely:
Code:
clone -s /etc             /mnt/daily_clone/etc
Or an incremental ZFS snapshot:
Code:
zfs send -i @${oldname} ${source}@${snapname} | zfs receive -F ${dest}
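Filled in with hypothetical values, that incremental pipeline might look like this; the dataset and snapshot names are assumptions for illustration, and the command is printed rather than executed:

```shell
#!/bin/sh
# Hedged sketch: incremental zfs send/receive with example values filled in.
# All names are placeholders; the command is only printed here.
source="zroot/usr/home"
dest="backup/usr/home"
oldname="snap_3h"    # the previous snapshot, which must exist on both sides
snapname="snap_6h"   # the new snapshot being sent

send_cmd="zfs send -i @${oldname} ${source}@${snapname} | zfs receive -F ${dest}"
echo "$send_cmd"
```

The incremental send only works if the `@${oldname}` snapshot is present on both the source and the destination, which is why tools like sanoid/syncoid track the common snapshot for you.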
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

I have not tried zpaqfranz; it is not in ports and I do not want to fiddle with external software if I can avoid it.
In fact the BSD port... is mine, a fork of ZPAQ.
I will hardly be able to get it into ports if everyone refuses to use it :)


EDIT:
root@aserver:/usr/ports/archivers/paq # ls -l
total 19
-rw-r--r-- 1 root wheel 4098 Jul 30 2018 Makefile
-rw-r--r-- 1 root wheel 2954 Dec 5 2014 distinfo
drwxr-xr-x 2 root wheel 5 Oct 17 2018 files
-rw-r--r-- 1 root wheel 979 Mar 28 2013 pkg-descr
-rw-r--r-- 1 root wheel 141 Aug 18 2014 pkg-plist
This is the ancient ZPAQ (a really old version).
As you can see, it IS in the ports tree.

I can understand the distrust; it is the same distrust I have.
However, you simply will not find anything more advanced (today), and it is a program developed over a decade by one of the world's leading compression "gurus" (his company was bought by Dell and integrated into Dell's storage products).

Certainly a much more restricted scene (a Russian compression forum) than the BSD forums.

But I can make a bet: try it once.
Only once.
And you will change your mind.

Being open source (I worked very hard to package everything into a single file), what does it cost you?
No dependencies, no junk left around, a single line to compile (no makefile).
 

fcorbelli

Active Member

Reaction score: 54
Messages: 167

The scripts I use are rather simple.
clone works very nicely:
Code:
clone -s /etc             /mnt/daily_clone/etc
Or an incremental ZFS snapshot:
Code:
zfs send -i @${oldname} ${source}@${snapname} | zfs receive -F ${dest}

It's not so simple: replication via snapshots requires that the snapshots... exist and are in sync.
I highly suggest sanoid (it simply... works) together with its syncoid part.
 

astyle

Aspiring Daemon

Reaction score: 248
Messages: 560

Just pipe the ZFS backup output over to your NFS share or your mounted USB stick whenever... or edit zfs.conf for usable defaults.
What is an indecent ZFS backup strategy?
When you back up stuff you don't want others to know about. ;)
 