ZFS Help moving from manual to automatic backup

Greetings all,

I have now finished the arduous task of unifying files from a plurality of computers onto a central server and have made a backup to a backup server. I would like to automate the backups, which until now I have been doing manually.

As I want to take advantage of snapshots, I have concentrated on such applications. It is rather difficult, at least for me, to make sense of the descriptions, but without any experience and with my limited understanding, I concluded that:

1. I would prefer an application written in sh(1). I am by no means an experienced shell script programmer, but with the help of search I can muddle through, so if the developer gets run over by a truck, I can maintain the application and/or modify it for my needs.
2. I would prefer separate applications: one for creating the snapshots and one for transferring the snapshots between the central server and the backup server. I think that simplifies the scripts and thus understanding. Furthermore, for less critical or infrequently changing data I can create several snapshots and transfer them in bulk with zfs send's -I option.
3. I would like to use pull from the backup server; that way there can be only a one-way (ssh) connection from the backup server to the central server (a sketch of what I mean follows below).
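
For concreteness, here is a sketch of points 2 and 3 combined, to be run from the backup server; the dataset names, user, and host are of course only illustrative:
Code:
# Pull mode: the backup server opens the one-way ssh connection,
# runs zfs send on the central server, and receives the stream locally.
# -I transfers all intermediate snapshots between @monday and @friday in bulk.
ssh backup@central zfs send -I tank/data@monday tank/data@friday | zfs receive backup/data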

If anyone can comment on the above and/or recommend such scripts, I would appreciate it.

Kindest regards,

M
 
I do not think it is doable by script alone.
Could you consider using an archiver too?
 
In some environments I use sysutils/zfsnap2 to create snapshots regularly; this tool also allows you to specify how long to keep a snapshot. I use several daily cron jobs to take snapshots and, once daily, the same tool to clean up old snapshots. Furthermore, sysutils/zrepl or sysutils/zxfer is used to transfer the snapshots to other systems. Quite simple and effective.
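
A hypothetical sketch of such a crontab; the paths, dataset name, and flags are assumptions here, so check zfsnap(8) before copying:
Code:
# Take a recursive snapshot of tank/data daily at 00:10, with a 1-week TTL in the name
10 0 * * *    /usr/local/sbin/zfsnap snapshot -a 1w -r tank/data
# Once daily, destroy the snapshots whose TTL has expired
30 3 * * *    /usr/local/sbin/zfsnap destroy -r tank/data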
 
Hi fcorbelli,

thank you for the answer.
I do not think it is doable by script alone.
Could you consider using an archiver too?
Could you please elaborate on why not, and why I need an "archiver", whatever that is?

Hi rigoletto@,

thank you, I will start looking at them.

Hi rootbert,

I actually found the Git page for the predecessor, which states:
This branch contains the new 2.0 code-base which is in beta. While 2.0 is a big step forward and has far better testing, it has not been used as widely in production as the zfSnap 1.x line.

Testing is most welcome, but use at your own risk.
Since, as you can gather from my post, I do not quite understand yet what features to look for, I shied away.

In the meantime, I have been doing some more reading, and if my current understanding is correct, the backup is not that easy.

First, most of the utilities do replication, i.e., an exact copy of the source ends up on the sink. I do not necessarily mind that, but there needs to be a snapshot common to both the source and the sink, and I would like to have a different (greater) number of snapshots on the sink. I have found one tool that seems to satisfy that (sanoid), but it is rather sophisticated, there is only a single developer, and it is written in Perl.
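
If I understand correctly, the requirement exists because an incremental send needs a base snapshot that is still present on both sides; something like this (names illustrative):
Code:
# @base must exist on both the source and the sink for the increment to apply
zfs send -i tank/data@base tank/data@new | ssh backup zfs receive backup/data
# destroying @base on either side breaks the next incremental send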

What further worries me is that some of the tools, e.g., both zfsnap and sanoid, enforce a specific naming scheme for the snapshots. So what if I do not like the tool and want to try a different one?

As you can see, I still do not quite understand the whole process.

Kindest regards,

M
 
First you need "something" (an archiver) that can retain snapshots, almost forever, on ANOTHER medium.
Code:
wget http://www.francocorbelli.it/zpaqfranz/freebsd/zpaqfranz-55.9.tar.gz
tar -xvf zpaqfranz-55.9.tar.gz
cd zpaqfranz-55.9    # assumed name of the extracted port directory
make install clean

Suppose you want to back up /tank/d into copy.zpaq on /monta/backup_server/ (a mount of the backup server, via NFS for example).
Code:
zfs destroy  tank/d@franco
zfs snapshot tank/d@franco
/usr/local/bin/zpaqfranz a /monta/backup_server/copy.zpaq /tank/d/.zfs/snapshot/franco -to /tank/d
zfs destroy tank/d@franco

That's all

If you have local disk space and a remote rsync target of some kind (for example a NAS, an rsync-over-ssh server, rclone, etc.) on /temporaneo/cloud
(just a snippet; this example uses a 500K bandwidth limitation):

Code:
zfs destroy  tank/d@franco
zfs snapshot tank/d@franco
/usr/local/bin/zpaqfranz a /temporaneo/cloud/copy.zpaq /tank/d/.zfs/snapshot/franco -to /tank/d
zfs destroy tank/d@franco

/usr/local/bin/rsync -I --append --bwlimit=500 --omit-dir-times --no-owner --no-perms --partial --progress -e "/usr/bin/ssh -p $PORTA -i $CHIAVE" -rlt --delete "/temporaneo/cloud/" "$UTENTE@$SERVER:/home/somewhere/cloud"
 
Don't confuse a snapshot with a backup.
Obviously it isn't one.
A backup must be kept on a different device.

If there is enough local space, the best way is to keep (at least) two copies:
a local one, for errors such as unwanted deletions etc.,
and a remote one (or even 7 remote ones; I can explain the various different modes), which is a copy of the local one.
The difference is that ONLY the bytes added since the last backup need to be sent to the remote copy.
This means a few seconds (LAN connection), a few minutes (VDSL, 2MB/s), or about a minute (fiber, 10MB/s); the "magic" is in --append.

Just think of zpaqfranz as 7z (or rar, or whatever) WITH snapshot support.

Every time you run zpaqfranz, you get another "snapshot" of the data.
If you run it on a "real" snapshot (@franco in my example), you will "freeze" the snapshot data into the archive.
Forever.

There are other tools to achieve this, mind you.
But let me advertise ... mine :)
 
For snapshot management: zfstools
Src: FreeBSD Mastery: ZFS pp. 180-186

For snapshot auto-replication incl. pull mode: zxfer
Src: FreeBSD Mastery: ZFS Advanced pp. 80-86
 
Backups are a wide field.
I gave it a thought myself, and it turned out that I'd better "design" my own backup solution to suit my needs.

First of all, don't mix up snapshots and backups.
They're different things.
I recommend doing both (especially on ZFS, where taking snapshots costs neither time nor space).
ZFS is capable of putting snapshots on external storage, too.
But as far as I know, those cost time and space, because the external storage needs to hold a copy of the filesystem being snapshotted.
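
(As an aside on the "costs neither time nor space" point, a local snapshot really is instant and initially empty; a sketch with a made-up dataset name:)
Code:
zfs snapshot zroot/home@before-upgrade    # instant; initially consumes no space
zfs list -rt snapshot zroot/home          # USED grows only as the live data diverges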

However, backups:
General questions:
What do you want to back up, where to, and how often?
You see, backing up several GB of your /home to an external disk is something completely different from backing up a 10TB server to some cloud.

Do you need to back up just to an external drive, to be safe against hardware failures only?
Or do you need/want additional web space, having redundancy against break-ins/natural disasters?

Again, you see, you need to distinguish.
What data needs to be stored where?
Which is existential? Which not really?
Which needs to be encrypted?
Of course, all this can be automated, and should be.
But you need to know exactly what to do with what, where to,
what amounts of data you're dealing with,
the capacity of the lines you're sending the data over (USB, SATA, (W)LAN, internet...),
etc.

To reduce the amount of data to be backed up, all serious backup solutions provide techniques to transfer only new/changed data.
Here you'll enter another wide field, again,
containing such things as differential or incremental backups, databases (MySQL), and much more.
This isn't the place, and I'm not the person, to explain all this at the proper level, since all this also comes back to:
What do you need/want?
But in short:
I would not recommend doing such things on your own.
You'd better spend time learning to use an already existing backup solution like Bacula (there are also others, and if you ask, I'm sure you'll receive answers here)
than reinvent the wheel.

The downside (at least for me) of those solutions is:
All data is stored in some kind of database.
In most cases you only have access to your data via the corresponding backup tool.
If I need a backup, I mostly want to restore a single file only (or everything; that's what snapshots are for).
Depending on the tool, this can be a bit more complicated than just copying back and overwriting the damned file I junked. 😁

Since I am a one-man show with small amounts of data to be backed up daily (approx. 15GB),
and I prefer to just have a copy of my files to be safe against hardware failures,
and figured out that Bacula seems to be a very good tool, but a bit too big and complicated for my little needs,
I decided to write myself a small shell script, started by cron,
that simply tars my home directory onto my NAS. (Of course you may tar selected directories only.)
Very simple, not to say primitive, but it fulfills my needs.
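
In essence it boils down to something like this (paths and names here are made up; my real script has more around it):
Code:
#!/bin/sh
# Cron-driven sketch: tar the home directory onto the NFS-mounted NAS,
# producing one dated archive per run. Assumes /mnt/nas is already mounted.
DATE=$(date +%Y-%m-%d)
tar -czf "/mnt/nas/home-${DATE}.tar.gz" /home/myuser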

Additionally I do snapshots (also automated within the same script),
and for special files I also use svn.
With this combination I feel pretty safe against hardware failures and personal stupidity.
But I'm not safe against break-ins or natural disasters.

With large(r) amounts of data you must use some kind of differential or incremental backup,
or you'll run into a situation where your system is doing too much backing up (costing you CPU time and bandwidth),
or you may even reach a point where your system wants to start the new backup, but the previous one is not finished yet... 🤪

Summary:
Become absolutely clear about what you want to back up, where, how often...
Specific answers can only be given to specific questions.

Very important last tip:
Whatever you do: test it!
Don't blindly rely only on the tool, handbooks, documentation, man pages... and what you've configured.
Ensure:
- you can handle the tool(s)
(falling back may happen in stress situations [panic];
you do not want to mess things up even worse, just because you're not well versed in the tools you're using! 🤓 )
{you may even want to back up your backup database} 😁
- it produces the results you want.
So:
Test it!
 
Backups are a wide field.
I gave it a thought myself, and it turned out that I'd better "design" my own backup solution to suit my needs.
Well, you simply have to try... zpaqfranz :)

However, backups:
General questions:
What do you want to back up, where to, and how often?
You see, backing up several GB of your /home to an external disk is something completely different from backing up a 10TB server to some cloud.
In fact no, there are no very big differences (with zpaqfranz).

Again, you see, you need to distinguish.
What data needs to be stored where?
Which is existential? Which not really?
Which needs to be encrypted?
All, all, and all.
To reduce the amount of data to be backed up, all serious backup solutions provide techniques to transfer only new/changed data.
Here you'll enter another wide field, again,
containing such things as differential or incremental backups, databases (MySQL), and much more.
In fact no.
The least amount of data is the de-duplicated delta, always appended to the archive.
Just like... zpaqfranz :)

I would not recommend doing such things on your own.
All you need is a single command, or half a dozen (in the previous post).

The downside (at least for me) of those solutions is:
All data is stored in some kind of database.
In most cases you only have access to your data via the corresponding backup tool.
If I need a backup, I mostly want to restore a single file only (or everything; that's what snapshots are for).
Depending on the tool, this can be a bit more complicated than just copying back and overwriting the damned file I junked. 😁
Well, it's zpaqfranz :)
Since I am a one-man show with small amounts of data to be backed up daily (approx. 15GB),
and I prefer to just have a copy of my files to be safe against hardware failures,
and figured out that Bacula seems to be a very good tool, but a bit too big and complicated for my little needs,
I agree... then... zpaqfranz.
With large(r) amounts of data you must use some kind of differential or incremental backup,
You need something better.
You need a versioned (or snapshotted) archive.

Very important last tip:
Whatever you do: test it!
Don't blindly rely only on the tool, handbooks, documentation, man pages... and what you've configured.
Yes... you will get about 5 levels of test, even with TWO named "paranoid", in zpaqfranz :)

Ensure:

- you can handle the tool(s)
(falling back may happen in stress situations [panic];
Yes, you can, on Windows too

you do not want to mess things up even worse, just because you're not well versed in the tools you're using! 🤓 )
{you may even want to back up your backup database} 😁
- it produces the results you want.
So:
Test it!
Of course I'm joking.
But everything you have indicated, and much, much, much more, you can do directly with zpaqfranz.
You do not believe me?
Try it yourself.
 
Hi fcorbelli, Profighost,

as much as I appreciate your rather detailed answers, I am afraid that they address points suggesting that you have not read my initial post and are not germane to the selection of the tool. To support my allegation, I have written:
. . . onto a central server and have made a backup to a backup server. I would like to automate the backups, which until now I have been doing manually.
As I want to take advantage of snapshots, . . .
Thus, I would respectfully suggest that (i) I understand that I need at least two copies, thus a backup; (ii) I would like to take advantage of snapshots and not rely on a proprietary solution a la Bacula or Amanda; and (iii) I understand that a snapshot per se is not a backup.

Regarding the tools, I understand that fcorbelli is offering his own tool, but I have not found much about it in my searches, and it suffers from disadvantages similar to the ones I described.

Kindest regards,

M
 
Not really.
Okay, I may have put it a bit generally, but at the core I explained the same issues as your concerns.
so if the developer gets run over by a truck, I can maintain the application and/or modify it for my needs.
describes the same issue as I meant with

In most cases you only have access to your data via the corresponding backup tool.

However, in my backup-evaluation process I gave at least a quick look at all backup solutions I found available on FreeBSD.
I also found solutions in the form of shell scripts, Perl, or Python.
But those are either very (too) specific or something completely different from what I wanted, and mostly even more complicated to get into (at least for me) and to adapt than a very complex backup tool.

So if you cannot become friends with such things as Bacula (or zpaqfranz or whatever), for whatever reasons,
and because one of your main points is to stay the master of it,
I wanted to encourage you to think about writing your own script, using shell tools to do the job.

That's what I wanted to say with my long answer:
Some tasks, especially backups, can be so specific that you may not find an existing solution that fully satisfies your needs.
So either you compromise: find, choose, learn, adapt, and live with a given, complex solution (Bacula, zpaqfranz, ...),
or you create your own solution.

Of course, one does not have to reinvent the wheel every time.
But there are situations where creating your own solution is quicker and better than spending too much time searching for the non-existent.
That's why Unix, that's why shell:
If you do not find a satisfying solution that suits you,
assemble your own, using the modular tools from the toolbox.

I am by no means an experienced shell script programmer,

Especially when you are versed in programming languages like C++, variable handling and comments in [ba]sh scripting are a bit weird at the start,
but it's not rocket science either. (I did it 😁)
One may know this site: the Advanced Bash-Scripting Guide.

Or to put it very briefly:
If you're already doing it by hand, it cannot be impossible to tell a script to do the same thing.

peace out.
 
Hi fcorbelli, Profighost,


Regarding the tools, I understand that fcorbelli is offering his own tool, but I have not found much about it in my searches, and it suffers from disadvantages similar to the ones I described.

Kindest regards,

M

Well, ahem, no :)
It has been in the ports tree for about 8 years or more; it is even on Wikipedia (!)

You can install the ancient version (zpaq), or compile the newer one (zpaqfranz), or get and compile the latest upstream release (7.15, from 2016) from Mahoney's site, or... from Debian :)

The quickest way is
Code:
pkg install paq
that's it.
You get zpaq 6.57 from 2014 (that's why zpaqfranz is not called... zpaq; you can have both without collisions).

For the 2022 version there is a (not very up-to-date) wiki here.

The current FreeBSD port submission (not yet accepted):

Code:
wget http://www.francocorbelli.it/zpaqfranz/freebsd/zpaqfranz-55.9.tar.gz
tar -xvf zpaqfranz-55.9.tar.gz
cd zpaqfranz-55.9    # assumed name of the extracted port directory
make install clean

Or (better) you can get the .cpp and compile it yourself (one line needed, no make at all).
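
Something along these lines; the exact flags are an assumption and may vary by platform and compiler:
Code:
g++ -O3 -Dunix zpaqfranz.cpp -o zpaqfranz -pthread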
---
Returning to your problem: try writing down all the requirements of your "magical" backup, the one you always wanted and never dreamed of finding in open-source software.

You will see that zpaqfranz already does everything you can ask for (except mounting the archive as a filesystem to restore from; I'm working on it).

If not (i.e. you need a function that isn't there) ... I'll implement it :)

BTW you can use the 2014 version (in the ports tree) to extract, list, add, whatever, from 2022 zpaqfranz archives (I worked very hard to keep backward compatibility).
So you will be "forever" sure of getting your data back, even without zpaqfranz at all.
 
* One exception:
zpaqfranz, on Windows, has an SFX module (32/64-bit).
All non-Windows systems (Linux, BSD, Solaris, Mac...), of course, do not "understand" Windows SFX .EXE files.
Thinking about it, I could make a function to extract files (on FreeBSD) from a Windows .EXE file, but I think it's a pretty borderline situation.
Could this situation happen to you?
 
Hi Profighost,

thank you for your reply.
I wanted to encourage you to think about writing your own script, using shell tools to do the job.

That's what I wanted to say with my long answer:
Some tasks, especially backups, can be so specific that you may not find an existing solution that fully satisfies your needs.
So either you compromise: find, choose, learn, adapt, and live with a given, complex solution (Bacula, zpaqfranz, ...),
or you create your own solution.
Yes, this is now much clearer than your original answer.

Or to put it very briefly:
If you're already doing it by hand, it cannot be impossible to tell a script to do the same thing.
I am not so much concerned with the script itself. I actually collected some scripts in addition to the ones rigoletto@ kindly posted, and I am reviewing them to see how different people approach the problem. Some are really simple: generate a snapshot, generate a date, attach it to the snapshot, and send/transfer it over, without any concern about pruning. Others have a lot of checks, e.g., is the user authorized, does it have the correct permissions to create and transfer the snapshot, is a previously generated snapshot available, and the like.
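
For example, the "really simple" variety is essentially this (names illustrative; as noted, no pruning or error checking):
Code:
#!/bin/sh
# Create a dated snapshot and push it to the backup server.
# (First run: a full send; later runs would need zfs send -i
# against a snapshot common to both sides.)
SNAP="tank/data@$(date +%Y-%m-%d_%H.%M)"
zfs snapshot "$SNAP"
zfs send "$SNAP" | ssh backup@backupserver zfs receive -u backup/data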

It is more the fact that I am concerned I will overlook something, e.g., having a common snapshot, and who knows what I still do not know.

Would it be possible for you to provide me with your script for further inspiration, e.g., via p.m.?

Hi fcorbelli,

I did not mean to demean your effort; what I meant was that it seems that you are the single developer.

Kindest regards,

M
 
I am not the original developer.
It is a fork of software first written around 2009.
Maybe you do not know the author.
In the "compression world" he is a superstar, like a "Bill Gates" or "Rafa Nadal" of compression (now retired).
On the single-developer point:
do you use 7z?
It is a single developer's (Igor's) work (part of it is embedded into zpaqfranz for Windows on AMD CPUs; you can find it in the SourceForge 7z project).

Do you know that the algorithms you use in ZFS (lz4, zstd) came from a single developer (Yann)?

In the compression "world" you will not find a crew of 50, because these are open-source projects. 2/3/5 people is a big crew.
Even entire Google or Facebook departments run on 10-20 developers (well, maybe a few more :)

I understand you do not believe me.
It is just because you really do not know how this kind of software (7z, rar, paq, zpaq, zstd, zpaqfranz, etc.) is made.

My suggestion?
Just try it.
You will quickly become an "evangelist" :)

Soon you will realize that you cannot find anything better in the whole world, for any amount of money, for this task.

A ZFS script vs zpaq/zpaqfranz is just like bow and arrow vs a hypersonic missile
:)
 
Would it be possible for you to provide me with your script for further inspiration, e.g., via p.m.?
My script is of course tailored to my needs.
Nothing big: just tar my home via NFS onto my NAS
and do snapshots.
And I'm not dealing with internet connections.

The basic starting idea of a script is:
Put the commands you enter into the shell anyway into a file and add a shebang 😎

The next thing is, you do not want many lines containing the same command over and over again; you want to put such things in a loop, of course.
That's where scripting starts.

Best of all (and here good scripting starts) would be having clean checks and control: which files/connections exist, what to do if not, and so on.
In this part my script really is quick and dirty, nothing exemplary.
(And I'm a bit anxious, too, 'cause I know if I publish it here lots of guys will fall over it: "That's crap!", "You cannot do it this way..." etc. 😨
[and they'd be right 😁])

But if you promise yourself to get one idea or another from it,
or at least to be encouraged to start getting into scripting yourself,
I'll give it a review and translate the comments/explanations from German to English.

...here you go, but don't expect fancy hacker stuff 😂
It's more meant to encourage you: scripting is not that hard.
I myself like to start with small, simple example pieces rather than being overwhelmed by the complete and perfect professional source.

...ah, yeah, of course this is started by root's cron, containing this line:
Code:
@reboot    /path/2/script
 

Attachments

  • daily-bu.txt (5.2 KB)
I will follow up on my initial suggestion, as I still believe it is the most fitting viable option for this case.

1. I would prefer an application written in sh(1). I am by no means an experienced shell script programmer, but with the help of search I can muddle through, so if the developer gets run over by a truck, I can maintain the application and/or modify it for my needs.

zfs-auto-snapshot, as part of the zfstools package, is written in Ruby, so unfortunately some kind of compromise is needed here.
zxfer is written in sh.

2. I would prefer separate applications: one for creating the snapshots and one for transferring the snapshots between the central server and the backup server. I think that simplifies the scripts and thus understanding. Furthermore, for less critical or infrequently changing data I can create several snapshots and transfer them in bulk with zfs send's -I option.

zfs-auto-snapshot handles solely the snapshot-management part, making use of cron.

Let's assume we want to snapshot the ZFS dataset zdata/important every 15 minutes and retain the last 4 snapshots. First we enable auto-snapshots on the dataset:

Code:
# zfs set com.sun:auto-snapshot=true zdata/important
Now we need to run zfs-auto-snapshot every 15 minutes. The following example uses root's crontab; it will create a snapshot every 15 minutes while retaining the last 4, and snapshots of 0 bytes (because no data has changed) will be omitted:

Code:
# edit root cron table
$ crontab -e
# add to root cron table
#minute    hour    mday    month    wday    command
*/15    *        *        *        *        /usr/local/sbin/zfs-auto-snapshot 15minfreq 4

This will create snapshots, suppress 0-byte snapshots, and destroy old snapshots at minutes 0, 15, 30, and 45 of every hour, 24/7/365.

The second requirement was to sync zdata/important, including all its snapshots at the time, to a remote location in pull mode, using a separate piece of software. Here zxfer comes in. zxfer can do all the heavy lifting and keep our original ZFS dataset and our remote ZFS dataset in sync, including satisfying the pull requirement. For the purpose of this example, and in the absence of any further details, I assume a ZFS pool zbackup on the backup server and create a ZFS dataset vault on that pool, which will then contain our important dataset from the central server.

Code:
# executed on backup server
$ zfs create -o canmount=off zbackup/vault
# executed on backup server to transfer zfs dataset important including all snapshots from
# central server to backup server
$ zxfer -dFkPv -O replicator@centralserver -R zdata/important zbackup/vault

A root cron table entry on the backup server helps to automate the launch of zxfer. The following is the expanded root cron table from the backup server, pulling the ZFS dataset important, including all snapshots, from the central server every 30 minutes:

Code:
# edit root cron table
$ crontab -e
# add to root cron table
#minute    hour    mday    month    wday    command
*/30        *        *        *        *        /usr/local/sbin/zxfer -dFkP -O replicator@centralserver -R zdata/important zbackup/vault

3. I would like to use pull from the backup server; that way there can be only a one-way (ssh) connection from the backup server to the central server.
This is achieved with zxfer by using the -O option, which logs in to a remote machine and "pulls" the data to the local host.
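
Since pull mode means the backup server logs in to the central server, note that the remote account does not have to be root; ZFS permission delegation can grant just what zfs send needs. A sketch (the exact permission set is an assumption; see zfs-allow(8)):

Code:
# executed on central server: let the replicator user send the dataset and its snapshots
$ zfs allow replicator send,snapshot,hold zdata/important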

The two books mentioned earlier cover this in considerably more detail and are a worthy investment.
 
My script is of course tailored to my needs.
Nothing big: just tar my home via NFS onto my NAS
and do snapshots.
(...)

Without snapshots, you can convert your whole script into this:
Code:
/usr/local/bin/zpaqfranz a /thebackupfile.zpaq /root /usr/home/myusername /whateveryouwant

With a snapshot (example: tank/d):
Code:
zfs destroy  tank/d@franco
zfs snapshot tank/d@franco
/usr/local/bin/zpaqfranz a /thebackupfile.zpaq /tank/d/.zfs/snapshot/franco -to /tank/d
zfs destroy tank/d@franco
 
My script is of course tailored to my needs.
Nothing big: just tar my home via NFS onto my NAS
and do snapshots.
And I'm not dealing with internet connections.

The basic starting idea of a script is:
Put the commands you enter into the shell anyway into a file and add a shebang 😎
(...)
That is a typical "ancient-times" archiving strategy.
Every single execution will take just about the same space and time.
Suppose you have 100GB of data (incompressible, for simplicity) that never changes (again for maximum simplicity).
You do the first run on Monday, and get
copy-01.tar.gz of 100GB
Then Tuesday
copy-02.tar.gz of 100GB
Wednesday
copy-03.tar.gz of 100GB
...
copy-07.tar.gz of 100GB

Now you have 7 versions (one per day), each of 100GB, for 700GB total.
Every single run takes the same time (for simplicity, 1 hour), for 7 hours total.
Now, with 700GB used, you'll run whatever pruning you want, because you have (say) 1TB for backups:
GFS, the latest 7, whatever.

After a while you will delete the "first" backup; you no longer have the copy from (say) one year ago.
---
I hope this is clear.
---
Now try zpaq's technology.
Monday
copy.zpaq of 100GB
Then Tuesday
copy.zpaq of 100GB
Wednesday
copy.zpaq of 100GB
...
copy.zpaq of 100GB

Now you have 7 versions (one per day), each of 100GB, in 100GB (yes, 100GB).
The first run will take (for simplicity) 1 hour.
All the others (in this example) maybe 10 seconds.
Total time: 1 hour.

Now you do not prune anything. You keep your data forever. After 3 years or whatever, you can fully restore with the granularity of the runs (once a day? every 2 hours? whatever),
because the archive will still be 100GB (and you have 1TB of free space in this example).

---
What happens in the real world with zpaq?
You will change data in the folder, because you download e-mails, write .docs, take pictures, etc.
How much? It really depends. Say (on average) 1GB every day.
So your copy.zpaq becomes 100GB, then 101GB, then 102... then 106GB (after a week).
After about two years, 100GB + 365*2 = ~830GB.
Now it is time to freeze (copy the .zpaq to some kind of long-term storage) and start again.

It is also possible to operate on archives of practically unlimited size (we are talking about 500TB and more; TB, not GB) through multipart archives with an index, but never mind, those are normally used for virtual datacenters.

OK, to do all of this, what exactly do you need?

The executable (zpaq or zpaqfranz), and a single command line:

Code:
zpaqfranz a /thebackupfilewhereitis.zpaq /thevarious/* /folderwithpicture/* /whatever /you /want

Do you want encryption? Add -key pippo. That's it.
 
How about snapshot handling?
It is trivial:
- delete the snapshot (just in case a zombie is left over)
- take the snapshot
- update the archive with zpaqfranz and a final ... -to
- delete the snapshot

And you will get, inside the archive, the snapshots marked with date and time.
 
LAN or WAN or USB?
For mounted shares (NFS, Samba), no big deal (of course).
In the example, only 1GB per day will be written (works fine with USB or whatever; some 20 seconds, maybe).
Offline, by rsync, rclone or (!) Dropbox?
In the example, the first X-1 GB will be the same; you need to send only 1GB per day.
You have an 800GB archive that becomes 801GB? You only send the last 1GB (rsync --append, rclone, Dropbox, whatever).
You want to copy your local backup archive to a NAS?
Do not use cp or whatever, but rsync --append: instead of sending (in the example) 100GB (the size of a single run), only 1GB, almost realtime on LANs.
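
For example, a sketch of that NAS copy (host and paths are made up):
Code:
# transfer only the bytes appended to the archive since the last run
rsync --append --partial --progress /temporaneo/cloud/copy.zpaq user@nas:/backups/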

More advanced tricks are possible (e.g., zfs-over-ssh replication with sanoid/syncoid, etc.)
 
...and... my FreeBSD is broken!!
Aarrgghh, my BSD X.Y does not read my super-duper ZFS pool anymore!!!!
Arrghhh, how do I "unpack" my mighty ZFS snapshots stored with pigz or whatever (zfs send)?

No problem: take Windows (!), zpaqfranz.exe and the .zpaq, and restore right back (or Linux, if you prefer :)
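
The restore itself is a one-liner; a sketch with illustrative paths:
Code:
# x = extract everything from the archive; -to = destination directory
zpaqfranz x /monta/backup_server/copy.zpaq -to /restore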
 
...mmhhh... I am paranoid!... how can I be sure that everything is restorable???
...you get a gazillion of options, from testing of block hashes to a full-scale fake restore.

...mmmhh... I use spinning drives, very slow extraction due to latency!!
...use the -ramdisk (and w) options to work almost entirely in RAM, unless a sequential write of everything takes place.

...mmmh... I have very fast SSD/NVMe drives!
...no problem, use the -ssd switch and you will get multithreaded reads and writes.

...mmmhhh... I have to deal with thousands of crontabbed snapshots (made by zfsnap or whatever)... I want to list/kill/archive them all!
Code:
root@aserver:/tmp/zp # zpaqfranz zfslist "*" "--60d"
zpaqfranz v55.2c-experimental archiver,  compiled Jul 14 2022
tank/d@2022-06-13_00.01.00--60d
tank/d@2022-06-14_00.01.00--60d
tank/d@2022-06-15_00.01.00--60d
tank/d@2022-06-16_00.01.00--60d
tank/d@2022-06-17_00.01.00--60d
tank/d@2022-06-18_00.01.00--60d
tank/d@2022-06-19_00.01.00--60d
tank/d@2022-06-20_00.01.00--60d
then zfsadd (freezes the snapshots into the ZPAQ archive), and zfspurge
... and much, much more :)
 
Help me, I want to purge every snapshot starting with tank/d and ending with --7d in the name!
Code:
root@aserver:/tmp/zp # ./zpaqfranz zfspurge "tank/d" "--7d"
zpaqfranz v55.10b-experimental archiver,  compiled Aug 11 2022
zfs destroy tank/d@2022-08-04_09.00.00--7d
zfs destroy tank/d@2022-08-04_11.00.00--7d
zfs destroy tank/d@2022-08-04_13.00.00--7d
zfs destroy tank/d@2022-08-04_15.00.00--7d
zfs destroy tank/d@2022-08-04_17.00.00--7d
zfs destroy tank/d@2022-08-04_19.00.00--7d
zfs destroy tank/d@2022-08-05_09.00.00--7d
zfs destroy tank/d@2022-08-05_11.00.00--7d
zfs destroy tank/d@2022-08-05_13.00.00--7d
zfs destroy tank/d@2022-08-05_15.00.00--7d
(...)
zfs destroy tank/d@2022-08-10_15.00.00--7d
zfs destroy tank/d@2022-08-10_17.00.00--7d
zfs destroy tank/d@2022-08-10_19.00.00--7d
zfs destroy tank/d@2022-08-11_09.00.00--7d
zfs destroy tank/d@2022-08-11_11.00.00--7d
zfs destroy tank/d@2022-08-11_13.00.00--7d
zfs destroy tank/d@2022-08-11_15.00.00--7d

0.030 seconds (00:00:00)  (all OK)
root@aserver:/tmp/zp #
 