automating ZFS incremental receive

The facts:

We have a couple of ZFS pools that we want to back up with incremental snapshots. The snapshots are sent daily to a backup server that doesn't support ZFS, like this:

[CMD=""]# zfs send -R -i tank/test@1 tank/test@2 | ssh gkontos@10.10.10.131 "cat > test@2"[/CMD]
[CMD=""]# zfs send -R -i tank/test@2 tank/test@3 | ssh gkontos@10.10.10.131 "cat > test@3"[/CMD]

Our backup server will contain the original snapshot and our incremental snapshots. Eventually it will look something like this:

Code:
[FILE]test@1[/FILE]	[FILE]test@2[/FILE]	[FILE]test@3[/FILE]	[FILE]test@4[/FILE]

The problem:

The reason why we back up our pools is that we are afraid that one day we might lose our data. So, in this imaginary scenario, our server exploded and our disks are toast. But we have our backups.

In order to perform a restore we would have to start receiving the first snapshot and then the rest one by one. Something like this:

[CMD=""]# zfs receive -Fv tank/test < test@1[/CMD]
[CMD=""]# zfs receive -Fv tank/test < test@2[/CMD]
[CMD=""]# ...[/CMD]

This works, and we will end up with our latest backup. But it is not very convenient to do this for each snapshot separately.
Unfortunately, zfs receive does not support a -i flag, meaning that something like this will not work:

[CMD=""]# zfs receive -Fvi tank/test < test@1 test@2 test@3[/CMD]

The solution:

That's something I would expect as a reply to this thread :)

Best regards,

George
 
Are you storing the backup datastreams as plain files on UFS filesystems? I think there is a recommendation not to do so, because the streams have no redundancy at all and one flipped bit may render the whole stream unusable. Also, sysutils/zxfer is for backups from one ZFS filesystem to another.
 
Hi,

You can use a shell "while" loop to achieve this quite easily. However, wouldn't it be better to receive the snapshots as they are created? If there is any problem with the receive action, it's better to know sooner rather than later.
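For example, here is a minimal sketch of such a loop (the file names, backup directory, and dataset name are assumptions carried over from the earlier posts). To stay on the safe side it only *prints* the receive commands, in numeric snapshot order, so the plan can be inspected first and then piped to sh:

```shell
# Print the zfs receive commands for saved stream files test@1, test@2, ...
# in numeric snapshot order (sort on the number after the "@").
# Pipe the output to sh to actually run them.
replay_streams() {
    dir=$1
    for f in $(ls "$dir"/test@* | sort -t@ -k2 -n); do
        echo "zfs receive -Fv tank/test < $f"
    done
}
```

Usage would then be something like `replay_streams /backup | sh`, with /backup being wherever the streams were saved.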

thanks Andy.
 
You really, really, really, really shouldn't do this (save zfs streams to files). Especially not as a backup medium. All it takes is 1 single bit (anywhere in the stream) to flip (for any reason), and the entire stream is corrupt and unusable. There's lots of error detection in zfs send/recv, but no error correction.

If you really must do it this way, then you'll need to layer on something like PAR2 to make sure that the saved ZFS send stream is always readable and error-free. And make sure to keep multiple copies of each stream, just in case.
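For instance, with the PAR2 command-line tool it could look roughly like this (a sketch only: -r10 asks for about 10% recovery data, and the file names are made up):

[CMD=""]# par2 create -r10 test@2.par2 test@2[/CMD]
[CMD=""]# par2 verify test@2.par2[/CMD]
[CMD=""]# par2 repair test@2.par2[/CMD]

The verify step would run before every restore, with repair as the fallback if a stream has picked up bit errors.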

However, you'd be better off using another ZFS pool on the recv end. It will save you a *lot* of headaches down the line.
 
To all, maybe I didn't stress enough that the backup server does not support ZFS!!!

@AndyUKG
This would require that the whole send/receive operation be actually performed on the source server, on a different pool maybe, and then be transferred to the backup server as a file stream. Correct?
If that is the case, then there is no reason for incremental backups in the first place since we will end up transferring the full stream.
Again, please correct me if I am wrong here.

@phoenix
I really share your concerns. A wrong bit in one snapshot could easily make the rest useless. I will definitely have a look at PAR2. It also makes me wonder how reliable sending snapshots to tape drives is.
Still, the question remains regarding automating the restore process.

@kpa
The snapshots are transferred to a different file server that does not support ZFS. If the remote server did support ZFS then I wouldn't raise the topic.

To sum up, would I be better off using rsync instead of snapshot streams in this case?

So far, all of my installations have included a separate backup server that can handle ZFS. This is the first time I am trying to figure out a safe way to send snapshots to a non-ZFS system as a means of backup. I could be wrong!

Thank you all, please continue it looks like we are getting somewhere.
 
Yeah, if the backup server cannot run ZFS for whatever reason, then you'd get better results using rsync to do the backups. Especially if the backup server can run a filesystem that supports snapshots.

  1. Create snapshot on the ZFS server.
  2. rsync from snapshot directory to backup server.
  3. Create snapshot on backup server.
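The first two steps could be sketched like this (all names are illustrative: tank/test as the dataset, "today" as the snapshot name, and /backup/test as the target on the backup host; snapdir=visible exposes the .zfs directory if it is hidden):

[CMD=""]# zfs snapshot tank/test@today[/CMD]
[CMD=""]# zfs set snapdir=visible tank/test[/CMD]
[CMD=""]# rsync -a --delete /tank/test/.zfs/snapshot/today/ backup:/backup/test/[/CMD]

Step 3 is then whatever snapshot mechanism the backup server's own filesystem provides, run against /backup/test.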
 
gkontos said:
@AndyUKG
This would require that the whole send/receive operation be actually performed on the source server, on a different pool maybe, and then be transferred to the backup server as a file stream. Correct?
If that is the case, then there is no reason for incremental backups in the first place since we will end up transferring the full stream.
Again, please correct me if I am wrong here.

No, I was suggesting running the receive at the time of backup on the backup server (I was assuming it would be an option to have a backup server that supports ZFS).

If you are unable to load the incremental data each time it is sent, or at least regularly, it would seem best to think about rsync or some other method.

thanks Andy.
 
Complexity means more chance for failure, but you could put some iSCSI target software on the backup server, export enough blocks (sparse perhaps?) to store the backups, put iSCSI initiator software on the ZFS machine, and then create a real zfs backup pool and use send there. Do a test restore on a 3rd machine with ZFS.

The rsync idea might be better.
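If the iSCSI route were taken, the ZFS side itself would be simple once the exported blocks show up as a local disk on the source machine, e.g. as da1 (device and pool names are illustrative):

[CMD=""]# zpool create backup /dev/da1[/CMD]
[CMD=""]# zfs send -R -i tank/test@1 tank/test@2 | zfs receive -d backup[/CMD]

That way the backup really lives in a pool, with ZFS checksumming on the receive end, instead of as raw stream files.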
 
peetaur said:
Complexity means more chance for failure, but you could put some iSCSI target software on the backup server, export enough blocks (sparse perhaps?) to store the backups, put iSCSI initiator software on the ZFS machine, and then create a real zfs backup pool and use send there. Do a test restore on a 3rd machine with ZFS.

The rsync idea might be better.

I am waiting for a ZFS-capable version of the backup server soon.

Until then, I send full snapshots every night. To make sure that my snapshots are in good health, a script compares their MD5 checksums before and after the transfer.
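The local building block of that check can be sketched like this (a sketch only; in the real script the second checksum would come from an ssh call to the backup host, and md5sum is only a fallback for systems without FreeBSD's md5):

```shell
# same_md5 file1 file2 -> exits 0 if the two files have the same MD5.
# Uses FreeBSD's md5 -q when available, md5sum otherwise.
same_md5() {
    a=$(md5 -q "$1" 2>/dev/null || md5sum "$1" | cut -d' ' -f1)
    b=$(md5 -q "$2" 2>/dev/null || md5sum "$2" | cut -d' ' -f1)
    [ "$a" = "$b" ]
}
```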

Regards,
George
 