cracauer@
Developer
This is probably a very common problem, but I've never seen an easygoing solution.
Let's say I have many TB of primary storage and I want it backed up in fixed-sized chunks dictated by media size. Let's say 800 GB for a tape or 3 TB for a USB harddisk.
Traditionally people let a single tar "run over" and continue on the next medium, within the same tarfile. Obviously this has some disadvantages:
- If one medium breaks, you can't access the subsequent ones anymore
- It takes a huge amount of time, and any interruption to the tar process (a reboot, etc.) makes you start over
- Even if everything goes right, the moment you want to access a file on the backup you have to wade through all the previous media
There must be software out there that splits a big file tree into lists of files, the sum of the file sizes in each list fitting the size of the medium. But I don't see such a thing.
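For what it's worth, the splitting itself is just bin packing, so a small script can do it. Here's a minimal sketch in Python using first-fit-decreasing; the 800 GB capacity and the paths are assumptions, and it ignores per-file tar overhead, so in practice you'd leave some slack:

```python
#!/usr/bin/env python3
# Sketch: split a file tree into per-medium file lists via
# first-fit-decreasing bin packing. Capacity and root path are
# placeholders -- adjust for your actual media.
import os

def collect_files(root):
    """Yield (path, size) for every regular file under root."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                yield path, os.path.getsize(path)
            except OSError:
                pass  # skip unreadable/vanished entries

def pack(files, capacity):
    """First-fit-decreasing: place each file (largest first) into
    the first bin with room; open a new bin when none fits.
    Returns a list of [used_bytes, [paths]] bins."""
    bins = []
    for path, size in sorted(files, key=lambda f: f[1], reverse=True):
        for b in bins:
            if b[0] + size <= capacity:
                b[0] += size
                b[1].append(path)
                break
        else:
            bins.append([size, [path]])
    return bins

if __name__ == "__main__":
    CAPACITY = 800 * 10**9          # assumed medium size: 800 GB
    bins = pack(collect_files("/storage"), CAPACITY)
    # One file list per medium, suitable for tar -T
    for i, (used, paths) in enumerate(bins):
        with open(f"medium.{i:03d}.list", "w") as f:
            f.write("\n".join(paths) + "\n")
```

Each list can then be fed to an independent archive per medium, e.g. `tar -cf /dev/nst0 -T medium.000.list`, so losing one medium doesn't affect the others.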
I'm just short of creating many small ZFS filesystems, each the size of the backup medium. But that's stupid, because the media size can change, and you can't automate what happens when one filesystem grows too big.