Rsync backup: which directories?

When using rsync for backups you could, of course, always back up /. But that's a bit crude and wasteful (of bandwidth and disk space).

For example, you wouldn't normally want to back up /dev or /tmp or /sbin or... But which directories, as a general rule, do you back up (without being wasteful)?

So far, I have the following dirs/files on my "if you back up these, you're probably covered" list (for 99% of FreeBSD installs):

  • /boot/loader.conf (I find myself putting a tunable in there in most installs)
  • /etc (Duh)
  • /root (root's homedir)
  • /usr/home (Your people's homedirs)
  • /usr/local (This is rather crude, but apart from /usr/local/etc and /usr/local/www a lot of chrooted programs go in here, like Postgres in /usr/local/pgsql or Resin in /usr/local/resin3, with all their data. And you never know which programs will be installed on average. On my installs I also have a backup dir in here containing *SQL server dumps.)
  • /var (Also crude, but there IS a lot of stuff in here. Mails, passwd backups, logs, standard MySQL db/config-files location, ...)
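As a rough sketch, the list above could be driven by a single rsync invocation. The destination host and path here are made up, and the command is echoed rather than executed so it can be reviewed first:

```shell
#!/bin/sh
# Sketch only: back up the directories listed above to a remote host.
# "backuphost" and the /backups/myhost path are assumptions -- adjust to taste.
DIRS="/boot/loader.conf /etc /root /usr/home /usr/local /var"

# -a        archive mode (permissions, times, symlinks, devices)
# -R        use relative paths, so /etc lands under .../myhost/etc on the target
# --delete  mirror deletions so the backup matches the source
CMD="rsync -aR --delete $DIRS backuphost:/backups/myhost"

echo "$CMD"   # echoed, not run: inspect it before pointing it at a real host
```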

Am I missing anything important? Could I do some things a little more fine-grained?
 
Have a read through hier(7) for a description of the FreeBSD directory hierarchy, and you can decide from there what you need to back up.

Normally you only want to back up things which are variable or hold state: config files, log files, and your users' home directories.

On my server box at home I only back up /etc, because the only thing critical to me is configuration. /var is kind of important if you're in a business and the logs are used for audit trails. But look through the above link and decide what you believe is critical.

Remember that it's your data; you need to decide what is important and what isn't ;) Once it's gone, it's gone.
 
I've never really thought through an answer to this question. I began backing up individual conf files to, say, /conf_bup and /fat32/conf_bup, and once rsync is set to "incremental" usage, a third copy seems not so much redundant as worrying: "did I leave some location out?" crosses my mind each time I do the less-than-full incremental rsync...
 
Do people really need to back up /usr/local? Couldn't you save space on your backup media by skipping it and instead backing up only a list of installed ports and their configuration?

The manpage for portmaster(8) shows one way to generate and use a list of installed ports. (Scroll all the way down to the last example in the manpage.) If restoring to a pristine drive, all you'd need are steps 1, 9 and 10.

The configs are stored in the options files found under /var/db/ports.
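A minimal sketch of that approach, with arbitrary output file names; the commands are built and echoed rather than executed, so it stays harmless to run as-is:

```shell
#!/bin/sh
# Sketch: record the installed ports and their build options instead of
# backing up /usr/local itself. The file names below are arbitrary choices.
LIST_CMD="portmaster --list-origins > /root/installed-port-list"
OPTS_CMD="tar -C /var/db -cf /root/port-options.tar ports"

echo "$LIST_CMD"   # one port origin per line, for reinstalling later
echo "$OPTS_CMD"   # archives the options files under /var/db/ports
```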

As far as I can see, the only reason to back up /usr/local is to speed up the restore. And of course, that's not a trivial concern. Some people play the time:space tradeoff one way, some play it the other, depending on their particular needs.
 
To make your life simpler, back up everything. Why? Because then the restore process is simple:
  1. boot LiveCD
  2. partition/format disks
  3. mount filesystems
  4. rsync
  5. install boot code
  6. reboot

If you just back up things piecemeal, then you have to add a bunch of install steps (OS, ports tree, ports tools, ports, etc.), and reconfigure steps, and then copy config and data files around, and twiddle this, and twiddle that. Too much hassle.
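To illustrate how short the full-restore path is, here is a sketch of those six steps written out as a script to review rather than run. The disk (ada0), the GPT/UFS layout, and the backup location are all assumptions:

```shell
#!/bin/sh
# Sketch of the LiveCD restore steps above, written to a file for review.
# ada0, the single-disk partition layout, and backuphost are assumptions.
cat > /tmp/restore.sh <<'EOF'
#!/bin/sh -e
gpart create -s gpt ada0                       # 2. partition the disk
gpart add -t freebsd-boot -s 512k ada0
gpart add -t freebsd-ufs ada0
newfs -U /dev/ada0p2                           # ...and format it
mount /dev/ada0p2 /mnt                         # 3. mount the filesystem
rsync -aH backuphost:/backups/myhost/ /mnt/    # 4. pull the backup back
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0   # 5. boot code
EOF
echo "review /tmp/restore.sh before running it from a LiveCD"
```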

Back up everything (start from /), but use an exclude file to skip unneeded directories like:
  • /dev
  • /tmp
  • .mozilla/firefox/cache
  • other scratch/cache/temp directories
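A sketch of what that looks like; the exclude patterns mirror the list above, the host and paths are made up, and the rsync command is echoed for review:

```shell
#!/bin/sh
# Sketch: full-system rsync driven by an exclude file.
# The patterns mirror the list above; extend with your own scratch dirs.
cat > /tmp/rsync-excludes <<'EOF'
/dev/*
/tmp/*
.mozilla/firefox/*/Cache
EOF

# -a archive mode, -H preserve hard links, --delete mirror deletions
CMD="rsync -aH --delete --exclude-from=/tmp/rsync-excludes / backuphost:/backups/myhost"
echo "$CMD"   # echoed rather than executed
```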

If you are worried about storage on the backup server, then use a compressed filesystem, or one that supports dedupe.

Trying to do anything less than a full backup is just asking for problems down the road. Trying to optimise the backup process leads to an increase in complexity of the restore process. It's just not worth it.

Plus, if you use a filesystem (like ZFS) that supports snapshots on the backup server, you get "incremental" backups for free (just rsync to the same directory each time, but snapshot it before starting). Doing it piecemeal makes that a lot harder to get right.
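On the backup server, that snapshot-then-rsync cycle could look like the following sketch (the pool and dataset names are assumptions; the commands are echoed for review):

```shell
#!/bin/sh
# Sketch: "incremental" backups via ZFS snapshots on the backup server.
# tank/backups/myhost is an assumed dataset name; myhost is the client.
DATASET="tank/backups/myhost"
TODAY=$(date +%Y-%m-%d)

echo "zfs snapshot $DATASET@$TODAY"             # freeze the previous run
echo "rsync -aH --delete myhost:/ /$DATASET/"   # then update the same dir
```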

For example, using a full backup process like the above, we can restore a server with less than 5 minutes of manual work (boot, format, mount, rsync). Sure, the data transfer across a 100 Mbps network can take a few hours for some servers, but the amount of manual labour is minuscule and more than makes up for it.
 
I believe the confusion comes from mixing the terms 'backup' and 'archive'.

You back up something so that you are able to quickly restore the previous state of things. You rarely need to have more than one valid backup available.

You archive something because you want to make sure you have a copy of it somewhere and won't have to re-do it again. You may have many archived versions of the same thing, all of them perfectly valid and valuable.

For backup, I agree with phoenix's opinion that it is best to have a full backup, because the primary goal of a backup is to be able to restore quickly.

For archive... one could use backup+ZFS snapshots to get point-in-time recovery, that is, access to old backup data. Or one could archive only non-recoverable data, such as content and software configuration.
 
phoenix said:
To make your life simpler, back up everything. Why? Because then the restore process is simple:
  1. boot LiveCD
  2. partition/format disks
  3. mount filesystems
  4. rsync
  5. install boot code
  6. reboot

If you just back up things piecemeal, then you have to add a bunch of install steps (OS, ports tree, ports tools, ports, etc.), and reconfigure steps, and then copy config and data files around, and twiddle this, and twiddle that. Too much hassle.
I am right on the same page as you with this. It takes so much effort to get everything working right in the first place that you want to be able to restore the whole shebang as it once was. Not to mention having to document everything that went into producing the system you currently have; otherwise there will be something left out that doesn't work as it should...

For example, using a full backup process like the above, we can restore a server with less than 5 minutes of manual work (boot, format, mount, rsync). Sure, the data transfer across a 100 Mbps network can take a few hours for some servers, but the amount of manual labour is minuscule and more than makes up for it.
One thing I'm not totally sure about - are you running ZFS on your root, and are you restoring from Fixit#? Have you tested it in 8.2? If so, do you have any pointers on how you do it?

I did have everything written up and tested to do the following as part of a full restore:
  1. Install zroot (ZFS)
  2. Install storage, basically just a HDD mirror
  3. Connect the backup pool.
  4. Use sysutils/zxfer to copy storage/home across; it uses zfs send/receive underneath.
  5. Use zxfer to transfer the backed-up root mirror (basically everything that was not the home directory) across to its new home, much as rsync would. You can do the same thing manually, of course.
This worked perfectly in 8.0, but now I have to # chflags noschg a few more things, and some files are being read as it tries to write, so not everything gets transferred. Still, at first glance it appears to mostly work, but the mouse won't work in GNOME and there are some .ICEauthority errors. I'm rebuilding world etc. in an attempt to fix things.
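Done by hand, the zxfer transfers in steps 4 and 5 boil down to zfs send/receive. A sketch with made-up pool and snapshot names, echoed rather than executed:

```shell
#!/bin/sh
# Sketch: what zxfer does underneath, with assumed pool/snapshot names.
SRC="backup/zroot@restore"   # snapshot on the connected backup pool
DST="zroot"                  # the destination root pool

# -R sends the snapshot with all descendant datasets and properties;
# -F rolls the target back, -d uses the sent path on the target, -u skips
# mounting the received datasets.
echo "zfs send -R $SRC | zfs receive -Fdu $DST"

# Clear the immutable flag on files that would otherwise block the transfer:
echo "chflags -R noschg /mnt"
```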

It would seem that if I could mount the zroot and install from Fixit it would be a cleaner solution (as nothing should be busy when we try to write to it, as the system is not in use). However, I'm not sure exactly how to do that. If you have any suggestions I'm all ears.
 
carlton_draught said:
One thing I'm not totally sure about - are you running ZFS on your root, and are you restoring from Fixit#? Have you tested it in 8.2? If so, do you have any pointers on how you do it?

No, we do not use ZFS for the OS filesystems, only the data filesystems. We use UFS and gmirror(8) for the OS install.

It would seem that if I could mount the zroot and install from Fixit it would be a cleaner solution (as nothing should be busy when we try to write to it, as the system is not in use). However, I'm not sure exactly how to do that. If you have any suggestions I'm all ears.

You should be able to use mfsBSD, Frenzy, or a bsdinstall CD for this, as they all have ZFS support and give you a full LiveCD environment.
 
phoenix said:
You should be able to use mfsBSD, Frenzy, or a bsdinstall CD for this, as they all have ZFS support and give you a full LiveCD environment.
I'm just putting this out there because I'm not really sure where else to put it.

I'm re-experimenting with transferring back a copy of a backed-up zroot pool from Fixit# using zfs send/receive, starting with a freshly installed zroot. Problem: when it comes time to reboot, I see:
Code:
FreeBSD/x86 boot
Default: zroot:/boot/kernel/kernel
boot:/
int=00000006   err=00000000   efl=00010086  eip=0018a32b
eax=... <7 lines snipped>
BTX halted
Not really sure where to go with this one. Ideally, it would be nice to be able to transfer via zfs send/receive to a non-live zroot through a BSD install CD using Fixit#. However, I have not been able to get this approach to work.

Getting nowhere here was originally my impetus for using rsync to do the transfer instead of zfs send/receive. I figured that if I started with a fresh install, installed rsync onto that fresh install (since for some reason rsync has been removed from the BSD install DVD), and rsynced everything across, it should work OK. The number of files that can't be written because they are busy is pretty minimal, and an update/rebuild should rewrite most of the files that have changed anyway. And this approach worked back with FreeBSD 8.0.

Anyway, if anyone knows of a way to get past the "BTX halted" thing I'd be glad to hear it. Otherwise, I will continue with the rsync approach.
 