Simple backup protocol for home user.

I use rsync with good results, but two caveats: 1) it seems to want at least 16 GB of memory,
probably due to concurrency while backing up...
I run it on Raspberry Pis (1G memory) all the time. I've been running it since ... forever, when computers had very little memory.

2) some files on the destination are identical to newly backed-up files but end in a tilde,
like text.txt~, and remain after newer backups instead of being deleted as expected,
making the backup area larger than the source. Not a problem in this
use case, but if someone knows whether there is a switch rsync needs (or one I should drop),
it would be useful to know.
On the target side, rsync creates backups of files it overwrites; those are the ones ending in a tilde. This is probably useful when using it to copy directory trees, probably not useful as part of a backup strategy. There is a command-line switch to turn it off and on; look at the man page. I would guess it's "-b", but please check.
 
2) some files on the destination are identical to newly backed-up files but end in a tilde, like text.txt~, and remain after newer backups instead of being deleted as expected, making the backup area larger than the source.
Add the parameter "-v" to rsync, and it will tell you more about what's going on. Also check the file permissions and owners on your remote file system: create a single file on your computer, rsync it, and check the rsynced file's owner.

About memory: I've never noticed high memory usage on any computer. And as ralphbsz wrote, it wasn't a problem long ago, when 1 GB of memory was just woolgathering…
 
Be aware that rsync is a sync tool: if you have an important file and then you do echo "" > important.txt, you end up with an empty file that gets synced to your backup. I have probably used _all_ major backup tools within the last 20 years to find the best one (yeah, I am a little obsessed with backups), and I recommend borg backup in the first place, restic in the second. (Keep the keys for your borg backup in a safe place!)
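That caveat is easy to demonstrate with throwaway directories, assuming rsync is installed (directory names are examples):

```shell
# demo: rsync faithfully mirrors destructive changes to the backup
mkdir -p demo-src demo-dst
echo "important data" > demo-src/important.txt
rsync -a demo-src/ demo-dst/      # the backup now holds the data
: > demo-src/important.txt        # accidentally truncate the source file
rsync -a demo-src/ demo-dst/      # the next sync truncates the backup copy too
```

Snapshot-based tools like borg keep each run as a separate archive, so an older, intact copy survives mistakes like this.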

Thanks for the borg backup suggestion. I tried it and it appears to have worked nicely. Proof will be in the eating of the pudding if and when I try to restore after some failure.
 
Thanks for the borg backup suggestion. I tried it and it appears to have worked nicely. Proof will be in the eating of the pudding if and when I try to restore after some failure.
You're welcome. I also use borg with my clients, backing up many TB of small files. I tried many tools, and most of them show their flaws once you have a serious number of files and replicas in your backup storage, or you want to restore a large number of files from a backup made a long time ago. Not so with borg; it has not let me down, and deduplication is a great feature, especially for a home user, because your data usually doesn't change much - you can keep quite a lot of snapshots of your data without wasting too much space. However, as you mentioned, testing the restore of files is of utmost importance. And I cannot emphasize enough: keep the borg keys in a safe place (or several places ;-) - a backup without the keys to recover the encrypted data is worthless.
 
Add the parameter "-v" to rsync, and it will tell you more about what's going on. Also check the file permissions and owners on your remote file system: create a single file on your computer, rsync it, and check the rsynced file's owner.

About memory: I've never noticed high memory usage on any computer. And as ralphbsz wrote, it wasn't a problem long ago, when 1 GB of memory was just woolgathering…
A quick fix to my question on page 1 of this thread, especially useful before a restore if need be:

Code:
cd /backup/root/.mozilla...     find... [ see below ]
cd /backup/root/.cache...
cd /backup/usr/home...
cd /backup/usr/local...
cd /backup/var...
cd /backup/usr/ports...
.... and any 7th portion of the filesystem(s) found extraneously populated...
find . -type f -name "*~" -exec /bin/rm -v {} \;

This deletes the extra files so the space used on the source and destination is nearly equal.
[ usual caveat about deleting files on a backup applies... ]
Substitute your own mount point (e.g. /mnt) for /backup in the example(s).
BTW, this simple command makes rsync a much more capable backup/restore tool IMHO.
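A small variation on the find command above: preview the matches before removing anything. This sketch wraps both steps in a function (the function name is mine; point it at your own backup mount point):

```shell
# preview, then delete, tilde backup files under a given directory
clean_tildes() {
    dir=$1
    find "$dir" -type f -name '*~' -print            # dry run: list candidates
    find "$dir" -type f -name '*~' -exec rm -- {} +  # delete in batches
}
# example: clean_tildes /backup
```

Running the -print pass first costs a few seconds and avoids surprises when a glob matches more than you expected.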
 
I'm reading through the Handbook 19.10 Backup Basics and will likely follow the instructions on dump & restore, but I wanted to check in with the forum and see if there were any pearls you can offer me first. A GUI option would be nice but I'm happy to use the terminal.

I have two backups -- one that adds my files to everything I ever made, and the other is just a mirror of my home directory. With multiple users on a box, every user's home gets backed up. In case of a meltdown, I just install the system anew, so I only need to back up my files and system settings.

I have copies of system files like /etc/rc.conf, /boot/loader.conf, /etc/hosts, etc. in my 'admin stuff' directory, so I don't have to bother backing them up separately. I use two 500 GB external USB hard drives for the backups; they are stored away from my computer with their volume names taped on them.

The backups are simply done with net/rsync. For easy handling, I made two aliases to run the commands: `bak-all` is the incremental backup, `bak-now` is the mirror.

Code:
bak-all   rsync -az --info=progress2 --exclude-from=/media/bsd_all/exclude-list-rsync.txt /home/ /media/bsd_all/HOME_backup_all/ ; ( echo incremental backup @FreeBSD ; date ; echo ---------- ) >> /media/bsd_all/backup.bsd_all.log

bak-now   rsync -az --info=progress2 --delete --exclude-from=/media/bsd_now/exclude-list-rsync.txt /home/ /media/bsd_now/HOME_backup_now/ ; ( echo mirror backup @FreeBSD ; date ; echo ---------- ) >> /media/bsd_now/backup.bsd_now.log

I found out that it is rather important to check, before running rsync, whether the backup volume is actually mounted at the place rsync writes the files to.
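That check can be scripted. A sketch, assuming rsync is installed; the path is the example from the aliases above, and `mount | grep` is a simple heuristic (a mount point that exists as an empty directory would otherwise silently fill up the root filesystem):

```shell
# refuse to back up into an unmounted mount point
backup_if_mounted() {
    target=$1
    if ! mount | grep -q " on ${target} "; then
        echo "error: ${target} is not mounted, skipping backup" >&2
        return 1
    fi
    rsync -az --info=progress2 --delete /home/ "${target}/HOME_backup_now/"
}
# example: backup_if_mounted /media/bsd_now
```

Putting this in the alias (or a tiny script) means a forgotten mount aborts loudly instead of writing to the wrong disk.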

The command uses an exclude file to prevent a bazillion useless cache files and ISO files from being backed up. It also writes a few lines to a log file on the backup volume to keep track of the backups I've done.

The backups have one disadvantage: I have to run them manually -- fetch the hard drives first, mount them, etc. I just have to remind myself to make backups more frequently.
 
The backups have one disadvantage: I have to run them manually -- fetch the hard drives first, mount them, etc. I just have to remind myself to make backups more frequently.
Known problem. My working solution:
  • using the network instead of USB drives (just power them on and they are available)
  • a backup script which stores a timestamp on execution
  • a desktop notification when this timestamp is too old
  • making everything eye candy and mouse-usable
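The timestamp idea can be as small as a stamp file plus an age check. A sketch; the stamp location and the 7-day threshold are arbitrary choices:

```shell
# record the time of the last successful backup in a stamp file
STAMP="$HOME/.last-backup"   # assumed location
touch "$STAMP"               # run this at the end of the backup script
# later, from a login script or cron job:
if [ -n "$(find "$STAMP" -mtime +7 2>/dev/null)" ]; then
    echo "last backup is more than 7 days old" >&2
fi
```

On a desktop, the echo line is where a notification command would go; the find -mtime test works the same either way.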
 
  • if you use tar(1), consider pax(1) instead. Its default format is ultimately portable,
    i.e. you can read the backup from any other OS box
  • do some sanity checks (assertions) on your backup: e.g. size(backup) >= size(source), #files, checksums, ...
  • regularly (once a year?) do a recovery manœuvre
  • include the timezone name or offset in the timestamp, or use UTC, e.g. date +%F_%R_%Z
  • pkg search backup shows a subset of available solutions. Either you find a good one or you adapt one to fit your needs.
  • IMHO dump(8)/restore(8) is still the most reasonable solution for UFS; it has disadvantages though. 2nd best is rsync. Obviously these are a matter of personal preference.
  • with UFS, you can benefit from inserting an I/O scheduler (gsched(8)), pull my rc-script here
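The sanity-check bullet above can be a one-function script. This sketch only compares regular-file counts (total size and checksums could be added the same way; the function name and paths are mine):

```shell
# assert that the backup has at least as many regular files as the source
check_backup() {
    src=$1 dst=$2
    s=$(find "$src" -type f | wc -l)
    d=$(find "$dst" -type f | wc -l)
    if [ "$d" -lt "$s" ]; then
        echo "warning: backup has $d files, source has $s" >&2
        return 1
    fi
}
# example: check_backup /home /mnt/backup/home
```

A count check catches the common failure modes (unmounted target, interrupted run) cheaply; checksums are slower but catch silent corruption too.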
 