Complete desktop system backup - how and what?

How should a backup of the system be done? (along with applications/data/permissions preserved)

What are the folders/files that are most essential?

How would such a backup happen if it's a zfs system?
 
Don't back up the OS or your packages. It's usually quicker to reinstall than to restore from backup. You do want to make a list of relevant packages that are installed; pkg prime-list would give you that. As for the OS configuration, you want to back up /boot/loader.conf, /etc/rc.conf, /etc/sysctl.conf and the relevant configuration files in /usr/local/etc/. Then you also want to include your /usr/home home directories.
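As a minimal sketch of the package-list part (the file name here is just an example):
Code:
# save the list of explicitly installed packages
pkg prime-list > /root/pkg-list.txt

# after a fresh install, reinstall them from that list
pkg install -y `cat /root/pkg-list.txt`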

How would such a backup happen if it's a zfs system?
zfs-send(8) creates a file stream of the dataset. You can redirect this to a file and back that file up on some other media. It can be restored using zfs-receive(8).
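As a rough sketch, assuming a dataset named zroot/usr/home (the dataset, snapshot and file names are only examples):
Code:
zfs snapshot zroot/usr/home@backup
zfs send zroot/usr/home@backup > /mnt/usb/home-backup.zfs

# restore later with:
zfs receive zroot/usr/home-restored < /mnt/usb/home-backup.zfs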
 
Don't back up the OS or your packages. It's usually quicker to reinstall than to restore from backup. You do want to make a list of relevant packages that are installed; pkg prime-list would give you that.
But will this method back up the state of the applications? E.g. would a Chromium instance on the original setup, with all its tabs/logins open, be backed up to work the same way on the backup?

zfs-send(8) creates a file stream of the dataset. You can redirect this to a file and back that file up on some other media. It can be restored using zfs-receive(8).
Does this back up everything at once, or is this some kind of incremental backup?
rsync with all its filtering options is the king of selective backup.
What options are usually the best?
and relevant configuration files in /usr/local/etc/
How does one check which files are relevant, given a system that's been running for a couple of years?
 
But will this method back up the state of the applications? E.g. would a Chromium instance on the original setup, with all its tabs/logins open, be backed up to work the same way on the backup?
You're backing up files, not the state of memory. The configuration of Chromium is stored in your home directory, which is something you definitely want to back up.
Does this back up everything at once, or is this some kind of incremental backup?
That's up to you. Both are possible.
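A plain zfs-send(8) writes out the whole dataset; with -i you can send only the differences between two snapshots. A sketch (dataset and snapshot names are examples):
Code:
zfs send -i zroot/usr/home@monday zroot/usr/home@tuesday > /mnt/usb/home-incr.zfs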

How does one check which files are relevant, given a system that's been running for a couple of years?
You configured it, so I would hope you also recorded somewhere what and how you configured it. I can't tell you what you did, so I can't tell you exactly which files. But you could just back up the entire /usr/local/etc/ directory for good measure.
 
I agree with others here: don't bother with the OS, or at worst keep copies of relevant config files, typically found in /etc and /usr/local/etc. rc.conf, periodic.conf and sysctl.conf are the ones I typically modify, but as others have said, just grab the whole directory.

A list of installed packages is good to keep, and it's smart to actually look at the list once in a while. We all install applications to try them out, then forget to remove them and their dependencies.

I've preferred to keep my user data (home directory and shared space) on different devices from the OS. That makes it easy to mirror the user data, or to upgrade the OS by installing a new device and doing a fresh install to it. I'll do this and leave the old OS devices in but unpowered/unplugged, so if the upgrade doesn't work it's trivial to go back.
 
I've preferred to keep my user data (home directory and shared space) on different devices from the OS. That makes it easy to mirror the user data, or to upgrade the OS by installing a new device and doing a fresh install to it.
Definitely recommend putting /usr/home (or /home) on a separate dataset (or a separate partition if you use UFS). On some systems I even create a separate home directory dataset for each user account. Makes it a lot easier to back up the home directories.
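As a sketch, with example pool and user names:
Code:
zfs create zroot/usr/home
zfs create zroot/usr/home/alice
zfs create zroot/usr/home/bob
Each user's home can then be snapshotted and sent independently.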
 
I used to back up everything with "borgbackup", then "restic", but I came back to tar archives (which is what I was doing at first); it is simple and doesn't have dependency problems. Once done, I export them to an external disk and that's it.
I do not care about incremental stuff, because when the time comes to extract the backup I am usually in a hurry, with no time to search Google to try to remember the logic.
I make archives of what I think is most important, in my case the /home directory.
Then a few important config files, like SirDice already said, but that's less than 10 MB.
You do want to make a list of relevant packages that are installed; pkg prime-list would give you that
I've taken note of pkg prime-list, which I didn't know about, thank you.
 
I use three scripts for my backups.

One for dot files:
Bash:
#!/usr/bin/env sh

_bkp_loc="$HOME/bkp"
_sys_conf_loc1="/etc"
_sys_conf_loc2="/usr/local/etc"
_sys_bkp_conf_loc="$_bkp_loc$_sys_conf_loc1"
_sys_bkp_conf_loc2="$_bkp_loc$_sys_conf_loc2"
_home_conf="${HOME}/.config"
_home_local="${HOME}/.local"
_home_bkp="${_bkp_loc}/home/beastie"

bkp_sys(){

    # NOTE: the destination tree under ${_bkp_loc} (boot/, etc/, usr/local/etc/, ...)
    # is assumed to already exist
    pkg prime-list > "${_bkp_loc}"/pkg_list.txt
    cp -a -f -v /boot/loader.conf "${_bkp_loc}"/boot/.
    cp -a -f -v "${_sys_conf_loc1}"/profile "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/devfs.conf "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/devfs.rules "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/rc.conf "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/sysctl.conf "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/fstab "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc1}"/motd.template "${_sys_bkp_conf_loc}"/.
    cp -a -f -v "${_sys_conf_loc2}"/doas.conf "${_sys_bkp_conf_loc2}"/.
    cp -a -f -v "${_sys_conf_loc2}"/openvpn "${_sys_bkp_conf_loc2}"/.
    cp -a -f -v "${_sys_conf_loc2}"/X11 "${_sys_bkp_conf_loc2}"/.
    cp -a -f -v "${_sys_conf_loc2}"/smartd.conf "${_sys_bkp_conf_loc2}"/.
    doas cp -a -f -v "${_sys_conf_loc2}"/polkit-1/rules.d/xfce.rules "${_sys_bkp_conf_loc2}"/polkit-1/rules.d/.

}

bkp_home(){

    cp -a -f -v "${HOME}"/.beastie "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.cshrc "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.cwmrc "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.fehbg "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.login_conf "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.tcshrc "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.zshrc "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.welcome "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.Xdefaults "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.xinitrc "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/xinitrc-xfce "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/xinitrcwm "${_home_bkp}"/.
    cp -a -f -v "${HOME}"/.moc "${_home_bkp}"/.
    cp -a -f -v "${_home_conf}"/dunst "${_home_bkp}"/config/.
    cp -a -f -v "${_home_conf}"/fish "${_home_bkp}"/config/.
    cp -a -f -v "${_home_conf}"/nano "${_home_bkp}"/config/.
    cp -a -f -v "${_home_conf}"/pcmanfm "${_home_bkp}"/config/.
    cp -a -f -v "${_home_conf}"/qt5ct "${_home_bkp}"/config/.
    cp -a -f -v "${_home_conf}"/xfe "${_home_bkp}"/config/.
    cp -a -f -v "${_home_local}"/bin "${_home_bkp}"/dot-local/.

}

bkp_sys && bkp_home && echo "Done!" && exit 0

Then I push the dot files to my git repo (link in my signature) with this script:
Bash:
#!/usr/bin/env sh

cd ~/bkp
git add *
printf '%s\n' 'Commit message?'
read _m
git commit -m "${_m}"
git push origin main
exit 0

My personal files I use this one:
Bash:
#!/usr/bin/env bash

_date=`date "+%Y-%m-%d"`
_disk_dest="/zmedia/zbkp/zhome"
_bkp_dest="$_disk_dest/$_date"

mkdir "${_bkp_dest}"

bkp_files(){

_dirs=("bkp" "Desktop" "Documents" "Downloads" "Monero" "Music" "Pictures" "Public" "Templates" "tmp" "Videos")

    cd "$_bkp_dest"
    for d in "${_dirs[@]}"; do
          echo "Copying $HOME/$d to $_bkp_dest/$d.tar.lz4"
          tar --use-compress-program=lz4 -cf "$d".tar.lz4 "$HOME"/"$d"
    done
    cd

}

bkp_files && echo "Done!" && exit 0

This script saves the tar archives on an external HD. When the backup is done, I upload the archives to my encrypted drive in the cloud (don't ask which company), and I only keep the last two backups on the external disk and the last five in the cloud.
 
As SirDice posted, it is most practical to only back up user home directories and system config files. My routine for the latter is that after changing them, I make a copy to a directory in my /home, so that I only need to back up my user files.
 
But will this method back up the state of the applications? E.g. would a Chromium instance on the original setup, with all its tabs/logins open, be backed up to work the same way on the backup?


Does this back up everything at once, or is this some kind of incremental backup?

What options are usually the best?

How does one check which files are relevant, given a system that's been running for a couple of years?

Nobody can really help you decide which files are worth backing up. I exclude things like /usr/obj and media files I also have elsewhere. Decisions have to be made; for example, if you have local modifications in /usr/src you would want to back that up. I also back up the base system and packages, since they are very small compared to the total system size and since I usually have local modifications and custom-configured port installs.
 
It's difficult to be prescriptive without a lot more details.

However, my view is that disk capacity is cheap these days. A typical FreeBSD root (without user data) would generally not need to exceed 25GB. You can get high capacity external USB3 disks for US$20/TB. That's peanuts. So I don't pick through /etc hoping I got everything. I back up all files (but I de-duplicate successive backups of the same host). I also exclude some common mount points I use like /cdrom and /usb.

Orchestration of the backups for each of my hosts (including time series views and de-duplication) is done with rsnapshot(1), which works with all Unix variants. It uses rsync(1) under the hood. The backups are sent to a single file system (/tank/backups) on my ZFS server, where I keep a curated time series of backups on-line.
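For reference, a minimal rsnapshot configuration could look something like this; the paths and host name are only examples (not my actual setup), and note that rsnapshot.conf requires tabs between the fields:
Code:
snapshot_root   /tank/backups/
retain  daily   7
retain  weekly  4
exclude /cdrom/
exclude /usb/
backup  root@somehost:/ somehost/
cron(8) then runs rsnapshot daily, rsnapshot weekly, etc. to rotate the time series.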

I also regularly zfs-send(8) the entire ZFS tank to an external disk. There are several disks, and they rotate off-site.
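In rough terms that is (pool and snapshot names are examples):
Code:
zfs snapshot -r tank@offsite-20230206
zfs send -R tank@offsite-20230206 | zfs receive -Fdu offsite1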

I can keep my backups on-line because I invested in the resources to make it possible. It's a convenience, not mandatory (but it is one extra copy of the backups). Without the ZFS server, I would send the backups directly to removable media (still using rsnapshot).
 
You configured it, so I would hope you also recorded somewhere what and how you configured it.

We are speaking about a desktop, one's own desktop. I think a professional backup would back up everything, because one cannot judge what is important. I keep order on my desktop, not only for backing up; I know where the things important to me are, and only those do I back up.

However, my view is that disk capacity is cheap these days. A typical FreeBSD root (without user data) would generally not need to exceed 25GB. You can get high capacity external USB3 disks for US$20/TB. That's peanuts.

Indeed, but as said, for me that is no reason not to keep order on my desktop.

And real backups live on many different devices, one for each cycle, so one needs a lot of devices. I only copy the last state to two mirrored ZFS disks and now to a USB stick, mainly personal files. I use rsync for that.
 
Don't back up the OS or your packages
Recently an unexpected power failure happened on a physical server.
This server acts as a host for a lot of virtual machines, some of them of course FreeBSD.
One of the FreeBSD guests wouldn't boot after the power failure, because some important libraries had become corrupted.
These were /lib/libc.so.7 and /lib/libcrypto.so.7.
It was an unsupported version of FreeBSD with a non-default patch level and some very old software compiled from sources.
It was very difficult to recover the same version of those files using only the original ISOs.
So, having a full, complete backup of all files (including the OS base system), I successfully restored those files from the backup.
After recovery I checked the consistency of the other files: I compared all files of the system with a recent backup.
I used rsync to compare every file by checksum: rsync -c -n / rsync://remotebackup/
So the existence of a full backup saved a lot of time.
The base system has a very small size, even with all packages. If you use incremental backups, you will store it only once.
HDDs are so cheap, just back up everything.

How should a backup of the system be done? (along with applications/data/permissions preserved)
Just rsync all files to another HDD or remote storage (excluding obviously temporary and extra-large things like snapshots).
As far as I know, the default rsync installation does not copy 'file flags', but you can enable that by installing rsync from ports.

FreeBSD also has a good tool, pax(1).
It can copy directory hierarchies and make archives like 'tar', but it stores and copies everything, even file flags, and pax handles different filename encodings better than tar.
You can use pax for making a one-time local backup or a non-incremental remote backup via ssh.
I have used pax for copying FreeBSD from HDD to HDD and for making a partition or system image in a single file.
Examples:
Code:
#Copy directory hierarchy / to /mnt/root
cd / ; pax -p eme -X -rw . /mnt/root

#Copy from local to remote
cd /; pax -w -X . | ssh root@hostname "cd /mnt/root && pax -r -v -p eme"

#Copy from remote to current local directory
ssh root@hostname "cd /mnt/a && pax -w -X . " | pax -r -v -p eme

What options are usually the best?
Code:
#!/bin/sh
date_now=`date "+%Y%m%d-%H%M"`
rsync -Hav --exclude-from=/root/bin/backup.exclude --numeric-ids --bwlimit=400 --log-file=/var/log/backup-${date_now} --password-file=/root/bin/backup.rsyncpwd / backup@127.0.0.1::backup/
#--delete is required for mirroring the hierarchy, but double-check the name and contents of the target directory
Snapshots on the backup hosts are welcome.
 
I have always been searching for a "time-machine" substitute, especially the aspect of selectively retrieving past versions of a certain file/directory.

For my desktops, at the moment I am using sysutils/luckybackup.

I am not too happy about the retrieval aspect; I am looking for a better GUI tool, perhaps leveraging ZFS, that will help search for files before retrieval.
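On ZFS at least part of that can be had with plain snapshots: older versions of a file can be pulled straight out of the hidden .zfs/snapshot directory of the dataset. A sketch (dataset, date and paths are examples):
Code:
zfs snapshot zroot/usr/home@2023-02-01

# later, retrieve the old version of a file from that snapshot
cp /usr/home/.zfs/snapshot/2023-02-01/alice/Documents/report.txt ~/Documents/report.txt.old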
 
How should a backup of the system be done? (along with applications/data/permissions preserved)
Besides what was already named: if you use e.g. something like rsync for your backup (as I do, among other tools), take care with the file system of your backup drive; the often-used combination "NAS & Samba" won't preserve your file permissions (a NAS for Unix backups should IMO at least offer an ext* file system and SSH as well as NFS). Using something like "tar" instead will preserve file permissions even on a FAT file system, but it won't be as convenient when it comes to restoring your data.
Backup isn't a piece of software, but a concept. No one can tell you what you need (or want for convenience) to restore your data. You've got to check every piece of software you're using for special needs: MariaDB, MySQL, PostgreSQL? Your backup should include (IMO) a dump (others might back up the database files). Web server? Its document root. The mail spool of a mail server, etc. And of course: the HOME directories themselves (there I exclude all temporary and cache directories).
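For the database case, such dumps could look roughly like this (tools, database and file names are examples, depending on what you actually run):
Code:
pg_dump -U postgres mydb > /backup/mydb.sql
mysqldump --all-databases > /backup/all-databases.sql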
How would such a backup happen if it's a zfs system?
For me this doesn't depend on the source file system at all.

You might want to back up simply everything, even the base installation. For me that doesn't make sense, as the base installation and packages are easily installed again (like others mentioned): both "etc" directories, the loader.conf (! - not named before) and a list of packages are enough. But that's already your decision: should your restore begin with a new installation, or should it include all leftovers from the old one? How fast does it have to be, how much disk space is needed (and: is that something to look after), how many days do you need to be able to roll back, is one separate piece of hardware for your backup enough, or do you need a second backup line because you're doing heavy hardware tinkering, etc.? Maybe one simple memory stick does the trick and you even go with the exFAT file system - it's easy to restore the file permissions of your $HOME, and the etc directories can be stored on it as a tar.gz archive… There's nothing someone else can tell you. It's a concept you've got to write down for yourself; there is seldom a magic tool that does it all for you.
 
I have always been searching for a "time-machine" substitute, especially the aspect of selectively retrieving past versions of a certain file/directory.
I need that for files/directories I write. For keeping remote copies of my files with versions I use:


All in one program (not necessarily an advantage, but it works), whole repository in one sqlite3 database. Version control and remote storage (backup!) with one command (not the case with rcs or cvs).

You may try other version control systems.
 
From the base system it is /boot, /etc and /var. Exclude application payload in /var if applicable.
From the ports it is definitely /usr/local/etc. Then a few creepy ports tend to have config files elsewhere, for instance the notorious /usr/local/share/emacs/site-lisp/default.el. Locally (off-pkg) installed Ruby gems, Perl packages and Python wheels are a nuisance. In a professional installation they would get an extra path under /opt, /usr2 or something alike, and that would go into the backup.
The remainder should then be payload: home dirs, applications, /media, i.e. stuff you work with and must decide about individually.
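As a sketch, that base/ports selection can be grabbed with a single tar invocation (the destination is an example; add excludes for any large payload under /var):
Code:
tar -czf /mnt/usb/sysconfig-`date +%Y%m%d`.tar.gz /boot /etc /var /usr/local/etc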
 
How should a backup of the system be done? (along with applications/data/permissions preserved)

What are the folders/files that are most essential?

How would such a backup happen if it's a zfs system?
You can just export the entire ZFS Boot Environment to a file and you have the entire system secured.

Back up like this:

Code:
# beadm list
BE        Active Mountpoint  Space Created
13.1      NR     /           33.4G 2023-01-10 16:55
13.1.safe -      -            2.1G 2023-01-20 17:09

# beadm export 13.1 | xz -9 > 13.1.BE.xz

Restore like this:

Code:
# xz -c -d 13.1.BE.xz | beadm import 13.1.RESTORE

# beadm list
BE           Active Mountpoint  Space Created
13.1         NR     /           33.4G 2023-01-10 16:55
13.1.safe    -      -            2.1G 2023-01-20 17:09
13.1.RESTORE -      -           31.2G 2023-02-06 18:15

# beadm activate 13.1.RESTORE

When doing a 'Bare Metal Recovery', just install the same (or a newer) FreeBSD version with the Auto (ZFS) scheme - it does not matter whether GELI is enabled or not.

Then do this and reboot(8):

Code:
# xz -c -d 13.1.BE.xz | beadm import 13.1.RESTORE

# beadm activate 13.1.RESTORE

# reboot

You can also use the same method to 'move' your system to other/new hardware, or between a VM and real hardware.

Hope that helps.
 