Your personal approach to backing up data on FreeBSD

Tell us how you make backup copies of your personal files that would be too painful to lose: memorable photos and videos, code repositories, PDF e-book libraries, etc.

Of course, we are talking specifically about personal home and office computers running FreeBSD. Not expensive corporate systems (although you may have brought corporate methods into your personal backup setup).

For example, inside one machine you can use one pool and copy it to a second raidz2 pool (a fast backup). In addition, you can copy files within the same machine between different file systems using rsync: from ZFS to UFS, to prevent data loss if ZFS becomes corrupted for reasons unique to ZFS (a failed zpool upgrade, for example).
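A minimal sketch of both in-machine copies, with hypothetical pool and path names:

sh:
# replicate a dataset to the second pool via a snapshot
zfs snapshot tank/home@fast
zfs send tank/home@fast | zfs receive -F backup/home
# independent rsync copy onto a UFS filesystem
rsync -a --delete /tank/home/ /ufsdisk/home/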

Of course, ZFS snapshots can be easily and automatically copied to remote ZFS systems.
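For instance (a sketch; host and pool names are hypothetical, and the cron wiring is omitted):

sh:
# initial full replication
zfs snapshot -r tank@2024-06-01
zfs send -R tank@2024-06-01 | ssh backuphost zfs receive -F dozer/tank
# afterwards, send only the difference between two snapshots
zfs snapshot -r tank@2024-06-02
zfs send -R -i tank@2024-06-01 tank@2024-06-02 | ssh backuphost zfs receive -F dozer/tank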

Can you share your own original FreeBSD backup methods? How much redundancy is enough for you to feel calm?

Are you using LTO streamers? M-DISC? DVD-RAM? External USB HDDs? USB flash drives? Just sending encrypted ZFS datasets as files to free cloud services? Maybe you have a cold backup server somewhere in the basement or under your bed, which is turned on only to make a backup? Describe your NAS. Everything is interesting.

UPD: at first I forgot to mention such a suitable and inexpensive method as backup to S3.
 
I have spare SAS 3.5" slots in my main workstation. I buy cheap, large used disks on eBay, put a backup on them, and move them off-site. I also have a backup array in a NAS.

Smaller data is synced online to machines on the internet.

I have LTO tape machinery, but I never found satisfactory software for datasets that are larger than one tape.
 
When I was on UFS2, I used plan9port vbackup on a WD My Cloud server converted to Linux running plan9port.

Now I have changed to ZFS on FreeBSD, so this approach is not feasible any more. I am going to add some features to plan9port vac (mainly observing the uarchive and temporary flags of the ZFS filesystem) and use that in the future.

The backups are served as Plan 9 style dump file systems.

From time to time the venti arenas will be saved to a second WD My Cloud.
 
All my systems (except for the laptops) have mirrored disks/SSDs. This covers hardware failure.

My ZFS datasets are snapshotted daily, keeping the snapshots for a few days. Snapshots can either be rolled back, or selected files can be copied out of their respective .zfs/snapshot directories.
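A cron-driven sketch of that rotation (the dataset name and retention are hypothetical; ports such as sysutils/zfstools automate the same idea):

sh:
#!/bin/sh
# take today's snapshot, then keep only the seven newest daily ones
DS=tank/home
zfs snapshot "${DS}@daily-$(date +%Y-%m-%d)"
zfs list -H -d 1 -t snapshot -o name -S creation "${DS}" |
    grep '@daily-' | sed '1,7d' | xargs -n 1 zfs destroy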

Then once a month I create a disaster recovery backup using UFS dump for UFS and zfs send for the zpools. This is stored on some bootable external USB disks which can be used to restore any of my servers.
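In outline, that monthly pass amounts to something like this (a sketch; device paths and pool names are hypothetical, and making the external disks bootable is a separate step):

sh:
# UFS systems: full filesystem dump onto the mounted external disk
dump -0Lauf /mnt/usb/root.dump /
# ZFS systems: recursive snapshot, then a full replication stream stored as a file
zfs snapshot -r zroot@dr
zfs send -R zroot@dr | gzip > /mnt/usb/zroot-dr.zfs.gz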

I cannot recover from the expected 9.0 earthquake in this part of the world. Then again, I have a lot more to worry about than computers, like rebuilding the house and rebuilding life and the lives of family here. And, that assumes I myself survive such an event. So the line is drawn at the most common events that might happen to my systems.
 
My approach is pretty simple. The valuable data is all documents, pictures, etc. These all live on my server, which uses a simple ZFS mirror for the data drives. I wrote a script that uses tar and archivers/pbzip2 to put all the home folders in a single file, which I then copy to an external drive. I was able to acquire a large number of 250GB hard drives very cheaply, and these currently fit all my data. The script is invoked manually. I really need to get more disciplined about running it more often, but I don't generate a lot of new data. Of course, automating the process a little more would be beneficial.
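The script amounts to something like this (a sketch; the destination path is hypothetical, and pbzip2 spreads the compression across all cores):

sh:
#!/bin/sh
# pack all home directories into one compressed archive on the external drive
DEST="/mnt/external/homes-$(date +%Y%m%d).tar.bz2"
tar -cf - /usr/home | pbzip2 -c > "${DEST}"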
 
Pretty simple also, no fancy things, for my personal usage:

- multiple external SATA HDDs (I trust good ol' classic spinners) rescued when I tear down a machine at work (so I have buckets of them; of course I test them first with smartctl -x /dev/daX)
- simple shell scripts involving rsync (e.g. for mirroring: rsync -aPtv --delete /path/to/source/ /path/to/dest/)
 
I have 2- or 3-way mirrored ZFS pools, so I'm not bothering too much. But since I'm new to ZFS I have started to use snapshots, so I take a snapshot before I start to mess with the system.
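That pattern is cheap insurance. A sketch, assuming a default zroot layout:

sh:
# recursive snapshot before risky changes
zfs snapshot -r zroot@pre-mess
# if things go wrong, roll the affected dataset back
zfs rollback zroot/ROOT/default@pre-mess

For the operating system itself, bectl(8) boot environments give the same kind of undo button.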
 
I have one 1TB hard drive and one 500GB hard drive. I use ZFS on them and just copy important data over. I don't use a mirrored pool because they are different hard drive models. I remember once using a mirrored pool with an NVMe SSD and a 5400rpm hard drive; I think it caused slowness, since a mirror's writes only complete as fast as its slowest member. I have taken snapshots of their datasets just in case something gets deleted accidentally.

If I had more than one hard drive of the exact same model, I could use a mirrored pool for redundancy.
 
I sync to different machines which are not online at the same time. I have seen what a lightning strike can do to your hardware; a mirror or RAID is not going to cut it. One set of mirrored SSDs is attached via USB 3 every so often and locked in a safe when not in use.
Remember: two is one and one is none.

I have LTO tape machinery, but I never found satisfactory software for datasets that are larger than one tape.
Wasn't that exactly what tar was written for?
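GNU tar (archivers/gtar) at least handles archives larger than one tape: in multi-volume mode it pauses and asks for the next volume when the current tape fills. A sketch, assuming a sa(4) tape drive:

sh:
# write an archive spanning several tapes, prompting for each change
gtar -c -M -f /dev/sa0 /tank/data
# extract it, feeding the tapes back in the same order
gtar -x -M -f /dev/sa0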
 
For the record: a lightning strike can make your network card blow a capacitor so violently that the cap punches a hole through the graphics card. The only thing still working was an old Plextor DVD drive. And the router case only contained loose debris. Take note, should your backup consist of ZFS snapshots on the same machine.
 
Homebrew or commercial of some sort? If commercial, what brand if you don't mind?

My backup "NAS" is a regular big tower PC with 6 HDDs, ECC memory, etc. It can actually boot into both FreeBSD and Linux and receive backups the same way under either. It's not running 24/7.
 
At home I use restic to back up to a secondary drive and to OneDrive (it could be any cloud provider; the family plan with MS is $99 per year and includes 6TB of storage in six 1TB chunks, which is pretty cheap). I used to keep it more basic than that, but restic is fast and doesn't burn up a lot of space, so I just run it separately against two separate geographic locations. It helps to have a decent symmetrical high-speed connection.
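Day to day that is roughly the following (a sketch; repository locations are hypothetical, and the OneDrive side goes through restic's rclone backend):

sh:
# one-time repository setup on the secondary drive
restic -r /mnt/backup/restic init
# regular runs: back up, then thin out old snapshots
restic -r /mnt/backup/restic backup /home
restic -r /mnt/backup/restic forget --keep-daily 7 --keep-weekly 5 --prune
# same again, but against the cloud repository
restic -r rclone:onedrive:backup backup /home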

Professionally I used restic to back up to a NetApp NFS mount, and then those backups were picked up by Rubrik to a local appliance, which also sent everything to a Rubrik AWS instance.

As an aside, Rubrik sadly didn't have a FreeBSD agent, which is a situation the FreeBSD Foundation needs to advocate to correct; we would be well served by FreeBSD being supported directly by more infrastructure vendors. Rubrik would be a good candidate since they are a NetApp partner and NetApp is so tightly tied to FreeBSD. I doubt porting the agent would be much of a lift. It's friction like this that makes adoption a harder sell.
 
As an aside, Rubrik sadly didn't have a FreeBSD agent, which is a situation the FreeBSD Foundation needs to advocate to correct; we would be well served by FreeBSD being supported directly by more infrastructure vendors. Rubrik would be a good candidate since they are a NetApp partner and NetApp is so tightly tied to FreeBSD. I doubt porting the agent would be much of a lift. It's friction like this that makes adoption a harder sell.

Why would you consider a closed-source backup solution when you might have to read the backup in 20 years with the vendor long gone?
 
I curate both FreeBSD and Linux systems.

Except for the systems that I boot from USB, I always use redundant storage, usually mirror(s).

A few of my small systems (media server clients and firewalls) are fairly static and boot from USB. For these systems I take a dd copy of the USB stick a couple of times a year, compress it, and save it on my ZFS server. Recovery generally involves a loopback mount (using mdconfig on FreeBSD or losetup on Linux) of the backup file, and the creation of a new USB stick by copying file systems and installing the boot loader. [You can't rely on writing a new USB stick with dd for recovery, because USB stick sizes vary.]
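On the FreeBSD side that looks roughly like this (a sketch; device names and the partition index are hypothetical):

sh:
# image the stick and compress the copy
dd if=/dev/da0 of=/tank/backups/fw-usb.img bs=1m
gzip /tank/backups/fw-usb.img
# recovery: attach the image as a memory disk and mount a partition read-only
gunzip /tank/backups/fw-usb.img.gz
mdconfig -a -t vnode -f /tank/backups/fw-usb.img   # prints e.g. md0
mount -o ro /dev/md0p2 /mnt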

For my traveling notebook, I just carry a USB stick, and rsync my home directory and mail folders to the USB stick daily. When I get back home I rsync the USB stick back to my desktop.

For all my systems (including just the root of my ZFS server), I use rsnapshot to pull backups to a dedicated "backup" pool on my ZFS server. This uses rsync under the hood to copy entire systems, and I keep a time series of de-duplicated backups going back many years. Each backup appears to be a (more-or-less) complete rooted tree, but files that don't change are just hard links to the original copy. I'm a big fan of having all backups available online.
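For reference, the heart of such a setup is a handful of rsnapshot.conf lines plus cron jobs running rsnapshot daily and rsnapshot weekly (a sketch; hosts, paths, and retention counts are hypothetical, and the fields must be tab-separated):

rsnapshot.conf:
snapshot_root   /backup/rsnapshot/
cmd_ssh         /usr/bin/ssh
retain          daily   7
retain          weekly  4
# pull a whole remote system over ssh+rsync
backup          root@server:/   server/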

I have a small hot-swap SATA disk enclosure on the ZFS server, and every month or so I zfs-send the entire ZFS tank to a 12TB disk which gets rotated off-site.
 
When I was on UFS2, I used plan9port vbackup on a WD My Cloud server converted to Linux running plan9port.

Now I have changed to ZFS on FreeBSD, so this approach is not feasible any more. I am going to add some features to plan9port vac (mainly observing the uarchive and temporary flags of the ZFS filesystem) and use that in the future.

The backups are served as Plan 9 style dump file systems.

From time to time the venti arenas will be saved to a second WD My Cloud.
I used to use venti with ufs2 but that was a long time ago and it was slooow.... I also used vac some.

Now I just use zfs send/receive.
 
My home server (which has roughly a quarter million files that I care about): The boot disk is an SSD, which contains only the stuff that can be reinstalled from the network, or set up again by the admin (such as the BSD distribution, packages, and local tweaks). All administrative actions (in particular configuring) are logged, and those logs are part of that quarter million files. So if my home server is utterly destroyed, I can create a new one in about 2-3 days after obtaining new hardware. In theory, I should be backing up that boot disk to somewhere (with something like an image backup), so if just the SSD fails, I can recover more quickly. That's on my to-do list.

The main file system is ZFS, set up with a mirror of two good quality hard drives, different models. They are well cooled and in the basement (steady temperatures). They are monitored with smartctl, including alarms on elevated temperatures and elevated error rates. I should replace them (they are 7 and 9 years old), and that's also on my to-do list.

The content of the file system is backed up every two hours (it should be every hour; my to-do list is pretty full, and rewriting the backup program in Rust to get the runtime down is going to be the next thing to work on). The backup disk is on a long USB 3 cable, physically inside a big safe with heat-insulated walls. If a small house fire destroys my server, the backup disk is likely to survive. The backup program is homebrew, and does full archiving (old versions of files are not deleted) and full-file dedup (upgrading it to block-based or Rabin-fingerprint dedup is at the bottom of the to-do list, probably not worth my time).
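Full-file dedup of that sort can be sketched in a few lines of sh (purely illustrative, not the poster's actual program; paths are hypothetical): each unique file content is stored once under its digest, and a per-run manifest maps paths to digests.

sh:
#!/bin/sh
# toy content-addressed backup: one stored copy per unique file content
STORE=/backup/store
MANIFEST=/backup/manifest-$(date +%Y%m%d%H%M)
find /home -type f | while IFS= read -r f; do
    h=$(sha256 -q "$f")                          # digest of the content
    [ -e "${STORE}/${h}" ] || cp -p "$f" "${STORE}/${h}"
    printf '%s %s\n' "$h" "$f" >> "${MANIFEST}"  # path -> content mapping
done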

Sadly, I still have not implemented an automated restore from the backup; if I did lose my server and had to recreate everything from backup, it would probably take several days of tinkering and software development. No need to even mention my to-do list, which exploded somewhere above.

The next layer is: the backup is copied off-site to a commercial cloud provider, roughly once per night. To save bandwidth and money, only the current copies of files are stored off-site, and to not overload our pretty slow network connection, the off-site copy only runs between 1am and 5am (we have both night owls and early risers in the family). A restore from the off-site copy would be a nightmare, but theoretically feasible (I think, I've never actually tested it, but I know individual files are fine).

Finally, roughly once a month I make a full copy of the backup disk to a portable disk, which is stored in someone's office at work (at least 20 miles away from home). Thank you for reminding me to do that again, I think the last one was around Christmas, so "roughly once a month" is a euphemism for "I need to find more spare time".
 
I'm using UFS and make backups with dump(8) over SSH to a remote server.

To back up a live UFS filesystem it used to be required to have soft updates journaling disabled. On the new 14.1-RELEASE it's no longer required, but I haven't migrated to it yet to test that.
The journal was disabled from single-user mode using tunefs -j disable /

Directories that don't need to be included in the backup and can easily be downloaded again from the internet, like /usr/ports, are skipped via the nodump flag, set with chflags nodump /usr/ports (the -h0 in the dump command below makes the flag honored even for a level 0 dump).

The actual dump command for remote backup is:
/sbin/dump -C16 -b64 -0Lau -h0 -f - / | gzip | ssh -p 2222 user@ftp.example.com dd of=/home/user/05062024.dump.gz

The backup is then restored into a Hyper-V VM, where any freebsd-update or pkg upgrade is tested before the actual update of the server is done. The restore steps are as follows:

sh:
#1. create the backup
#2. copy the backup to the Windows host's d:\ using WinSCP and rename it to root.dump.gz
#3. create disk.iso using CDBurnerXP, containing root.dump.gz and this restore1.sh script
#4. create a Gen2 VM, secureboot=off, with 2 CD-ROM drives and 1 HDD of 50GB or more
#5. the first DVD is the bootonly ISO, the second DVD is disk.iso with the backup
#6. boot into the live CD
#7. mount_udf /dev/cd1 /media
#8. sh /media/restore1.sh


# wipe the disk and recreate the GPT layout (EFI, root, temp, swap)
gpart destroy -F da0
gpart create -s gpt da0
gpart add -t efi -s 100M da0
gpart add -t freebsd-ufs -s 40G da0
gpart add -t freebsd-ufs -s 5G da0
gpart add -t freebsd-swap -s 4G da0
newfs_msdos -F 32 -c 1 /dev/da0p1
newfs -U /dev/da0p2
newfs -U /dev/da0p3
# install the EFI loader
mount -t msdosfs /dev/da0p1 /mnt
mkdir -p /mnt/EFI/BOOT
cp /boot/loader.efi /mnt/EFI/BOOT/BOOTX64.efi
umount /mnt
mkdir /tmp/root
mkdir /tmp/temp
mount -o rw /dev/da0p2 /tmp/root
mount -o rw /dev/da0p3 /tmp/temp
# restore(8) needs scratch space; point TMPDIR at the bigger temp partition
# (the script runs under sh, so use export rather than csh's setenv)
export TMPDIR=/tmp/temp
cd /tmp/root
zcat /media/root.dump.gz | restore -rvf -
rm /tmp/root/restoresymtable
ee /tmp/root/etc/fstab
# edit the fstab, as the original server uses /dev/raid/r0p2 and the VM uses da0
#/dev/da0p2 / ufs
#/dev/da0p4 none swap

ee /tmp/root/etc/rc.conf
# comment out everything to disable all services and networking except the hostname variable,
# as the lab network uses a different VLAN and subnet.
# shutdown and remove the DVD drives from the VM.
 
In my case the backup is on another server of mine, so yes, I trust the location and the server and don't need to encrypt its storage. If it were in a location where I don't have physical control, then yes, encryption would be a good option.
 
I'm using UFS and make backups with dump(8) over SSH to a remote server. …
I like your commitment to UFS, VladiBG. That's unusual enough to be worth noting.

My backup approach is simple: only my personal data in /home is saved; the rest doesn't matter.
The tools I use are borg and rsync. Borg backups live on another internal disk/pool, rsync backups on an external disk; a borg sketch follows below.
ZFS send/receive could be the next challenger ...
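A sketch of the borg side (repository path and retention counts are hypothetical):

sh:
# one-time: create an encrypted repository on the internal pool
borg init --encryption=repokey /mnt/pool/borg
# each run: archive /home, then prune old archives
borg create --stats /mnt/pool/borg::home-{now} /home
borg prune --keep-daily=7 --keep-weekly=4 /mnt/pool/borg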
 