Backup software

What backup software do you use to back up Windows PCs and Macs to your FreeBSD servers? Looking for recommendations. For Linux I just rsync.
 
Clonezilla. I back up the main PCs (accounting, economists, HR) with images (IMG). The girls can't live without their cat wallpapers. And I just don't feel like setting everything up from scratch. This way the girls don't scream, and I can calmly have a shot of whiskey after work instead of hanging on one PC for three hours.
 
Not a full answer to your question, but the replies so far also weren't full answers.
Rsync: way too slow for large datasets, even though great for its universal availability
Clonezilla: my choice for image backups of system partitions (Windows, Linux, anything non-ZFS), but not helpful for quick and easy daily data backups

Daily backups I do as "incremental forever" backups with deduplication, using Borg Backup. It is available on FreeBSD as
archivers/py-borgbackup and on Linux. If you're not on a dual-boot machine, you can use something like WSL on Windows to run the Linux binary, which is reported to work. Borg is available for the Mac as well, to the best of my knowledge, but I have no experience there.

Advantages:
  • Lightning fast incremental forever backups
  • Deduplication massively cuts down on backup size and time
  • Network support (e.g. ssh tunnels)
  • Repository is mountable, allowing easy browsing and copying of files from backup
  • Fine-grained auto-purge of old backup data, while retaining integrity of incremental backups
  • Strong encryption available
  • BSD License
  • No backup server required, just a valid storage repository (network, USB drive, cloud, whatever)
Disadvantages:
  • Not efficient or easy to use for image style backups of operating system drives

Check here for more info: https://www.borgbackup.org/

I have used borg backup since 2018 without a single error in my backups.
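A minimal daily routine looks roughly like this (host, repository path and source directory are only examples):
Code:
# one-time repository setup with encryption
borg init --encryption=repokey ssh://backup@freebsd-box/backups/borg
# daily "incremental forever" run: only new, non-duplicate chunks get stored
borg create --stats ssh://backup@freebsd-box/backups/borg::'{hostname}-{now}' /home/me
# auto-purge old archives while keeping a sensible retention ladder
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 ssh://backup@freebsd-box/backups/borg
# browse any backup as a regular filesystem
borg mount ssh://backup@freebsd-box/backups/borg /mnt/borg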
 
I like using restic to back up Windows and Linux machines to local drives and also to a FreeBSD destination over ssh. It seems to work well and has built-in archive checks to validate the data. To be fair, I have not yet tested a complete disaster recovery. It's best to write a wrapper script to make periodic runs easier.
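Such a wrapper doesn't need to be more than a few lines - the repository, password file and paths here are made up for illustration:
Code:
#!/bin/sh
# where the repository lives and how to unlock it (examples)
export RESTIC_REPOSITORY=sftp:backup@freebsd-box:/backups/restic
export RESTIC_PASSWORD_FILE="$HOME/.restic-pass"
# back up, thin out old snapshots, then run the built-in archive check
restic backup /home/me/Documents /home/me/Projects
restic forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
restic check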
 
In all my experience with Windows (since 95) there is no really useful way to back up Windows other than creating full images or clones of partitions, with something like GNOME Partition Editor, Clonezilla, or partimage (comes with GParted Live); plain dd will also do.

And yes, recovery means cloning back to the point of the last backup ("Back to the Future IV").
So:
1. Do it regularly, do it often.
2. Keep your data on separate partitions as far as possible.
3. Keep your Windows partitions as small as possible (GParted does a good job of managing Windows partitions) so you don't spend whole nights (or weekends) shoveling empty TBs.
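If you go the plain-dd route, something like this already does the job (device and file names are only examples - double-check if= and of= before hitting Enter):
Code:
# image the Windows system partition into a file on the backup drive
dd if=/dev/ada0p3 of=/backup/win-c.img bs=1M status=progress
# recovery is the same command with if= and of= swapped
dd if=/backup/win-c.img of=/dev/ada0p3 bs=1M status=progress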

For me this issue was another reason to eventually turn my back on Windows.
The plus side was that I had lots of then-unused HDDs to play with, and to start my first FreeBSD raidz2 (ZFS) fileserver 😁
 
To be fair, I have not yet tested a complete disaster recovery.
You better do.
All backups, backup software, backup procedures, backup strategies, configs... are just theory - until disaster strikes.
What actually works, what is really useful for your needs, and what is practicable in a stressful situation of panic and pressure,
you can and will only see when you actually do a recovery.

Until then, everything else is nothing but the nice feeling that lets you convince yourself you're safe.
Only under real test conditions can you see if you really are.
Don't let the disaster become your first test condition.
 
You better do.
I'll make my own trade-offs, thank you. I meant checking it independently of restic itself, which will validate the archive on demand. To really test a backup, in my view, one would restore it elsewhere and then run an automatic file-by-file comparison between the original data and the restored backup to see the differences. For valid practical reasons, I suspect almost nobody actually does this, including you.
 
Years back, I wrote my own.
It is a Delphi app that wraps the command line version of RAR as the backup engine.
This creates non-proprietary .RAR backup files.

I also use Syncredible which is a sync application.
It is very flexible and can sync either One Way or Both Ways.

I use One Way, so all Source files are copied to the Destination but never cleared or lost on the Destination.
This avoids an "oopsie" like the one where I deleted all of my 2019 photo work from my NAS.
 
I'll make my own trade-offs, thank you.
I never meant to tell you what to do.
Personally I don't care how anybody does their backups (as long as it does not affect my data), and just as you don't care how I do my backups, I couldn't give a sh* if you were happy with cp /* /dev/null as a "backup" *cough*

All I wanted to say is, one better be well versed in the recovery of a backup, and decide if that's what one wants before actual disaster strikes.
If you're already doing this, that's good, fine, exemplary. I don't know. I don't care.
We are in an open, public forum. Everybody on the internet can read what's written here. Some may just pass by and read without posting themselves. So in every post I make I try to respect all readers, not only the one I respond to, and to give something that may be useful even to somebody unseen.

I simply had some experience in the past with various backup software and strategies, ran into the one or other occasion of actual recovery, and thought, 'fu*! this is not really what I wanted!'
Even the best case may mean hours of fumbling under stress until most things are halfway back to a state one can live with, but not really fully satisfying my needs.
All I wanted to say is: don't learn it while you actually need to recover, but before.
And again, if you already did so, that's good, fine, exemplary....

I meant checking it independently of restic itself,
I know now. I had understood your post to mean that you do no testing at all. That's what I was objecting to - in general, not to your backup strategy, nor to you personally.

For valid practical reasons, I suspect almost nobody actually does this, including you.
Oh, but I do.
I not only tested my backup strategy under "hard-core" worst case full disaster testing conditions, I also validate every single backup.
I don't just want to believe I can trust my backups. I want to know for sure what I can reproduce, and how, without panicky fumbling over trade-offs while I recover. I've had that. I want to deal with trade-offs when I create my backup strategy, not during the recovery.
That's why I suggested proper testing, rather than commanding it.
Which again, of course, does not mean anybody now has to question their backup strategy, nor tell me it's none of my business, 'cause I know it's not.
I also admit my backup strategy ain't for anybody else but me, nor is it suitable for making full backup copies of many TBs every hour. But I do validate every single one of them.
And again, no need for being snappy if you do it otherwise.

peace out.
 
Oh, but I do.
I not only tested my backup strategy under "hard-core" worst case full disaster testing conditions, I also validate every single backup.
I am genuinely interested in your approach to this. If you are just running the validate step in your backup software, that's not what I'm talking about because that might not protect you from bugs in the backup software. If you are restoring to a separate drive and manually spot checking sizes and individual files, that's not what I'm talking about because you might miss something. However, if you have an automated way to compare restored files to the originals, perhaps something similar to mtree, I would like to learn about it.
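To make the question concrete, the kind of automated comparison I have in mind would be something like this with FreeBSD's mtree (paths are only examples):
Code:
# record sizes and sha256 digests of the original data
mtree -c -K sha256digest -p /data/projects > /tmp/projects.spec
# restore the backup somewhere else, then compare it against that spec
mtree -p /restore/projects < /tmp/projects.spec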
 
I am almost sure this will not be anything that satisfies you, but since you are
genuinely interested
it would be arrogant of me not to answer.
So:
I don't use any "real backup software" at all (anymore). I'm a one-man show, not root of some enterprise server, so I don't need to deal with more than several TB of data, have no responsibility for other people's data, etc.

It's my way. An important part of my way was learning not to just install, run and (blindly, untested) trust some backup software - it's less that the backup software may have a bug, and more that one is not sufficiently aware of how to deal with recovery correctly when disaster strikes, or that the software is too complex, too opaque for one's personal needs [again: my opinion].
The very first thing was to define a backup strategy: what do I want to back up, how, where to, when, and how often. That includes having different pools for different kinds of data, each with its own backup plan, and deciding which data is saved where - not having everything in one large pool and then figuring out how to run the whole shebang over several TB every day, while most of the time only a few MB actually change.

I like to have uncompressed, 1:1 copies of my directories - nothing you could back up large enterprise servers with hourly; as I said: it's my way. It works for me.
In my experience, most occasions where I need to fall back on a backup are because I messed up a single file, or a directory at most. So to me it's way too complex, too complicated, too long-winded to recover whole filesystems, or even whole systems, maybe even dealing with SQL databases..., just to get a single file out of it.
For recovery I simply copy the file back, or do an rsync in the opposite direction.
Of course I could do a whole-system recovery this way - I actually needed to once or twice, and I was astonished how well and how quickly it worked: my whole system was back up again within a few hours, (almost) as if nothing had ever happened at all. (Of course, because this ain't Windows 😅)

I simply run some (primitive) self-written sh scripts that mostly just do rsync, diff, and checksums (well duh, I don't personally check every single byte every time), plus snapshots, and tar for redundancy - nothing spectacular, really; nothing anybody with some basic scripting knowledge couldn't do themselves.
I once had mtree as part of my plan, but currently it's not (it slows down my backup routine too much); as you said: one always has to deal with trade-offs. The question everybody needs to answer for himself is: where?
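To give an idea of those scripts, a stripped-down sketch with made-up paths:
Code:
#!/bin/sh
# 1:1 mirror of the source directory to the backup pool
rsync -a --delete /home/me/ /backup/home-me/
# verify by checksum; any line of output means source and copy differ
rsync -anc --itemize-changes /home/me/ /backup/home-me/
# extra tar copy for redundancy
tar czf /backup/archive/home-me-$(date +%Y%m%d).tar.gz -C /home/me .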
 
My Windows backup:

- The C: drive is kept small and backed up as an image. The "trick" here is that I store an uncompressed image on a compressed ZFS dataset that is also deduped. So similar backups share most of their blocks, but I still get the benefit of compression. Also, this way the images are directly mountable to look inside (see the sketch after this list).

- I use Windows only for gaming. Games that are modded are backed up with rsync; games that are not modded are not backed up.

- Assorted other Windows bits live on an SMB share that sits directly on the ZFS server and is hence covered by the server's send/receive backup.
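On the FreeBSD side this looks roughly like the following - pool, dataset and image names are only examples, and I'm assuming the image is of the NTFS partition itself with fusefs-ntfs available for browsing it:
Code:
# compression squeezes out the empty space, dedup lets similar images share blocks
zfs create -o compression=lz4 -o dedup=on tank/winimages
# attach an image as a memory disk and mount it read-only to look inside
mdconfig -a -t vnode -f /tank/winimages/pc1-c.img
ntfs-3g -o ro /dev/md0 /mnt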

My Macs have one backup USB drive each for time machine. In addition I rsync import data over to that same server.

I am also planning to put a time machine backup directly on NFS. That is supposed to be possible if you make a filesystem living in a file on the share. Makes sense. That way you only need to connect to the network and don't have to bother with USB disks.
 
I am also planning to put a time machine backup directly on NFS. That is supposed to be possible if you make a filesystem living in a file on the share. Makes sense. That way you only need to connect to the network and don't have to bother with USB disks.
That's exactly what I did with my wife's former MacBook: let Time Machine do its backups to an Apple-specific image file (you need one) via NFS on my FreeBSD ZFS fileserver. Worked pretty well.

Until my wife got her new MacBook Pro (the one with the new M1). I slaved away whole weekends on that issue only to figure out that Apple somehow prevents Time Machine from accessing any NFS or Samba share. (She can access the server just fine, but TM can't.) Also, the shell tool needed for a certain preparation step (I simply forgot which one) no longer works for this task. Search me! My theory is that Apple wants to push people towards their cloud service. Now there is an external USB drive attached to her MacBook again; not really ideal, especially since my wife uses WLAN only.

But maybe I'm mistaken and overlooked something.
If you get Time Machine to do backups via NFS on a new MacBook, please let me know.
But please don't send me links to Apple's How-To/Help sites on this topic; those are for the older models, I've already been through them, and it simply does not work.
 
That's exactly what I did with my wife's former MacBook: let Time Machine do its backups to an Apple-specific image file via NFS on my FreeBSD ZFS fileserver. Worked pretty well.
[...] Apple somehow prevents Time Machine from accessing any NFS or Samba share. [...]
If you get Time Machine to do backups via NFS on a new MacBook, please let me know.

Thanks for the heads-up.

I wonder whether iSCSI might be an option if NFS and SMB fail. That would be a more direct block device. Does macOS have an iSCSI client built in? Need to look that up...
 
I wonder whether iSCSI might be an option
Cannot help you there.
If I remember correctly, there simply is no choice within TM anymore to choose anything but an external drive or the cloud service - or if there is, TM simply rejects the connection... 🥵 bad memories. It's one of those things you spend whole weekends on, trying not to throw hardware out of the window, and being afraid the neighbours might come over to ask why there is so much loud cursing...

But - please - if you do manage it (on a recent model), please tell me, and I'll give it another shot.
ty
 
If you get Time Machine to do backups via NFS on a new MacBook, please let me know.
But please don't send me links to Apple's How-To/Help sites on this topic; those are for the older models, I've already been through them, and it simply does not work.

Seems to work via SMB on my M4 mini:
Code:
# attach the raw disk image from the share without mounting it
hdiutil attach -nomount -noautofsck -imagekey diskimage-class=CRawDiskImage /Volumes/doomdata/tm-loeschen.diskimage
# format the attached device (disk9 here; use whatever hdiutil reported) as journaled HFS+ named "meh"
diskutil eraseVolume JHFS+ meh /dev/disk9
# register the freshly formatted volume as a Time Machine destination
tmutil setdestination -a /Volumes/meh

It's backing up now.
 
But - please - if you do manage it (on a recent model), please tell me, and I'll give it another shot.
I've had it working with net/netatalk3 for years now. Looks like it's time to update to Netatalk 4 if I can't convince the wife to ditch her Mac: https://netatalk.io/security

You might need a magic incantation to get your Mac to trust non-Apple storage:
Code:
# let Time Machine show and accept unsupported (non-Apple) network volumes
sudo defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

Though maybe choosing the right mimic model in your afp.conf might do the trick as well. I did both. Not sure which one worked.
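For reference, the relevant bits of such an afp.conf could look roughly like this (the share path and the mimic model value are only examples):
Code:
[Global]
mimic model = TimeCapsule6,106

[Time Machine]
path = /tank/timemachine
time machine = yes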

N.B. I do not use the Avahi crapware. I'm quite happy with dns/mDNSResponder_nss
 
Thank you guys!
Yeah, I'm not into macOS enough to know things like
You might need a magic incantation to get your Mac to trust non-Apple storage:
(it's not part of Apple's otherwise exemplary official help sites; at least I found nothing)
...*sigh* so I need to tinker with my wife's MacBook again... 'kay.
TY!
 
rsync for Linux and FreeBSD

Otherwise I copy with whatever is convenient :p (cp, FileZilla, Explorer/Nautilus/anything with an SMB GUI). I manage files manually and can't imagine needing anything else, and usually tar big stuff or game saves up as .gz.
 