Backup Server

What is the best way to do nightly backups of my Windows 10 workstation over the network to my FreeBSD server? This will be a system for long-term use, meaning that I don't plan on changing hardware or software much once it's running. I want this to be as automated as possible and would like (if possible) to avoid using Samba. I've thought about using something like SFTP, but I'm not sure that I could automate that from the Windows side.
 
Are you running Windows Pro? I created a ZFS volume and exported it over iSCSI, then formatted the iSCSI target as NTFS and just let Windows File History handle everything. Windows Home Edition doesn't have an iSCSI initiator, though.
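If it helps, creating the volume on the FreeBSD side is just a couple of commands. This is only a sketch; the pool name "tank" and the size are made up, and the iSCSI export itself is a separate step:

Code:
# create a 500 GB zvol to back the iSCSI target (pool/size are examples)
zfs create -V 500G tank/winbackup
# the zvol shows up as a device node that the iSCSI daemon can export
ls -l /dev/zvol/tank/winbackup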
 
The same can be done with Samba. ZFS is Samba-aware. Create a ZFS dataset on FreeBSD, export it via Samba, mount it on Windows, and make it the target for File History. Actually, in a business environment I would have everyone use those Samba shares by default. I would take regular snapshots for occasional rollback, and I would back up the FreeBSD file server with ZFS remote replication. FreeNAS can be used instead of vanilla FreeBSD in a Windows shop.
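Roughly something like this; a sketch only, with made-up pool, share, and user names, and assuming the Samba port is already installed:

Code:
# dataset for the backups
zfs create -o compression=lz4 tank/winbackup
# add a share to Samba's config
cat >> /usr/local/etc/smb4.conf <<'EOF'
[winbackup]
   path = /tank/winbackup
   valid users = backupuser
   writable = yes
EOF
sysrc samba_server_enable=YES
service samba_server restart
# nightly snapshot for the occasional rollback (run from cron)
zfs snapshot tank/winbackup@$(date +%Y-%m-%d)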
 
The last time I used Samba (albeit some years ago) it was not great; I take it things have improved since then? I'm unfamiliar with ZFS. I was planning on having an 8 TB RAID array off an old 3ware card, and I have read that ZFS on hardware RAID is a no-go; is this wrong? I'm running Windows 10 Pro, although iSCSI seems a bit complex for a nightly backup of important directories and a weekly whole-drive backup. The server hardware is older (dual Pentium Pro) so I'd prefer to keep everything process-intensive on the client side. Thanks for the suggestions so far. I'm still new at this, although I have used FreeBSD in the past... but only for a router, and that was about 8 years ago. If you have any more ideas, they are more than welcome! I plan to continue to evaluate as the thread progresses.
 
The last time I used Samba (albeit some years ago) it was not great; I take it things have improved since then?
I set up Samba a few weeks ago (similar purpose, backup of Macs around the house), but haven't put it into production yet. Samba itself isn't bad; I hear that setting up the user authentication (translation between Unix users and the outside clients) may be gnarly, but I haven't gotten around to that yet (had to do the tax returns first, and there are yard work projects that eat my weekends).

I'm unfamiliar with ZFS. I was planning on having an 8 TB RAID array off an old 3ware card, and I have read that ZFS on hardware RAID is a no-go; is this wrong?
It's not exactly that "ZFS on hardware RAID is a no-go". Yes, you could do that, but it would be dumb. ZFS contains its own RAID layer, which works better than hardware RAID (because it is file-system aware), yet is no harder to set up than hardware RAID. How about the following: use the same physical drives, configure your 3Ware card to be in JBOD mode (every physical drive is exposed directly to the FreeBSD OS as an individual, non-redundant drive), and then configure ZFS in a suitable RAID mode. If you have enough drives, I would make it two-fault tolerant, meaning RAID-Z2.
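For example, if the 3Ware card in JBOD mode exposes six disks as da0 through da5 (device and pool names here are only an illustration):

Code:
# two-fault-tolerant pool built directly from the individual disks
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool status tank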

Another question is the following: If this volume is "only" a backup, why does it need to be so fault tolerant? After all, you only look at the backup when the original Windows machines have already failed. So if the backup crashes, usually no harm is done, and it takes a double fault (one Windows machine + the backup) to lose data. So here would be my suggestion for simplification: If you have the money to buy a new disk, forget using the 3Ware card and the RAID, and just buy a single 8TB drive (they are below $300). This solution is much simpler, and easier to manage and support.
 
I used sysutils/bacula-server to back up to file on ZFS. There is also a Windows client. Works well once you get it set up and running the way you want.
 
I don't like either Samba or iSCSI for this. The sole Windows user in my lab uses http://www.netdrive.net which is the best SSHFS client for Windows I know of. A Windows NFS client is an option if you have money for the Enterprise edition. The 3ware cards you are talking about are pure hardware RAID cards and can't be put into HBA mode; don't use ZFS on top of one. Come to think of it, you probably don't have ECC RAM, so ZFS is not an option for you anyway.

Somebody mentioned Bacula. That is non-trivial to set up.
 

I'll look into Samba, although the other post mentioning NetDrive seems promising. ZFS is interesting, but my main concern is that disk access will saturate the CPUs; definitely something to think about for the future, though. In truth, yes, it is only a backup drive and could be handled by a single drive, but I'm trying to learn as I go and this provides some functional education. I am, however, right at the limits of what I feel I can do with this without the learning curve becoming too steep. Thank you all for the suggestions; you've given me much to think about.
 
I actually started off thinking about using rsync, cron, and Samba... I thought there had to be a better way though, haha.

Not being very Windows savvy, I have previously used rsync to synchronise a Windows host onto a ZFS box (back then running OpenSolaris), then used ZFS snapshots so that I was able to roll back to set points in time.
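For what it's worth, a minimal sketch of that idea on the FreeBSD/ZFS side, assuming something like cwRsync (rsync over SSH) is reachable on the Windows box; the host name, paths, and pool are illustrative:

Code:
# pull the important directories over the network (run nightly from cron)
rsync -a --delete backupuser@winbox:/cygdrive/c/Users/me/ /tank/winbackup/
# keep a point-in-time copy to roll back to
zfs snapshot tank/winbackup@$(date +%Y-%m-%d)
# later, if needed:
# zfs rollback tank/winbackup@2016-06-01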
 
I've been using AMANDA for years in my home environment and at our company. It's a simple "fire and forget" solution: setup is done in a few minutes and it will just work for years to come... It can also handle ZFS snapshots or replication and can be set up to tolerate even infrequent backups, e.g. from a laptop that isn't always running/connected.
There's also a Windows client available [1], although I've never used it, as I've been running Windows only in VMs for the last ~10 years, so they always just got snapshotted by the host. (Except for one remaining MSSQL Server, which writes its nightly dumps to a Samba share from where the AMANDA backup chain picks them up...)


[1] https://wiki.zmanda.com/index.php/Zmanda_Windows_Client
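For the curious, adding a client is roughly a one-liner. This is only a sketch, assuming a config named "daily" already exists under /usr/local/etc/amanda/daily and the Zmanda Windows Client is installed on a hypothetical host "winbox"; the disklist path and dumptype name are illustrative:

Code:
# register the Windows client's directory with the "daily" config
cat >> /usr/local/etc/amanda/daily/disklist <<'EOF'
winbox "C:/Users" user-tar
EOF
amcheck daily   # verify the config and client connectivity
amdump daily    # run the backup (normally scheduled from cron)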
 
I'm unfamiliar with ZFS, I was planning on having an 8 TB RAID array...

iSCSI just needs a block device. A partition works best. You can also use a memory disk or sparse file, though that might not be worth the trouble.
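If you'd rather not dedicate a partition, a file-backed target works too. A sketch, with arbitrary size and paths:

Code:
# sparse backing file for the exported disk
truncate -s 500G /backup/windows-backup.img
# either point the ctl.conf "path" at the file directly, or expose it
# as a memory disk device first:
mdconfig -a -t vnode -f /backup/windows-backup.img -u 0   # creates /dev/md0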

I'm running Windows 10 Pro, although iSCSI seems a bit complex for a nightly backup of important directories and a weekly whole-drive backup.

Setup is quick, support is built into FreeBSD (no external packages like Samba needed), and it's faster than CIFS. I also chose it because I didn't want to mess with a whole Samba setup just to back up 10 GB of stuff from one laptop. The terminology might be confusing, but here's what my /etc/ctl.conf looks like:

Code:
portal-group pg0 {
        # accept discovery requests from anywhere, without authentication
        discovery-auth-group no-authentication
        listen 0.0.0.0
}

target iqn.2016-06.com:windows-backup {
        portal-group pg0
        # CHAP credentials the Windows initiator logs in with
        chap "username" "password"
        lun 1 {
                # block device (or file) backing the exported disk
                path /path/to/windows/backup
        }
}

You can pretty much just copy-paste that. The names of the "portal-group" and "target" are just for identification purposes: while they need to follow the above format (such as iqn.something.com), the actual names are arbitrary. Just add the username and password you want to use on the appropriate line, and change the "path" line to match that of the device node for the block device you're exporting (e.g. /dev/da0 or /dev/gpt/backup).

Instructions for the Windows iSCSI Initiator are readily available online. Once you've connected to the target the first time Windows will automatically reconnect to it on every log-in.
 
sysutils/bacula-server keeps backups separate from the catalog database, and you have to back up the catalog database too. Recovery with Bacula requires some solid knowledge that is atypical for home users. That is a non-starter even for smaller shops (fewer than 100 servers, like mine). Bacula is a great enterprise solution, but most home users are not running enterprise infrastructure.

Actually, at home and at work I use sysutils/rsnapshot to back up my desktop to centralized storage. At work that centralized storage happens to be a ZFS pool; at home it is just an extra HDD in my desktop. I back up my desktop as well as all other devices at home using net/rsync with the --inplace option to the file server running DragonFly HAMMER. The combination of rsync --inplace with HAMMER enables me to browse through the HAMMER history of files (ZFS has no equal) using slider.

https://www.dragonflybsd.org/docs/docs/howtos/howtoslide/
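The nightly pull is essentially one cron job on the file server. A sketch with made-up host and paths:

Code:
# --inplace updates changed blocks inside existing files, which keeps the
# filesystem's history (HAMMER or ZFS snapshots) from ballooning
rsync -a --inplace --delete me@desktop:/home/me/ /backup/desktop/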

Once every two weeks I use BorgBackup to encrypt data and remotely archive it to Amazon Glacier. In the past I used sysutils/duplicity for that, but I came to like archivers/py-borgbackup (BorgBackup) better. By the way, HAMMER is both NFS- and Samba-aware, so I could just use NFS shares as my home directories if I wanted full history on my OpenBSD desktop.
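In case it's useful, the BorgBackup side looks roughly like this; the repository location and paths are made up:

Code:
# one-time: create an encrypted repository on the remote host
borg init --encryption=repokey me@remote.example.com:backups/desktop
# each run: archive the data into a timestamped archive
borg create --stats me@remote.example.com:backups/desktop::docs-{now} /home/me/Documents
# thin out old archives
borg prune --keep-weekly=8 me@remote.example.com:backups/desktop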

My original suggestion was based on the fact that he runs Windows. I am not very familiar with that OS, so I am not sure if the things I mentioned above can be done there. At home I run only OpenBSD, with the exception of my file server, which runs DragonFly. In my lab at work we use a mixture of OpenBSD, FreeBSD, and Red Hat; our desktops run Red Hat. I maintain the lab infrastructure.

Very nice reading for people who are interested in learning more, taken from this thread:
https://www.reddit.com/user/rsyncnet/?sort=hot


I have some expertise in this area [1], so I would like to provide some additional information for future readers of this thread, specifically on rsync snapshots, rsnapshot, duplicity, attic, and borg.

The simplest thing to do is to rsync from one system to another. Very simple, but the problem is that it's just a "dumb mirror": there is no history, no versions in the past (snapshots in time), and every day you do your rsync you risk clobbering old data that you won't realize you need until tomorrow.

So the next thing to do is graduate to "rsync snapshots", sometimes known as "hard-link snapshots". The originator of this method was Mike Rubel [2]. What you are doing here is making a hard-link-only copy of yesterday's backup (which takes up no space, since it's just hard links) and then doing your "dumb rsync". Any files that changed will break the hard links, and your snapshot from yesterday will take up as much space on disk as the total size of all files that changed since yesterday. It's very simple, very elegant, and requires no software support on the remote end: as long as you can ssh to the server and run rsync/cp/rm, it will work.
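To make that concrete, the whole rotation fits in a handful of lines. This is only a sketch: the paths are illustrative, and it assumes a cp that supports -al (e.g. GNU cp):

Code:
# drop the oldest snapshot and shift the others back by one day
rm -rf /backup/daily.3
mv /backup/daily.2 /backup/daily.3
mv /backup/daily.1 /backup/daily.2
# hard-link copy of the most recent snapshot (takes almost no space)
cp -al /backup/daily.0 /backup/daily.1
# "dumb rsync" into the newest slot; changed files get new inodes,
# so daily.1 keeps yesterday's versions
rsync -a --delete user@client:/home/user/ /backup/daily.0/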

The next step is to stop writing the (very simple) rsync snapshot scripts and let rsnapshot do it for you. We've never recommended this, since the rsync snapshot script is so simple and rsnapshot requires that you put it on the server side... so it's not as lightweight or universal. As far as we know, though, it's terrific, bulletproof software.

(Here is a good spot to point out that Apple's "Time Machine" backup tool is nothing but rsync and hard-link snapshots; that's all it is. Super simple, super basic, so you and a lot of people you know may have been using rsync snapshots for years without even knowing it.)

There's one problem with these methods: the resulting remote backups are not encrypted. Your provider or host (rsync.net, for instance) can theoretically see your data. If this is a problem, you need something other than rsync.

The answer to this problem since 2006 or so has been duplicity [3]. duplicity is wonderful software, has a long history of stable, well-organized development, and we have even contributed funds toward its development in the past. The problem with duplicity is that, due to some design constraints that I am not going to go into, every month or two or three you're going to have to re-upload your entire dataset again. The whole thing. That might not matter to you (small datasets) or it might be a deal-killer (multiple terabytes over a WAN).

So now we come to attic and borg, borg being a more recent and more actively developed fork of attic. attic/borg give you all the network efficiency of rsync (changes-only updates over the network) and all of the remote-side encryption and variable retention of duplicity (and rdiff-backup), but without any of the gotchas of duplicity. Some folks refer to attic/borg as "the holy grail of network backups" [4].

We [5] are the only remote backup / cloud storage provider with support for attic and borg [6]. However, we are also the only provider running a ZFS-based platform [7] on the remote end. This means two things: if you don't require remotely encrypted backups, you can just do a "dumb rsync" to us, completely neglecting any kind of retention, and we will do ZFS snapshots of your data on the server side. It's like having Apple's Time Machine in the cloud. OR, IF YOU DO require remotely encrypted backups, you can just point attic or borg at us and solve the problem that way. Either way, you're getting the one thing nobody else will give you: a plain old Unix filesystem, in the cloud, that you can do whatever you want with.

[1] I am the owner and founder of rsync.net.

[2] http://www.mikerubel.org/computers/rsync_snapshots/

[3] http://duplicity.nongnu.org/

[4] https://www.stavros.io/posts/holy-grail-backups/

[5] http://www.rsync.net

[6] http://rsync.net/products/attic.html

[7] http://rsync.net/products/zfs.html

 
I back up my desktop as well as all other devices at home using rsync with the --inplace option to the file server running DragonFly HAMMER. The combination of rsync --inplace with HAMMER enables me to browse through the HAMMER history of files (ZFS has no equal) using slider.

https://www.dragonflybsd.org/docs/docs/howtos/howtoslide/

Heh, nice to see that people are using slider. For those that don't know, I wrote it. It was an early Ada project of mine.
 
Heh, nice to see that people are using slider. For those that don't know, I wrote it. It was an early Ada project of mine.
I know that you wrote it :) We had a serious look in my lab at DragonFly BSD for the main file server, and there were only a few things which prevented us at that time from going DF/HAMMER instead of FreeBSD/ZFS.
Please send me a PM if you care to discuss.
 