Maybe there was just a misunderstanding.
But of course I can give you some more details, if you like.
The problem is that it is not really a backup because it synchronizes only the last state of the data.
Not quite.
You can rename snapshots.
By deleting e.g. #3 (you cannot overwrite snapshots), renaming #2 to #3 and #1 to #2, and then making a new #1, you keep the states of the last three snapshots (I do ten within a loop).
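A minimal sketch of such a rotation loop, assuming a hypothetical dataset zroot/home and snapshot names snap1..snapN (adapt to your own pool layout); with RUN=echo it only prints the zfs commands instead of running them:

```shell
#!/bin/sh
# Numbered snapshot rotation: destroy the oldest, shift the rest up,
# take a fresh one as number 1. Dataset and naming are assumptions.

rotate_snapshots() {
    dataset="$1"; keep="$2"
    # destroy the oldest snapshot (snapshots cannot be overwritten)
    $RUN zfs destroy "${dataset}@snap${keep}"
    # shift the remaining snapshots up by one number
    i=$((keep - 1))
    while [ "$i" -ge 1 ]; do
        $RUN zfs rename "${dataset}@snap${i}" "${dataset}@snap$((i + 1))"
        i=$((i - 1))
    done
    # finally take a fresh snapshot as number 1
    $RUN zfs snapshot "${dataset}@snap1"
}

RUN=echo
rotate_snapshots zroot/home 3   # dry run: only prints the commands
```

Set RUN= (empty) and raise the second argument to 10 to get the ten-deep rotation described above.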
But beware:
If you roll back to e.g. #2 - even accidentally - the filesystem falls back completely to the state of #2, including the snapshots(!), if those are not made on an independent filesystem.
ZFS snapshots are a nice thing (for many users a main reason to use ZFS, even on single-partition pools).
But I recommend doing some playing/testing/experimenting with them before any real use you rely on.
As I said, to me snapshots are not really a backup.
They are on the same physical drive.
One can send snapshots to other machines, or copy/export them, but I don't have experience with that.
Since they take no time to make, need almost no storage space, and are pretty good for quickly "resetting" the file system to an exact former state - "nice to have while costing nothing" - my scripts additionally do them daily for / and ~.
Since I just want to have simple copies of my home(s) on another machine, and the amount of data allows me to do so (I don't have TBs to be saved daily), /home/ and also /root/ have a copy on that machine, which is connected via NFS and synced with rsync daily.
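The sync itself is just two rsync calls; a sketch, assuming the other machine is already mounted at a hypothetical /backup path:

```shell
#!/bin/sh
# Daily mirror of /home and /root to the NFS-mounted backup machine.
# -a preserves permissions/ownership/times, --delete mirrors deletions,
# so the copy is an exact image of the source.

sync_tree() {
    # trailing slash on the source: copy the directory's *contents*
    rsync -a --delete "${1%/}/" "${2%/}/"
}

# sync_tree /home /backup/home
# sync_tree /root /backup/root
```

The real calls are commented out here because /backup is only an example mount point.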
That machine (I don't really dare to use the word 'server' for it on this forum) nightly makes a new gzipped tarball of each, so I can eventually go back up to 10 days.
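That nightly job could look roughly like this (the paths and the date-stamped naming are my assumptions, not the exact script):

```shell
#!/bin/sh
# Nightly: make a date-stamped gzipped tarball of a directory and
# keep only the 10 newest archives, deleting older ones.

archive() {
    src="$1"; outdir="$2"; name="$3"
    # new date-stamped tarball of the directory
    tar -czf "${outdir}/${name}-$(date +%Y%m%d).tar.gz" \
        -C "$(dirname "$src")" "$(basename "$src")"
    # list archives newest-first, drop everything beyond the 10 newest
    ls -1t "${outdir}/${name}"-*.tar.gz 2>/dev/null | tail -n +11 | xargs rm -f
}

# archive /backup/home /backup/archives home
# archive /backup/root /backup/archives root
```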
I'm thinking of keeping one encrypted tarball additionally in some web space.
But at the moment I don't feel my data is worth the cost.
Additionally, my /root/ contains a directory that is also rsynced daily with the system's config directories, such as /etc/ and others.
The script also does a
pkg prime-list > installedpackages.txt
Those are automatically backed up to my "server", too, when /root/ is rsynced.
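Sketched out, that part of the script might look like this (the directory list and the /root/sysconfig path are just examples):

```shell
#!/bin/sh
# Mirror selected system config directories into a folder under /root,
# so they travel along with the daily /root rsync, and record the
# explicitly installed packages.

save_configs() {
    backupdir="$1"; shift
    mkdir -p "$backupdir"
    for dir in "$@"; do
        # no trailing slash on the source: copy the directory itself
        rsync -a --delete "$dir" "$backupdir/"
    done
}

# save_configs /root/sysconfig /etc /usr/local/etc
# pkg prime-list > /root/sysconfig/installedpackages.txt
```

pkg prime-list lists only the packages installed on purpose, not their dependencies, which keeps the file short.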
I once rebuilt my system after a completely new installation with that; I was astonished how easily this could be done half-automatically: install FreeBSD on a blank machine, let pkg install all packages from this file, rsync/copy the directories back - a couple of hours, and almost everything is back as if nothing ever happened. (Try this with MS Windows! Good luck!)
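The reinstall step is essentially one line: feed installedpackages.txt back to pkg, which pulls in all dependencies automatically. A sketch, dry-run by default:

```shell
#!/bin/sh
# Set RUN= (empty) and run as root to really install; with the default
# RUN=echo it only prints the pkg command it would run.
RUN="${RUN:-echo}"

reinstall_from_list() {
    xargs $RUN pkg install -y < "$1"
}

# reinstall_from_list /root/sysconfig/installedpackages.txt
```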
I split up my data, and I keep my data separate from the system.
The amount of data to be rsynced daily is about 16G, which takes about 5 minutes on my LAN.
This doesn't look like much, because larger amounts of data - such as my "library" containing my PDF collection (books, datasheets, etc.), music, pictures, downloaded software packages, old stuff like the "home" directories from my former machines... what one collects over twenty years - are outsourced to another zpool.
I don't want my /home to contain 4 TB of stuff I haven't looked at in years and stress the backup routine with it every day.
All programming code I write (including shell scripts and LaTeX) is of course under version control.
It's all backed up with my rsync, too, but there are also independent repositories I can pull from if everything else fails.
'hope that gave you some ideas.
What is worth more or less to you, and how much redundancy you feel secure with for each item, is of course for you to define.