Does anyone have a sanoid/syncoid tutorial? How to configure it, for dummies?
Ahem... no
cp /backup/.zfs/snapshot/S-XX/dir/file/pippo.txt /tmp
zpaq x /backup/thebackup.zpaq /etc/pippo.txt -to /tmp/restored.txt -until 27
Does anyone have a sanoid/syncoid tutorial? How to configure it, for dummies?
Yep:
if ping -q -c 1 -W 1 backup.francocorbelli.com >/dev/null; then
/bin/date +"%R ----------REMOTE REPLICA: server answers PING => replicating"
/usr/local/bin/syncoid -r --sshkey=/root/script/root_backup --identifier=bakrem tank/d root@backup.francocorbelli.com:zroot/copia_rambo
/bin/date +"%R ----------LOCAL REPLICA: replication to backup finished"
else
/bin/date +"%R backup replica server did not answer ping!"
fi
01 07 * * * /root/script/bak_replicaremota.sh >/dev/null 2>&1
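For the "for dummies" part: syncoid only replicates; creating and pruning the snapshots is sanoid's job, driven by /usr/local/etc/sanoid/sanoid.conf. A minimal sketch (the dataset name and the retention counts are made-up examples, adjust to taste):

```
[tank/d]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

With that in place, a crontab entry such as `* * * * * /usr/local/bin/sanoid --cron` lets sanoid take and prune snapshots on schedule.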
Syncoid assumes a bourne style shell on remote hosts. Using (t)csh (the default for root under FreeBSD)
will cause syncoid to fail cryptically due to 2>&1 output redirects.
To use syncoid successfully with FreeBSD targets, you must use the chsh command to change the root shell:
root@bsd:~# chsh -s /bin/sh
FreeBSD users will also need to change the Perl shebangs at the top of the executables from #!/usr/bin/perl
to #!/usr/local/bin/perl in most cases.
Sorry folks, but if I set this with #!/usr/bin/env perl as suggested, then nothing works properly
from a typical cron environment on EITHER operating system, Linux or BSD. I'm mostly using Linux
systems, so I get to set the shebang for my use and give you folks a FreeBSD README rather than
the other way around. =)
If you don't want to have to change the shebangs, your other option is to drop a symlink on your system:
root@bsd:~# ln -s /usr/local/bin/perl /usr/bin/perl
After putting this symlink in place, ANY perl script shebanged for Linux will work on your system too.
Ok, seriously, what are you talking about? You can access any individual file in a snapshot easily, and covacat pointed out how that works...
Ahem... no
If you do a send-receive on the same machine, maybe between two different drives, you're doing nothing more than an rsync.
Of course you will get a replica, and therefore all the various snapshots, and it will also be very quick to obtain.
But it won't be much better than a mirror.
Not so easy
In my above post, by "restore" I meant "disaster recovery": my local disk caught fire and I need to restore it completely.
ZFS snapshots are without a doubt the best backup format that I've ever used. covacat already showed how to restore a file. It couldn't be easier.
zpaqfranz l intermedie.zpaq -all
zpaqfranz v54.16i-experimental (HW BLAKE3), SFX64 v52.15, compiled Jun 15 2022
intermedie.zpaq:
Block 1 K 58.823 (block/s)
484 versions, 484 files, 790 fragments, 35.934.705 bytes (34.27 MB)
- 2022-09-17 11:35:10 0 0001| +1 -0 -> 872.092
- 2022-09-17 11:33:28 1.923.392 A 0001|zpaqfranz.cpp
- 2022-09-17 17:07:17 0 0002| +1 -0 -> 151.268
- 2022-09-17 17:06:47 1.923.589 A 0002|zpaqfranz.cpp
- 2022-09-17 17:33:20 0 0003| +1 -0 -> 13.259
- 2022-09-17 17:33:12 1.923.595 A 0003|zpaqfranz.cpp
(...)
- 2022-09-30 15:07:15 1.971.339 A 0194|zpaqfranz.cpp
- 2022-09-30 15:14:44 0 0195| +1 -0 -> 742
- 2022-09-30 14:13:45 1.968.810 A 0195|zpaqfranz.cpp
- 2022-09-30 15:15:28 0 0196| +1 -0 -> 13.752
- 2022-09-30 15:15:27 1.968.806 A 0196|zpaqfranz.cpp
- 2022-09-30 15:15:49 0 0197| +1 -0 -> 101.480
- 2022-09-30 15:15:47 1.968.815 A 0197|zpaqfranz.cpp
- 2022-10-01 13:12:03 0 0198| +1 -0 -> 136.553
- 2022-10-01 13:11:51 1.969.104 A 0198|zpaqfranz.cpp
- 2022-10-01 13:12:15 0 0199| +1 -0 -> 112.745
- 2022-10-01 13:12:13 1.969.125 A 0199|zpaqfranz.cpp
- 2022-10-01 13:18:36 0 0200| +1 -0 -> 112.776
- 2022-10-01 13:18:34 1.969.179 A 0200|zpaqfranz.cpp
- 2022-10-01 13:44:15 0 0201| +1 -0 -> 112.738
- 2022-10-01 13:44:13 1.969.227 A 0201|zpaqfranz.cpp
- 2022-10-01 13:48:22 0 0202| +1 -0 -> 112.834
- 2022-10-01 13:48:20 1.969.311 A 0202|zpaqfranz.cpp
- 2022-10-01 13:56:29 0 0203| +1 -0 -> 112.865
- 2022-10-01 13:56:28 1.969.404 A 0203|zpaqfranz.cpp
- 2022-10-01 13:57:12 0 0204| +1 -0 -> 112.836
- 2022-10-01 13:57:10 1.969.367 A 0204|zpaqfranz.cpp
- 2022-10-01 13:58:59 0 0205| +1 -0 -> 112.891
- 2022-10-01 13:58:55 1.969.464 A 0205|zpaqfranz.cpp
- 2022-10-01 14:04:09 0 0206| +1 -0 -> 112.844
- 2022-10-01 14:04:07 1.969.506 A 0206|zpaqfranz.cpp
- 2022-10-01 14:04:33 0 0207| +1 -0 -> 123.147
- 2022-10-01 14:04:32 1.969.507 A 0207|zpaqfranz.cpp
- 2022-10-01 14:06:12 0 0208| +1 -0 -> 13.971
- 2022-10-01 14:06:09 1.969.614 A 0208|zpaqfranz.cpp
- 2022-10-01 14:08:54 0 0209| +1 -0 -> 135.886
- 2022-10-01 14:08:52 1.969.716 A 0209|zpaqfranz.cpp
- 2022-10-01 14:10:21 0 0210| +1 -0 -> 14.004
- 2022-10-01 14:10:19 1.969.729 A 0210|zpaqfranz.cpp
- 2022-10-01 14:10:35 0 0211| +1 -0 -> 14.016
- 2022-10-01 14:10:33 1.969.732 A 0211|zpaqfranz.cpp
- 2022-10-01 14:10:53 0 0212| +1 -0 -> 14.016
- 2022-10-01 14:10:51 1.969.732 A 0212|zpaqfranz.cpp
- 2022-10-01 14:11:53 0 0213| +1 -0 -> 14.059
- 2022-10-01 14:11:51 1.969.811 A 0213|zpaqfranz.cpp
- 2022-10-01 14:12:16 0 0214| +1 -0 -> 14.045
- 2022-10-01 14:12:14 1.969.844 A 0214|zpaqfranz.cpp
- 2022-10-01 14:14:06 0 0215| +1 -0 -> 14.073
- 2022-10-01 14:14:02 1.969.912 A 0215|zpaqfranz.cpp
- 2022-10-01 14:21:34 0 0216| +1 -0 -> 15.611
- 2022-10-01 14:21:30 1.974.319 A 0216|zpaqfranz.cpp
- 2022-10-01 14:22:45 0 0217| +1 -0 -> 47.286
- 2022-10-01 14:22:43 1.974.326 A 0217|zpaqfranz.cpp
- 2022-10-01 14:23:12 0 0218| +1 -0 -> 35.974
(...)
- 2022-11-28 19:51:59 3.810.656 A 0482|zpaqfranz.cpp
- 2022-11-29 14:57:53 0 0483| +1 -0 -> 191.301
- 2022-11-29 14:50:40 3.812.814 A 0483|zpaqfranz.cpp
- 2022-11-29 15:03:23 0 0484| +1 -0 -> 40.650
- 2022-11-29 15:02:59 3.813.006 A 0484|zpaqfranz.cpp
971.514.969 (926.51 MB) of 971.514.969 (926.51 MB) in 968 files shown
Do you want this specific version (just an example)?
- 2022-10-01 13:44:13 1.969.227 A 0201|zpaqfranz.cpp
C:\zpaqfranz\spaz>zpaqfranz x intermedie.zpaq -to z:\estratto -until 201
zpaqfranz v54.16i-experimental (HW BLAKE3), SFX64 v52.15, compiled Jun 15 2022
intermedie.zpaq -until 201:
201 versions, 201 files, 324 fragments, 14.627.901 bytes (13.95 MB)
Extracting 1.969.227 bytes (1.88 MB) in 1 files (0 folders) with 32 threads
0.047 seconds (00:00:00) (all OK)
C:\zpaqfranz\spaz>dir z:\estratto
Volume in drive Z is RamDisk
Volume Serial Number: F849-FB20
Directory of z:\estratto
30/11/2022 17:03 <DIR> .
30/11/2022 17:03 <DIR> ..
01/10/2022 12:44 1.969.227 zpaqfranz.cpp
1 File(s) 1.969.227 bytes
2 Dir(s) 40.324.943.872 bytes free
Not so easy
Also because, sooner or later, the snapshots will be purged
In my experience with magnetic disks, even a thousand snapshots already slow things down considerably
patmaddox$ ssh nas
Welcome to FreeBSD!
patmaddox@nas:~ $ cd istudo-pending/
patmaddox@nas:~/istudo-pending $ ls next\ EIS\ lesson/next\ EIS\ lesson.dorico
next EIS lesson/next EIS lesson.dorico
patmaddox@nas:~/istudo-pending $ ls .zfs/snapshot/*/next\ EIS\ lesson/next\ EIS\ lesson.dorico
.zfs/snapshot/autosnap_2022-11-27_15:35:03_hourly/next EIS lesson/next EIS lesson.dorico
.zfs/snapshot/autosnap_2022-11-27_16:35:03_hourly/next EIS lesson/next EIS lesson.dorico
<SNIP 68 LINES>
.zfs/snapshot/autosnap_2022-11-30_09:35:02_hourly/next EIS lesson/next EIS lesson.dorico
.zfs/snapshot/autosnap_2022-11-30_10:35:03_hourly/next EIS lesson/next EIS lesson.dorico
patmaddox@nas:~/istudo-pending $ for FILE in .zfs/snapshot/*/next\ EIS\ lesson/next\ EIS\ lesson.dorico; do SUM=$(md5sum -q "$FILE"); echo "$FILE - $SUM"; done | uniq -f 1
.zfs/snapshot/autosnap_2022-11-27_15:35:03_hourly/next EIS lesson/next EIS lesson.dorico - 30f83d10ad23486a4f652edace7e0dbb
.zfs/snapshot/autosnap_2022-11-27_16:35:03_hourly/next EIS lesson/next EIS lesson.dorico - 7e2f04af5b86daf3f49f8bd5128b9d56
<SNIP 9 LINES>
.zfs/snapshot/autosnap_2022-11-30_01:35:03_hourly/next EIS lesson/next EIS lesson.dorico - 8e4c096c657522e02771b847fd24a0b7
.zfs/snapshot/autosnap_2022-11-30_03:35:03_hourly/next EIS lesson/next EIS lesson.dorico - 6e1ac76be53ad9c99928257d60bae0da
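A caveat on that loop: `uniq -f 1` skips only the first blank-delimited field when comparing, and paths with spaces (like these) span several fields, so the deduplication can silently misfire. A sketch of a more robust variant, assuming the same goal of listing only the snapshots where the file actually changed (the function name is mine):

```shell
#!/bin/sh
# changed_only FILE... - print "file - checksum" for each argument,
# skipping files whose checksum equals the immediately preceding one.
changed_only() {
    prev=""
    for FILE in "$@"; do
        SUM=$(md5sum "$FILE" | awk '{print $1}')
        if [ "$SUM" != "$prev" ]; then
            printf '%s - %s\n' "$FILE" "$SUM"
        fi
        prev="$SUM"
    done
}
```

Usage: `changed_only .zfs/snapshot/*/next\ EIS\ lesson/next\ EIS\ lesson.dorico` (on FreeBSD, `md5 -q "$FILE"` can replace the md5sum|awk pair).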
export source="ZT/usr/home"
export mp="/mnt/snap_usr_home_hourly"
export mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export destsmall="ZHD/backup_usr_home"
export dest=${destsmall}@${mydate}
export current=${source}@${mydate}
/sbin/zfs destroy -r -f -v ${destsmall}
/sbin/zfs create -u -v ${destsmall}
/sbin/zfs list -t snap ${source} | /usr/bin/grep ${source}@ | /usr/bin/awk '{print $1}' | /usr/bin/xargs -I {} /sbin/zfs destroy -v {}
/sbin/zfs snapshot ${current}
echo "SRC:" ${current}
echo "DST:" ${dest}
( /sbin/zfs send ${current} 2>>/var/log/messages | /sbin/zfs receive -o readonly=on -o snapdir=hidden -o checksum=skein -o compression=lz4 -o atime=off -o relatime=off -o canmount=off -o mountpoint=${mp} -F -v -u ${dest} 2>>/var/log/messages ) || /usr/bin/logger "zfs-send-receive-once failed" ${current} ${dest}
export source="ZT/usr/home"
export mydate=`/bin/date "+%Y_%m_%d__%H_%M_%S"`
export current=${source}@${mydate}
export dest="ZHD/backup_usr_home"@${mydate}
export previous=` /sbin/zfs list -t snap -r ${source} | /usr/bin/grep ${source}@ | /usr/bin/awk 'END{print}' | /usr/bin/awk '{print $1}'`
/sbin/zfs snapshot ${current}
echo "SRC:" ${previous} ${current}
echo "DST:" ${dest}
( /sbin/zfs send -i ${previous} ${current} 2>>/var/log/messages | /sbin/zfs receive -o readonly=on -v -u ${dest} 2>>/var/log/messages ) || ( /usr/bin/logger "zfs-send-receive failed" ${previous} ${current} ${dest} ; /root/Root/backup/once_usr_home )
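As an aside, the `previous=` line above chains two awk calls, but a single `awk 'END {print $1}'` does both jobs (take the last line, keep the first column). A runnable sketch against canned `zfs list -t snap`-style output (the snapshot names and sizes are invented):

```shell
#!/bin/sh
# Pick the newest snapshot: last line of the listing, first column only.
printf '%s\n' \
  'ZT/usr/home@2022_11_28__01_00_00   96K   -   10.0G   -' \
  'ZT/usr/home@2022_11_29__01_00_00   96K   -   10.1G   -' \
| awk 'END {print $1}'
# -> ZT/usr/home@2022_11_29__01_00_00
```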
Well, I admire your unwillingness to learn something new
I don't. I am completely happy with ZFS snapshots, for the reasons that I demonstrated. I admire your persistence though!
zpaq a /zhd/backup/myveryownbackup.zpaq /usr/home/whatIwanttoget
Any suggestion that running zfs send/recv is inferior to some other option ...
...
The ability to update the backup of filesystems with tens of millions (yes) of files in well under a minute** exists with ZFS, explicitly because it is the filesystem, so it knows explicitly what has changed from time point A to point B without having to crawl through the tree and look for changes.
Couldn’t agree more. The things I care most about (photos, for example) are backed up on two separate zpools (one typically detached and off-site) and in the cloud. I try not to be bleeding-edge (I don’t run the ports-tree OpenZFS) and run scrubs religiously.
That is a very good argument. As a matter of fact, the head of my department (a large research group at one of those big computer companies) made exactly that argument to executives, and they gave him a blank check worth many M$ to create a new file system, which was intended to be integrated with the backup system. We built it, it worked well, we shipped it to customers, they liked it, and then the company cancelled the product (for good and logical reasons, which had nothing to do with backup integration).
Except that one thing is ignored in that argument: you are relying on the correctness (bug-freedom) of ZFS here. If there is a ZFS bug, the replication process will destroy BOTH copies. That gets back to the fundamental question of backup, redundancy, and replication: what are you trying to accomplish? Are you trying to defend against failure of a single disk (typically, one uses RAID for that)? Against failure of the whole site (fire destroys the computer; a good defense is an off-site backup)? Or, in this case, against a software bug?
On my home system, the important data is in ZFS on FreeBSD, using two mirrored disks. I know that 3 disks would be better, but I have only limited space and physical connections. The local backup copy is also on ZFS, using a 3rd disk drive which is about 2m (6 feet) away from the server, and reasonably well protected against fire. The remote backup copy is on a Mac AppleFS disk with native encryption, and many miles away. I used to have backups also on a cloud-based service (one of the big cloud providers), but my homebrew software lost that ability, and I didn't maintain it well enough. But you can see that I deliberately use more than one file system (and OS) for the backup system. You can never be too paranoid.
You can never be too paranoid.
Would stacks of pennies, face up to represent 1, face down to represent 0, each stack representing a file, be too paranoid?
Replication is not backup
Any suggestion that running zfs send/recv is inferior to some other option for replication* of a filesystem is really going to have to show some receipts.
(...)
zpaq(franz) has much more deduplication, compression, verification and checksumming than zfs
Deduplication, compression, and verification (checksums embedded with and checked against the data) all exist in ZFS, if you want to turn them on (well, verification is on by default, and turning it off is strongly recommended against, but the knob is there).
And in a LOT of other situations
* Except, obviously, if you don't run ZFS on both sides. You can use it (by saving streams) in that fashion, but I will agree it is far from ideal (and not how it is intended to be used.)
You can send just as fast, or even faster, with zpaq(franz) plus rsync (!)
** Closer to a second if there have been no changes; obviously much longer if there has been a significant amount of actual changes. On my system, enumerating and sending 20GB worth of changes accumulated over a week on a filesystem with 20M files took less than 15s. (This time will obviously depend on the types of I/O going into creating those 20GB of changes.) Good luck doing anything from above the filesystem layer that can guarantee that any changes (even changes obfuscated by retaining file sizes and/or modification dates) of those 20M files were carried over to the destination in 15s.
If you have a professional level of paranoia, you will use different software compiled on different machines, with different filesystems, different CPUs, and EVEN different endianness
But yes, a software bug could bring it all down; so far (knocks on wood) I’ve had much better luck with ZFS than any other FS. Still, your reminder to inject some diversity of technologies for critical data is worth remembering.
If you have a script more complex than a single line, yes, you have an incredibly fragile script, by my standards at least
I think you're confusing me with someone else at this point. I doubt you'd agree I have "incredibly fragile scripts" - and we may disagree on the effectiveness of the results.