Solved What would be the fastest and most secure way today to copy 4TB of data over a LAN?

Hi all,

My case is rather simple: I just want to prepare a cold backup disk to keep in another location.

I used to run something like this, and at the moment I'm looking for a cipher I could use as an alternative to `arcfour`.
sh:
rsync -a --stats --human-readable -e "ssh -T -c ARCFOUR_ALTERNATIVE -o Compression=no -x" bob@192.168.112.8:/myhostdata /myexternaldisk
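As a starting point, `ssh -Q cipher` lists the ciphers the installed OpenSSH offers, so whatever replacement I pick has to come from that list (just a way to check, not a recommendation of any particular cipher):
sh:
ssh -Q cipher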

Then I thought I could simply ask what other people would do :)

Thanks in advance
 
"Web and network services" is probably a better location. Although "Storage" could also apply (backup).
 
Personally I would use zfs send/receive piped through ssh, assuming the source is on zfs. Something along the lines of:
# zfs snapshot source/dataset@backup
# zfs send source/dataset@backup|ssh user@targethost zfs receive -Fu target/dataset


The actual commands would need to be tweaked to your specific layout of course, but this would likely be just about the fastest and most convenient way to accomplish what you're asking about.
 
I suspect this is through a "trusted" LAN? Then why not simply move the data via nc(1), completely omitting any potential bottleneck of ssh/encryption?

Although, on halfway modern systems the penalty of an ssh tunnel is pretty much negligible, even on faster-than-1GBit links. I often move around dozens and sometimes hundreds of GB of data over 10G and 40G links and mostly just use scp for file-based transfers. But usually those are zfs send|recv transfers, where nc definitely is the weapon of choice. If one of the systems involved uses ancient storage (i.e. spinning rust), I put an mbuffer(1) in the chain to improve things a bit.
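A rough sketch of such a transfer (host name, port and dataset names are placeholders, adjust to your layout); start the listener on the receiving box first, then push from the sender:
Code:
# on the receiver
nc -l 9090 | mbuffer -s 128k -m 1G | zfs receive -Fu target/dataset
# on the sender
zfs send source/dataset@backup | mbuffer -s 128k -m 1G | nc receiverhost 9090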

OTOH, if this is a one-off job to prime that backup disk/pool, I may just put it into the system where the data resides and make a local transfer - but only if the time saved justifies getting up from my desk and walking to the other building where our servers are located...
 
My bad, some further background is probably in order.

The host is still running `FreeBSD 13.2-RELEASE` from a mirrored ZFS root on SSDs, plus a zpool of 8 x 4TB rust disks arranged as 2 raidz2 vdevs of 4 disks each.
The host doesn't have USB3.
The external disk is an 8TB rust disk in a USB3 case.
I would like to sync the disk every 6 to 12 months, either locally using my laptop or from the remote location, if I ever manage to get a decent ISP.

wolffnx Yup, gigabit.
puppydog I thought of that, but the external disk is UFS2-formatted and already has some other stuff on it. Using zfs send/receive seems like a logical step, but I will have to find some temporary space for the existing data and reformat the disk to ZFS. I will probably do this.
T-Daemon Added some comments to address this.
sko Yes, it's a home setup, so I would consider that a trusted LAN. In my mind using rsync was a 'good compromise' because, after the initial transfer (with a weak cipher and no compression), I could still use it in the future to update the backup locally or from a remote location.
SirDice Indeed, thanks for moving the post to the appropriate location.

In this case I think I will invest in an extra disk, so I can move the existing data off the external 8TB, reformat it to ZFS, and connect it locally to the host to perform the first backup (sko). That also gives me the option to perform any future backups using zfs send/receive, with or without ssh (puppydog).
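For my own notes, the plan would look roughly like this (pool, dataset and device names below are just placeholders; the actual USB device name and dataset layout will differ):
Code:
# one-time: create a single-disk pool on the USB drive (device name is an example)
zpool create usbbackup /dev/da0
# initial full transfer, run locally on the host
zfs snapshot -r tank/myhostdata@backup1
zfs send -R tank/myhostdata@backup1 | zfs receive -Fu usbbackup/myhostdata
# later syncs: take a new snapshot and send only the difference
zfs snapshot -r tank/myhostdata@backup2
zfs send -R -i @backup1 tank/myhostdata@backup2 | zfs receive -Fu usbbackup/myhostdata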

Thanks all for the help!
 
You don't say if it's possible to quiesce /myhostdata, and if so for how long.

When faced with data transfers of this size between production systems, with on-line access from multiple countries, I would use rsync over a private gigabit network with the fastest cipher ("-c none" was easy to hack back into the rsync source). I did this on a live system (it took many hours). I would then arrange for down-time, repeat an incremental on-line rsync daily prior to the outage, and run the final incremental off-line rsync (adding "--delete" to the option list) during the outage (limited to an hour or so).
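As a sketch of that final pass, reusing the paths from the first post (the cipher is left at the default here; use whatever turns out fastest on your boxes):
Code:
rsync -a --delete --stats --human-readable -e "ssh -T -o Compression=no -x" bob@192.168.112.8:/myhostdata /myexternaldisk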
 
I thought of that, but the external disk is UFS2-formatted and already has some other stuff on it. Using zfs send/receive seems like a logical step,
You could dispense with zfs receive and just store the output from zfs send as a single file.
Code:
zfs send source/dataset@backup | ssh user@targethost 'cat - > my-zfs-dump'
You'd lose the ability to easily restore individual files, but you could restore the entire dataset from the file in the event of a total disaster.
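If that disaster ever happens, the restore would be roughly the reverse; the target dataset name here is just an example:
Code:
ssh user@targethost 'cat my-zfs-dump' | zfs receive source/dataset-restored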
 
rsync -aPz /my/folder33 user123@box99:/some/folder44 will place folder33 inside folder44 on box99.
rsync must be installed on both boxes.
One huge plus: if anything stops the transfer, rsync will pick right back up where it left off; just use a trailing / when you re-run it:
rsync -aPz /my/folder33/ user123@box99:/some/folder44/folder33/
 