Redundant remote storage advice


New Member


Hi All,

I have a small BSD server running at home with a small SSD and a 6 TB disk. It serves Samba shares and acts as a Plex media server.
I also have a remote server with 15 disks, sitting at work.
Home and work are linked via VPN, but over a fairly slow connection (20 Mbps, with about 30 ms latency).

My goal is to have a low-power server running at home, mirrored to a redundant, encrypted pool at the remote site.
In an ideal world, the data would remain encrypted as far as the remote server is concerned, so nobody at that end could read it.

I have built a 14-disk RAIDZ2 pool on the work server and created a zvol. I then set up HAST with the single 6 TB disk and the zvol (home as primary, work as secondary), and finally created a GELI-backed UFS filesystem on the home server.
I have (hopefully) configured HAST to run in async mode, but copying a file on the local server still takes a very long time, as though the data is being pushed synchronously over the VPN link to the far-end HAST member.
I would have expected the copy to complete much more quickly, with HAST replicating the blocks in the background.
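For reference, a minimal hast.conf for this kind of setup would look roughly like the following sketch (resource, host, device names and addresses are all placeholders; `replication` and its `fullsync`/`memsync`/`async` values are per hast.conf(5)). Note that not every FreeBSD release actually implements `async`; it is worth checking hastd(8) for your version, since a silent fallback to a synchronous mode would behave exactly as described above.

```conf
# /etc/hast.conf (sketch; all names and addresses are examples)
resource shared {
        # Acknowledge writes locally, replicate in the background.
        replication async
        on home {
                local /dev/ada1
                remote 10.0.0.2
        }
        on work {
                local /dev/zvol/tank/hastvol
                remote 10.0.0.1
        }
}
```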

Should HAST work as I expected?

Is there a better way to achieve my goal?



Staff member


I wouldn't run HAST across such vastly different hardware, or across the Internet at large. There are too many variables to manage, and the link will inevitably drop.

Tarsnap would probably be a good fit for this: run a Tarsnap server on the work system and copy the encrypted backups to it.
Edit: Oh, it appears you can't run your own Tarsnap server; only the client is available in the ports tree.

Or, just rsync from your home system to an encrypted ZFS dataset. Although, any time the dataset is mounted, the data is accessible unencrypted, so anyone at work with access to the server would have access to your data.
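A sketch of that approach, assuming OpenZFS native encryption on the work pool (FreeBSD 13+; on older releases the nearest equivalent is a pool built on GELI-backed providers). Pool, dataset, path and host names here are placeholders:

```sh
# On the work server: create an encrypted dataset on the existing pool.
zfs create -o encryption=on -o keyformat=passphrase tank/homebackup

# From the home server: push only changed files over the VPN.
rsync -az --delete /data/ work:/tank/homebackup/

# On the work server, between backup runs: unmount and unload the key
# so the data at rest is unreadable.
zfs unmount tank/homebackup
zfs unload-key tank/homebackup
```

The key only needs to be loaded while a backup is running, which narrows the window in which the data is readable at the remote end.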

It would probably be better to create tarballs, encrypt them, and then copy those to the remote server.
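A sketch of that pipeline using tar(1) and openssl(1); the paths and passphrase are placeholders, and the `-pbkdf2` option needs OpenSSL 1.1.1 or later:

```shell
# Stage some example data.
mkdir -p /tmp/enc-demo/src
echo "secret" > /tmp/enc-demo/src/file.txt

# Create the tarball and encrypt it in one pipeline (AES-256-CBC,
# key derived from the passphrase via PBKDF2).
tar -czf - -C /tmp/enc-demo src \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
  > /tmp/enc-demo/backup.tar.gz.enc

# On restore: decrypt and extract.
mkdir -p /tmp/enc-demo/restore
openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:example-passphrase \
  < /tmp/enc-demo/backup.tar.gz.enc \
  | tar -xzf - -C /tmp/enc-demo/restore
```

The encrypted blob is all that ever crosses the VPN or touches the remote disks, so the work end never holds the data in the clear.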


New Member


My dataset is currently just over 2 TB, so pushing regular tarballs of the complete dataset would not be practical.
I could push deltas, but at some point I would want to merge them all back together.
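The delta-and-merge workflow can be sketched with GNU tar's incremental dumps (on FreeBSD that's gtar from the archivers/gtar port; the base system's bsdtar lacks `--listed-incremental`). All paths here are examples:

```shell
# Stage some example data.
rm -rf /tmp/inc-demo
mkdir -p /tmp/inc-demo/data /tmp/inc-demo/restore
echo "v1" > /tmp/inc-demo/data/a.txt

# Full (level-0) backup; the snapshot file records what was archived.
tar --listed-incremental=/tmp/inc-demo/snap -czf /tmp/inc-demo/full.tar.gz \
    -C /tmp/inc-demo data

# Later, new data appears: take a delta against the snapshot. Only
# changed or new files end up in the delta archive.
echo "v2" > /tmp/inc-demo/data/b.txt
tar --listed-incremental=/tmp/inc-demo/snap -czf /tmp/inc-demo/delta.tar.gz \
    -C /tmp/inc-demo data

# "Merging" is just restoring the full archive, then each delta in order.
tar --listed-incremental=/dev/null -xzf /tmp/inc-demo/full.tar.gz \
    -C /tmp/inc-demo/restore
tar --listed-incremental=/dev/null -xzf /tmp/inc-demo/delta.tar.gz \
    -C /tmp/inc-demo/restore
```

Each delta (plus the small snapshot file) is all that needs to cross the VPN, and the encrypt-before-copy step from the earlier suggestion applies to these archives just as well.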

I had thought about running an ownCloud/Nextcloud server at work and the client on the home machine. ownCloud/Nextcloud can encrypt the data at rest, although the filenames are still in clear text. This isn't ideal, but it's better than nothing.

I had also thought about running an iSCSI target at work and using ZFS to mirror between the local disk and the iSCSI device. I'm not sure how well ZFS or iSCSI would cope with the latency on the VPN.
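A sketch of that layout using FreeBSD's native iSCSI stack, ctld(8) on the target and iscsictl(8) on the initiator; all names, devices and addresses are placeholders. The open question stands: every synchronous write would wait on the 30 ms round trip, and ZFS is likely to fault the iSCSI disk out of the mirror whenever the VPN stalls, forcing repeated resilvers.

```conf
# /etc/ctl.conf on the work server: export a zvol as a LUN
# (sketch; no authentication shown, which a real setup should add).
portal-group pg0 {
        discovery-auth-group no-authentication
        listen 10.0.0.2
}
target iqn.2016-01.com.example:home0 {
        auth-group no-authentication
        portal-group pg0
        lun 0 {
                path /dev/zvol/tank/homevol
        }
}
```

```sh
# On the home server: attach the LUN, then mirror it with the local disk.
iscsictl -A -p 10.0.0.2 -t iqn.2016-01.com.example:home0
zpool create data mirror /dev/ada1 /dev/da0   # da0 = the iSCSI disk
```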