ZFS Snapshot Question

I recently lost 2 disks in my 5 disk ZFS RAID-Z array, resulting in an unrecoverable array. I do have a recent backup in the form of ZFS snapshots sent to an external USB drive on a regular basis, so my data loss was minimal.

I have replaced the failed disks and am ready to set my 5 disk array back up. My plan to restore from the backup drive is to take a fresh snapshot of it and send that to the newly rebuilt array, which will copy all of the data back onto the RAID-Z pool.

At that point, I'm obviously going to want to re-implement my backup strategy... and this is where I get confused. Will I be able to send incremental snapshots of the RAID-Z pool directly to the backup drive, based off of the snapshot I used to restore the data? Or will I need to take a new snapshot of the RAID-Z pool, send that to the backup drive, and then take incrementals from there?

I have close to 4TB of data here, and copying it all over is quite time consuming on this system, so I'm hoping to minimize the number of full snapshots I need to send.
 
Why bother taking a new snapshot of the backup data? Unless you've been using the data on the backup directly, the last snapshot you sent from the source should be the most up-to-date data.

Regarding the question: say you have a snapshot called '2015-08-18' on the backup device. You'd send that to your new pool:
Code:
zfs send backup/dataset@2015-08-18 | zfs recv newpool/dataset
You now have the data on both pools, and both will have the exact same 2015-08-18 snapshot. It's then perfectly acceptable to use that snapshot as the source when sending data back the other way.
Code:
zfs snapshot newpool/dataset@2015-08-19
zfs send -i 2015-08-18 newpool/dataset@2015-08-19 | zfs recv backup/dataset
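From there the chain just continues: each new incremental uses the previous snapshot as its base. A minimal sketch, using the hypothetical dataset names from this post and a made-up next date (the zfs commands are guarded so the sketch is a no-op on a machine without ZFS):

```shell
# Ongoing incremental backups after the initial restore.
# SRC/DST and the dates are placeholders; adjust to your pools.
SRC=newpool/dataset
DST=backup/dataset
PREV=2015-08-19        # last snapshot both pools have in common
NEXT=2015-08-20        # today's new snapshot (hypothetical)

if command -v zfs >/dev/null 2>&1; then
    # Snapshot the source, then send only the delta since $PREV.
    zfs snapshot "${SRC}@${NEXT}"
    zfs send -i "$PREV" "${SRC}@${NEXT}" | zfs recv "$DST"
fi
```

After each successful send, the newest snapshot becomes the base for the next incremental, so only one full send is ever needed.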
 
OK, I'm having an issue with this that I don't understand. When I try to send a snapshot from my rebuilt array back to the backup drive, I end up erasing a ton of data.

I have the following filesystems (where ZFSData is my newly rebuilt pool):

Code:
zfs list
NAME              USED  AVAIL  REFER  MOUNTPOINT
ZFSData          2.69T  2.58T  85.2G  /zfsdata
ZFSData/media    2.60T  2.58T  2.60T  /zfsdata/media
usbbackup        2.69T  1.70T  85.3G  /usbbackup
usbbackup/media  2.61T  1.70T  2.61T  /usbbackup/media


I first attempted to restore by just restoring the root filesystem, figuring /media would go with it.
Code:
zfs snapshot usbbackup@restore 
zfs send usbbackup@restore | zfs recv ZFSData

When I did this it did not restore /media. So I took a snapshot of that filesystem as well and restored it separately.

Code:
zfs snapshot usbbackup/media@restore 
zfs send usbbackup/media@restore | zfs recv ZFSData/media

Which worked fine. But now I'm at a loss as to how to set my backups up again... It seems (to me) like this should work:
Code:
zfs snapshot ZFSData@backup
zfs snapshot ZFSData/media@backup 
zfs send -Rvi restore ZFSData@backup | zfs recv -Fv usbbackup

But if I do this, it *looks* like it syncs both filesystems, but in reality it deletes all the content in /media on the backup drive and I have to rollback. What am I missing?
 
Don't see anything particularly wrong with those commands. What does mount output look like after you've run the backup (when usbbackup appears empty)?
 
Maybe you should use zfs snapshot -r. From the zfs(8) manpage:
zfs snapshot|snap [-r] [-o property=value]...
filesystem@snapname|volume@snapname
filesystem@snapname|volume@snapname...

Creates snapshots with the given names. All previous modifications by
successful system calls to the file system are part of the snapshots.
Snapshots are taken atomically, so that all snapshots correspond to
the same moment in time. See the "Snapshots" section for details.

-r Recursively create snapshots of all descendent datasets
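In other words, the two separate snapshot commands above could be collapsed into one recursive snapshot, which guarantees the pool root and every descendant (ZFSData/media included) get a snapshot from the same moment, so a later zfs send -R has a matching snapshot on each dataset. A sketch with a hypothetical snapshot name, guarded so it is inert without ZFS:

```shell
# Recursive snapshot of the rebuilt pool, per the manpage quote.
# SNAP is a made-up name; the pool names follow the thread.
POOL=ZFSData
SNAP=backup2

if command -v zfs >/dev/null 2>&1; then
    # One atomic snapshot across ZFSData and all its descendants.
    zfs snapshot -r "${POOL}@${SNAP}"
    # Recursive incremental send from the common 'restore' snapshot.
    zfs send -Rvi restore "${POOL}@${SNAP}" | zfs recv -Fv usbbackup
fi
```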
 