ZFS Procedure to restore zfs root from remote backup

I'm sure this is documented someplace, but my Google-fu is failing me today. Assume a server where each individual ZFS filesystem has been backed up by a remote backup server using zfs send over ssh:
backup # ssh <server> zfs snapshot ${pool}@${today}.${level}
backup # ssh <server> zfs send ${pool}@${today}.${level} > ${localname}@${today}.${level}
and now the zpool is corrupt. What is the procedure for restoring this (presumably after booting from the live DVD)? A link to a web page would be fine (oddly, this procedure is not documented in the Handbook).

Basically, I'm looking to replicate the dump/restore catastrophic recovery procedure of:
fixit # cd <fs to be restored> ; ssh <backup server> cat <level.0.dumpfile> | restore -if -
fixit # cd <fs to be restored> ; ssh <backup server> cat <level.1.dumpfile> | restore -if -
ZFS is very different. I'm guessing something along the lines of this will work:
# Will this destroy any existing corrupt zpool or do I need to manually do that?
zpool create <zpool> mirror <device1> <device2>
# Do I need to recreate the individual filesystems here or will restoring ROOT do it?
zpool import -N -R /mnt <zpool>
ssh <backup server> cat <Full_snapshot_file> | zfs receive <zpool>/<fs>
ssh <backup server> cat <incremental_snapshot_file> | zfs receive <zpool>/<fs>
Repeat the above for each individual file system. Am I close?
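To spell that out with the placeholders filled in - da0, da1 and zroot here are just stand-ins for the real devices and pool name - I'm imagining something like:

# from the live DVD, rebuild the pool (-f overwrites whatever is left of the corrupt one)
zpool create -f -R /mnt zroot mirror da0 da1
# pull and receive the full (level 0) stream for each filesystem
ssh <backup server> cat zroot@<today>.0 | zfs receive -F zroot
ssh <backup server> cat zroot/usr@<today>.0 | zfs receive zroot/usr
# ...and so on for the remaining filesystems, then the later levels on top,
# in order, if they are incremental streams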
 
I have always seen it as something like:

# zfs send snap | ssh host zfs receive dataset

The input to zfs receive needs to be a zfs send stream.

Oops, I answered the wrong question. To restore the sending pool, just roll back the snapshot.
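For example, with the snapshot naming from the first post:

# return the dataset to the state it had at that snapshot
# (works as-is only for the most recent snapshot; add -r to discard newer ones)
zfs rollback ${pool}@${today}.${level}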
 
The server machine had bad RAM, causing corruption of the zpool, so I don't trust anything on it. The backup server has backups made with:
zfs snapshot ${pool}@${today}.${level}
zfs send ${pool}@${today}.${level} > ${localname}@${today}.${level}
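(As written, every level is a full stream of the whole dataset. If the higher levels are meant to be true incrementals, the send needs -i against the previous level's snapshot, something like this, where ${prev} stands for the date of that earlier snapshot:)

zfs send -i ${pool}@${prev}.0 ${pool}@${today}.1 > ${localname}@${today}.1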
 
I guess you should do it in steps.
1 - on the local machine, create a new pool (i.e. ${pool}) with an empty filesystem to restore into (i.e. ${fs}) - see the sketch after this list for the commands
2 - on the local machine, pull the data from the remote and receive it into the new pool as a snapshot:
# ssh ${remote} cat ${localname}@${today}.${level} | zfs receive -F ${pool}/${fs}@restore
3 - set the correct mountpoint for the filesystem - this can be a temporary mountpoint that you change to the production mountpoint later, if all went well.
zfs set mountpoint=/temprestoreofpool ${pool}/${fs}
4 - roll back to the downloaded snapshot
zfs rollback ${pool}/${fs}@restore
5 - scrub, test, and verify that the data is correct and complete (commands sketched after the list)
6 - remove the local copy of the backup snapshot
zfs destroy ${pool}/${fs}@restore
7 - promote the filesystem to production
zfs set mountpoint=/productionpath ${pool}/${fs}
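Steps 1 and 5 have no commands above; something like this would do (a mirror of da0 and da1 is only an example layout):

# step 1 - new pool plus an empty filesystem to receive into
zpool create ${pool} mirror da0 da1
zfs create ${pool}/${fs}
# step 5 - scrub the pool, then check the result before trusting the data
zpool scrub ${pool}
zpool status -v ${pool}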
 