I'm sure this is documented someplace, but my Google-fu is failing me today. Assume a server where each individual ZFS file system has been backed up to a remote backup server using zfs send over ssh, via:
backup # ssh <server> zfs snapshot ${pool}@${today}.${level}
backup # ssh <server> zfs send ${pool}@${today}.${level} > ${localname}@${today}.${level}
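In case the details matter, the whole run is a loop over every file system, along these lines. This is a simplified sketch, not the real script: the pool name, the ${base} date used for incrementals, and the tr name-flattening are made up for illustration.

#!/bin/sh
# sketch of the backup loop: snapshot each file system on <server>,
# then pull the stream over ssh into one local file per file system
pool=tank                          # illustrative pool name
today=$(date +%Y%m%d)
level=0                            # 0 = full send; >0 = incremental
base=20240101                      # date of the last level-0 run (assumed bookkeeping)
for fs in $(ssh <server> zfs list -H -o name -r ${pool}); do
    localname=$(echo ${fs} | tr / _)             # tank/usr/home -> tank_usr_home
    ssh <server> zfs snapshot ${fs}@${today}.${level}
    if [ ${level} -eq 0 ]; then
        ssh <server> zfs send ${fs}@${today}.${level} > ${localname}@${today}.${level}
    else
        # incremental against the last level-0 snapshot of the same file system
        ssh <server> zfs send -i @${base}.0 ${fs}@${today}.${level} > ${localname}@${today}.${level}
    fi
done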
Now the zpool is corrupt. What is the procedure for restoring it (I'm presuming after booting from the live DVD)? A link to a web page would be fine; oddly, this procedure is not documented in the Handbook.
Basically, I'm looking to replicate the dump/restore catastrophic recovery procedure of:
fixit # cd <fs to be restored> ; ssh <backup server> cat <level.0.dumpfile> | restore -if -
fixit # cd <fs to be restored> ; ssh <backup server> cat <level.1.dumpfile> | restore -if -
ZFS is very different. I'm guessing something along the lines of this will work:
# Will this destroy any existing corrupt zpool or do I need to do that manually?
zpool create <zpool> mirror <device1> <device2>
# Do I need to recreate the individual file systems here or will restoring ROOT do it?
zpool import -N -R /mnt <zpool>
ssh <backup server> cat <full_snapshot_file> | zfs receive <zpool>/<filesystem>
ssh <backup server> cat <incremental_snapshot_file> | zfs receive <zpool>/<filesystem>

Repeat the above for each individual file system. Am I close?
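To make the question concrete, here is the full sequence I'd expect to script, with invented names (pool tank, disks da0/da1, stream files named per the backup scheme above) and guessed flags, so treat it as a sketch rather than a recipe:

# from the live DVD shell; -f is my guess for clobbering the old corrupt labels
zpool create -f -R /mnt tank mirror da0 da1
# a full stream should recreate the file system on receive, so no zfs create first (I think)
ssh <backup server> cat tank_usr@20240101.0 | zfs receive tank/usr
# then each incremental on top, oldest first (-F needed if the fs got touched in between?)
ssh <backup server> cat tank_usr@20240108.1 | zfs receive tank/usr
# ...and the same pair of receives for every other file system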