ZFS Restoring server from zfs snapshot

Hi Guys,

I use zfs to create differential backups to an external backup server.
My question is this:
Can I restore a full server from bare metal using zfs send/receive?

I understand that I will need to have FreeBSD installed prior to receiving the data, but other than that, is it possible?

What syntax will I be looking at?
Code:
zfs send -R zroot@-2018-01-18 | mbuffer -q -v 0 -s 128k -m 1G | ssh root@91.203.xx.xxx "mbuffer -s 128k -m 1G | zfs receive -Feuv zroot"
Will something like the above work?

Thank you
 
Can I restore a full server from bare metal using zfs send/receive?
Kind of - you also need a boot loader, a GPT layout, and a ZFS pool with the same name as before and of sufficient size already in place. You could install a minimal FreeBSD beforehand, but you'd still have to boot from the install medium in live mode to overwrite the contents of that newly created pool with the contents of the backup pool. So it's generally faster/easier to just prepare the disks by hand.

Will something like the above work?
If both machines are in a trusted network and therefore no encryption of the traffic is necessary, I'd simply use nc(1) for the actual transfer instead of tunneling through ssh, which adds a lot of overhead and might even pose a considerable bottleneck on CPU-constrained systems (e.g. most NASes with very puny Atom CPUs).
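A minimal sketch of such an nc(1) transfer; the receiver hostname, port number and snapshot name here are illustrative assumptions:

```shell
# On the receiving machine: listen on an arbitrary port and feed the
# stream straight into zfs receive. nc sends everything in cleartext,
# so this is only appropriate inside a trusted network.
nc -l 8023 | zfs receive -Feuv zroot

# On the sending machine: stream the recursive snapshot to the receiver.
zfs send -R zroot@backup-snap | nc receiver.example.net 8023
```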
 
So it's generally faster/easier to just prepare the disks by hand
Sorry, I don't understand. If I install a minimal FreeBSD beforehand, why do I still need to boot from live mode, as everything will be wiped during the restore anyway?

If both machines are in a trusted network and therefore no encryption of the traffic is necessary
Production server in the DC, backup server in the office. I trust both locations, but not the path in between, so I think I need ssh. Am I wrong?
 
Sorry, I don't understand. If I install a minimal FreeBSD beforehand, why do I still need to boot from live mode, as everything will be wiped during the restore anyway?

Because you don't want to restore into an active pool with already-mounted datasets (e.g. for /), and you can't destroy those datasets while they are mounted. So you need to import the pool without mounting the datasets in it, which is only possible when booting from another disk or an install image.
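Importing without mounting might look like this from the live CD shell; the pool name zroot and the altroot /mnt are assumptions:

```shell
# Import the pool by force (-f), without mounting any of its datasets
# (-N), relocated under an alternate root (-R) so nothing shadows the
# live environment.
zpool import -f -N -R /mnt zroot
```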
As said: it would be faster and less confusing/error prone to just manually create the GPT table, write the boot loader to the disk(s), then create the pool and send|receive your backup.
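That manual preparation could be sketched roughly as follows for a single hypothetical disk ada0; the partition sizes, labels and pool name are assumptions to adapt to your own layout:

```shell
# Create a GPT scheme and the usual boot/swap/zfs partitions.
gpart create -s gpt ada0
gpart add -t freebsd-boot -s 512k -l boot0 ada0
gpart add -t freebsd-swap -s 4g -l swap0 ada0
gpart add -t freebsd-zfs -l zfs0 ada0

# Write the protective MBR and the ZFS boot code into partition 1.
gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0

# Create the pool under an alternate root so its datasets don't mount
# over the running live system.
zpool create -o altroot=/mnt zroot ada0p3
```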

Production server in DC, backup server in office, I trust both location but not the in between so I think I need ssh.. Am I wrong?
Of course, then you absolutely want to encrypt the data, and ssh is the easiest way. The limiting factor in your scenario will be the available bandwidth at the office anyway, so no need to worry about the CPU overhead.
 
it would be faster and less confusing/error prone to just manually create the GPT table, write the boot loader to the disk(s), then create the pool and send|receive your backup
sko thank you for your advice here :)
I have no idea how to manually create the GPT table or write the boot loader to the disk(s). Could you please show me, or point me to an online article if you know of any?
 
Have a look at gpart(8). The whole process is described in the "EXAMPLES" section for GPT and MBR. Also take a look at the "Backup and Restore" example, which might also be of use to you.
 
To restore FreeBSD 11 to a fresh drive without installing an OS first:
  1. Boot off of the live CD; get networking up and running.
  2. Follow the gpart commands here https://lists.freebsd.org/pipermail/freebsd-fs/2015-November/022276.html modified for your device name (instead of the globs at the end) and your particular use case (swap size). BE SURE TO USE a larger boot code partition; 128k or so to be safe. (64k is too small now.)
  3. Create the new zpool. Using the same name as before is convenient but typically not required, unless you have bootfs set in /boot/loader.conf (which isn’t needed on the latest versions)
  4. Use gptzfsboot(8) to install bootcode: gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 <your_device>
  5. zfs receive the backed-up pool
  6. Use zpool(8) to set the bootfs property appropriately. On newer installs, this is zroot/ROOT/default or similar.
  7. Reboot and enjoy.
Edit: this is assuming you were booting off of zfs, not a separate non-zfs /boot partition.
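Steps 5 and 6 above might look something like this; the backup host, snapshot and dataset names are illustrative:

```shell
# Pull the recursive snapshot from the backup host and write it into
# the freshly created pool, forcing an overwrite of anything already
# there (-F), without mounting on receive (-u).
ssh root@backup-host "zfs send -R zroot@2018-01-18" | zfs receive -Fuv zroot

# Point the boot loader at the root dataset.
zpool set bootfs=zroot/ROOT/default zroot
```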
 
Can I restore a full server from bare metal using zfs send/receive?
Of course.

And contrary to earlier comments, you do not need to have a boot loader, nor a ZFS pool with the same name, already in place, because you can set those up during the restoration process. You do of course need a ZFS pool in place, because otherwise you cannot receive any snapshots.

Eric A. Borisch kinda ninja'd me, but to add to that: the only thing you do need to keep in mind is that if you are recreating the same pool (for example: on my systems I always use one main pool called zroot and the rest as datasets of it), then you will need to force the zfs receive (-F) in order to tell it to overwrite your existing pool.

But I do have a small correction/suggestion about the list Eric gave: set up the bootcode after you have restored your system, using the files in the /boot directory of that same restored system. Do not rely on the code from the rescue CD.

See, the problem is that there's no guarantee that your restored system will be of the same version as the rescue CD you're using. It's perfectly doable to use a 'modern' 11.1 rescue CD to restore a 'vintage' 10.3 system. But you really don't want to use a newer bootloader strapped to an older system.

Also: I doubt it would cause issues, but for what it's worth, I never messed with mbuffer myself. I simply used dd to create a stream over SSH and fed that into ZFS.

Code:
# ssh peter@breve "dd if=/opt/backups/home.zfs" | zfs receive -suv zroot@home
(Note: from memory, I'll verify it later when I'm back home.) Obviously this differs a bit if you're also using incremental backups.
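For the incremental case, a sketch along the same lines; the backup file names and the dataset are assumptions, and the incremental streams are presumed to have been created with zfs send -i against the preceding snapshot:

```shell
# Restore the full stream first, then each incremental stream in order,
# resuming/keeping state between receives (-s) and leaving datasets
# unmounted (-u).
ssh peter@breve "dd if=/opt/backups/home-full.zfs" | zfs receive -suv zroot/home
ssh peter@breve "dd if=/opt/backups/home-incr.zfs" | zfs receive -suv zroot/home
```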
 
Thank you very much every one.
Obviously this was me getting ready in case of a disaster, so I cannot test any of it just now..
That being said, it looks like there is a bit to it, so I will try to find some old hardware to practice restoring on.
Will it work if I use an old desktop?

My server has 6 disks in it; will it be an issue to restore to 1 disk of a different size?
 