[Solved] Using ZFS to carry out full-system backups through snapshots

Hello,

I have a few machines running ZFS and one running UFS, all of which I would like to back up in full to another machine running ZFS. For the UFS system, I plan to take a snapshot and use rsync to transfer the data to the backup server. For the ZFS systems, I would like to take ZFS snapshots and transfer them to the backup server with zfs send ... | ssh host zfs receive ...
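
For the UFS machine, I have something like this in mind (a rough sketch only; mksnap_ffs is FreeBSD's UFS snapshot tool, and the paths and rsync target here are just illustrative):
Code:
# Take a UFS snapshot, attach it as a memory disk, mount it read-only,
# rsync it to the backup server, then tear everything down again.
mksnap_ffs /usr /usr/.snap/backup
md=$(mdconfig -a -t vnode -f /usr/.snap/backup)
mount -o ro /dev/${md} /mnt
rsync -a /mnt/ backupserver:/backups/server-4/usr/
umount /mnt
mdconfig -d -u ${md}
rm -f /usr/.snap/backup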

To create the snapshots, I run zfs snapshot bootpool@14_dec_2014 && zfs snapshot -r zroot@14_dec_2014
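
Scripted, the snapshot step could look something like this (the date format just reproduces the naming above; assumes a POSIX sh and the C locale):
Code:
# Build a snapshot name like 14_dec_2014 from today's date.
snap="$(date +%d_%b_%Y | tr '[:upper:]' '[:lower:]')"
zfs snapshot "bootpool@${snap}" && zfs snapshot -r "zroot@${snap}"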

Before the snapshots, the pools look like this:
Code:
# zfs list -t all
NAME                 USED  AVAIL  REFER  MOUNTPOINT
bootpool             486M  1.45G   485M  /bootpool
zroot               2.14G   443G    96K  none
zroot/ROOT           814M   443G    96K  none
zroot/ROOT/default   813M   443G   813M  /
zroot/tmp            144K   443G   144K  /tmp
zroot/usr           1.34G   443G    96K  /usr
zroot/usr/home       216K   443G   216K  /usr/home
zroot/usr/ports      863M   443G   863M  /usr/ports
zroot/usr/src        505M   443G   505M  /usr/src
zroot/var           3.89M   443G    96K  /var
zroot/var/crash       96K   443G    96K  /var/crash
zroot/var/log       3.50M   443G  3.50M  /var/log
zroot/var/mail       108K   443G   108K  /var/mail
zroot/var/tmp         96K   443G    96K  /var/tmp
After the snapshots, it looks like this:
Code:
# zfs list -t all
NAME                             USED  AVAIL  REFER  MOUNTPOINT
bootpool                         486M  1.45G   485M  /bootpool
bootpool@14_dec_2014                0      -   485M  -
zroot                           2.14G   443G    96K  none
zroot@14_dec_2014                   0      -    96K  -
zroot/ROOT                       814M   443G    96K  none
zroot/ROOT@14_dec_2014              0      -    96K  -
zroot/ROOT/default               814M   443G   813M  /
zroot/ROOT/default@14_dec_2014   844K      -   813M  -
zroot/tmp                        144K   443G   144K  /tmp
zroot/tmp@14_dec_2014               0      -   144K  -
zroot/usr                       1.34G   443G    96K  /usr
zroot/usr@14_dec_2014               0      -    96K  -
zroot/usr/home                   216K   443G   216K  /usr/home
zroot/usr/home@14_dec_2014          0      -   216K  -
zroot/usr/ports                  863M   443G   863M  /usr/ports
zroot/usr/ports@14_dec_2014         0      -   863M  -
zroot/usr/src                    505M   443G   505M  /usr/src
zroot/usr/src@14_dec_2014           0      -   505M  -
zroot/var                       3.98M   443G    96K  /var
zroot/var@14_dec_2014               0      -    96K  -
zroot/var/crash                   96K   443G    96K  /var/crash
zroot/var/crash@14_dec_2014         0      -    96K  -
zroot/var/log                   3.59M   443G  3.50M  /var/log
zroot/var/log@14_dec_2014         92K      -  3.50M  -
zroot/var/mail                   108K   443G   108K  /var/mail
zroot/var/mail@14_dec_2014          0      -   108K  -
zroot/var/tmp                     96K   443G    96K  /var/tmp
zroot/var/tmp@14_dec_2014           0      -    96K  -

Now, I would like to replicate the snapshots of bootpool and zroot, along with all child snapshots, to a remote server with zfs send/receive. On the destination, I created the zroot/BACKUP filesystem:
Code:
NAME                 USED  AVAIL  REFER  MOUNTPOINT
bootpool             486M  1.45G   485M  /bootpool
zroot               4.14G  7.94T   219K  none
zroot/BACKUP         219K  7.94T   219K  /BACKUP
zroot/ROOT          1.07G  7.94T   219K  none
zroot/ROOT/default  1.07G  7.94T  1.07G  /
zroot/tmp            151M  7.94T   151M  /tmp
zroot/usr           2.44G  7.94T   219K  /usr
zroot/usr/home      23.2M  7.94T  23.2M  /usr/home
zroot/usr/ports     1.56G  7.94T  1.56G  /usr/ports
zroot/usr/src        874M  7.94T   874M  /usr/src
zroot/var           1.61M  7.94T   219K  /var
zroot/var/crash      219K  7.94T   219K  /var/crash
zroot/var/log        757K  7.94T   757K  /var/log
zroot/var/mail       237K  7.94T   237K  /var/mail
zroot/var/tmp        219K  7.94T   219K  /var/tmp
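For reference, the dataset was created with a command along these lines (the explicit mountpoint is inferred from the listing above, since zroot itself has mountpoint none):
Code:
zfs create -o mountpoint=/BACKUP zroot/BACKUP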

Then I try to send the snapshot of bootpool over to a new dataset, zroot/BACKUP/server-4-bootpool:
zfs send bootpool@14_dec_2014 | ssh server-3 zfs receive zroot/BACKUP/server-4-bootpool
The backup machine's pool now looks like this:
Code:
# zfs list -t all
NAME                                       USED  AVAIL  REFER  MOUNTPOINT
bootpool                                   486M  1.45G   485M  /bootpool
zroot                                     4.36G  7.94T   219K  none
zroot/BACKUP                               226M  7.94T   219K  /BACKUP
zroot/BACKUP/server-4-bootpool             226M  7.94T   226M  /BACKUP/server-4-bootpool
zroot/BACKUP/server-4-bootpool@14_dec_2014    0      -   226M  -
zroot/ROOT                                1.07G  7.94T   219K  none
zroot/ROOT/default                        1.07G  7.94T  1.07G  /
zroot/tmp                                  151M  7.94T   151M  /tmp
zroot/usr                                 2.44G  7.94T   219K  /usr
zroot/usr/home                            23.2M  7.94T  23.2M  /usr/home
zroot/usr/ports                           1.56G  7.94T  1.56G  /usr/ports
zroot/usr/src                              874M  7.94T   874M  /usr/src
zroot/var                                 1.61M  7.94T   219K  /var
zroot/var/crash                            219K  7.94T   219K  /var/crash
zroot/var/log                              757K  7.94T   757K  /var/log
zroot/var/mail                             237K  7.94T   237K  /var/mail
zroot/var/tmp                              219K  7.94T   219K  /var/tmp
All good so far. I am able to browse through the /BACKUP/server-4-bootpool directory on the backup machine.

Here is where I am having difficulty: I then try to replicate the zroot@14_dec_2014 snapshot to the backup machine. The only zfs send switch I found for recursively sending snapshots from a base filesystem is -R:
Code:
-R      Generate a replication stream package, which will replicate the
        specified filesystem, and all descendent file systems, up to the
        named snapshot. When received, all properties, snapshots, descendent
        file systems, and clones are preserved.

        If the -i or -I flags are used in conjunction with the -R flag, an
        incremental replication stream is generated. The current values of
        properties, and current snapshot and file system names are set when
        the stream is received. If the -F flag is specified when this stream
        is received, snapshots and file systems that do not exist on the
        sending side are destroyed.

When I use the -R switch for zfs send, the resulting filesystems keep their original mountpoints: /usr, for example, instead of /BACKUP/server-4-zroot/usr.
The command used:
zfs send -R zroot@14_dec_2014 | ssh server-3 zfs receive -F zroot/BACKUP/server-4-zroot

My solution was to manually change all the mountpoints to their correct locations. I didn't have that many filesystems, so it didn't take very long. Does anyone have any suggestions for this? I can see it getting quite time-consuming if I had to change ten mountpoints for every system I wanted to back up in this manner.
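
For now, at least the fix-up itself can be scripted. Here is a rough, untested sketch of what I did by hand, using the dataset names from the example above:
Code:
#!/bin/sh
# Re-root every mountpoint in the received tree under /BACKUP/server-4-zroot.
base="zroot/BACKUP/server-4-zroot"
root="/BACKUP/server-4-zroot"
zfs list -H -r -o name,mountpoint "$base" | while read -r name mp; do
        case "$mp" in
        none|legacy|-)
                continue ;;     # nothing to re-root here
        esac
        # "/" becomes $root itself, "/usr" becomes $root/usr, and so on.
        zfs set mountpoint="${root}${mp%/}" "$name"
done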

I also wanted to add that after changing the mountpoints, a subsequent zfs send -R ... does not reset them, which is good.

Edit: The -u flag for zfs receive is also useful; it prevents the received filesystem from being mounted.

Thanks,
Manas
 
Just an FYI about the -u flag for zfs receive: be aware that it only keeps the filesystem from being mounted at receive time. It might still get mounted at reboot. To prevent that, you may want to set the canmount property to off.
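
For example, something like this (untested; canmount is not inherited, so it has to be set on each dataset under the backup tree individually):
Code:
zfs list -H -r -o name zroot/BACKUP | while read -r fs; do
        zfs set canmount=off "$fs"
done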
 
You don't have to replicate the entire thing or do recursive sends.

Just write your script to get a list of filesystems, then send each filesystem's snapshots to the remote system individually. If you aren't inclined to delve into the full wonders of shell scripting, and it's only for the one system, you can just list out each filesystem in turn:
Code:
# zfs send -I zroot@firstsnap zroot@secondsnap | ssh server-3 "zfs recv -d zroot/BACKUP"
# zfs send -I zroot/tmp@firstsnap zroot/tmp@secondsnap | ssh server-3 "zfs recv -d zroot/BACKUP"
# zfs send -I zroot/usr@firstsnap zroot/usr@secondsnap | ssh server-3 "zfs recv -d zroot/BACKUP"
...
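Scripted, that could look roughly like this (untested; note that the pool-root dataset zroot itself may need special handling, since recv -d strips the pool name from the received path):
Code:
# Send an incremental stream of every filesystem under zroot.
zfs list -H -r -o name -t filesystem zroot | while read -r fs; do
        zfs send -I "${fs}@firstsnap" "${fs}@secondsnap" | \
            ssh server-3 "zfs recv -d zroot/BACKUP"
done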
 
I settled on a recursive send, receiving without mounting, and manually updating the mountpoints. It seems to work. I think it would be nice to be able to specify an alternate root mountpoint for any recursive snapshot that I send.
 