ZFS Replicating zroot to a ZFS backup server. Cannot browse the root directory.

I have two systems:
zfs1 (Sender) - A clean install of 11.1-RELEASE in a VM with ZFS
zfs2 (Receiver) - A backup server running ZFS

Both servers are running 11.1-RELEASE-p10, which is the current patch level as of today.

On the VM I have installed sysutils/zxfer as the utility I will use to push the snapshot to the backup ZFS server. My issue is that once I push the snapshot to the backup server, I cannot browse what would have been the root / mount of the VM. The data shows up in zfs list, but nothing is there when I cd into the directory.

Here are the steps I followed

On zfs1 (Sender)
I have one zpool which is zroot. Let's create a full system snapshot.

zfs snapshot -r zroot@backup
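
Just as a sanity check (not part of the zxfer workflow itself), you can first confirm that the recursive snapshot exists on every dataset:

zfs list -r -t snapshot zroot | grep @backup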

Install zxfer

pkg install zxfer

On zfs2 (Receiver)

Create the dataset on the receiver (ZFS Backup Server)

zfs create tank/backup
zfs create tank/backup/zfsvm1.example.com

Let's transfer the zroot@backup snapshots from zfs1 to zfs2

On zfs1 (Sender)

zxfer -dFkPv -T zfs2 -R zroot tank/backup/zfsvm1.example.com

It would appear that everything copied and is in the correct structure that I want.

Code:
zfs list -r tank/backup/zfsvm1.example.com
NAME                                                USED  AVAIL  REFER  MOUNTPOINT
tank/backup/zfsvm1.example.com                      432M   886G  26.5K  /tank/backup/zfsvm1.example.com
tank/backup/zfsvm1.example.com/zroot                432M   886G    26K  /tank/backup/zfsvm1.example.com/zroot
tank/backup/zfsvm1.example.com/zroot/ROOT           432M   886G    23K  /tank/backup/zfsvm1.example.com/zroot/ROOT
tank/backup/zfsvm1.example.com/zroot/ROOT/default   432M   886G   432M  /tank/backup/zfsvm1.example.com/zroot/ROOT/default
tank/backup/zfsvm1.example.com/zroot/data            24K   886G    24K  /tank/backup/zfsvm1.example.com/zroot/data
tank/backup/zfsvm1.example.com/zroot/tmp             26K   886G    26K  /tank/backup/zfsvm1.example.com/zroot/tmp
tank/backup/zfsvm1.example.com/zroot/usr             92K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/usr
tank/backup/zfsvm1.example.com/zroot/usr/home        23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/usr/home
tank/backup/zfsvm1.example.com/zroot/usr/ports       23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/usr/ports
tank/backup/zfsvm1.example.com/zroot/usr/src         23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/usr/src
tank/backup/zfsvm1.example.com/zroot/var            160K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/var
tank/backup/zfsvm1.example.com/zroot/var/audit       23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/var/audit
tank/backup/zfsvm1.example.com/zroot/var/crash       23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/var/crash
tank/backup/zfsvm1.example.com/zroot/var/log       44.5K   886G  44.5K  /tank/backup/zfsvm1.example.com/zroot/var/log
tank/backup/zfsvm1.example.com/zroot/var/mail        23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/var/mail
tank/backup/zfsvm1.example.com/zroot/var/tmp         23K   886G    23K  /tank/backup/zfsvm1.example.com/zroot/var/tmp

I want to look inside the "root" directory of zfs1 that is now stored on zfs2

ls /tank/backup/zfsvm1.example.com/zroot/ROOT/default
ls: /tank/backup/zfsvm1.example.com/zroot/ROOT/default: No such file or directory


There seems to be nothing there, yet zfs list reports that it is indeed 432M:

Code:
NAME                                                USED  AVAIL  REFER  MOUNTPOINT
tank/backup/zfsvm1.example.com/zroot/ROOT/default   432M   886G   432M  /tank/backup/zfsvm1.example.com/zroot/ROOT/default


Let's try du and see what it reports

du -sh /tank/backup/zfsvm1.example.com
50K /tank/backup/zfsvm1.example.com


I also tried to mount everything, just in case it simply hadn't been mounted.

zfs mount -a
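
For what it's worth, this is the kind of check I have been running to see whether the received datasets are actually mounted and what their mount-related properties look like:

zfs get -r mounted,canmount,mountpoint tank/backup/zfsvm1.example.com/zroot/ROOT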

Do any ZFS wizards know where the data is and why I can't see it?
 
I went through the options dozens of times to see if anything worked at all. I found a few offenders.

The following examples work

# Basically specifying no extra options
zxfer -T zfs1 -R zroot storage/backups/zfsvm1.example.com

# Works
zxfer -F -T zfs1 -R zroot storage/backups/zfsvm1.example.com

# Same as above but with verbose output
zxfer -vF -T zfs1 -R zroot storage/backups/zfsvm1.example.com

Don't work

The -P and -d options seem to be the offenders.

I am not sure if this is a good or a bad thing, or if I'm losing anything by not using -P and -d. Hoping someone with more ZFS experience knows.
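
If anyone wants to compare for themselves, this is roughly what I have been looking at to see what a run with -P changes on the receiver versus a run without it (dataset name from my tests above):

zfs get -r canmount,mountpoint storage/backups/zfsvm1.example.com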
 
Why rely on 3rd party stuff if you can easily achieve the exact same thing using the commands which were built for all this in the first place? That's the part I don't get here. Why not use zfs send ... | ssh <server> "zfs receive ..." instead?
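
A rough, untested sketch (using the host and dataset names from your posts, and assuming root can SSH between the two machines) would be something like:

Code:
# initial full copy of zroot and all of its snapshots; -u leaves everything unmounted on the receiver
zfs send -R zroot@backup | ssh zfs2 "zfs receive -u -d -F tank/backup/zfsvm1.example.com"

# later: take a newer recursive snapshot and send only the difference
zfs snapshot -r zroot@backup2
zfs send -R -i @backup zroot@backup2 | ssh zfs2 "zfs receive -u -d -F tank/backup/zfsvm1.example.com"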
 
Ultimately we will have servers with hundreds or thousands of snapshots. In my testing, it appears that zfs send/receive falls miserably short at just being able to simply send the entire pool and ALL of its snapshots with a simple one liner command. Hence, utilities like zxfer were created. The only issue I was running into was detailed in my post where I could not read zroot/ROOT/default.
 
You cannot send a pool, only the filesystems. :)

Take a look at the filesystem properties to see if you find something relevant. Are you trying to read as root? I do not know sysutils/zxfer, but it may create a specific user for that and not allow "normal" users to read.

I use sysutils/zap and it works like a charm.
 
In my testing, it appears that zfs send/receive falls miserably short at just being able to simply send the entire pool and ALL of its snapshots with a simple one liner command.
I'll just substitute pool for filesystem here. Thing is: you can. Just check zfs(8). Also pay closer attention to -R, for example. No offense intended here, but I cannot help but get the impression that you're merely using those tools as a substitute for your inexperience and misunderstanding of the underlying tools being used.

It can work, but it can also easily backfire on you.
 
You cannot send a pool, only the filesystems. :)

Take a look at the filesystem properties to see if you find something relevant. Are you trying to read as root? I do not know sysutils/zxfer, but it may create a specific user for that and not allow "normal" users to read.

I use sysutils/zap and it works like a charm.

It was not a user issue. As I stated, the issue was the -P and -d flags. If you don't use those, you can read zroot/ROOT/default just fine.

I just gave sysutils/zap a try. The documentation is pretty spare. It spits out errors when I attempt a simple operation.

zap rep -Fv root@zfs1:storage/backups/zfsvm1.example.com -r zroot


Errors out and says 'Failed to find newest local snapshot' for all datasets.
 
I'll just substitute pool for filesystem here. Thing is: you can. Just check zfs(8). Also pay closer attention to -R, for example. No offense intended here, but I cannot help but get the impression that you're merely using those tools as a substitute for your inexperience and misunderstanding of the underlying tools being used.

It can work, but it can also easily backfire on you.

We dug through the zfs man page for days testing options. The -R option doesn't really seem to do what one might think. If you have hundreds of @hourly @daily @monthly snapshots then you can't really use -R to send them all in one shot. Anyone who has played with ZFS for more than a couple of hours realizes why there are dozens of tools created for ZFS replication.

I will definitely wait for you to post a one-liner that can ZFS replicate a system with 500 snapshots.
 
You didn't look at the details. See HERE. :D

A couple of things about sysutils/zap:

It appears to only work with snapshots created by zap. This means that it won't copy any of our existing snapshots. That's a pretty big downside.

Once you do create some snapshots using zap you can finally do a 'zap replicate' but it seems to vomit all over your filesystem. My goal is to replicate several systems to a central ZFS server.


Code:
NAME                                              USED  AVAIL  REFER  MOUNTPOINT
storage/backups                                  2.25G   871G    88K  /storage/backups
storage/backups/zfsvm1.example.com                478M   871G    88K  /zroot
storage/backups/zfsvm1.example.com/ROOT           477M   871G    88K  none
storage/backups/zfsvm1.example.com/ROOT/default   476M   871G   476M  /
storage/backups/zfsvm1.example.com/data           360K   871G    96K  /zroot/data
storage/backups/zfsvm1.example.com/data/data1      88K   871G    88K  /zroot/data/data1
storage/backups/zfsvm1.example.com/data/data2      88K   871G    88K  /zroot/data/data2
storage/backups/zfsvm1.example.com/data/data3      88K   871G    88K  /zroot/data/data3
storage/backups/zfsvm1.example.com/tmp            112K   871G   112K  /tmp
storage/backups/zfsvm1.example.com/usr            352K   871G    88K  /usr
storage/backups/zfsvm1.example.com/usr/home        88K   871G    88K  /usr/home
storage/backups/zfsvm1.example.com/usr/ports       88K   871G    88K  /usr/ports
storage/backups/zfsvm1.example.com/usr/src         88K   871G    88K  /usr/src
storage/backups/zfsvm1.example.com/var            572K   871G    88K  /var
storage/backups/zfsvm1.example.com/var/audit       88K   871G    88K  /var/audit
storage/backups/zfsvm1.example.com/var/crash       88K   871G    88K  /var/crash
storage/backups/zfsvm1.example.com/var/log        132K   871G   132K  /var/log
storage/backups/zfsvm1.example.com/var/mail        88K   871G    88K  /var/mail
storage/backups/zfsvm1.example.com/var/tmp         88K   871G    88K  /var/tmp


This has mounted directly to / on my backup server which is exactly what I don't want. This tool is definitely not what I want.
 
Yes, zap does not interfere with any snapshot not created by zap. You should set where you want the replicated snapshots to be placed on the receiver/server.

The objective of zap is set and forget.
 
Yes, zap does not interfere with any snapshot not created by zap. You should set where you want the replicated snapshots to be placed on the receiver/server.

The objective of zap is set and forget.

I'm not a huge fan of how zap attempts to mount the shares by default to mountpoints like /.

Per their instructions, you may optionally set the mountpoints on the receiver like so:

Code:
# mkdir -p /zback/phe
# zfs set mountpoint=/zback/phe zback/phe/ROOT/default
# zfs mount zback/phe/ROOT/default
# zfs set mountpoint=/zback/phe/var zback/phe/var
# zfs mount zback/phe/var
# zfs set mountpoint=/zback/phe/usr/home zback/phe/usr/home
# zfs mount zback/phe/usr/home

I don't like that you have to manually intervene and tell it to mount it in an optimal manner. This doesn't seem like a lot of fun to have to do for hundreds of servers. It also seems that you can't make the mountpoint changes until you push the data using 'zap rep'. Am I missing something?
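
If it really has to be done by hand, I suppose it could at least be scripted. An untested sketch, reusing the dataset names from the zap example above, that rewrites every mountpoint under zback/phe so the whole tree lands under /zback/phe instead of /:

Code:
zfs list -H -o name -r zback/phe | while read DS; do
        zfs set mountpoint=/zback/${DS#zback/} ${DS}
done
# datasets with canmount=noauto (e.g. ROOT/default) still need an explicit zfs mount
zfs mount -a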
 
Ideally what I want is a tool that dumps a named filesystem and all of its snapshots to a remote server and mounts them in a structure like the one below.

We will most likely use zfstools for our snapshot regimen. We need something that can handle automatically figuring out which snapshots to push and sending to the remote host.

server1/zroot -> backupserver/tank/backups/server1/...
server2/zroot -> backupserver/tank/backups/server2/...
server3/zroot -> backupserver/tank/backups/server3/...
server4/zroot -> backupserver/tank/backups/server4/...
server5/zroot -> backupserver/tank/backups/server5/...
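
Something along these lines is probably what we will end up scripting ourselves. A rough sketch only (hypothetical host names, key-based root SSH assumed, and it only covers the initial full copy; working out which incremental snapshots to push is the part we would still have to handle):

Code:
#!/bin/sh
# Pull each server's zroot, with all of its snapshots, into its own tree on the backup box.
# Assumes the tank/backups/<server> datasets already exist on the receiver.
DEST=tank/backups
SNAP="backup-$(date +%Y%m%d%H%M)"

for HOST in server1 server2 server3 server4 server5; do
        ssh root@${HOST} "zfs snapshot -r zroot@${SNAP}"
        ssh root@${HOST} "zfs send -R zroot@${SNAP}" | zfs receive -u -d -F ${DEST}/${HOST}
done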
 
Replicate datasets to the remote host bravo, under the zback/phe dataset. If you use a non-default ssh port, specify it in ~/.ssh/config.
# zfs set zap:rep='zap@bravo:zback/phe' zroot/ROOT zroot/usr/home/jrm
# zap rep -v


Replicate datasets (recursively for zroot/ROOT) to the remote host bravo, under the rback/phe dataset, but this time specify the datasets on the command line. If you use a non-default ssh port, specify it in ~/.ssh/config.
# zap rep zap@bravo:rback/phe -r zroot/ROOT zroot/usr/home/jrm

SOURCE.
 
If you have hundreds of @hourly @daily @monthly snapshots then you can't really use -R to send them all in one shot.
That somewhat depends on the hierarchy. It's roughly the same as when you're creating a snapshot recursively. But even if your filesystem layout is somewhat spread out, you'd still be able to set this up with a mere shell script.

Anyone who has played with ZFS for more than a couple of hours realizes why there are dozens of tools created for ZFS replication.
Actually, not so much for me. Although I can definitely see that it's quicker, and thus easier, to rely on an existing tool, the fact of the matter is that setting up a shell script to sort the whole thing out is also easy to do.

For example, you mention @daily and @weekly snapshots and such; that's something I don't use. I rely on:

Code:
# today's date and the date ${RETENTION} days ago, both as ddmmyy
CURDAT=$(date "+%d%m%y");
PRVDAT=$(date -v-${RETENTION}d "+%d%m%y");
---CUT---
                        # create today's snapshot and prune the one that has aged out of the retention window
                        $(zfs snapshot ${OPTS} ${ZFS}@${CURDAT} > /dev/null 2>&1) || echo "${PROG}: Error creating snapshot ${ZFS}@${CURDAT}" > /dev/stderr
                        $(zfs destroy ${OPTS} ${ZFS}@${PRVDAT} > /dev/null 2>&1) || echo "${PROG}: Error destroying snapshot ${ZFS}@${PRVDAT}" > /dev/stderr
Retention calculation (and more) is done entirely with date (as seen above), which allows me to determine exactly which snapshots are being processed, without any error margin. It also makes it easier to calculate and/or recognize their exact age.

The routines that actually send the snapshots to a remote server rely on roughly the same principle. But as I said: this is also where hierarchy matters. I do not maintain any hot-swappable servers, so most of my snapshots are stored as (compressed) images and accessed over SSH (or VPN) if needed.
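
To give a rough idea (simplified, and the backup host name and path here are made up for the example), the sending side boils down to something like:

Code:
# stream the day's incremental as a compressed image to the backup host
zfs send -i ${ZFS}@${PRVDAT} ${ZFS}@${CURDAT} | gzip | \
        ssh backuphost "cat > /backup/$(hostname)/${ZFS##*/}-${CURDAT}.zfs.gz"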

Even so, all of the stuff I use is custom scripted because I prefer not to create a dependency on 3rd party tools if I don't have to. The advantage for me lies in the fact that as soon as I have a pristine FreeBSD environment running, I immediately have all the required tools to quickly start the restoration process (basically fetch to retrieve the main backup script, which will then perform the actual restoration).

I honestly believe that many people highly underestimate the flexibility you have with the default tools, especially when combined with a decent shell script.
 
The solution at this point, after testing nearly all of the snapshot and replication tools, is to simply create our own. Most of the ready-made tools only do one specific thing and don't combine in useful ways to do the things we want.
 