ZFS: Recovering a single directory from a ZFS snapshot


I have a ZFS file system that I have been running for 4 years. I take regular snapshots of the data as a backup, using the following recipe from https://www.dan.me.uk/blog/2012/08/05/full-system-backups-for-freebsd-systems-using-zfs/ .

gpart destroy -F da0
dd if=/dev/zero of=/dev/da0 bs=1m count=128
zpool create zbackup /dev/da0
zfs set mountpoint=/backup zbackup
zfs snapshot -r zroot@backup
zfs send -Rv zroot@backup | gzip > /backup/full-system-backup.zfs.gz
zfs destroy -r zroot@backup

This works fine, but today I discovered that about two years ago some files that I need were deleted. What I'd like to do is recover just the directories that were lost; I have a series of snapshots covering this period on USB HDDs. They are 186 GB compressed, so time is a factor.

On the original file system these would have been stored at:

I could attempt to restore the entire pool from the backup, but that would overwrite the existing pool; it's a lot of work and I'd like to avoid it if at all possible.

Is there a way to extract just the files matching the path above?

So far I have connected the backup drive and gunzipped the snapshot stream into a file in a directory:

Is there some way to mount or untar this file so that I can extract the 50 or so files that I need?

As ever your help is much appreciated.
Yep, just receive the snapshot into a dataset with a new name and it will be mounted. For example:

Take a snapshot of /usr/ports called testSnap and write it to a file:
zfs snapshot zroot/usr/ports@testSnap
zfs send zroot/usr/ports@testSnap > /var/tmp/testSnap.zfs

Then receive the snapshot as a new dataset mounted at /tmp/testSnap:
zfs recv zroot/tmp/testSnap < /var/tmp/testSnap.zfs

Now I can recover the files from within /tmp/testSnap, and when I'm done, destroy the dataset with:
zfs destroy -r zroot/tmp/testSnap
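Applied to the compressed backup from the question, the same approach would look roughly like this. This is a sketch, not a tested procedure: the pool and dataset names are assumptions, and -d is used because the file holds a recursive (-R) replication stream:

```shell
# Hypothetical sketch: decompress the backup and receive the recursive
# replication stream into a scratch dataset (names here are assumed)
zfs create zbackup/restore
gunzip -c /backup/full-system-backup.zfs.gz | zfs recv -duv zbackup/restore

# datasets appear under zbackup/restore/<original path>; when done:
# zfs destroy -r zbackup/restore
```

The -u flag keeps the received datasets from being mounted automatically, which avoids mountpoint collisions with the live system.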
Snapshots can be directly accessed through the hidden .zfs directory in the "base" directory of every dataset.
E.g. the snapshots for dataset zroot/usr/home are directly available under /usr/home/.zfs/snapshot/<snapname>.
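For example, pulling files back out of a snapshot is then just an ordinary copy (the snapshot name "backup" and the directory "lost-project" below are made-up examples, not from the thread):

```shell
# List what a snapshot contains, then copy a lost directory back out;
# snapshots under .zfs are read-only, so cp is safe
ls /usr/home/.zfs/snapshot/backup
cp -Rp /usr/home/.zfs/snapshot/backup/lost-project /usr/home/
```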

So when you import the pool from your USB drive (don't forget to set an altroot with -R, or use -N to avoid mounting!) and mount the dataset, you can easily browse/grep through your snapshots inside the hidden .zfs directory instead of mounting/unmounting different snapshots until you find the 'lost' data.
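A minimal sketch of such an import, assuming the USB pool is called zbackup:

```shell
# Import the backup pool read-only under an altroot so its mountpoints
# cannot shadow directories of the live system; alternatively, -N would
# import without mounting anything at all
zpool import -o readonly=on -R /mnt/usb zbackup
```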
It seems the script used creates a pool on the USB disk, then sends the entire snapshot stream to a gzipped file. As such, once the backup pool is imported he'll need to unzip the backup and recv it into a dataset before being able to browse the data/snapshots.

It would have been a lot easier if the snapshots had been sent to the backup pool using send/recv, so the files could be browsed on the USB disk directly.
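That alternative recipe would have looked something like this (a sketch, assuming the same pool names as the original script and that zbackup is dedicated to this backup):

```shell
# Replicate snapshots straight into the backup pool instead of a .gz file
zfs snapshot -r zroot@backup
zfs send -R zroot@backup | zfs recv -duF zbackup
zfs destroy -r zroot@backup

# -F rolls the target back before receiving so repeated runs succeed;
# -u avoids mounting the received datasets over the live system
```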
Well, the gzip compression is quite useless if the pool already has compression enabled. It only adds a layer of "data corruption waiting to happen"...