Solved: Recovery from backups

Greetings all,

as I have been trying to make sense of backups, namely deleting snapshots, as per my other thread, I read several warnings against using the hidden .zfs/snapshot/<snapshot_name> directory as I did. The justification given was that the snapshot may be damaged. I am not quite sure how that could happen, because a snapshot is read-only; merely copying data out of it should not be able to damage it.

The other option given is to mount the snapshot at a mountpoint and recover the data from there, but I cannot see what difference that makes.

Is there a "canonical" method?
Kindest regards,
M
 
Snapshots are not backups. A backup resides on hardware somewhere else, ideally off-site.

What exactly are you trying to do though? Revert to a snapshot? Use zfs(8) rollback to do that.
 
Hi 'Orum,

I think that you are making an assumption that the snapshot has not been transferred to another machine/location. Nevertheless, to answer your question, I can think of two scenarios:

1. There is at least one file in one of the snapshots that was later deleted and I would like to recover it.
2. There has been a catastrophic failure of the primary machine, and I would like to restore from the backup.

In simulation, I did both as described above.

Kindest regards,

M
 
I think that you are making an assumption that the snapshot has not been transferred to another machine/location.
To me, the terms mean different things; a snapshot being local and taken off a live data set, and once it's transferred elsewhere I refer to it as a backup. Sorry for the confusion.

1. There is at least one file in one of the snapshots that was later deleted and I would like to recover it.
2. There has been a catastrophic failure of the primary machine, and I would like to restore from the backup.
For each situation, it's a bit different:
  1. For this, cp, rsync, etc. are fine, as is using the .zfs snapshot directory. Just make sure they don't overwrite existing/newer files, e.g. cp -an.
  2. I assume you mean the entire pool is lost. For this situation, recreate the pool, copy the snapshots back from your backup using zfs send/recv, and then revert to them with zfs rollback.
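A minimal sketch of case 1, using ordinary directories to stand in for the snapshot path (on a real system the read-only source would be something like .zfs/snapshot/yesterday; the file names here are purely illustrative). The point is cp's -n flag: copy what is missing, never overwrite what is newer:

```shell
# Stand-ins for the read-only snapshot and the live dataset.
mkdir -p snapdir livedir
echo "old contents" > snapdir/deleted.txt   # exists only in the "snapshot"
echo "stale copy"   > snapdir/current.txt   # older version in the "snapshot"
echo "keep me"      > livedir/current.txt   # newer live file; must survive

# -a preserves attributes and recurses; -n refuses to overwrite.
# The exit status of a skipped copy varies across cp versions,
# hence the || true.
cp -an snapdir/. livedir/ || true

cat livedir/deleted.txt   # the recovered file
cat livedir/current.txt   # still the live version
```

Because the snapshot side is read-only, the worst a mistaken flag can do is clobber live files, which -n prevents.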
 
Hi 'Orum,

please, no need to apologize; the irony is that I accused you of assuming facts, while in fact I assumed knowledge of my other post. In any event, thank you for your reply.

Based on your reply to 1, there is no danger in accessing the .zfs directory.

Regarding 2, I am a little confused; zfs(8) states in part:
Code:
Example 8 Rolling Back a ZFS File System

    The following command reverts the contents of pool/home/anne to the
    snapshot named yesterday, deleting all intermediate snapshots.

    # zfs rollback -r pool/home/anne@yesterday
I have interpreted the emphasized part as implying that pool/home/anne must already exist before the rollback is applied to roll the contents of the data-set back to a previous state. However, your explanation suggests that the rollback operation will create the data-set.

Kindest regards,

M
 
I have interpreted the emphasized part as implying that pool/home/anne must already exist before the rollback is applied to roll the contents of the data-set back to a previous state. However, your explanation suggests that the rollback operation will create the data-set.
You have to create the pool first. Once that's done, and assuming you're doing a full restore, you will typically send from your backup with zfs send -R (for "replicate") a snapshot of the root of the pool that was created recursively with -r (e.g. zfs snap -r zroot@now). This will automatically send all the child filesystems created under that pool, and they'll be created on your destination by zfs recv (usually run with at least -du).
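Written out as a command sketch (pool, device, and snapshot names such as zroot, da0p3, and @now are placeholders, and these commands only run on a live ZFS system, so treat this as a transcript rather than a script; check the exact flags against zfs(8) on your version):

```sh
# Taken on the primary before the failure, and stored on backup media:
zfs snapshot -r zroot@now
zfs send -R zroot@now > /backup/zroot.now

# After the catastrophic failure: recreate the empty pool, then replay
# the replication stream. -d derives dataset names from the stream and
# -u leaves everything unmounted; -F may also be needed to overwrite
# the freshly created root dataset.
zpool create zroot da0p3
zfs receive -duF zroot < /backup/zroot.now
```

Because the stream was produced with -R, the child filesystems, their properties, and their snapshots are recreated automatically; no zfs create is needed beforehand.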
 
Hi 'Orum,

thank you again. I have been doing some research, and from what I understand, I have to create not only the pool (if it does not already exist) but also an (empty) data-set on it, i.e., zfs create pool/filesystem. Then I receive the snapshot with zfs receive -options pool/filesystem@snapshot.

Then I can do the rollback, destroy the snapshot, and mount pool/filesystem at the proper mount-point.

I will experiment tomorrow.

Kindest regards,

M
 
I have been doing some research and from what I understand, I have to create not only a pool (if it already does not exist) but also an (empty) data-set on the pool, i.e., zfs create pool/filesystem.
Creating a pool always creates a root data set for that pool; e.g., creating a pool called zroot will have its default/root dataset be "zroot". You can see this with zfs list on a FreeBSD machine with a ZFS root:
Code:
% zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zroot   111G  2.48G   109G        -         -     0%     2%  1.00x  ONLINE  -

% zfs list | head -n 2
NAME              USED  AVAIL  REFER  MOUNTPOINT
zroot            2.48G   105G  75.9M  /
If I recall correctly, you don't need to create any data sets beyond this, as they'll be created automatically given the proper send/recv flags. However, it's been some time since I manually transferred snapshots; the last time was a few years ago, when I was getting rid of my last drives with 512-byte sectors, so things may have changed since then.
 
Hi 'Orum,

I think you might be correct again. I believe I had to create the filesystem only because the root data-set is not mounted; it is just used as a placeholder.

Kindest regards,
M
 