Testing snapshots and it's not working as expected. Not sure what I'm doing wrong

Or maybe I'm misunderstanding what's supposed to be happening. Here's what I've done for this test:

1. Look at my datasets:
Code:
root@nightmaremoon:~ # zfs list
NAME                USED  AVAIL  REFER  MOUNTPOINT
backup2            1.69T  3.57T  1.69T  /mnt/backup2
data               5.82T  1.21T   117G  /data
data/ROOT          13.6G  1.21T    96K  none
data/ROOT/default  13.6G  1.21T  13.6G  /
data/shared        14.3G  1.21T  14.3G  /data/shared
data/tmp            112K  1.21T   112K  /tmp
data/usr           2.39G  1.21T    96K  /usr
data/usr/home      1.21G  1.21T  1.21G  /usr/home
data/usr/ports     1.18G  1.21T  1.18G  /usr/ports
data/usr/src         96K  1.21T    96K  /usr/src
data/var           1.35M  1.21T    96K  /var
data/var/audit       96K  1.21T    96K  /var/audit
data/var/crash       96K  1.21T    96K  /var/crash
data/var/log        492K  1.21T   492K  /var/log
data/var/mail       480K  1.21T   480K  /var/mail
data/var/tmp        120K  1.21T   120K  /var/tmp

2. Create a file to modify:
Code:
root@nightmaremoon:~ # echo hello > /usr/test.txt
root@nightmaremoon:~ # cat /usr/test.txt
hello

3. I snapshot data/usr:
Code:
root@nightmaremoon:~transmission # zfs snapshot data/usr@29MAY17
root@nightmaremoon:~transmission # zfs list -t snapshot
NAME                       USED  AVAIL  REFER  MOUNTPOINT
data/usr@29MAY17           0      -       96K  -

4. Modify my file:
Code:
root@nightmaremoon:~ # echo rollback >> /usr/test.txt
root@nightmaremoon:~ # cat /usr/test.txt
hello
rollback

5. Rollback
Code:
root@nightmaremoon:~ # zfs rollback data/usr@29MAY17

6. Check my modified file:
Code:
root@nightmaremoon:~ # cat /usr/test.txt
hello
rollback

So what did I do wrong here? My belief was that when I ran the rollback command, any changes made after the snapshot would be eliminated, meaning the rollback line in /usr/test.txt should no longer be there.
Am I misunderstanding what zfs snapshotting does? Or did I do something wrong in this process?
 
You've been tripped up by a subtle ZFS setting. To support boot environments via sysutils/beadm, data/usr has its ZFS property 'canmount' set to 'off', so it mainly exists so that the ZFS filesystems underneath it (e.g. data/usr/home) can inherit their mountpoints (e.g. /usr/home) from it. (Note that data/var is configured the same way.) Everything you see under the /usr directory that is not part of another ZFS filesystem is actually stored in your current root filesystem (data/ROOT/default, mounted on /).
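You can check this yourself. A quick look (dataset names taken from your zfs list above) would be something like:

Code:
root@nightmaremoon:~ # zfs get canmount,mountpoint data/usr data/usr/home
root@nightmaremoon:~ # df /usr

The zfs get should report canmount off for data/usr (and on for data/usr/home), and df /usr should show the /usr directory living on the filesystem mounted at / -- i.e. data/ROOT/default.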

If you make your snapshot in data/usr/home, and edit a file in /usr/home, you will get your expected results after rollback.
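A sketch of that experiment (the file and snapshot names here are made up):

Code:
root@nightmaremoon:~ # echo hello > /usr/home/test.txt
root@nightmaremoon:~ # zfs snapshot data/usr/home@test
root@nightmaremoon:~ # echo rollback >> /usr/home/test.txt
root@nightmaremoon:~ # zfs rollback data/usr/home@test
root@nightmaremoon:~ # cat /usr/home/test.txt
hello
root@nightmaremoon:~ # zfs destroy data/usr/home@test

This time the appended line disappears after the rollback, because /usr/home really is the data/usr/home dataset.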

As a side note, be sure to look into boot environments with sysutils/beadm; they are one of the best features built on ZFS out there.
 
Tested it again with a file under ~myusername, and the rollback worked as expected, thank you.

So this problem is due to the canmount property being OFF? Or at least, the problem is indicated by this? Can I assume then that anything with ON set will work properly with a snapshot and a rollback, and everything with it set to OFF will not work? Running # zfs get canmount shows it set to OFF for data/usr and data/var, noauto on data/ROOT/default, and ON for everything else.
 
So this problem is due to the canmount property being OFF?

That's not the right way to think of this. There wasn't a problem; it's just a little confusing at first. Everything is working as designed.

The confusion is that the data/usr filesystem is there as a placeholder only, to enable the data/usr/home, data/usr/ports, etc. filesystems to inherit their mountpoints. Depending on your point of view, this is an elegant solution, or a confusing solution. I think for most users, it starts as confusing, and becomes elegant when they grok what is going on.

It all comes back to boot environments (BEs) and beadm(1). The idea is to be able to create a new BE (typically by cloning the active one first), then perform upgrades -- either to the OS or to programs (ports) -- and finally boot into the new environment. If something didn't work, you can go back to the old environment. You can even recover from a failed upgrade that renders the system unbootable, as the boot menu supports switching environments. So they're really cool and powerful, but they make things (the ZFS mountpoint layout, in particular) a little complicated.
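A typical upgrade with sysutils/beadm looks roughly like this (the BE name is just an example):

Code:
root@nightmaremoon:~ # beadm create pre-upgrade
root@nightmaremoon:~ # freebsd-update fetch install
(reboot, test things out; if the upgrade went badly:)
root@nightmaremoon:~ # beadm activate pre-upgrade
root@nightmaremoon:~ # shutdown -r now

beadm create clones the currently active BE, and beadm activate marks a BE to be booted on the next restart.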

Some things, like documents you create in /usr/home/yourname, you -- typically -- don't want to be tied to switching (think reverting) between BEs, and some others, like /usr/ports, you don't need to have tied to environments, as they are tied to some external state anyway (via portsnap fetch and portsnap update). All the other default filesystems you see beyond data/ROOT/default are trying to capture everything you DON'T want in the BE. (With the notable exception of data/usr and data/var, which again, are just there as placeholders and are not mounted.)

The rest of the system, like /bin, /sbin, /usr/bin, /usr/sbin, and even /usr/local, you want contained and controlled by the BE such that it can do its job of capturing system (OS & programs) state.

The BEs live (in this setup) under data/ROOT/, with the only one in the default install named 'default'. This is why data/ROOT/default is set to the root (/) mount point. Everything created on the computer that isn't under a different mountpoint (/usr/home is a different mountpoint, mounted from data/usr/home) is created as part of the root (data/ROOT/default) filesystem.

So... /usr (the directory), and /usr/test.txt (the test file you created) were part of the data/ROOT/default dataset, and not part of the (again, not mounted, as canmount=off) data/usr dataset. This is why (and again, this is what is expected by design, if not expected by the user) reverting data/usr had no impact on /usr/test.txt. You could have done the experiment with the /usr/test.txt file, but you would need to perform the snapshot of data/ROOT/default (as that is the dataset containing /usr/test.txt), and you would quickly find reverting the root filesystem is... difficult to do when you're using it. ;)

Which brings us full circle back to boot environments. BEs make snapshots of the root system much more useful. You can create a new BE from an old snapshot, and then boot into it, performing the 'revert' by rebooting into the new BE, and then destroying the no-longer-wanted one.
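Sketched with sysutils/beadm (the snapshot and BE names here are invented):

Code:
root@nightmaremoon:~ # zfs snapshot data/ROOT/default@known-good
(... time passes, something goes wrong ...)
root@nightmaremoon:~ # beadm create -e default@known-good reverted
root@nightmaremoon:~ # beadm activate reverted
root@nightmaremoon:~ # shutdown -r now

The -e flag tells beadm to create the new BE from the named snapshot instead of from the currently active BE, and the old, broken BE can be destroyed once you've booted into the new one.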

So. Probably a longer answer than you wanted. Nothing is working the wrong way, it is just the initial setup is unfortunately confusing. Don't set data/usr to be mountable if you want to use BEs. You can probably tell, but I think you should use sysutils/beadm and BEs, as they are one of the best features of FreeBSD in my opinion.

BTW, an easy way to see which filesystem you are currently in (.) is to run df . -- try it in /usr to see what I mean, and the source of all this confusion.
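For example (actual df output trimmed; your numbers will differ):

Code:
root@nightmaremoon:~ # cd /usr && df .
(reports data/ROOT/default, mounted on /)
root@nightmaremoon:~ # cd /usr/home && df .
(reports data/usr/home, mounted on /usr/home)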
 
I don't want to turn this into an endless discussion, but I disagree when you state that there wasn't a problem, Eric A. Borisch, because the common behavior for ZFS is to have the canmount property set to on, after which the full set of filesystems becomes accessible the very moment you import a ZFS pool and provide a (temporary) root (/) mountpoint.

Although I can see why they chose this setup, it also makes for a needlessly confusing situation, because suddenly ZFS does not behave as it's supposed to (read: as it normally would without all these hacks). The reason I use words such as hacks and problems is that this setup also assumes that everyone will be using beadm, which I think is a bit odd.

Not to mention that they could also have gone the other way around: instead of trying to separate nearly all parts from the main system (the zpool/usr placeholder and such), they could have secluded the boot environment itself. That would make for a much more logical approach, because then the only thing excluded from being mounted automatically would be the boot environment.

This whole setup is precisely why I usually suggest not using the installer if you want ZFS on root, but instead setting the structure up manually in the shell during installation. I think the default is a bit flawed and causes more problems than it's worth. Most people will only run into this when they're trying to repair their system, and nearly all available ZFS documentation out there will tell you that your filesystems become available as soon as # zpool import is run, which is why I consider this setup to be a crude hack.

Just my opinion obviously. Once again: I can see why they did it like this (though I think there's a much better alternative; I'd have to work on that a bit more), but I'm one of those people who continue to consider it overly confusing ;)
 
I don't want to turn this into an endless discussion, but I disagree when you state that there wasn't a problem, Eric A. Borisch, because the common behavior for ZFS is to have the canmount property set to on, after which the full set of filesystems becomes accessible the very moment you import a ZFS pool and provide a (temporary) root (/) mountpoint.

Just my opinion obviously. Once again: I can see why they did it like this (though I think there's a much better alternative; I'd have to work on that a bit more), but I'm one of those people who continue to consider it overly confusing ;)

I'm just saying it is working as configured. You're saying the configuration choices (defaults from installer) are wrong. Both statements can be true. :)

Two quick points:

1) With a system like ZFS, you have much more flexibility and many more choices for how to do things than with traditional partitions and filesystems. That makes it much less likely that there will be one right way (or even a majority-popular way) to configure something. Polish yours up and submit it as a PR and maybe it will become the default.

2) You don't have to use beadm to use BEs, but it does make it convenient. Not using such a powerful tool is really missing out.

Just my 2c. Thanks for sharing your config (and post the PR when you make it!)
 