ZFS zfs snapshot works on /usr/home and not on /usr

Emrion

Member

Thanks: 6
Messages: 43

#1
Hello,

I'm trying to use zfs snapshot and there is something that escapes me. Maybe it's obvious, but...

[The system is: FreeBSD 11.2-RELEASE FreeBSD 11.2-RELEASE #0 r335510]

When I test zfs snapshot / rollback on /usr/home, it works well:

root@FreeBSD:/home # zfs snapshot zroot/usr/home@t1
root@FreeBSD:/home # touch test1
root@FreeBSD:/home # ls test1
test1
root@FreeBSD:/home # zfs rollback zroot/usr/home@t1
root@FreeBSD:/home # ls test1
ls: test1: No such file or directory
root@FreeBSD:/home # zfs destroy zroot/usr/home@t1
root@FreeBSD:/home #


I create a file after the snapshot and then I roll back. The file has disappeared after the rollback. This result seems normal to me.

But when I do the same on zroot/usr:

root@FreeBSD:/usr # zfs snapshot zroot/usr@t2
root@FreeBSD:/usr # touch test2
root@FreeBSD:/usr # ls
bin include lib32 libexec obj sbin src tests
home lib libdata local ports share test2
root@FreeBSD:/usr # zfs rollback zroot/usr@t2
root@FreeBSD:/usr # ls
bin include lib32 libexec obj sbin src tests
home lib libdata local ports share test2
root@FreeBSD:/usr #


As you can see, it doesn't work: the created file, test2, is still present after the rollback.

What did I miss?
 

ShelLuser

Son of Beastie

Thanks: 1,490
Messages: 3,262

#2
I can only speculate, but I'd say this is most likely a caching issue. Generally speaking, /usr sees a lot more use than /home does:

Code:
peter@zefiris:/home/peter $ fuser -c /home
/home: 91482c 91457c 91446c 91441c 91434c 46707c 84388c  1374c  1367c
peter@zefiris:/home/peter $ fuser -c /usr
/usr: 91495rx 91457rx 91446rx 91441r 91434r 46707r 84388r 39138rc 39136rc  1374r  1367r
And that can have an effect on a filesystem.

Which leads to my question: does the file still exist after a reboot? And what happens if you try to remove it (using rm test2)?
 

xtaz

Well-Known Member

Thanks: 82
Messages: 361

#3
/usr isn't actually a mounted filesystem. It's only a ZFS dataset for the purpose of adding other child datasets under it like /usr/home, /usr/src, etc.

/usr is actually mounted as part of the root / dataset so you need to snapshot this one. If you've never changed it from the defaults this will be zroot/ROOT/default.

Type zfs mount to see!

This is set up like this for the purposes of ZFS boot environments, where you can select a different environment to boot from the loader menu or with the beadm utility. Have a read of beadm(1).
 

ShelLuser

Son of Beastie

Thanks: 1,490
Messages: 3,262

#4
/usr isn't actually a mounted filesystem. It's only a ZFS dataset for the purpose of adding other child datasets under it like /usr/home, /usr/src, etc.
Good theory, but it also assumes that the OP used the standard installer to set up that filesystem.

However, it's actually zroot which isn't used as a real filesystem but as a placeholder, because if you do use the (IMO totally braindead) default installation then your default root filesystem is zroot/ROOT/default (or something close enough).

Emrion: time to use zfs list to check which filesystems you have defined and where they are mounted. That should give a conclusive answer to your question.
 

Rigoletto

Daemon
Developer

Thanks: 748
Messages: 1,654

#5
zroot/usr isn't mounted in the default installation, as pointed out by xtaz; otherwise it would break beadm (and the upcoming bectl).
 

Emrion

Member

Thanks: 6
Messages: 43

#6
Thanks to you all. Actually, this is a default installation.

I didn't realize that a snapshot needs a mounted file system as its target, even though the man page gives "filesystem@snapname" as the parameter (furthermore, zfs snapshot zroot/usr@t2 issues no warning).

It works with zroot/ROOT/default. But that covers the whole file system, not just /usr. I was testing this in case of a problem with pkg upgrade.

I've read beadm(1) and I think I need to get to know zfs snapshot better before using it.
 

ShelLuser

Son of Beastie

Thanks: 1,490
Messages: 3,262

#7
I didn't realize that a snapshot needs a mounted file system as target;
It doesn't. All it needs is a valid ZFS filesystem aka dataset.

Would I be right to assume that you manually created zroot/usr and then figured it would be mounted on /usr?

It works with zroot/ROOT/default. But this concerns the whole file system not just /usr. I was testing that in case of problem with pkg upgrade.
That's because the installer doesn't set up a fully customized hierarchy out of the box. Note though that I'm not trying to imply that separating /usr from the root filesystem is feasible where ZFS is concerned (I did keep them separated when I still used UFS).

However, why not keep /usr/local separated instead? That's where all the packages get installed to anyway...

Code:
zroot                  98.8G  44.6G  7.26G  /
zroot/doc               560M  44.6G   560M  /usr/doc
zroot/home             26.3G  44.6G  26.3G  /home
zroot/local            19.4G  44.6G  18.0G  /usr/local
zroot/opt              13.6G  44.6G  9.14G  /opt
This is a customized (manual) installation, comparable to the way I used ZFS on Solaris. The point being: zroot/local is the dataset ("filesystem") mounted on /usr/local, which is where all ports and/or packages get installed. If you want to keep control over your packages, my suggestion would be to set up something like this: not the zroot part obviously, but a separate filesystem mounted as /usr/local.
 

phoenix

Administrator
Staff member
Administrator
Moderator

Thanks: 1,171
Messages: 4,011

#8
Snapshots work at the ZFS dataset level. However, not every dataset corresponds to a mounted filesystem.

So, if you have a standard install, then you have a zroot/usr dataset, but /usr is just a normal directory under the / filesystem (which is actually part of the zroot/ROOT/default dataset).

So you can create a file /usr/testing and snapshot zroot/usr without actually affecting the /usr/testing file. To capture it, you would have to snapshot the zroot/ROOT/default dataset instead.

Compare the output of these two commands to see the difference between a zfs dataset and a mounted filesystem:
Code:
zfs list
df -h
Pay particular attention to the MOUNTPOINT and Mounted on columns.
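The distinction can be made concrete with a toy sketch that needs no ZFS at all: a path is resolved against the dataset whose mountpoint is its longest matching prefix, which is why /usr/test2 lands in the root dataset rather than in zroot/usr. The dataset names and mountpoints below are invented sample data in the style of a default layout, not read from a real pool.

```shell
# Toy illustration (no ZFS needed): resolve a path to the dataset whose
# mountpoint is its longest matching prefix, as the kernel effectively does.
# The "dataset mountpoint" pairs below are invented sample data.
find_dataset() {
  p=$1
  printf '%s\n' \
    'zroot/ROOT/default /' \
    'zroot/usr/home /usr/home' \
    'zroot/var/log /var/log' |
  awk -v p="$p" '
    {
      mp = $2
      pref = (mp == "/") ? "/" : (mp "/")
      # mp covers p if p equals mp or p starts with mp followed by "/"
      if (p == mp || index(p, pref) == 1) {
        if (length(mp) >= best) { best = length(mp); name = $1 }
      }
    }
    END { print name }'
}

find_dataset /usr/test2        # resolves to zroot/ROOT/default, not zroot/usr
find_dataset /usr/home/test1   # resolves to zroot/usr/home
```

Since zroot/usr never appears as a mountpoint in the table, nothing under /usr (except /usr/home) can ever live in it.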
 

Eric A. Borisch

Well-Known Member

Thanks: 214
Messages: 344

#9
Or I like this view:

zfs list -ro mounted,canmount,mountpoint,name

I expect you will find that zroot/usr is not mounted, with canmount set to off or noauto and a mountpoint of /usr. So while you can take a snapshot of it, the dataset isn't mounted and doesn't contain any files, so the snapshot has no visible effect.
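Given output in that column order, the placeholder datasets are exactly the rows with a mountpoint set but mounted=no, so a one-line awk filter picks them out. The sample rows below are fabricated for illustration, not taken from a real system:

```shell
# Sketch: filter sample output in `zfs list -H -ro mounted,canmount,mountpoint,name`
# column order (the rows here are made up) for datasets that have a
# mountpoint configured but are not actually mounted.
placeholders=$(printf '%s\n' \
  'yes on     /          zroot/ROOT/default' \
  'no  off    /usr       zroot/usr' \
  'yes on     /usr/home  zroot/usr/home' \
  'no  noauto /var       zroot/var' |
  awk '$1 == "no" && $3 != "none" { print $4 }')
echo "$placeholders"
```

On this sample it prints zroot/usr and zroot/var: the datasets whose snapshots will never show a visible rollback.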

Try out my (shameless plug) zfs_versions tool (eborisch/zfs_versions on GitHub) on an existing file to see the snapshotted versions of it (and where they reside).
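Under the hood, tools like that walk the dataset's hidden .zfs/snapshot directory, where every snapshot exposes a read-only copy of the tree. Here is a plain-directory mock-up of that layout (the directory structure mirrors what ZFS exposes; the snapshot names and file contents are fabricated):

```shell
# Mock-up of browsing .zfs/snapshot for old versions of a file; the layout
# mirrors what ZFS exposes at a dataset's mountpoint, the data is fabricated.
root=$(mktemp -d)
mkdir -p "$root/.zfs/snapshot/t1" "$root/.zfs/snapshot/t2"
echo 'version 1' > "$root/.zfs/snapshot/t1/notes.txt"
echo 'version 2' > "$root/.zfs/snapshot/t2/notes.txt"

# list every snapshot that contains notes.txt, with its contents
for snap in "$root"/.zfs/snapshot/*/; do
  if [ -f "${snap}notes.txt" ]; then
    printf '%s: %s\n' "$(basename "$snap")" "$(cat "${snap}notes.txt")"
  fi
done
```

On a real pool the same loop over /usr/home/.zfs/snapshot/*/ would list the snapshotted versions of a file under /usr/home.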
 

Emrion

Member

Thanks: 6
Messages: 43

#10
That's ok, guys. I think I've understood. :)

Exploring the ZFS dataset properties gives me all the information I need.
Setting up /usr/local as its own dataset at installation time may be a good idea, but for now, since I have a default installation, the only solution is to use zroot/ROOT/default.
 

phoenix

Administrator
Staff member
Administrator
Moderator

Thanks: 1,171
Messages: 4,011

#11
You can get around that by creating it post-install. Something like:
Code:
# zfs create -o mountpoint=none -o canmount=off zroot/usr
# zfs create -o mountpoint=/mnt zroot/usr/local
# rsync --verbose --archive --hard-links ... /usr/local/ /mnt/
# rm -rf /usr/local/*
# zfs set mountpoint=/usr/local zroot/usr/local
(Note: typed in a phone, untested, don't copy/paste, etc.)
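One precaution worth adding before the rm -rf step is verifying the copy. The copy-then-verify idea can be rehearsed without ZFS at all, using throwaway directories; every path below is a placeholder standing in for /usr/local and the temporary /mnt mount:

```shell
# Rehearsal of the copy-then-verify step with plain directories (no ZFS):
# $old stands in for the current /usr/local, $new for the dataset
# temporarily mounted at /mnt. All paths here are throwaway placeholders.
old=$(mktemp -d)
new=$(mktemp -d)
mkdir -p "$old/bin"
echo 'hello' > "$old/bin/tool"

# copy everything, preserving attributes (rsync -aH in the real procedure)
cp -Rp "$old/." "$new/"

# verify the copy before deleting anything from the source
if diff -r "$old" "$new" >/dev/null; then
  echo 'copy verified'
fi
```

Only once the diff comes back clean would you go on to empty the old directory and move the mountpoint.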
 

Eric A. Borisch

Well-Known Member

Thanks: 214
Messages: 344

#12
This is what I use for a /usr/local that is kept/managed along with / by beadm(1):
Code:
$ zfs list -ro canmount,mountpoint,name system/ROOT/11.2-p4
CANMOUNT  MOUNTPOINT  NAME
  noauto  /           system/ROOT/11.2-p4
  noauto  none        system/ROOT/11.2-p4/usr
  noauto  /usr/local  system/ROOT/11.2-p4/usr/local
 

Eric A. Borisch

Well-Known Member

Thanks: 214
Messages: 344

#13
If you transition to this layout, you'll have to mount it elsewhere and copy over /usr/local's current contents first; rsync -avx or similar...
 