Suggestions on how to work with snapshots for /usr

I would love to use sysutils/beadm to manage all the snapshots/clones for the entire system, because it is so easy to use. From what I have read, that is not possible with a dataset layout where tank0/usr is not a child of tank0/ROOT. The reason for that layout is to keep the base system separate from locally installed software, which makes perfect sense, but I still want to keep snapshots of /usr, especially before installing or upgrading a port or package. So I tried to do it manually, but I seem to have a few problems.

# zfs list

Code:
NAME                            USED  AVAIL  REFER  MOUNTPOINT
tank0                          8.62G  20.7G   144K  legacy
tank0/ROOT                     1.07G  20.7G   144K  legacy
tank0/ROOT/default             1.07G  20.7G  1.07G  /mnt
tank0/swap                     2.06G  22.7G  16.3M  -
tank0/tmp                       288K  20.7G   288K  /tmp
tank0/usr                      2.98G  20.7G   144K  /mnt/usr
tank0/usr/home                  292K  20.7G   196K  /usr/home
tank0/usr/jails                1.89G  20.7G   260K  /usr/jails
tank0/usr/obj                   144K  20.7G   144K  /usr/obj
tank0/usr/ports                1.08G  20.7G   810M  /usr/ports
tank0/usr/ports/distfiles       297M  20.7G   297M  /usr/ports/distfiles
tank0/usr/src                   144K  20.7G   144K  /usr/src
tank0/var                       680K  20.7G   144K  /mnt/var
tank0/var/audit                 144K  20.7G   144K  /var/audit
tank0/var/log                   240K  20.7G   240K  /var/log
tank0/var/tmp                   152K  20.7G   152K  /var/tmp

# zfs snapshot tank0/usr@test

# zfs list -t snapshot
Code:
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
tank0/usr@test                                  0      -   144K  -

Then I delete some test files in /usr to see whether I can recover them from the snapshot, so I create a clone:

# zfs clone tank0/usr@test tank0/usrTEST

#zfs list

Code:
NAME                            USED  AVAIL  REFER  MOUNTPOINT
...
tank0/usr                      2.98G  20.7G   144K  /mnt/usr
tank0/usrTEST                     8K  20.7G   144K  legacy
...

Why is it mounted as legacy and not at /mnt/usrTEST? I can't access this clone. I tried with a different filesystem, so I created a snapshot and then a clone of tank0/usr/home. That one gets mounted properly and I can access the deleted files.

Code:
...
tank0/usr/home                  292K  20.7G   196K  /usr/home
tank0/usr/homeTEST                8K  20.7G   196K  /mnt/usr/homeTEST
...

However, I noticed that it is mounted under /mnt/usr and not under /usr, where the original filesystem is mounted. Is there a reason for this? Could this create any problems in the future, especially if I want to promote this clone?

So why do clones of /usr get mounted as legacy, and how can I change this? Why is there a slight difference in the mount point of the other child filesystems of /usr? Will any of this create problems in the future?

Is there another approach to tackle this issue? How do people work with snapshots for /usr?

Thanks
 
Something like

# zfs create -o mountpoint=/usr/local tank0/local

was what I used. I also created

# zfs create -p tank0/var/db/pkg

And before updating the ports, I take snapshots of both.
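
For example, a pre-upgrade pass might look like this (the snapshot name is just an example):

Code:
# zfs snapshot tank0/local@before-portupgrade
# zfs snapshot tank0/var/db/pkg@before-portupgrade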
 
Mountpoints are inherited in ZFS, so if you have pool/usr set with a mountpoint of /usr and then create pool/usr/home, it will automatically mount at /usr/home.

Because the root of your pool is set to use the legacy mountpoint (usually done when you want traditional fstab or mount commands to control mounting), tank0/usrTEST, as a direct child of tank0, inherits that setting.
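
You can confirm this by checking where the property comes from; the SOURCE column should show it as inherited from tank0:

Code:
# zfs get mountpoint tank0/usrTEST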

The easiest way to access that clone you made would be to temporarily mount it somewhere by running mount -t zfs tank0/usrTEST /some/available/path.
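
Alternatively, you could give the clone a mountpoint of its own so that it mounts automatically, either after the fact or at clone time (the path is just an example):

Code:
# zfs set mountpoint=/mnt/usrTEST tank0/usrTEST
(or, when creating the clone)
# zfs clone -o mountpoint=/mnt/usrTEST tank0/usr@test tank0/usrTEST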
 
t1066 said:
Something like

# zfs create tank0/var/pkg

And before updating the ports, I take snapshots of both.

Thanks for pointing this out. I didn't pay any attention to /var, but it is important to have a snapshot of the pkg database too. At the moment I am using pkgng, which creates its own database. Is it possible to move this to /usr/local so that I can ignore /var when taking snapshots before installing/upgrading ports? I am trying to have as few steps as possible.


usdmatt said:
Because the root of your pool is set to use the legacy mountpoint (usually used if you want to use traditional fstab or mount to control mounting),

I wasn't aware that there are other (non-legacy) ways to have ZFS on root. I did a standard FreeBSD installation from the PC-BSD ISO, because they provide a vanilla version of FreeBSD with an out-of-the-box ZFS dataset layout. Are there any benefits to using non-legacy? What do people prefer here?

usdmatt said:
The easiest way to access that clone you made would be to temporarily mount it somewhere by running mount -t zfs tank0/usrTEST /some/available/path.

Ok. So if I manually create mount points to imitate the original ones, e.g.
# mkdir /usrTEST
# mkdir /usr/homeTEST

and then mount the clones
# mount -t zfs tank0/usrTEST /usrTEST
# mount -t zfs tank0/usr/homeTEST /usr/homeTEST

Will this work when I want to promote them?
 
I believe legacy is still the recommended option for the root filesystem - tank0/ROOT/default in your case.

I'm not sure what it is you're trying to do with the clones/promotion. I tend to stay away from these features unless I really need them, as I find the dependencies they create between file systems a bit of a pain.

If you just want to take snapshots and be able to view the original files, you can access the snapshots directly:

Code:
(assuming tank0/usr/home is a dataset mounted on /usr/home)
# zfs snapshot tank0/usr/home@day1
# zfs snapshot tank0/usr/home@day2
# zfs snapshot tank0/usr/home@day3
# cd /usr/home/.zfs/snapshot/day1 (<- go into snapshot directory from day 1)
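
From there you can copy whatever you need straight back out, e.g. (the user and file names are hypothetical):

Code:
# cp /usr/home/.zfs/snapshot/day1/someuser/lostfile /usr/home/someuser/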
 
usdmatt said:
I'm not sure what it is you're trying to do with the clones/promotion. I tend to stay away from these features unless I really need to use them for a purpose as I find the dependencies it creates between file systems a bit of a pain.

Well, if for example a port installation goes wrong or there is a power loss during a port upgrade, it will be easier to replace the whole tank0/usr filesystem by promoting a clone of a known-good snapshot, rather than trying to figure out all the files that need copying or deleting. However, I just read that zfs rollback might be a better solution in that case.
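
From what I read, that would be something like this (the snapshot name is just an example, and rollback discards everything written to the dataset after the snapshot):

Code:
# zfs snapshot tank0/usr@pre-upgrade
(port upgrade goes wrong)
# zfs rollback tank0/usr@pre-upgrade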

Thanks
 
blazingice said:
Thanks for pointing this. I didn't pay any attention to /var but it is important to have a snapshot of pkg database too. At the moment I am using pkgng which creates its own database. Is it possible to move this to /usr/local so that I can ignore /var when taking snapshots before installing/upgrading ports. I am trying to have as few steps as possible.
Ouch, sorry, it should be /var/db/pkg. You may try the following (I have not tested it)

Code:
# mv /var/db/pkg /usr/local/pkg
# ln -s /usr/local/pkg /var/db/pkg

Actually, you can also do a

# zfs create -o mountpoint=/var/db/pkg tank0/local/pkg

Then a simple zfs snapshot -r tank0/local@snapshot would take care of both.
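
That is, something like (snapshot name is arbitrary):

Code:
# zfs snapshot -r tank0/local@pre-upgrade
# zfs list -t snapshot -r tank0/local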
 
After reading around, I found that you can define the location of the database in /usr/local/etc/pkg.conf. So you can change it to e.g. /usr/local/pkgdb:

Code:
PKG_DBDIR   : /usr/local/pkgdb

The only problem is that if you have already installed some packages, you will need to manually copy everything in /var/db/pkg to the new location, otherwise ports-mgmt/pkgng will not list your previously installed packages. It seems to be working. I am not sure if it is advisable, but if there is an option for this, why not use it? This could be a cleaner solution than changing datasets and mount points.
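
The copy itself is just something like this (a sketch; adjust the paths to whatever you set in pkg.conf), with pkg info afterwards to check that the previously installed packages are still listed:

Code:
# mkdir /usr/local/pkgdb
# cp -Rp /var/db/pkg/ /usr/local/pkgdb/
# pkg info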
 
That's a very good idea, actually; with that, /usr/local becomes almost completely self-contained, package database included.
 
What about the other dirs

/usr/bin
/usr/include
/usr/lib
/usr/lib32
/usr/libdata
/usr/libexec
/usr/sbin


Is it essential to snapshot them in order to recover from a failed port installation? I am not sure what some of them do, so I don't know whether I should create a separate tank0/usr/local filesystem and just snapshot that before any installation, or leave the layout as I have it, so that tank0/usr covers both local and the directories above.
 
I guess in that case the safest thing to do is to combine snapshots of /usr/local with beadm, which should also cover ports that install files into the core system.
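
So before installing or upgrading anything, it would be something like this (a sketch, assuming a separate tank0/usr/local dataset and sysutils/beadm installed):

Code:
# beadm create pre-upgrade
# zfs snapshot tank0/usr/local@pre-upgrade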

Will beadm still work with a separate tank0/usr/local filesystem? Most of the dataset examples for beadm don't seem to have a separate filesystem for tank0/usr/local. Is that on purpose?
 