[Solved] Overmounts after manually creating file systems

For some reason ZFS is mounting file systems in the wrong order. When I create a new ZFS file system it is mounted correctly:
Code:
[root@system ~]# zfs create zroot/usr/jails 
[root@system ~]# zfs set mountpoint=/usr/jails zroot/usr/jails
[root@system ~]# mount
zroot on / (zfs, local, nfsv4acls)
devfs on /dev (devfs)
zroot/tmp on /tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr on /usr (zfs, local, nfsv4acls)
[...]
zroot/usr/jails on /usr/jails (zfs, local, nfsv4acls)
[root@system ~]#

Everything is fine until reboot. It seems zfs mount -a mounts my new file system before /usr is mounted, and it therefore gets overmounted.

Code:
[root@pigwalk ~]# mount
zroot on / (zfs, local, nfsv4acls)
devfs on /dev (devfs)
zroot/usr/jails on /usr/jails (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr on /usr (zfs, local, nfsv4acls)
[...]
[root@system ~]#

I'm trying to understand what's going on here and where I made a mistake. Does anybody have thoughts about this setup?
 
I was looking into that already, and it looks OK:
Code:
[root@system ~]# grep zfs /etc/fstab
[root@system ~]# ls -l /etc/fstab
-rw-r--r--  1 root  wheel  0 Jun 24 15:47 /etc/fstab
[root@system ~]#

Code:
[root@system ~]# mount|grep zfs
zroot on / (zfs, local, nfsv4acls)
zroot/tmp on /tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr on /usr (zfs, local, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, nfsv4acls)
zroot/usr/jails on /usr/jails (zfs, local, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, nosuid, nfsv4acls)
zroot/usr/ports/distfiles on /usr/ports/distfiles (zfs, local, noexec, nosuid, nfsv4acls)
zroot/usr/ports/packages on /usr/ports/packages (zfs, local, noexec, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var on /var (zfs, local, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/db on /var/db (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/db/pkg on /var/db/pkg (zfs, local, nosuid, nfsv4acls)
zroot/var/empty on /var/empty (zfs, local, nosuid, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/run on /var/run (zfs, local, noexec, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr/jails/fulljail on /zroot/usr/jails/fulljail (zfs, local, nfsv4acls)
[root@system ~]#

Code:
[root@system ~]# zfs list -o name,canmount,mountpoint
NAME                       CANMOUNT  MOUNTPOINT
zroot                      on        /zroot
zroot/swap                 -         -
zroot/tmp                  on        /zroot/tmp
zroot/usr                  on        /zroot/usr
zroot/usr/home             on        /zroot/usr/home
zroot/usr/jails            on        /zroot/usr/jails
zroot/usr/jails/fulljail   on        /zroot/usr/jails/fulljail
zroot/usr/ports            on        /zroot/usr/ports
zroot/usr/ports/distfiles  on        /zroot/usr/ports/distfiles
zroot/usr/ports/packages   on        /zroot/usr/ports/packages
zroot/usr/src              on        /zroot/usr/src
zroot/var                  on        /zroot/var
zroot/var/crash            on        /zroot/var/crash
zroot/var/db               on        /zroot/var/db
zroot/var/db/pkg           on        /zroot/var/db/pkg
zroot/var/empty            on        /zroot/var/empty
zroot/var/log              on        /zroot/var/log
zroot/var/mail             on        /zroot/var/mail
zroot/var/run              on        /zroot/var/run
zroot/var/tmp              on        /zroot/var/tmp
[root@system ~]#

So from this perspective all looks OK, and it's puzzling to me why a newly created file system is mounted BEFORE the ones created during install, no matter whether I create it manually or with ezjail-admin install.
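To illustrate the ordering property that has to hold (just a sketch of the required property, not how zfs mount -a is actually implemented): a parent mountpoint must always be mounted before any path beneath it, and sorting the mountpoints as plain strings happens to give such an order, because a parent path is a string prefix of its children:

```shell
#!/bin/sh
# Sketch: print the mountpoints from this thread in an order that can never
# overmount -- a lexicographic sort always puts a parent path (a string
# prefix) before its children.
printf '%s\n' /usr/jails / /tmp /usr | sort
# /           comes first,
# /tmp /usr   next,
# /usr/jails  last, i.e. after its parent /usr
```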
 
I've had the same problem. It means the order in which the mounts happen is not how you had planned it in your head. With ZFS, fstab, canmount, and inheritance you can end up with that.
 
zfs set mountpoint=/usr/jails2 zroot/usr/jails
Then check whether there is data in /usr/jails; if so, delete or move it.
There's no data there, just an empty file system.

Did you actually run that command at some point?

(Just curious. I see it in /etc/rc.d/zfs.)

Also:

sysctl security.jail.mount_allowed
zfs mount -a is executed during the boot process. It has nothing to do with the sysctl security.jail.mount_allowed.
Please, what do you mean by overmounted? (What's the impact?)
An overmount happens when a file system is mounted before its parent: in this example /usr/jails is mounted before /usr, so the later mount of /usr effectively "covers" your /usr/jails.
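A minimal sketch of how one could spot such a situation from the mount order alone (the list below hard-codes the broken order from this thread; it is not a real parser of mount(8) output):

```shell
#!/bin/sh
# Sketch: walk a list of mountpoints in mount order and flag any earlier
# mount that a later mount would cover. The list mimics the broken boot
# order seen in this thread.
mounts="/
/usr/jails
/usr"
found=""
seen=""
for mp in $mounts; do
    for earlier in $seen; do
        case $earlier in
            "$mp"/*) found="$earlier is covered by the later mount of $mp" ;;
        esac
    done
    seen="$seen $mp"
done
echo "$found"
# -> /usr/jails is covered by the later mount of /usr
```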
 
A wrong order should not happen.
Try setting canmount to noauto for zroot/usr/jails, then reboot. If everything is fine, you can add a line
zfs mount zroot/usr/jails to rc.local, or an entry to fstab, and check whether the order is OK.
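For the fstab variant, a sketch of what the entry could look like (assuming canmount=noauto is already set on the dataset; FreeBSD's late keyword defers the mount to the mountlate stage, after the regular file systems are up):

```
# /etc/fstab
zroot/usr/jails    /usr/jails    zfs    rw,late    0    0
```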
 
Code:
zroot/usr on /usr (zfs, local, nfsv4acls)

NAME                       CANMOUNT  MOUNTPOINT
zroot/usr                  on        /zroot/usr
That doesn't look correct. On all my systems zroot/usr is not mounted:
Code:
# zfs get -d 1 mounted zroot/usr
NAME             PROPERTY  VALUE    SOURCE
zroot/usr        mounted   no       -
zroot/usr/home   mounted   yes      -
zroot/usr/ports  mounted   yes      -
zroot/usr/src    mounted   yes      -

# zfs list -o name,canmount,mountpoint | egrep 'NAME|usr'
NAME                CANMOUNT  MOUNTPOINT
zroot/usr           off       /usr
zroot/usr/home      on        /usr/home
zroot/usr/ports     on        /usr/ports
zroot/usr/src       on        /usr/src
 
Indeed, I have canmount=noauto for /
and canmount=off for:
/usr
/var
Some subdirectories of those two directories are not part of what is considered the boot environment.
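For reference, a sketch of that kind of BE-friendly layout (property values follow what's described above; the dataset names are the usual zroot ones and may differ on your system):

```
# Boot environment lives under zroot/ROOT/<name>; the containers are not mounted.
zroot/ROOT/default   canmount=noauto   mountpoint=/
zroot/usr            canmount=off      mountpoint=/usr       # container only
zroot/usr/home       canmount=on       mountpoint=/usr/home
zroot/var            canmount=off      mountpoint=/var       # container only
zroot/var/log        canmount=on       mountpoint=/var/log
```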
 
Well, changing canmount=noauto did not do the trick; this is still what mount returns:

Code:
[root@system /usr]# mount
zroot on / (zfs, local, nfsv4acls)
devfs on /dev (devfs)
zroot/usr/jails on /usr/jails (zfs, local, noatime, nfsv4acls)
zroot/usr/jails/basejail on /usr/jails/basejail (zfs, local, noatime, nfsv4acls)
zroot/usr/jails/newjail on /usr/jails/newjail (zfs, local, noatime, nfsv4acls)
zroot/tmp on /tmp (zfs, local, nosuid, nfsv4acls)
zroot/usr on /usr (zfs, local, nfsv4acls)

The problem is that, for some reason, the newly created ZFS file system is mounted right after / and /dev, and I can't figure out why. I should note, though, that the zpool and all ZFS file systems were created manually during install.
 
What I had to do in this case was boot from a USB stick and import the zpool with altroot set, to check and verify the actual layout.
 
I fixed it eventually, but I'm not entirely happy with the solution.

What actually solved it was setting the mountpoint of zroot/usr to /usr:

Code:
zfs set mountpoint=/usr zroot/usr

From now on, file systems created with zfs create tank/file/system get the correct mountpoints and there are no overmounts. Previously the mountpoint was inherited from zroot itself, which pointed to /zroot/usr/jails instead of /usr/jails.
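The inheritance rule itself is simple: a child's inherited mountpoint is the parent's mountpoint plus the child's dataset path relative to the parent. A quick sketch with the names from this thread (not real zfs calls, just the path arithmetic):

```shell
#!/bin/sh
# Sketch of ZFS mountpoint inheritance: parent mountpoint plus the child's
# dataset path relative to the parent.
parent_ds=zroot/usr
child_ds=zroot/usr/jails
inherited() {   # inherited <parent-mountpoint>
    echo "$1${child_ds#"$parent_ds"}"
}
inherited /zroot/usr   # before the fix: /zroot/usr/jails
inherited /usr         # after the fix:  /usr/jails
```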

What still bugs me is that now the SOURCE for zroot/usr is "local" instead of "default":
Code:
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local

All of them:
Code:
[root@system ~]# zfs get -r mountpoint zroot
NAME                       PROPERTY    VALUE                 SOURCE
zroot                      mountpoint  /zroot                default
zroot/swap                 mountpoint  -                     -
zroot/tmp                  mountpoint  /zroot/tmp            default
zroot/usr                  mountpoint  /usr                  local
zroot/usr/home             mountpoint  /usr/home             inherited from zroot/usr
zroot/usr/jails            mountpoint  /usr/jails            inherited from zroot/usr
zroot/usr/ports            mountpoint  /usr/ports            inherited from zroot/usr
zroot/usr/ports/distfiles  mountpoint  /usr/ports/distfiles  inherited from zroot/usr
zroot/usr/ports/packages   mountpoint  /usr/ports/packages   inherited from zroot/usr
zroot/usr/src              mountpoint  /usr/src              inherited from zroot/usr
zroot/var                  mountpoint  /zroot/var            default
zroot/var/crash            mountpoint  /zroot/var/crash      default
zroot/var/db               mountpoint  /zroot/var/db         default
zroot/var/db/pkg           mountpoint  /zroot/var/db/pkg     default
zroot/var/empty            mountpoint  /zroot/var/empty      default
zroot/var/log              mountpoint  /zroot/var/log        default
zroot/var/mail             mountpoint  /zroot/var/mail       default
zroot/var/run              mountpoint  /zroot/var/run        default
zroot/var/tmp              mountpoint  /zroot/var/tmp        default

Any thoughts are welcome.
 
What still bugs me is that now the SOURCE for zroot/usr is "local" instead of "default"
Why do you think that's a problem?
Code:
root@molly:~ # zfs get mountpoint,canmount zroot/usr
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local
zroot/usr  canmount    off         local
Code:
root@maelcum:~ # zfs get mountpoint,canmount zroot/usr
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local
zroot/usr  canmount    off         local
 
Why do you think that's a problem?
Code:
root@molly:~ # zfs get mountpoint,canmount zroot/usr
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local
zroot/usr  canmount    off         local
Code:
root@maelcum:~ # zfs get mountpoint,canmount zroot/usr
NAME       PROPERTY    VALUE       SOURCE
zroot/usr  mountpoint  /usr        local
zroot/usr  canmount    off         local
I cross-checked with my other systems, and you're correct. I didn't say it's a problem, but I had the impression that it shouldn't be like that.

Anyway, it does work, but now I need to go through my script for automating installation on remote machines. I'm thinking of sharing it in a separate thread, as it might be useful for installing a dedicated machine from scratch.
 
generic, when building a machine from scratch with root on ZFS, keep in mind the structure needed to support Boot Environments. You want to duplicate that configuration, or you can break BEs. If you haven't done that yet, an easy way to see it is a simple install, either on hardware or in a VM, and then running zpool history on zroot. That shows you the commands you need for creating the datasets and such.
 
I've written a very small shell script to create the zpool and ZFS file systems automatically. It creates /usr and /var with canmount=off.
But I'm afraid to use it with boot environments because it's home-brew.
 
I have fixed my script and now everything works fine; basically what I did was:

Code:
zpool create -o altroot=/mnt -O atime=off -m none -f zroot <device>
zfs create -o mountpoint=none zroot/ROOT
zfs create -o mountpoint=/ zroot/ROOT/default
zfs create -o mountpoint=/zroot zroot/ROOT
zfs create -o mountpoint=/usr zroot/usr
zfs create -o mountpoint=/usr/home zroot/usr/home
zfs create -o mountpoint=/var zroot/var
zfs create -o mountpoint=/tmp -o exec=on -o setuid=off zroot/tmp
[...]
zpool set bootfs=zroot/ROOT/default zroot
zpool set cachefile=/mnt/boot/zfs/zpool.cache zroot
 
A slight language barrier? (Sorry.)

From the beginning. I wonder whether you were using a script when you began this topic, or whether it came later.
 