Solved Nested/children ZFS datasets and NFS exports for recursive mount

Oko

Daemon

Thanks: 768
Messages: 1,620

#1
I just spent the whole afternoon playing with the new file server I am about to deploy in a very small startup, and I have run into a serious problem.

Based on my previous experience managing ZFS-based file servers in Linux/UNIX computing environments, I decided this time around to use a separate dataset for each user's home directory. That enables me to set a quota per user and to easily destroy the home directories of rogue users. It also allows people to search the .zfs/snapshot directory without my assistance, and much more if I want.

So I have created a ZFS pool storage and a level-1 dataset called lab. On top of lab I created three level-2 datasets called project, data, and home. On top of home I created a few test home directories. The output of the zfs list command looks something like this:

Code:
root@hera:~ # zfs list
NAME                    USED  AVAIL  REFER  MOUNTPOINT
storage                1.17M  3.51T    96K  /storage
storage/lab             528K  3.51T    88K  /storage/lab
storage/lab/data         88K  3.51T    88K  /storage/lab/data
storage/lab/home        264K  3.51T    88K  /storage/lab/home
storage/lab/home/usr1    88K   512G    88K  /storage/lab/home/usr1
storage/lab/home/usr2    88K   512G    88K  /storage/lab/home/usr2
storage/lab/project      88K  3.51T    88K  /storage/lab/project

Ideally I would like to be able to put something like
Code:
/storage/lab  -alldirs -network 192.168.1.0/24
into /etc/exports and just mount everything on a Linux computing node with a command

Code:
mount -t nfs hera:/storage/lab /zfs
Unfortunately, as many of you know, NFS version 3 will prevent me from seeing the

Code:
/zfs/lab/home/usr1
/zfs/lab/home/usr2
folders on the target machine. I can see the level-2 datasets, i.e. the folders

Code:
/zfs/project
/zfs/home
/zfs/data
If I don't use datasets for the user directories and instead mount the parent dataset /storage/lab/home, everything works as expected. I tried to play with NFSv4, but the results are similar. I should also say that I am using the "classical" way to do NFS exports, meaning I didn't alter sharenfs=off. I tried to play with those properties as well, but to no avail. It looks like I have a very similar problem to this guy:

http://zfsguru.com/forum/zfsgurudevelopment/516

Short of having a script which mounts each user directory on the computing node, just as post 3 suggests, is there an elegant way to have each user directory as a separate dataset and to mount them recursively without needing to specify the absolute path of each home dataset in the exports file? I am OK if somebody shows me how to do this with NFSv4, although I must admit I am not thrilled to use that protocol. My main concern is that I don't want to have 30+ exports mounted on 10 different machines.
 

sko

Well-Known Member

Thanks: 206
Messages: 407

#3
Can you elaborate on how you solved this?

I just ran into the very same problem with clients that should mount the users' home directories via NFS, but instead of simply not seeing the nested datasets (which are also exported), the clients actually write onto the parent dataset, into a folder named identically to the nested (and mounted!) dataset.

The datasets on the NFS server are automatically created with the proper "sharenfs" options (and quotas, permissions, etc.) from the users advertised via NIS, and I _really_ don't want to try to manage dozens of fstab entries on dozens of clients...
I was going to set up automount on the clients to automatically mount subfolders in /usr/home on access, although I haven't yet tried whether this is actually possible. (It sounds like it should be, from the NFS page in the FreeBSD Handbook.)
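For what it's worth, a wildcard map on a FreeBSD client can do exactly that. A minimal sketch, assuming the server name and paths from this thread (hera, /storage/lab/home) and the stock automounter (see auto_master(5)):

```
# /etc/rc.conf -- enable the automounter daemons
autofs_enable="YES"

# /etc/auto_master -- hang the auto_home map under /usr/home
/usr/home       auto_home

# /etc/auto_home -- wildcard map: the first access to /usr/home/usr1
# mounts hera:/storage/lab/home/usr1 on demand
*       hera:/storage/lab/home/&
```

After `service automount restart`, each home directory is mounted individually on access, so the under-mount hiding problem never arises on the client.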
 
OP
OP
Oko

Oko

Daemon

Thanks: 768
Messages: 1,620

#4
Short answer: once I realized that I had rediscovered :oops: an ancient UNIX phenomenon called under-mount-point content hiding, the fix was easy. I just used autofs to mount all home directories/datasets, which are listed as absolute directory paths in the /etc/exports file. So if you have 1000 users' home directories, your /etc/exports file should have 1000 absolute directory paths, and if you have 50 servers, your file server could theoretically be serving 50,000 clients. I emphasize the word theoretically, as that would only be true if you used /etc/fstab to mount those directories rather than autofs. This is the correct way to export my original example:

Code:
/storage/lab/home/usr1 -alldirs -network 192.168.1.0/24
/storage/lab/home/usr2 -alldirs -network 192.168.1.0/24
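With many users, nobody wants to maintain those lines by hand; they can be generated from the pool itself. A sketch under my own assumptions (the gen_exports helper and the option string are illustration, not from the thread):

```shell
#!/bin/sh
# Hypothetical helper: turn a list of dataset mountpoints (one per line
# on stdin, e.g. from `zfs list -H -o mountpoint -r storage/lab/home`)
# into /etc/exports lines, skipping the parent mountpoint itself.
gen_exports() {
    parent="$1"
    while read -r mp; do
        [ "$mp" = "$parent" ] && continue
        printf '%s -alldirs -network 192.168.1.0/24\n' "$mp"
    done
}

# Example with the datasets from this thread:
printf '%s\n' /storage/lab/home /storage/lab/home/usr1 /storage/lab/home/usr2 \
    | gen_exports /storage/lab/home
```

On the real server one would feed it `zfs list -H -o mountpoint -r storage/lab/home`, splice the output into /etc/exports, and reload mountd (`service mountd reload`).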

My initial thought was that you were complicating things with the "sharenfs" options, which at first glance seemed unnecessary. My understanding was that FreeBSD is oblivious to Solaris ZFS sharing techniques (please read about the legacy sharing and ZFS sharing options in Solaris):

https://docs.oracle.com/cd/E19253-01/819-5461/gamnd/index.html

However, setting sharenfs=on will indeed have the positive effect of automatically creating entries in /etc/zfs/exports for all children of a shared dataset, i.e. if /storage/lab is shared then all its children will be shared too, which means that your /etc/zfs/exports will look like:

Code:
/storage/lab -alldirs -network 192.168.1.0/24
/storage/lab/home -alldirs -network 192.168.1.0/24
/storage/lab/home/usr1 -alldirs -network 192.168.1.0/24
/storage/lab/home/usr2 -alldirs -network 192.168.1.0/24
For the NFSv3 server to work, one doesn't have to copy that file to /etc/exports. It just works magically.
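For reference, the property that generates entries like those above is set on the parent dataset and inherited by its children. A sketch (the option string mirrors the example above; see zfs(8) on FreeBSD):

```
# Share the parent; every child dataset inherits sharenfs and gets its
# own line in /etc/zfs/exports, which mountd reads automatically.
zfs set sharenfs="-alldirs -network 192.168.1.0/24" storage/lab

# Verify what the children inherited:
zfs get -r sharenfs storage/lab
```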

Without much explanation, I will say that people should stick to the NFSv3 server on the BSDs at least, and possibly on Red Hat as well, as there are many negative things about NFSv4.

Since I started all this in order to have the convenience of giving users access to the .zfs/snapshot directory without logging into the file server, and of being able to destroy the home directories of rogue users, I will also point out the importance of setting up a reserved empty dataset for maintenance purposes on your file server, which will prevent filling the ZFS pool to 100% capacity in case autofs messes things up. People should also make sure to adjust the number of NFS server threads according to the number of expected clients. The default

Code:
nfs_server_flags="-u -t -n 4"
is simply sub-optimal on a file server with 48 or 64 hardware threads (as in my case).
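As a sketch only (the right thread count depends on workload and core count; see nfsd(8)), /etc/rc.conf on such a server might instead carry:

```
# Serve UDP and TCP with 64 nfsd threads instead of the default 4
nfs_server_flags="-u -t -n 64"
```

followed by `service nfsd restart` to pick up the change.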
 

sko

Well-Known Member

Thanks: 206
Messages: 407

#5
So it seems we had the same thought process with the conclusion to use automount (or autofs). Thanks for sharing your experiences :)

Regarding sharenfs:
The property takes the same options as one would set in /etc/exports, not only "on/off". It also automatically generates the entries in /etc/zfs/exports, which is read by mountd the same way as /etc/exports, and mountd is reloaded when the property changes.
So as I understand it, the mechanism is the same, just already automated.
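As an illustration of that (the option string is my own example, not from the thread), full export options go straight into the property value, and the result can be checked against what mountd actually exports:

```
# Per-dataset export options live in the sharenfs value itself
zfs set sharenfs="-network 192.168.1.0/24 -maproot=root" storage/lab/home

# mountd is reloaded automatically; confirm the resulting export list
showmount -e localhost
```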

I was already using the sharenfs property with NFSv3 with other datasets and on other servers without ever touching the /etc/exports file (aside from leaving a note there to remind me that shares are managed by zfs sharenfs) and it always worked just as expected.
What doesn't work without some extra work is the sharesmb property, but SMB won't be needed any more here :)
 

danzi

New Member


Messages: 5

#6
Hey, just did a search for sharesmb.

Do you guys know where to ask whether this feature flag will be implemented in FreeBSD? I am using nas4free and love it to bits, but I could do away with Samba if only this one ZFS feature were implemented. Right now, as far as I know, this is a non-working flag.
 

sko

Well-Known Member

Thanks: 206
Messages: 407

#7
The Samba service will always be needed, regardless of the ZFS flag. The same is true for sharenfs: the NFS service still has to be configured and running; the zfs property only eases dealing with shares and replaces/complements the /etc/exports file.
Because SMB is a protocol and concept foreign to Unix systems, it will always be more complex and bloated in terms of configuration and services involved; no ZFS feature can do away with that. If you don't need shares for Windows machines, or if they run in VMs anyway (and you can mount shares through the host), just nuke SMB from orbit and go with NFSv4 - that's what we did here and it made life so much easier...
 

SirDice

Administrator
Staff member
Administrator
Moderator

Thanks: 6,520
Messages: 27,956

#8
Do you guys know where to ask whether this feature flag will be implemented in FreeBSD?
The feature came from Solaris, where CIFS file sharing is built into the kernel and ZFS talks to the kernel directly. On FreeBSD this is not possible because we don't have that CIFS implementation in the kernel. The same is true for sharenfs, but on FreeBSD it was hacked to create /etc/zfs/exports instead of ZFS sharing things directly through the kernel.

One could envision a similar hack for sharesmb but you're always going to need Samba in order to make this work.
 

danzi

New Member


Messages: 5

#9
Thanks for the replies, smart people :) I will use NFS, methinks. I only have a Mac client and rarely ever connect from Windows anyway.
 
OP
OP
Oko

Oko

Daemon

Thanks: 768
Messages: 1,620

#11
NFS should work fine on MacOS. As I recall SMB file shares perform rather poorly on MacOS anyway.
OS X had a long-standing (more than ten years) NFS client bug which was almost a show stopper, but IIRC it was fixed a few years ago. On the other hand, Apple has phased out its own Apple Filing Protocol (AFP) in favour of SMB. Apple SMB performance is probably stellar in comparison to FreeBSD.
 

SirDice

Administrator
Staff member
Administrator
Moderator

Thanks: 6,520
Messages: 27,956

#12
Apple SMB performance is probably stellar in comparison to FreeBSD.
My last encounter with MacOS was a couple of years ago. I had a MacBook Pro, I still have it somewhere but never use it. It was my first, and last, Apple product. Copying files to/from a Windows machine was terrible. My FreeBSD machines copied files much, much faster. This may have improved over the years but it certainly wasn't the case with (Snow) Leopard.
 