Help with zfs sharenfs and NFSv4

Hi all

I think I'm well out of my depth on this one... :)

I have a zfs-based server (8-STABLE) that is working fine with NFSv3, using:

Code:
zfs set sharenfs="-maproot=root -alldirs -network 192.168.0.1 -mask 255.255.255.0" zroot/data

Now, I would like to try NFSv4 so as to get round some problems with locking. Trouble is I can't find clear info on how to modify the scheme above to do this. I've done the following groundwork:

/etc/rc.conf.local:
Code:
#NFSv4:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsuserd_enable="YES"
nfsd_flags="-t -n6 -e -h192.168.0.51"
mountd_enable="YES"
mountd_flags="-r -p4002 -e -h192.168.0.51"
rpcbind_flags="-h192.168.0.51"

but I can't work out the appropriate zfs sharenfs options. In fact I don't know whether that is even applicable for NFSv4, or whether I should be doing something completely different. :(

Am I on to a loser here or can an NFS guru cast any light?

Much appreciated!

sim
 
Update:

I'm now thinking that all the server-side stuff above may be ok as it is, and that the real problem is to mount the share from my client (also 8-STABLE).

At the moment I'm getting the following:

Code:
root@nostromo> mount_nfs -o nfsv4 area51:/data /area51/data/
mount_nfs: /area51/data, : Permission denied

Anyway I'm going to continue playing but if anyone has any leads I'd be very grateful :)

sim
 
NFSv3 shares a directory on the server, and the client has to know the entire path to that directory in order to mount it. For example, you share /some/path/mydir on the server, and the client mounts server:/some/path/mydir. IOW, client mount paths are relative to the root directory of the server.

NFSv4 shares a directory on the server, and exports that as the root directory. Client mounts are done relative to that. For example, you share /some/path/mydir on the server, and the client mounts server:/.

That's the part that trips everyone up. :)

If you want the client mount options to be the same as for NFSv3, then you need to export the root filesystem on the server. Otherwise, you need to alter your client mount options to be relative to the exported directory (ie, mount server:/ on the client).

The other "gotcha" with NFSv4 and ZFS is that you have to export each ZFS filesystem in the path. Meaning, if /some/path/mydir is a ZFS filesystem and exported via NFSv4, and you have a separate ZFS filesystem /some/path/mydir/movies, then you need to export /some/path/mydir/movies separately, and mount it separately on the client.
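To make that concrete, a hypothetical /etc/exports for the layout above (network values invented for illustration) might look like this:

```
# Each ZFS filesystem in the path needs its own exports line
/some/path/mydir        -maproot=root -network 192.168.0.0 -mask 255.255.255.0
/some/path/mydir/movies -maproot=root -network 192.168.0.0 -mask 255.255.255.0
# The V4: line declares the NFSv4 root; client paths are relative to it
V4: /some/path/mydir

# On the client, mount relative to the V4 root:
# mount_nfs -o nfsv4 server:/        /mnt/mydir
# mount_nfs -o nfsv4 server:/movies  /mnt/mydir/movies
```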
 
How would this work if you have many separate NFSv4 mounts though? As far as I can understand you, you're saying that if I had this on the server:

/pool/somedir/
/pool/somedir2/

and export those paths via NFSv4, then on the client, I would mount

server:/
server:/

How does that work? To make NFSv4 work, do I have to export my entire root (export /) as -alldirs, and then mount the appropriate directories? And what about other ZFS filesystems mounted as subdirectories, e.g. /pool/this and /pool/this/that?

I am utterly confused I'm afraid :/

Also - if I zfs set sharenfs="<some nfs export options>" for a filesystem, is there anything I need to add to this string to make it share exclusively as NFSv4? Or is turning on the NFSv4 server via rc.conf (as in the comments above), and mounting it via NFSv4, enough?
 
I have another problem with nfsv4, but I can say that if you have
/pool/somedir/
/pool/somedir2/

you must put a V4: /pool line in your exports and mount with :/somedir and :/somedir2,
or put a V4: / line in your exports and mount with :/pool
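In exports(5) terms, the first variant would be something like this (the -maproot option and the client mount points are just examples):

```
# /etc/exports on the server
/pool/somedir   -maproot=root
/pool/somedir2  -maproot=root
V4: /pool

# Client mounts, relative to the V4 root /pool:
# mount_nfs -o nfsv4 server:/somedir  /mnt/somedir
# mount_nfs -o nfsv4 server:/somedir2 /mnt/somedir2
```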
 
Hi,

Now, it works for me with this in /etc/exports:

Code:
/u  -maproot=root  -network 10.35.66.0 -mask 255.255.255.0
/u/user1  -maproot=root  -network 10.35.66.0 -mask 255.255.255.0
/u/user1/home  -maproot=root  -network 10.35.66.0 -mask 255.255.255.0
/u/user1/home/www  -maproot=root  -network 10.35.66.0 -mask 255.255.255.0
V4: /u

But, I have a UID/GID problem:

On the server:
Code:
# ls -lan
total 11
drwx--x--x   3 0  0   3 Jul  9 16:39 .
drwxr-xr-x  20 0  0  25 Apr 22 15:06 ..
drwx--x--x   3 0  0   3 Jul  9 16:37 user1

On the client:
Code:
# ls -lan
total 11
drwx--x--x+  3 65534  65533   3 Jul  9 16:39 .
drwxr-xr-x  19 0      0      24 Jul 19 15:59 ..
drwx--x--x+  3 65534  65533   3 Jul  9 16:37 user1

Why does user1 go from UID:GID 0:0 to 65534:65533?
 
Why does user1 go from UID:GID 0:0 to 65534:65533?

This should normally only happen in the absence of the -maproot option in /etc/exports.

Quoting from the exports(5) manpage:
Code:
     In the absence of -maproot and -mapall options, remote accesses by root
     will result in using a credential of 65534:65533.  All other users will
     be mapped to their remote credential.  If a -maproot option is given,
     remote access by root will be mapped to that credential instead of
     65534:65533.  If a -mapall option is given, all users (including root)
     will be mapped to that credential in place of their own.
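As a sketch of the two cases the manpage describes (paths and network invented for illustration):

```
# Without -maproot: remote root is squashed to 65534:65533
/u/plain    -network 10.35.66.0 -mask 255.255.255.0
# With -maproot=root: remote root keeps credential 0:0
/u/trusted  -maproot=root -network 10.35.66.0 -mask 255.255.255.0
```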
 
I suspect most of us have done this, but the last post was from 2013. One wonders if they managed to solve their question by now. :)
 
Now I am reviving this thread: 5 years between 2013 and 2018, and 7 years between 2018 and today :O

Suppose in my big tank pool, I've got datasets like

tank/vm
tank/jail
tank/user
tank/user/files


and I want to export via NFS tank/user/files and all descendant ZFS datasets. What's the easiest way to set this up, considering that there are a lot of descendant datasets? Setting "sharenfs" options on the zfs property on all child datasets? What's the simplest way to mount all of that on a client? I know autofs is an option, but I'm looking for the easiest and simplest setup possible.

I've tried it with NFSv3, but I had to list all the datasets I wanted to mount, and I was not able to mount one dataset inside another following the parent-child hierarchy.
 
I want to export via NFS tank/user/files and all descendant ZFS datasets. What's the easiest way to set this up considering that there are a lot of descendant datasets? Setting "sharenfs" options on the zfs property on all child datasets?
If all clients in the same subnet should have access to all child datasets, then setting "sharenfs=<options>" on the parent tank/user/files is enough; the property and its options are inherited by all child datasets.

Example: All clients in all subnets, all child datasets:
Code:
# zfs  set  sharenfs=on  tank/user/files

Clients on one specific subnets, all child datasets:
Code:
# zfs  set  sharenfs="-network 192.168.1.0/24"  tank/user/files

Specific clients, on specific subnets, on specific datasets:
Code:
# zfs  set  sharenfs="192.168.1.10,192.168.2.68"  tank/user/files/cdatas0

# zfs  set  sharenfs="192.168.2.68,192.168.2.69"  tank/user/files/cdatas0/cdatas1

# zfs  set  sharenfs="192.168.1.10,192.168.2.69"  tank/user/files/cdatas0/cdatas1/cdatas2/cdatas3
The second example's "sharenfs" options would be inherited by .../cdatas1/cdatas2; the last example's "sharenfs" by any child datasets below it.

Whats the simplest way to mount all of that on a client? I know autofs is an option but I'm looking for the easiest and simplest setup possible.
Besides autofs(5) you can use fstab(5). In that case you might also need to set netwait_enable="YES" in rc.conf(5).
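A minimal client-side sketch of the fstab(5) route (server name and mount point are hypothetical; "late" defers the mount until late in boot):

```
# /etc/fstab on the client
server:/user/files  /mnt/files  nfs  rw,nfsv4,late  0  0

# /etc/rc.conf -- wait for the network before mounting
netwait_enable="YES"
netwait_ip="192.168.1.1"   # an address that must answer pings first
```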
 