Solved NFSv3/NFSv4, mounting, permissions, NFSv4 man-page, and more

Hi T-Daemon,

just wanted to let you know that your advice to use the -o nfsv4 option fixed the pesky
Code:
RPCPROG_MNT: RPC: Timed out
error, and replaced it with
Code:
mount_nfs: nmount: /mnt: Permission denied
:(

I will try to figure it out; just to reiterate my steps, in case someone spots whether I am making any other moronic error:

The server has an IP address XXX.XXX.XXX.111. To allow mounting only on a client with IP address XXX.XXX.XXX.110, I set zfs set sharenfs='rw=XXX.XXX.XXX.110' pool/filesystem on the server, and verified that the property was set with zfs get sharenfs pool/filesystem.

Then on the client I issue (as root): mount -t nfs -o nfsv4 XXX.XXX.XXX.111:/pool/filesystem /mnt, and promptly receive the aforementioned error.
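For clarity, the full sequence described above, consolidated as a sketch (addresses are the anonymized ones from this post):

```shell
# on the server: restrict the share to one client and verify
zfs set sharenfs='rw=XXX.XXX.XXX.110' pool/filesystem
zfs get sharenfs pool/filesystem

# on the client (as root):
mount -t nfs -o nfsv4 XXX.XXX.XXX.111:/pool/filesystem /mnt
```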

Kindest regards,

M

You are getting the error because you haven't set an NFSv4 root point and are using an incorrect mount path for the NFS share.

I'm speaking here for ZFS "sharenfs" set file systems (data sets).

There are two required configurations to set for using NFSv4 on FreeBSD to export ZFS file systems (data sets).

1 - Set in /etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
# optional
# nfsv4_server_only="YES"

2 - A single NFSv4 root point in the local file system for the exported ZFS file system, set in /etc/exports

From nfsv4(4):
The NFSv4 protocol does not use a separate mount protocol and assumes
that the server provides a single file system tree structure, rooted at
the point in the local file system tree specified by one or more

V4: <rootdir> [-sec=secflavors] [host(s) or net]

line(s) in the exports(5) file.

Example: if the zroot/sharenfs/share1 ZFS file system (dataset) mounted on /sharenfs/share1 is to be exported, then the root point of the NFSv4 share must be set in /etc/exports as:
Code:
V4: /sharenfs

After editing the file, execute, depending on whether you have mountd_enable="YES" set:
service mountd restart|onerestart

Note: ZFS "sharenfs" doesn't require mountd(8) to be enabled in rc.conf; we restart (or onerestart) it only to make it reread /etc/exports.
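Spelled out, the restart|onerestart choice looks like this (a sketch; which command applies depends on your rc.conf):

```shell
# if mountd_enable="YES" is set in /etc/rc.conf:
service mountd restart

# if mountd is not enabled in rc.conf (ZFS "sharenfs" only setup):
service mountd onerestart
```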

To export the ZFS file system:
zfs set sharenfs="x.x.x.110" zroot/sharenfs/share1

To mount from client:

mount -o nfsv4 x.x.x.111:/share1 /mnt

Notice the /path in server:/path is set relative to the NFSv4 root point /sharenfs.
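For comparison, under the same setup an NFSv3 mount would use the full server-side path instead of the path relative to the V4: root (a sketch, addresses as in the example above):

```shell
# NFSv4: path relative to the V4: root /sharenfs
mount -o nfsv4 x.x.x.111:/share1 /mnt

# NFSv3: full path as seen on the server
mount x.x.x.111:/sharenfs/share1 /mnt
```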
 
Hi Lamia,

thank you again for your reply.

I had reviewed the maproot setting in the handbook:
The -maproot=root flag allows the root user on the remote system to write data on the exported file system as root. If the -maproot=root flag is not specified, then even if a user has root access on the remote system, he will not be able to modify files on the exported file system.
and I do not need such a feature.

Kindest regards,

M
 
Hi T-Daemon,

it seems that you have posted, while I was composing a response to Lamia. Thank you for responding and the additional comments.

That option is not a valid mount_nfs(8) option; it is for mounting file systems which do not have nfsv4acls enabled by default, for example UFS2.
When I read the mount_nfs(8), I did not find the limitation that you describe. To wit:

nfsv4 Use the NFS Version 4 protocol. This option will force the mount to use TCP transport. By default, the highest minor version of NFS Version 4 that is supported by the NFS Version 4 server will be used. See the minorversion option.
The text accompanying -o does refer to mount(8), but again:

nfsv4acls
        Enable NFSv4 ACLs, which can be customized via the
        setfacl(1) and getfacl(1) commands.  This flag is
        mutually exclusive with the acls flag.
What text am I missing?

To check which NFS protocol is used execute on the client side after mounting the NFS share nfsstat -m.
Thank you for that, it is very helpful.

There are two required configurations to set for using NFSv4 on FreeBSD to export ZFS file systems (data sets).

1 - Set in /etc/rc.conf
I did not find it in the Handbook, mount_nfs(8), or mount(8). But I did search and found it in nfsv4(4). Thank you.

2 - A single NFSv4 root point in the local file systems for the exported ZFS file system, set in /etc/exports
I have to re-read nfsv4(4) and find some more examples, because my simple mind does not quite grasp it from your explanation. And I do not want to just type commands without understanding what they do.

Or perhaps I just live with the NFSv3 mount and try to figure out how to limit the mount to only certain clients (done; it was a configuration error), because the NFSv3 mount works (apart from the ACLs). Is there any significant advantage of v4 over v3? Maybe support for ACLs?

Thank you once again for your time and explanation.

Kindest regards,

M
 
You are getting the error because you haven't set an NFSv4 root point and are using an incorrect mount path for the NFS share.

I'm speaking here for ZFS "sharenfs" set file systems (data sets).

There are two required configurations to set for using NFSv4 on FreeBSD to export ZFS file systems (data sets).

1 - Set in /etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
# optional
# nfsv4_server_only="YES"

2 - A single NFSv4 root point in the local file system for the exported ZFS file system, set in /etc/exports

Example: if the zroot/sharenfs/share1 ZFS file system (dataset) mounted on /sharenfs/share1 is to be exported, then the root point of the NFSv4 share must be set in /etc/exports as:
Code:
V4: /sharenfs

After editing the file, execute, depending on whether you have mountd_enable="YES" set:
service mountd restart|onerestart

Note: ZFS "sharenfs" doesn't require mountd(8) to be enabled in rc.conf; we restart (or onerestart) it only to make it reread /etc/exports.

To export the ZFS file system:
zfs set sharenfs="x.x.x.110" zroot/sharenfs/share1

To mount from client:

mount -o nfsv4 x.x.x.111:/share1 /mnt

Notice the /path in server:/path is set relative to the NFSv4 root point /sharenfs.
Thanks T-Daemon for this post. I can confirm that my clients show NFSv3, going by the 'nfsstat -m' command. I have now made a few changes to see if NFSv4 will now replace NFSv3.
 
Hi Lamia,

as you may have seen from my posts above, thanks to yours and T-Daemon's help, I have NFSv3 working.

If, or better said, when you figure out the setup for NFSv4, could you please share it? Perhaps even via p.m. if you prefer.

Kindest regards,

M
 
That option is not a valid mount_nfs(8) option; it is for mounting file systems which do not have nfsv4acls enabled by default, for example UFS2.
When I read the mount_nfs(8), I did not find the limitation that you describe. To wit:

nfsv4 Use the NFS Version 4 protocol. This option will force the mount to use TCP transport. By default, the highest minor version of NFS Version 4 that is supported by the NFS Version 4 server will be used. See the minorversion option.

The text accompanying -o does refer to mount(8), but again:

nfsv4acls
        Enable NFSv4 ACLs, which can be customized via the
        setfacl(1) and getfacl(1) commands.  This flag is
        mutually exclusive with the acls flag.

What text am I missing?
It is not documented, but apparently "nfsv4acls" is not a valid option for NFS mounts (nor are some other mount(8) options); it is valid only for certain file systems.

After some testing, the suspicion I expressed early in post 19 turned out to be incorrect. Whether the file system's ACLs are NFSv4-style or not has no influence on the rejection of the option.

2 - A single NFSv4 root point in the local file system for the exported ZFS file system, set in /etc/exports
I have to re-read nfsv4(4) and find some more examples, because my simple mind does not quite grasp it from your explanation. And I do not want to just type commands without understanding what they do.
If it's still unclear: for NFSv4 shares a root directory must be set where the NFSv4 file system tree begins. Think of it like the FreeBSD root file system tree ( / ). Only a single NFSv4 root directory can be set.

In the case of ZFS, let's assume the pool/NFS file system mounted at /NFS is the root directory of the NFSv4 file system tree.

All NFS shares must be spawned from that root directory as child ZFS file systems (datasets) or sub-directories.

/etc/exports
Code:
V4: /NFS

Example with multiple ZFS child file systems, either inheriting the "sharenfs" options from the parent file system or with every child file system having its own "sharenfs" options:
Code:
NAME                 MOUNTPOINT
pool/NFS/FS1         /NFS/FS1
pool/NFS/FS2         /NFS/FS2
pool/NFS/FS3/FSa     /NFS/FS3/FSa
pool/NFS/FS3/FSb     /NFS/FS3/FSb
etc.

Example with sub-directories:
Code:
NAME                          MOUNTPOINT
pool/NFS/DIR                  /NFS/DIR/
             (subdirectories) /NFS/DIR/DIR1
                              /NFS/DIR/DIR2
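A server-side sketch matching the first layout above (pool name and network are placeholders, and the "sharenfs" value is only one example of valid export options):

```shell
# /etc/exports contains only the single NFSv4 root point:
#   V4: /NFS

# create child datasets under the NFSv4 root and export them via ZFS;
# the "sharenfs" property set on the parent is inherited by children
zfs create -p pool/NFS/FS1
zfs create -p pool/NFS/FS2
zfs set sharenfs="network 192.168.0.0,mask 255.255.255.0" pool/NFS

# make mountd reread /etc/exports
service mountd onerestart
```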

Is there any significant advantage of v4 over v3? Maybe support for ACL?
Decide for yourself. The optional features not present in NFSv3 are listed in the DESCRIPTION section of the nfsv4(4) manual.

If, or better to say, when you figure out the setup for NFSv4, could you please share it?
Is the problem still permission issues or has it changed?

It might be easier to determine what is wrong if you post all NFS configurations and describe the expected result. You can anonymize whatever you consider too sensitive to display publicly.
 
Hi T-Daemon,

thank you for the reply. I start with the end quoting:
Is the problem still permission issues or has it changed?

It might be easier to determine what is wrong if you post all NFS configurations and describe the expected result. You can anonymize whatever you consider too sensitive to display publicly.
Currently, I have NFSv3 working correctly, that is, (i) I can configure the server to allow mounting only on authorized clients, and (ii) the mount has UNIX permissions corresponding to the permissions on the server, on both testing machines.

I did not proceed with testing NFSv4 because I did not understand the quote from nfsv4(4). I did spend some time reading nfsv4(4), and based on the listed differences, I would like to proceed with NFSv4.

My principal problem with the wording is to interpret the quote:
V4: <rootdir> [-sec=secflavors] [host(s) or net]
in view of the following text, namely:
As such, the ``<rootdir>'' can be
in a non-exported file system. The exception is ZFS, which checks ex-
ports and, as such, all ZFS file systems below the ``<rootdir>'' must be
exported. However, the entire tree that is rooted at that point must be
in local file systems that are of types that can be NFS exported. Since
the NFSv4 file system is rooted at ``<rootdir>'', setting this to any-
thing other than ``/'' will result in clients being required to use dif-
ferent mount paths for NFSv4 than for NFS Version 2 or 3.]
Regarding the term "<rootdir>", does it refer to the root as it is generally understood, i.e., ( / ), or to a (mount point) that is an origin or source, which may be different from ( / ) as long as (i) it is exported and (ii) all the file systems are below it? This confusion comes from the sentence (original emphasis removed, my emphasis supplied):
Since the NFSv4 file system is rooted at ``<rootdir>'', setting this to anything other than ``/'' will result in clients being required to use different mount paths.
That sentence implies that the "<rootdir>" does not necessarily have to be root as in ( / ). Did I sufficiently confuse the issue? ;)

So, perhaps instead, let me present simplified part of my structure at the server and what I want to do:
Code:
NAME                      MOUNTPOINT
rpool                     rpool
rpool/ROOT                legacy
rpool/ROOT/13.1-Release   /
.
.
.
storage                   /storage
storage/data              /storage/data

Under /storage/data, I have several directories and sub-directories, e.g.:
Code:
/storage/data/DIR_01
/storage/data/DIR_01/DIR_02
.
.
/storage/data/DIR_0N

Referring to your post above, if I wanted to mount /storage/data/DIR_01, I need to place into /etc/exports:
Code:
V4: /storage/data/DIR_01 XXX.XXX.XXX.111

Did I get it?

Kindest regards,

M
 
Regarding the term "<rootdir>", does it refer to the root as it is generally understood, i.e., ( / ), or to a (mount point) that is an origin or source, which may be different from ( / ) as long as (i) it is exported and (ii) all the file systems are below it?
It refers to
a (mount-point) that is an origin or source, which may be different from the (/) as long as (i) it is exported and (ii) all the file systems are below it.
In the context of NFSv4, <rootdir> here does refer to the NFSv4 file system root directory (not the FreeBSD file system root directory " / "), which can be any directory in the FreeBSD file system hierarchy, including the FreeBSD file system root directory " / ".

But the NFSv4 <rootdir> doesn't need to be NFS exported (nor do all file systems below it).
Code:
... the ``<rootdir>'' can be in a non-exported file system.

The statement in the following sentence in nfsv4(4) is incorrect (the manual is a little messy; there are other false claims, see Note 2 at the end of this post):
Rich (BB code):
                                                  ... the ``<rootdir>'' can be
     in a non-exported file system.   The exception is ZFS, which checks
     exports and, as such, all ZFS file systems below the ``<rootdir>'' must
     be exported.
That's not true. The NFSv4 <rootdir> on ZFS can be, for example, the non-exported FreeBSD root file system zroot/ROOT/default at mount point " / " ( V4: / ), without exporting all the other ZFS file systems below it ( zfs list ); only the ZFS file system intended for NFS export needs to be exported.

This confusion comes from the sentence, (original emphasis removed, my emphasis supplied):
Rich (BB code):
     Since the NFSv4 file system is rooted at ``<rootdir>'', setting this to
     anything other than ``/'' will result in clients being required to use
     different mount paths for NFSv4 than for NFS Version 2 or 3.
That sentence implies that the "<rootdir>" does not necessarily have to be root as in (/).
Correct.

And furthermore, the sentence means: the NFS exported directory path in the client's mount(8) command with the NFSv4 protocol is different from NFSv3|2 when the NFSv4 <rootdir> is not set to the FreeBSD file system root ( / ). Users are generally confused about the NFSv4 path in the mount(8) command on the clients.

For example, taking here your server structure, ZFS storage/data is NFS exported (by the ZFS "sharenfs" property) and the NFSv4 <rootdir> is set to V4: / :

mount(8) command on clients (notice in following examples path of NFS exported directory in server:path):

NFSv3: mount server:/storage/data/DIR_01 /mnt
NFSv4: mount -o nfsv4 server:/storage/data/DIR_01 /mnt

server:path is the same.

Now the NFSv4 <rootdir> is set to V4: /storage/data/DIR_01 :

NFSv3: mount server:/storage/data/DIR_01 /mnt
NFSv4: mount -o nfsv4 server:/ /mnt

The root directory of the NFSv4 file system, /storage/data/DIR_01, becomes the / path in server:path.

mount(8) a sub-directory of DIR_01, DIR_02:

NFSv3: mount server:/storage/data/DIR_01/DIR_02 /mnt
NFSv4: mount -o nfsv4 server:/DIR_02 /mnt
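The mapping in these examples can be captured as simple prefix stripping. A toy sketch (plain sh string handling, not part of any NFS tool) that derives the client-side NFSv4 path from the server-side path and the V4: root:

```shell
#!/bin/sh
# Toy illustration only: compute the path a client would use in
# "mount -o nfsv4 server:<path>" given the server-side directory
# and the NFSv4 <rootdir> from the V4: line in /etc/exports.
nfsv4_client_path() {
    exported=$1   # server-side directory, e.g. /storage/data/DIR_01
    v4root=$2     # NFSv4 root point, e.g. /storage/data/DIR_01 or /
    if [ "$v4root" = "/" ]; then
        echo "$exported"            # rooted at /: same path as NFSv3
    else
        rest=${exported#"$v4root"}  # strip the V4: root prefix
        echo "/${rest#/}"           # what remains, rooted at /
    fi
}

nfsv4_client_path /storage/data/DIR_01 /                            # -> /storage/data/DIR_01
nfsv4_client_path /storage/data/DIR_01 /storage/data/DIR_01         # -> /
nfsv4_client_path /storage/data/DIR_01/DIR_02 /storage/data/DIR_01  # -> /DIR_02
```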



Did I get it?
All in all, yes. Can you mount the NFSv4 shares correctly?



Note 1: The network option in V4: ..... xxx.xxx.xxx.111 has no effect there. Set it with the "sharenfs" property only.

Note 2: There is an incorrect statement in the nfsv4(4) manual. It says "one or more" V4: <rootdir> ... lines can be specified:
Rich (BB code):
 The NFSv4 protocol does not use a separate mount protocol and assumes
     that the server provides a single file system tree structure, rooted at
     the point in the local file system tree specified by one or more

           V4: <rootdir> [-sec=secflavors] [host(s) or net]

     line(s) in the exports(5) file.

Only a single V4: line is allowed. When more are set, the first entry in /etc/exports is accepted, all others are rejected, and mountd(8) will complain about "bad exports list line 'V4: ..'" on the console.

It is correctly stated in exports(5):
Code:
     Only one V4: line is needed or allowed to declare where NFSv4 is rooted.
 
Note 1: The network option in V4: ..... xxx.xxx.xxx.111 has no effect there. Set it with the "sharenfs" property only.
So you're saying that the following in the exports(5) page is incorrect as well:
V4: / -sec=krb5:krb5i:krb5p -network 131.104.48 -mask 255.255.255.0
...
For the experimental server, the NFSv4 tree is rooted at ``/'', and any client within the 131.104.48 subnet is permitted to perform NFSv4 state operations on the server, so long as valid Kerberos credentials are provided.

The last part of that man page explains why we have the V4: stuff in exports at all. It's to allow clients that can do either nfsv3 or nfsv4 to choose which protocol to use merely by using a different remote path to mount.
 
:) We had that discussion before in Thread kerberized-nfsv4-nfs-over-tls-on-13-0.83484, and setting the "network" option in the V4: line didn't work then either (see the /etc/exports which worked), at least not on my test systems. I can't check them again; I don't have the VMs anymore.

The last part of that man page explains why we have the V4: stuff in exports at all. It's to allow clients that can do either nfsv3 or nfsv4 to choose which protocol to use merely by using a different remote path to mount.
I've read it too (many times), but what I observed on the test systems then and now doesn't coincide with the manual.


Current test system: Fresh installed 13.1-RELEASE ZFS NFS server in a VirtualBox VM, taking mefizto's server file system structure for NFS export.

Test objective: Set a V4: NFSv4 <rootdir>, export a NFSv4 file system from a ZFS and "sharenfs" property set, allow one specific client to mount the share.

NFSv4 server IP 192.168.0.101, allowed client IP 192.168.0.115.

Test 1:

/etc/exports:
Code:
V4: /storage/data/DIR_01 -network 192.168.0.115 -mask 255.255.255.0

Code:
# zfs set sharenfs=on storage/data
# zfs get sharenfs storage/data
NAME          PROPERTY   VALUE    SOURCE
storage/data  sharenfs   on       local

On client:
Code:
# showmount -e 192.168.0.101
Exports list on 192.168.0.101:
/storage/data                      Everyone


Test 2:

/etc/exports:
Code:
V4: /storage/data/DIR_01

Code:
# zfs set sharenfs="network 192.168.0.115,mask 255.255.255.0" storage/data
# zfs get sharenfs storage/data
NAME          PROPERTY   VALUE                              SOURCE
storage/data  sharenfs   192.168.0.115,mask 255.255.255.0   local

On client:
Code:
 # showmount -e 192.168.0.101
Exports list on 192.168.0.101:
/storage/data                      192.168.0.115

Summary Test 1, Test 2:
Code:
V4: /storage/data/DIR_01 -network 192.168.0.115 -mask 255.255.255.0
sharenfs=on
showmount     Everyone
Code:
V4: /storage/data/DIR_01
sharenfs="network 192.168.0.115,mask 255.255.255.0"
showmount    192.168.0.115

What conclusion should one draw when showmount(8) shows for Test 1 "Everyone" and for Test 2 the IP of the allowed client?
 
T-Daemon Thanks a million. I said in one of my posts that I had no luck with /etc/exports too. I can now CONFIRM that NFSv4 works here. 'showmount -e IP_Address' throws the error "RPC Program not registered" despite RPC running on both the server (@port 111:tcp/udp/local & 2049:tcp/udp) and the client (@port 111:tcp/udp/local only).

showmount used to work with NFSv3; this article - https://powercampus.de/en/special-features-of-nfsv4-mounts/ - confirms that it may fail.

Notwithstanding, files/folders are now mounted with the argument -o nfsv4. In addition, 'nfsstat -m' shows '......nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec.......'. In the past, it used to be '.....nfsv3,minor......'

Since I have '-maproot=...,192.168.1....,-alldirs,ro' in zfs sharenfs, I cannot confirm that the entire 'V4: /..network.. mask...' line in /etc/exports works. /etc/exports never worked before now for ZFS shares; only the 'V4: /' part is likely working now.
 
Hi T-Daemon,

again, thank you very much for your time writing the explanation, especially the examples. Despite your assertion:
All in all, yes. Can you mount the NFSv4 shares correctly?
it was not until I read and re-read your examples in the current post several times that my lizard-sized brain got the false impression that I understood it. So I can now start to experiment to, hopefully, confirm my understanding.

The next question will probably prove the above incorrect. You wrote:
Only a single V4: line is allowed.
Let us say that I want to share /storage/data/DIR_01 and since it contains data related to user home directory, I want to mount it under /home/user. So, following your example, I can do e.g.:

Set the NFSv4 <rootdir> to V4: /storage/data/DIR_01 and then
mount -o nfsv4 server:/ /home/user.

Now, I want to also mount /storage/music, but I would like to mount it on /media. Can I set another line in /etc/exports?

From reading the nfsv4(4), I do not see why not.

Kindest regards,

M
 
What conclusion should one draw when showmount(8) shows for Test 1 "Everyone" and for Test 2 the IP of the allowed client?
That showmount(8) only uses the nfsv3 protocol. Look at the end of its manpage

BUGS
The mount daemon running on the server only has an idea of the actual
mounts, since the NFS server is stateless. The showmount utility will
only display the information as accurately as the mount daemon reports
it.

And at the mountd(8) manpage

-R Do not support the Mount protocol and do not register with
rpcbind(8). This can be done for NFSv4 only servers, since the
Mount protocol is not used by NFSv4. Useful for NFSv4 only
servers that do not wish to run rpcbind(8). showmount(8) will
not work, however since NFSv4 mounts are not shown by
showmount(8), this should not be an issue for an NFSv4 only
server.

In your Test 1, the NFSv4 export is /DIR_01. The /storage/data is an NFSv3 export. I think. I haven't actually tried any of this myself.

Edit: Actually I think the nfsv4 export is / which gets mapped to /storage/data/DIR_01. So mount server:/ /mnt would mount the server's /storage/data/DIR_01 at /mnt on the client. Think of the V4: line as a mechanism to re-root the filesystem hierarchy that is exposed to the client.
 
Hi Lamia,

You should then set "V4: /storage" and remount.
1) mount -o nfsv4 server:/data/DIR_01 /mnt
2) mount -o nfsv4 server:/music /media
What you are suggesting is that only a single tree structure under the <rootdir> can be mounted. I do not find such a limitation in nfsv4(4). As I read:
The NFSv4 protocol does not use a separate mount protocol and assumes
that the server provides a single file system tree structure, rooted at
the point in the local file system tree specified by one or more

V4: <rootdir> [-sec=secflavors] [host(s) or net]
it does not (explicitly) say that only a single V4 declaration can be made.

Could you point me to the text supporting your assertion?

Kindest regards,

M
 
Hi Lamia,


What you are suggesting is that only a single tree structure under the <rootdir> can be mounted. I do not find such a limitation in nfsv4(4). As I read:

it does not (explicitly) say that only a single V4 declaration can be made.

Could you point me to the text supporting your assertion?

Kindest regards,

M
There is no reference; rather, I made suggestions based on your tree structure. One can have multiple 'V4: / .......' in the /etc/exports. You seemed to have one common root /storage/, hence my suggestions.
 
it does not (explicitly) say that only a single V4 declaration can be made.
T-Daemon points out that the man pages are inconsistent, and appears to have actually tried it. You can only have one V4: line:

Note 2: There is an incorrect statement in the nfsv4(4) manual. It says "one or more" V4: <rootdir> ... lines can be specified:
Rich (BB code):
 The NFSv4 protocol does not use a separate mount protocol and assumes
     that the server provides a single file system tree structure, rooted at
     the point in the local file system tree specified by one or more

           V4: <rootdir> [-sec=secflavors] [host(s) or net]

     line(s) in the exports(5) file.

Only a single V4: line is allowed. When more are set, the first entry in /etc/exports is accepted, all others are rejected, and mountd(8) will complain about "bad exports list line 'V4: ..'" on the console.

It is correctly stated in exports(5):
Code:
     Only one V4: line is needed or allowed to declare where NFSv4 is rooted.
 
What you are suggesting is that only a single tree structure under the <rootdir> can be mounted. I do not find such a limitation in nfsv4(4).
it does not (explicitly) say that only a single V4 declaration can be made.
One can have multiple 'V4: / .......' in the /etc/exports.
To make it clear:

Only a single V4: line can be set in /etc/exports to specify the NFSv4 tree root!

You can verify if multiple V4: lines are allowed or not with a simple test yourself.

Edit /etc/exports, add line "V4: /" after the first V4: line, save changes, execute

service mountd restart (or onerestart)

Change to the console ( 'Alt + F1', or 'Ctrl + Alt + F1' from Xorg ). You will see there a mountd message similar to the one shown below:
Code:
Jul  17 05:44:04 nfsserv mountd[3091]: different V4 dirpath /
Jul  17 05:44:04 nfsserv mountd[3091]: bad exports list line 'V4: /'
 
I hope mefizto doesn't mind that the discussion here deviates into lengthy details (walls of details, even), not necessarily important to the NFS issues of this thread.

That showmount(8) only uses the nfsv3 protocol. Look at the end of its manpage
And at the mountd(8) manpage
Good point. That's an important factor which makes my testing inconclusive (but it doesn't change the proof; see the last paragraph).

In your Test1, the nfsv4 export is /DIR_01. The /storage/data is an nfsv3 export. I think.
You are correct:
Code:
zfs list -r -o name,sharenfs,mountpoint storage
NAME          SHARENFS  MOUNTPOINT
storage       off       /storage
storage/data  on        /storage/data

/storage/data/DIR_01
NFSv3 mount executions:
Rich (BB code):
# mount 192.168.0.101:/storage/data/DIR_01   /mnt
[tcp] 192.168.0.101:/storage/data/DIR_01: Permission denied

# mount 192.168.0.101:/storage/data/   /mnt
Mount OK.

# mount 192.168.0.101:/storage   /mnt
[tcp] 192.168.0.101:/storage: Permission denied

Edit: Actually I think the nfsv4 export is / which gets mapped to /storage/data/DIR_01.
Hmmm, that's not what is stated in the nfsv4(4) manual:
Rich (BB code):
     The NFSv4 protocol does not use a separate mount protocol and assumes
     that the server provides a single file system tree structure, rooted at
     the point in the local file system tree specified by one or more

           V4: <rootdir> [-sec=secflavors] [host(s) or net]

     line(s) in the exports(5) file.

Think of the V4: line as a mechanism to re-root the filesystem hierarchy that is exposed to the client.
To my understanding, the above statement from nfsv4(4) says that the NFSv4 file system tree root must be at a point in the local (FreeBSD) file system tree, and that the V4: line specifies that NFSv4 root point; it does not re-root the (FreeBSD) file system hierarchy (assuming this is what you meant).



Back to the problem: Is the "network" option in V4: respected or ignored?

The showmount(8) issue doesn't change the conclusion. On a third test run, the "network" option in the V4: line on an NFSv4-only server is still ignored.

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsv4_server_only="YES"

/etc/exports
V4: /storage/data/DIR_01 -network 192.168.43.115 -mask 255.255.255.0

On client:
Code:
# mount -o nfsv4 192.168.0.101:/ /mnt
mount_nfs: nmount: /mnt: Permission denied

When the "network" option is set in ZFS "sharenfs", the NFS share is mounted correctly.
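For reference, the variant that did mount correctly in these tests, as a sketch (addresses as in Test 2):

```shell
# /etc/exports contains only the root point, no network option:
#   V4: /storage/data/DIR_01

# on the server, restrict the export via ZFS instead of the V4: line:
zfs set sharenfs="network 192.168.0.115,mask 255.255.255.0" storage/data

# on the allowed client (server:/ maps to /storage/data/DIR_01):
mount -o nfsv4 192.168.0.101:/ /mnt
```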

The exports(5) manual actually supports what I'm trying to prove (I pointed that out in the other thread before).
Rich (BB code):
                                           ... The third form has the
     string ``V4:'' followed by a single absolute path name, to specify the
     NFSv4 tree root

     ...

     ... For the NFSv4 tree root, the
     only options that can be specified in this section are ones related to
     security: -sec, -tls, -tlscert and -tlscertuser.
 
Begging mefizto's pardon for continuing to derail his thread.

To my understanding, the above statement from nfsv4(4) says that the "NFSv4 files system tree root" must be at a point in the local (FreeBSD) file system tree, and the V4: line is specifying that NFSv4 root point, and not it is re-rooting the (FreeBSD) file system hierarchy (assuming this is what you meant).
Nah, I mean that the NFSv4 filesystem hierarchy is re-rooted by that line. So if you have
Code:
V4: /storage
/storage/data  -network client-net
You mount it with nfsv3 like this
Code:
mount server:/storage/data /mnt
But a v4 mount would use this server-side path
Code:
mount server:/data /mnt

Back to the problem: Is the "network" option in V4: respected or ignored?

The showmount(8) issue doesn't change the conclusion. On a third test run the "network" option in the V4: line on a NFSv4 only server is still ignored.

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
nfsv4_server_only="YES"

/etc/exports
V4: /storage/data/DIR_01 -network 192.168.43.115 -mask 255.255.255.0
What does the exported directory line look like?

The exports(5) manual actually supports what I'm trying to prove (I pointed that out in the other thread before).
Rich (BB code):
                                           ... The third form has the
     string ``V4:'' followed by a single absolute path name, to specify the
     NFSv4 tree root

     ...

     ... For the NFSv4 tree root, the
     only options that can be specified in this section are ones related to
     security: -sec, -tls, -tlscert and -tlscertuser.
Fair enough. I'm just wondering why so many of the examples in that man page specify a network on the V4: line.
 
Hi Lamia,
There is no reference; rather I had made suggestions based on your tree structure. One can have multiple 'V4: / .......' in the /etc/exports. You have seemed to have one common root /storage/, hence my suggestions.
Ah, O.K., understood. However, the structure was made up just for the sake of example. Please see further discussion below.

Hi Jose,
T-Daemon points out that the man pages are inconsistent, and appears to have actually tried it. You can only have one V4: line:
You are, of course, correct; I missed it, and T-Daemon confirmed it in a subsequent post. This begs a different question, see below.

Hi T-Daemon,
I hope mefizto doesn't mind that the discussion here deviates into lengthy details (walls of details, even), not necessarily important to the NFS issues of this thread.
I do not mind at all; just the opposite, I would suggest that this is one of the best treatments/explanations of the NFSv3 vs NFSv4 issues. I just wonder how to change the title, so that other people might benefit from this thread in the future.

I am sorry that I could not experiment, but I have been re-building the server (I ran out of space) and, given the amount of data, it will take a while, so I am at least trying to keep up with the thread. In that regard, I would like to return to the problem of mounting several (disjoint) shares.

Let us assume, for the sake of argument, that nfsv4(4) is incorrect, i.e., it is not an implementation problem, and only one line in /etc/exports is allowed. That would be a serious limitation. E.g., a scheme to mount /storage/data/DIR_01 limited to authorized users only, but /storage/music worldwide, would not be possible.

However, /etc/exports is not the only way to set the export, cf. https://klarasystems.com/articles/nfs-shares-with-zfs/ mentioned above. Could this approach be used?

Kindest regards,

M
 
Let us assume, for the sake of an argument that the nfsv4(4) is incorrect, i.e., it is not an implementation problem, and only one line in /etc/export is allowed. That would mean serious limitation. E.g., one's scheme to mount /storage/data/DIR_01 limited to authorized users only, but /storage/music worldwide would not be possible.
I don't think so. The V4: line does not export any filesystems. You still need an additional line in the exports file, even with V4:, and you can set permissions as you see fit on that export line. Also, the exports(5) page states:
(An) NFSv4 mount request for a directory that the client does not have permission for will succeed and read/write access will fail afterwards, whereas NFSv3 rejects the mount request.
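So a hedged sketch for the earlier /storage example: one V4: root point, with per-share permissions granted separately. Since the exported file systems in this thread are ZFS, the permissions go through "sharenfs" rather than export lines; networks and the "ro" option are placeholders, not a tested configuration:

```shell
# /etc/exports -- the single NFSv4 root point, no permissions here
#   V4: /storage

# per-share permissions on the server:
zfs set sharenfs="network 192.168.0.0,mask 255.255.255.0" storage/data
zfs set sharenfs="ro" storage/music
```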
However, /etc/exports is not the only way to set the export, cf. https://klarasystems.com/articles/nfs-shares-with-zfs/ mentioned above. Could this approach be used?
That's very interesting. I do wonder how ZFS's sharenfs interacts with NFSv4.

I need to get off my duff and actually try some of these things. I've been trying to get through a Kerberos install first, though. I forced myself to read all of the dialogue, and I think I finally have a grasp on how Kerberos works. Now it's time to try things out. I'm not always good at this part.
 