NFS v3 & v4 on same server

Is it possible to have v3 and v4 NFS shares on the same server?

If so, an example of /etc/exports and/or /etc/rc.conf would be useful.
 
If I try to mount an NFS share from Android, does it default to V4?
A quick web search shows that mounting NFSv4 on Android requires the Android device's Linux NFS kernel module to support NFSv4. If it does, the version used depends on the mount arguments (options).

Android client, hypothetical use case (mount(8) here is the Linux command; see also nfs(5) EXAMPLES):

NFSv4: mount -t nfs4 nfsserver:/path [mountpoint]
NFSv3: mount nfsserver:/path [mountpoint]
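
On Linux the version can also be pinned with the vers mount option instead of the nfs4 file system type. A minimal sketch (whether this works on a given Android device still depends on its kernel and mount binary):

Code:
mount -t nfs -o vers=4 nfsserver:/path /mountpoint
mount -t nfs -o vers=3 nfsserver:/path /mountpoint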
 
I'm still unclear as to whether you can have both v3 and v4 shares in the same /etc/exports...

Do you need to include V4: /somefolder

before each share that is meant to be a v4 share, and is it otherwise defined as a V3 share?
 
I'm still unclear as to whether you can have both v3 and v4 shares in the same /etc/exports...
You can. I have a test VM running that has shares of both NFS versions exported from the same file:

NFS Server

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"

/etc/exports
Code:
V4: /data

# NFSv4
/data/nfsShare

# NFSv3
/usr/src


NFS Client

Nothing NFS-related in /etc/rc.conf.

Code:
mount -o nfsv4 14nfss:/nfsShare   /mnt
mount 14nfss:/usr/src   /media

mount | grep -E 'mnt|media'
14nfss:/nfsShare on /mnt (nfs, nfsv4acls)
14nfss:/usr/src on /media (nfs)

Code:
nfsstat -m
14nfss:/nfsShare on /mnt
nfsv4,minorversion=2,tcp,resvport,nconnect=1,hard,cto,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=4194304,timeout=120,retrans=2147483647
14nfss:/usr/src on /media
nfsv3,tcp,resvport,nconnect=1,hard,cto,lockd,sec=sys,acdirmin=3,acdirmax=60,acregmin=5,acregmax=60,nametimeo=60,negnametimeo=60,rsize=65536,wsize=65536,readdirsize=65536,readahead=1,wcommitsize=4194304,timeout=120,retrans=2

Do you need to include V4: /somefolder

before each share that is meant to be a v4 share, and is it otherwise defined as a V3 share?
No, only one V4: root directory can be specified. The V4: line doesn't export any file system; to export NFSv4 directories (ZFS data sets), they must be set on separate lines (see exports(5) EXAMPLES and nfsv4(4)).

If NFSv3 should run on the same server, V4: can't be "/" but must be a subdirectory (or, on ZFS, a data set with its designated mountpoint).

All NFSv4 shares must be under that subdirectory, for example:

/etc/exports
Code:
V4: /storage

# NFSv4
/storage/packages14      -ro    -network 192.168.1.0
/storage/packages13      -ro    -network 192.168.1.0

# NFSv3
/usr/ports               -ro    -network 192.168.1.0
/usr/src-14.0            -ro    -network 192.168.1.0
/usr/obj                 -ro    -network 192.168.1.0

/usr/ports               -ro    -network 10.0.2.0
/usr/src-14.0            -ro    -network 10.0.2.0
/usr/obj                 -ro    -network 10.0.2.0
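
With such a file, a client on the 192.168.1.0 network would mount the shares like this (a sketch with a hypothetical server address; note the NFSv4 path is relative to the V4: root /storage):

Code:
mount -o nfsv4,ro 192.168.1.14:/packages14 /mnt
mount -o ro 192.168.1.14:/usr/ports /media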
 
/etc/rc.conf contains:

nfs_server_enable="YES"
nfsv4_server_enable="YES"

Until now I did not have a V4: line in /etc/exports.

After including one, I don't see any way to start the V4 server or check its status.

Neither service nfs4vd nor service nfsv4_server, among several other combinations, works. Should service nfsd restart start both services?

And how to tell if my /etc/exports has been activated?
showmount -e 'nfsserver' shows the original setting.
 
And how to tell if my /etc/exports has been activated?
mountd(8) must be restarted to read the modified /etc/exports file.

service mountd onerestart

"onerestart" (and not restart) because "mountd_enable" is not set in /etc/rc.conf, it doesn't need to be, as doesn't /etc/rc.d/rpcbind, as adviced in the handbook [1].

/etc/rc.d/nfsd force-starts /etc/rc.d/mountd, and mountd in turn force-starts /etc/rc.d/rpcbind.
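
There is no separate NFSv4 daemon to start; with nfsv4_server_enable="YES" set, nfsd itself serves version 4. A sketch of how status can be checked (onestatus because mountd isn't enabled in rc.conf; the sysctl names are those of recent FreeBSD):

Code:
service nfsd status
service mountd onestatus
sysctl vfs.nfsd.server_min_nfsvers vfs.nfsd.server_max_nfsvers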

If the NFS shares were exported via the ZFS "sharenfs" property (zfsprops(7)), every "sharenfs" option change would be immediately recorded in the export file (/etc/zfs/exports) and read automatically by mountd(8); no service restart would be necessary.

Don't edit /etc/zfs/exports manually!
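
Setting the property looks like this (a minimal sketch with a hypothetical data set; the value is in exports(5) syntax):

Code:
zfs set sharenfs="-maproot=root -network 192.168.1.0 -mask 255.255.255.0" zroot/repo
zfs get sharenfs zroot/repo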



[1]
32.3.1. Configuring the Server

To enable the processes required by the NFS server at boot time, add these options to /etc/rc.conf:

rpcbind_enable="YES"
nfs_server_enable="YES"
mountd_enable="YES"
 
Is there any way to make this work?

Code:
V4: /repo
/repo/backup/clonezilla -mapall="root"


/ -mapall="root"

I like to mount my server root on /net to give me ready access to it when doing some coding/file management. But I need to set up a V4 share to do backups via Clonezilla.



I don't have rpcbind or mountd enabled, but nfsd starts OK without them. I guess they get started automatically.
 
Bummer!

Working only with ZFS, I've neglected UFS. All my systems are ZFS. The above examples in /etc/exports are separate ZFS data sets (and a subdirectory on a data set, /usr/obj), meaning separate file systems.

A UFS one-partition system installation is one file system. A given file system can be exported only as NFSv4 or NFSv3, not both at the same time; mixing versions requires the exported shares to be on separate file systems, i.e. on separate partitions or disks.

In the case of a UFS one-partition installation, the above example won't work, since "/" and "/repo/backup/clonezilla" are on the same file system.

It would work if "/" were on one partition/disk and "/repo/backup/clonezilla" on another.

On ZFS it's different. Every data set within the pool can be a separate file system, for example:

Code:
zfs create -o mountpoint=/repo zroot/repo
mkdir -p /repo/backup/clonezilla

Or "backup" and "clonezilla" can be separate data sets (after creating zroot/repo):
Code:
zfs create -p zroot/repo/backup/clonezilla

Now "/repo" and "/" are separate file systems and can be exported with different NFS version protocols as shown in the above examples.
Code:
NAME                           MOUNTPOINT
zroot/ROOT/default             /
zroot/repo                     /repo
zroot/repo/backup              /repo/backup
zroot/repo/backup/clonezilla   /repo/backup/clonezilla

Sorry for the confusion.
 

Actually my server is using ZFS...

Code:
root@M73:~ # mount
zroot/ROOT/default on / (zfs, NFS exported, local, noatime, nfsv4acls)
devfs on /dev (devfs)
zroot/var/audit on /var/audit (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/home on /usr/home (zfs, local, noatime, nfsv4acls)
zroot on /zroot (zfs, local, noatime, nfsv4acls)
zroot/var/crash on /var/crash (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/usr/ports on /usr/ports (zfs, local, noatime, nosuid, nfsv4acls)
zroot/var/tmp on /var/tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/tmp on /tmp (zfs, local, noatime, nosuid, nfsv4acls)
zroot/usr/src on /usr/src (zfs, local, noatime, nfsv4acls)
zroot/var/log on /var/log (zfs, local, noatime, noexec, nosuid, nfsv4acls)
zroot/var/mail on /var/mail (zfs, local, nfsv4acls)


Should my /etc/exports be written differently?




Now "/repo" and "/" are separate file systems and can be exported with different NFS version protocols as shown in the above examples.
Code:
NAME                           MOUNTPOINT
zroot/ROOT/default             /
zroot/repo                     /repo
zroot/repo/backup              /repo/backup
zroot/repo/backup/clonezilla   /repo/backup/clonezilla

Sorry for the confusion.
No problem, I'm very grateful for the guidance.
 
Actually my server is using ZFS...
Good, glad to hear.

Should my /etc/exports be written differently?
It can stay that way; you only need to create a "repo" data set to put it on a file system separate from "/" (which is zroot/ROOT/default).

Code:
zfs create -o mountpoint=/repo zroot/repo
mkdir -p /repo/backup/clonezilla
Or "backup" and "clonezilla" can be separate data sets (after creating zroot/repo):
Code:
zfs create -p zroot/repo/backup/clonezilla

It looks (from the mount(8) output above, which is missing a /repo mount) as if /repo/backup/clonezilla is just a directory tree created with mkdir(1). If /repo is the root of V4:, then "/" can't be exported as NFSv3, because they are part of the same file system (zroot/ROOT/default).

Again, on ZFS every file system data set is a separate file system, and each of those file systems can be exported with different NFS properties.
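
A quick sanity check: df(1) prints the file system each path lives on in its first column, so two paths on the same data set are easy to spot. A sketch:

Code:
df -h / /repo/backup/clonezilla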
 
Thanks for these instructions. My knowledge of ZFS is extremely limited and I didn't know that such things could be done. I'm also rather wary of experimenting, since my system is on a 4 TB disk and I'm afraid of breaking something. However, I followed your instructions but was unable to connect to the V4 share.

Code:
[root@X1 ~]# showmount -e 192.168.1.14
Exports list on 192.168.1.14:
/                                  Everyone
/repo/backup/clonezilla            Everyone


[root@X1 ~]# mount -o nfsv4 192.168.1.14:/repo/backup/clonezilla /mnt/tmp
mount_nfs: nmount: /mnt/tmp, Invalid fstype: Invalid argument

mount now includes:

zroot/repo on /repo (zfs, NFS exported, local, noatime, nfsv4acls)

I have restarted nfsd and mountd. Maybe something else needs to be done...
 
Noticed this in /var/log/daemon.log:

Feb 28 00:12:22 M73 mountd[71546]: bad exports list line '/repo/backup/clonezilla': symbolic link in export path or statfs failed
 
mount -o nfsv4 192.168.1.14:/repo/backup/clonezilla /mnt/tmp
The path in server:path is specified incorrectly.

When a share is exported with the NFSv4 protocol, the path in server:path is relative to the V4: root, meaning the path begins with what lies beneath that rootdir.

In the case of V4: /repo the path begins after /repo.

When the exported share is /repo/backup/clonezilla, the path in server:path for the mount command begins with /backup:

mount -o nfsv4 192.168.1.14:/backup/clonezilla /mnt/tmp

This is mentioned in nfsv4(4) (although not very clearly):
Code:
DESCRIPTION
    
     Since the NFSv4 file system is rooted at ``<rootdir>'', setting this to
     anything other than ``/'' will result in clients being required to use
     different mount paths for NFSv4 than for NFS Version 2 or 3.

It's explained more clearly in exports(5):
Code:
EXAMPLES

     In the following example some directories are exported as NFSv3 and
     NFSv4:

           V4: /wingsdl/nfsv4
           /wingsdl/nfsv4/usr-ports -maproot=root -network 172.16.0.0 -mask 255.255.0.0
           /wingsdl/nfsv4/clasper   -maproot=root clasper

     Only one V4: line is needed or allowed to declare where NFSv4 is rooted.
     The other lines declare specific exported directories with their absolute
     paths given in /etc/exports.

     The exported directories' paths are used for both v3 and v4.  However,
     they are interpreted differently for v3 and v4.  A client mount command
     for usr-ports would use the server-absolute name when using nfsv3:

           mount server:/wingsdl/nfsv4/usr-ports /mnt/tmp

     A mount command using NFSv4 would use the path relative to the NFSv4
     root:

           mount server:/usr-ports /mnt/tmp

Notice the V4: rootdir in the exports example, and how the exported share's path appears there and in the NFSv4 mount command at the bottom.
 