NFS exports

Sure, no problem. Set the appropriate variables in /etc/rc.conf:
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"

Define the NFSv4 tree root:

/etc/exports
Code:
# V4: /<nfsv4_tree_root>, example:

V4: /nfsshares

Define remote mount points for NFS mount requests via the /etc/exports file, or via ZFS sharenfs.

As for restrictions, search for NFSv3 in the exports(5) manual.
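For reference, restrictions use the same exports(5) options for both protocols. A hypothetical /etc/exports combining restricted NFSv3 shares with an NFSv4 tree root might look like the following; the paths and the network are placeholders, and the two exported paths are assumed to be separate file systems (e.g. separate ZFS datasets), since a file system may only be exported once per network:

```
# Read-only for one subnet (hypothetical path):
/nfsshares/public  -ro -network 192.168.2.0 -mask 255.255.255.0
# Whole subtree, with remote root mapped to root (hypothetical path):
/nfsshares/home  -alldirs -maproot=root -network 192.168.2.0/24
# NFSv4 tree root:
V4: /nfsshares
```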
 
I'm not familiar with what ZFS sharenfs does or how to invoke it.


I just found https://forums.freebsd.org/threads/zfs-set-sharenfs-multiple-hosts.94811/
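For what it's worth, sharenfs is a per-dataset ZFS property: when it is set, ZFS writes a corresponding line to /etc/zfs/exports, which mountd(8) reads alongside /etc/exports. A rough sketch of invoking it, reusing the pool/dataset names from this thread (the exports(5)-style option string is an assumption on my part):

```shell
# Show the current setting (the default is "off"):
zfs get sharenfs rpool/home/GRUPPE

# "on" exports the dataset's mountpoint with no options:
zfs set sharenfs=on rpool/home/GRUPPE

# Or pass exports(5)-style options instead of plain "on":
zfs set sharenfs="-alldirs -network 192.168.2.0/24" rpool/home/GRUPPE

# Stop sharing again:
zfs set sharenfs=off rpool/home/GRUPPE
```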
 
I read that but it's as clear as mud.
I'm not an expert, but I remember reading several articles about exporting NFS shares with ZFS before I got it working.

This is how it works in my local network:

The name of my zpool is rpool
I want to export the directory /home/GRUPPE with all subdirectories to all clients in my local network
I'm using NFSv3

Here are the relevant lines of my 'install_freebsd_server.sh' script:

Bash:
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"
zfs create rpool/home/GRUPPE
zfs set sharenfs=on rpool/home/GRUPPE
zfs set sharenfs="-alldirs,-network=192.168.2.0,-mask 255.255.255.0" rpool/home/GRUPPE
ln -s /etc/zfs/exports /etc/exports

I don't know if all the lines are necessary, but this works ...

I remember I had problems with the exports file, so I created a symlink from /etc/zfs/exports to /etc/exports.

and again ... I'm not a FreeBSD expert ...
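One way to check whether a setup like this took effect, using only tools from the FreeBSD base system (run on the server):

```shell
# List what mountd is currently exporting:
showmount -e localhost

# Check that the NFS daemons are up:
service nfsd status
service mountd status

# After changing exports or sharenfs, make mountd re-read its files:
service mountd reload
```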
 
If I want two shares, one for nfs3 and the other for nfs4 do I need to create two pools?
 
Rich (BB code):
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"
zfs create rpool/home/GRUPPE
zfs set sharenfs=on rpool/home/GRUPPE
zfs set sharenfs="-alldirs,-network=192.168.2.0,-mask 255.255.255.0" rpool/home/GRUPPE
ln -s /etc/zfs/exports /etc/exports

I don't know if all the lines are necessary but this works ...
The highlighted "mountd" and "rpcbind" services are unnecessary; those services are started automatically (NFSv4 doesn't even require "rpcbind"). All the other highlighted lines are unnecessary as well.

The "rpc_lockd" and "rpc_statd" services may or may not be enabled (see rpc.lockd(8) and rpc.statd(8) for details). NFSv4 configuration is missing in this case.

I remember I had problems with the exports file so I created a symlink from /etc/zfs/exports to /etc/exports
No idea what happened on your system, but there is no need for an /etc/exports file in an NFSv3 ZFS "sharenfs" scenario.
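To illustrate the point: with "sharenfs" alone, ZFS maintains /etc/zfs/exports by itself, and mountd(8) reads that file in addition to /etc/exports, so no hand-made file (and no symlink) is needed. A sketch, reusing the dataset name from above:

```shell
# Setting the property is all that is required:
zfs set sharenfs="-network 192.168.2.0/24" rpool/home/GRUPPE

# ZFS generates this file itself; do not edit it by hand:
cat /etc/zfs/exports
```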
 
balanga and Stefan2, the following example is a simple NFSv3 and NFSv4 ZFS "sharenfs" configuration:

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"

/etc/exports
Code:
# Assuming /home/GRUPPE is the mount point:

V4:  /home/GRUPPE
Here /home/GRUPPE is the NFSv4 tree root, which will be "/" on the client mount command (see exports(5) for details).

Think of the NFSv4 tree root as a file system root, with directories and files: "/", /dir0, /dir0/dirA, /file0, /file1, etc. Real paths: /home/GRUPPE/, /home/GRUPPE/dir0/dirA, /home/GRUPPE/file0, etc.

Create ZFS dataset and NFS share:
Code:
zfs create -o mountpoint=/home/GRUPPE rpool/home/GRUPPE
zfs set sharenfs="alldirs,network 192.168.2.0/24" rpool/home/GRUPPE

The difference between mounting NFSv3 and NFSv4 shares lies in the path and the "nfsv4" mount option (mount_nfs(8)).

Mounting NFSv3 on client:
Code:
# mount server:/home/GRUPPE /mnt

Mounting NFSv4 on client:
Code:
# mount -o nfsv4 server:/ /mnt
Note the "path" in "server:path" is "/", which represents the root of the NFSv4 tree, defined in /etc/exports with the V4: line.

On a Linux client:
Code:
% sudo mount.nfs4 server:/ /mnt
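To make those client mounts persistent, the matching /etc/fstab entries on a FreeBSD client could look like this ("server" and the mount point directories are placeholders):

```
# NFSv3 mount of the exported path:
server:/home/GRUPPE  /mnt/gruppe   nfs  rw        0  0
# NFSv4 mount of the tree root:
server:/             /mnt/gruppe4  nfs  rw,nfsv4  0  0
```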
 
If I want two shares, one for nfs3 and the other for nfs4 do I need to create two pools?
I don't know, but I guess you don't have to create a separate pool, just a separate ZFS dataset ...
If the same data is NFS shared, no separate dataset (and definitely no separate pool) is necessary for the two protocols. To mount a share with the NFSv4 protocol, define a NFSv4 tree root and use the "nfsv4" mount option as shown in the examples above. For NFSv3 mounting, see the examples above as well.
 
nfs works best when mounted on ufs (just saying)
That's not true. One can use /etc/exports to define the shares by directory path (e.g. /home/GRUPPE -network 192.168.2.0/24 -alldirs). Defined in this manner, the underlying file system is unimportant: it can be standard UFS directories, ZFS dataset mountpoints, or standard directories inside datasets.

And ZFS "sharenfs" is quite capable of defining shares just as well as the /etc/exports file.
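As a concrete example of exporting by directory path, where the underlying file system does not matter (the UFS path and the network are hypothetical):

```
# /etc/exports -- works the same for UFS and ZFS-backed paths
# A plain directory on UFS:
/usr/local/data  -ro -network 192.168.2.0/24
# A ZFS dataset mountpoint:
/home/GRUPPE  -alldirs -network 192.168.2.0/24
```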
 
On a Linux client:
Code:
% sudo mount.nfs4 server:/ /mnt
Mounting the server's root fs, especially mounting it read/write. What could possibly go wrong with that?!
 