NFS exports

Sure, no problem. Set the appropriate variables in /etc/rc.conf:
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"

Define the NFSv4 tree root:

/etc/exports
Code:
# V4: /<nfsv4_tree_root>, example:

V4: /nfsshares

Define remote mount points for NFS mount requests via the /etc/exports file or via the ZFS sharenfs property.

As for restrictions, search for NFSv3 in the exports(5) manual page.
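For instance, a sketch of an NFSv3 export line with restrictions under the V4: root above (the /nfsshares/data path and the network are placeholders, not from a real setup):

Code:
# /etc/exports
V4: /nfsshares
/nfsshares/data -ro -network 192.168.2.0 -mask 255.255.255.0

Here /nfsshares/data would be exported read-only to the 192.168.2.0/24 network only; see exports(5) for the full option list.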
 
I'm not familiar with what ZFS sharenfs does or how to invoke it.


I just found https://forums.freebsd.org/threads/zfs-set-sharenfs-multiple-hosts.94811/
 
I read that but it's as clear as mud.
I'm not an expert, but I remember reading several articles about exporting NFS shares with ZFS before I got it working.

This is how it works in my local network:

The name of my zpool is rpool
I want to export the directory /home/GRUPPE with all subdirectories to all clients in my local network
I'm using NFSv3

Here are the relevant lines of my 'install_freebsd_server.sh' script:

Bash:
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"
zfs create rpool/home/GRUPPE
zfs set sharenfs=on rpool/home/GRUPPE
zfs set sharenfs="-alldirs,-network=192.168.2.0,-mask 255.255.255.0" rpool/home/GRUPPE
ln -s /etc/zfs/exports /etc/exports

I don't know if all the lines are necessary but this works ...

I remember I had problems with the exports file so I created a symlink from /etc/zfs/exports to /etc/exports

and again ... I'm not a FreeBSD expert ...
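For what it's worth, a quick way to check which of those lines actually matter is to look at what got exported. A sketch (run as root on the server; the reload subcommand makes mountd re-read its exports files):

Code:
# service mountd reload
# showmount -e localhost

showmount -e lists the currently exported file systems, so you can see whether the sharenfs settings took effect.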
 
If I want two shares, one for NFSv3 and the other for NFSv4, do I need to create two pools?
 
The mountd_enable and rpcbind_enable lines are unnecessary; those services are started automatically along with the NFS server (NFSv4 doesn't even require rpcbind). The first zfs set sharenfs=on line (immediately overwritten by the second zfs set) and the ln -s symlink are unnecessary as well.

The rpc_lockd_enable and rpc_statd_enable lines may or may not be set (see rpc.lockd(8) and rpc.statd(8) for details). NFSv4 configuration is missing in this case.

I remember I had problems with the exports file so I created a symlink from /etc/zfs/exports to /etc/exports
No idea what happened on your system, but there is no need for an /etc/exports file in an NFSv3 ZFS "sharenfs" scenario.
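To see where those sharenfs settings actually end up, a sketch (the dataset name is just the example from above):

Code:
# zfs get sharenfs rpool/home/GRUPPE
# cat /etc/zfs/exports

On FreeBSD, ZFS writes the sharenfs options into /etc/zfs/exports, and mountd reads both /etc/exports and /etc/zfs/exports, which is why no symlink between the two is needed.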
 
balanga and Stefan2, the following example is a simple NFSv3 and NFSv4 ZFS "sharenfs" configuration:

/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"

/etc/exports
Code:
# Assuming /home/GRUPPE is the mount point:

V4:  /home/GRUPPE
Here /home/GRUPPE is the NFSv4 tree root, which will be "/" on the client mount command (see exports(5) for details).

Think of the NFSv4 tree root as a file system root, with directories and files: "/", /dir0, /dir0/dirA, /file0, /file1, etc. Real paths: /home/GRUPPE/, /home/GRUPPE/dir0/dirA, /home/GRUPPE/file0, etc.

Create ZFS dataset and NFS share:
Code:
zfs create -o mountpoint=/home/GRUPPE rpool/home/GRUPPE
zfs set sharenfs="alldirs,network 192.168.2.0/24" rpool/home/GRUPPE

The difference in mounting NFSv3 and NFSv4 shares is the path and the "nfsv4" mount option (mount_nfs(8)).

Mounting NFSv3 on client:
Code:
# mount server:/home/GRUPPE /mnt

Mounting NFSv4 on client:
Code:
# mount -o nfsv4 server:/ /mnt
Note the "path" in "server:path" is "/", which represents the root of the NFSv4 tree, defined in /etc/exports with the V4: line.

On a Linux client:
Code:
% sudo mount.nfs4 server:/ /mnt
 
If I want two shares, one for nfs3 and the other for nfs4 do I need to create two pools?
I don't know, but I guess you don't have to create a separate pool, just a separate ZFS dataset ...
If the same data is NFS shared, no separate dataset (and definitely no separate pool) is necessary for the two protocols. To mount a share with the NFSv4 protocol, define an NFSv4 tree root and use the "nfsv4" mount option as shown in the examples above. For NFSv3 mounting, see the examples above as well.
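To make that concrete, a sketch of one dataset served over both protocols at once (names as in the earlier example):

Code:
# /etc/exports
V4: /home/GRUPPE

Code:
zfs set sharenfs="alldirs,network 192.168.2.0/24" rpool/home/GRUPPE

Clients then mount server:/home/GRUPPE for NFSv3 and server:/ with -o nfsv4 for NFSv4; both land in the same data, from one dataset in one pool.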
 
nfs works best when mounted on ufs (just saying)
That's not true. One can use /etc/exports to define the shares by directory path (e.g. /home/GRUPPE -network 192.168.2.0/24 -alldirs). If defined in this manner, the underlying file system is unimportant: it can be plain UFS directories, ZFS dataset mountpoints, or standard directories inside datasets.

And ZFS "sharenfs" is quite capable of defining shares as well as the /etc/exports file can.
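As a sketch, here is the same share expressed both ways (pick one mechanism, not both; paths and network are the example values from this thread):

Code:
# /etc/exports -- works for UFS or ZFS paths
/home/GRUPPE -network 192.168.2.0/24 -alldirs

Code:
# ZFS sharenfs -- ZFS datasets only
zfs set sharenfs="alldirs,network 192.168.2.0/24" rpool/home/GRUPPE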
 
Mounting the server's root fs, especially mounting it read/write. What could possibly go wrong with that?!
 
There seems to be a misunderstanding. Throughout the entire instructions above, none of the server's local root file system is exported.

You’ve likely confused the NFSv4 tree root with the server’s root file system. Those are two entirely different things.

NFSv4 requires a tree root, which is defined in /etc/exports with the "V4:" line. When defined, it becomes the root of the NFSv4 exported file system (in this case, V4: /home/GRUPPE becomes "/" for NFSv4 mounts).

Hence the "/" path in server:path, e.g. mount -o nfsv4 server:/ /mnt.

The NFSv4 mount path is relative to the NFSv4 tree root, whereas an NFSv3 path must be specified as an absolute server path (see examples below).

nfsv4(4)
Code:
     The NFSv4 protocol does not use a separate mount protocol and assumes
     that the server provides a single file system tree structure, rooted at
     the point in the local file system tree specified by one or more

           V4: <rootdir> [-sec=secflavors] [host(s) or net]

     line(s) in the exports(5) file.  (See exports(5) for details.)

exports(5)
Rich (BB code):
     In the following example some directories are exported as NFSv3 and
     NFSv4:

           V4: /wingsdl/nfsv4
           /wingsdl/nfsv4/usr-ports -maproot=root -network 172.16.0.0 -mask 255.255.0.0
           /wingsdl/nfsv4/clasper   -maproot=root clasper

     Only one V4: line is needed or allowed to declare where NFSv4 is rooted.
     The other lines declare specific exported directories with their absolute
     paths given in /etc/exports.

     The exported directories' paths are used for both v3 and v4.  However,
     they are interpreted differently for v3 and v4.  A client mount command
     for usr-ports would use the server-absolute name when using nfsv3:

           mount server:/wingsdl/nfsv4/usr-ports /mnt/tmp

     A mount command using NFSv4 would use the path relative to the NFSv4
     root:

           mount server:/usr-ports /mnt/tmp
 
I want my v4 mount point to be /repo/backup.

This is specifically for Clonezilla, where I want my Clonezilla backups to go.

I think I went through this some time ago, but gave up because I couldn't make it work.

I may have created a pool, but am still struggling with ZFS, so I don't know how to tell if this is part of a pool.

If I run zfs list I see:

Code:
zroot/repo    96K  2.47T    96K  /repo

so it looks like I tried to get things set up but didn't manage to achieve what I wanted.
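Going by the recipe earlier in the thread, a sketch for this case. It assumes the zroot/repo dataset from the zfs list output above and that /repo/backup doesn't exist yet; the network is a placeholder to adjust:

Code:
zfs create zroot/repo/backup
sysrc nfs_server_enable="YES"
sysrc nfsv4_server_enable="YES"
zfs set sharenfs="network 192.168.2.0/24" zroot/repo/backup

Code:
# /etc/exports
V4: /repo/backup

After starting the NFS server (service nfsd start), a client would mount it as mount -o nfsv4 server:/ /mnt, and the Clonezilla images would land in /repo/backup. The V4: line alone exports nothing; the sharenfs line (or an equivalent /etc/exports line) does the actual exporting.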
 