Quote: I'm not familiar with what ZFS sharenfs does or how to invoke it.

Sure, no problem. Set the appropriate variables in /etc/rc.conf:
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
Define the NFSv4 tree root:
/etc/exports
Code:
# V4: /<nfsv4_tree_root>, example:
V4: /nfsshares
Define remote mount points for NFS mount requests via the /etc/exports file, or via ZFS sharenfs.
As for restrictions, search for NFSv3 in the exports(5) manual page.
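Putting those pieces together, a hypothetical /etc/exports might look like this (the directory names and subnet here are invented examples, not a recommendation; adjust to your system and see exports(5)):
Code:
# NFSv3 export of one directory, read-only, restricted to the local subnet:
/usr/local/media -ro -network 192.168.2.0 -mask 255.255.255.0
# NFSv4 tree root; NFSv4 clients mount paths relative to this directory:
V4: /nfsshares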
I read that but it's as clear as mud.
NFS Shares with ZFS - Klara Systems: Learn to configure and manage NFS shares with OpenZFS in FreeBSD. This article covers using the sharenfs property for easy NFS management. (klarasystems.com)
Quote: I read that but it's as clear as mud.

I'm not an expert, but I remember reading several articles about exporting NFS shares with ZFS before it worked.
This is how it works in my local network:
The name of my zpool is rpool
I want to export the directory /home/GRUPPE with all subdirectories to all clients in my local network
I'm using NFSv3
Here are the relevant lines of my 'install_freebsd_server.sh' script:
Bash:
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"
zfs create rpool/home/GRUPPE
zfs set sharenfs=on rpool/home/GRUPPE
zfs set sharenfs="-alldirs,-network=192.168.2.0,-mask 255.255.255.0" rpool/home/GRUPPE
ln -s /etc/zfs/exports /etc/exports
I don't know if all the lines are necessary but this works ...
I remember I had problems with the exports file so I created a symlink from /etc/zfs/exports to /etc/exports
and again ... I'm not a FreeBSD expert ...
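If it helps anyone checking a setup like the one above, this is one way to verify the share from a client (the hostname "server" is a placeholder for your NFS server; dataset and path as in the script above):
Code:
# List what the server exports (NFSv3):
showmount -e server
# Try the mount and look at the contents:
mount server:/home/GRUPPE /mnt
ls /mnt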
Rich (BB code):
sysrc nfs_server_enable="YES"
sysrc mountd_enable="YES"
sysrc rpcbind_enable="YES"
sysrc rpc_lockd_enable="YES"
sysrc rpc_statd_enable="YES"
zfs create rpool/home/GRUPPE
zfs set sharenfs=on rpool/home/GRUPPE
zfs set sharenfs="-alldirs,-network=192.168.2.0,-mask 255.255.255.0" rpool/home/GRUPPE
ln -s /etc/zfs/exports /etc/exports

The red-highlighted "mountd" and "rpcbind" services are unnecessary; those services are started automatically (NFSv4 doesn't even require "rpcbind"). All the other red-highlighted lines are unnecessary as well.
Quote: I don't know if all the lines are necessary but this works ...

Quote: I remember I had problems with the exports file so I created a symlink from /etc/zfs/exports to /etc/exports

No idea what happened on your system, but there is no need for an /etc/exports file in an NFSv3 ZFS "sharenfs" scenario.
Quote: If I want two shares, one for nfs3 and the other for nfs4 do I need to create two pools?

Quote: I don't know but I guess you don't have to create a separate pool but a separate zfs dataset ...

If the same data is NFS shared, no separate dataset (and definitely no separate pool) is necessary for the two protocols. To mount a share with the NFSv4 protocol, define an NFSv4 tree root and use the "nfsv4" mount option as shown in the examples above. For NFSv3 mounting, see the examples above as well.
Quote: nfs works best when mounted on ufs (just saying)

Oh really?!

That's not true. One can use /etc/exports to define the shares by directory path (e.g. /home/GRUPPE -network 192.168.2.0/24 -alldirs). If defined in this manner, the file system is unimportant: it can be standard UFS directories, ZFS dataset mountpoints, or standard directories inside datasets.
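To illustrate, a hypothetical /etc/exports mixing file systems (both paths and the subnet are made-up examples):
Code:
# A plain UFS directory:
/usr/home/shared -network 192.168.2.0/24 -alldirs
# A ZFS dataset mountpoint, exported exactly the same way:
/home/GRUPPE -network 192.168.2.0/24 -alldirs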
Mounting the server's root fs, especially mounting it read/write. What could possibly go wrong with that?!

balanga and Stefan2, the following example is a simple NFSv3 and NFSv4 ZFS "sharenfs" configuration:
/etc/rc.conf
Code:
nfs_server_enable="YES"
nfsv4_server_enable="YES"
/etc/exports
Here /home/GRUPPE is the NFSv4 tree root, which will be "/" in the client mount command (see exports(5) for details).
Code:
# Assuming /home/GRUPPE is the mount point:
V4: /home/GRUPPE
Think of the NFSv4 tree root as a file system root, with directories and files: "/", /dir0, /dir0/dirA, /file0, /file1, etc. Real paths: /home/GRUPPE/, /home/GRUPPE/dir0/dirA, /home/GRUPPE/file0, etc.
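Continuing that picture, an NFSv4 client can also mount a subtree directly, since the path in the mount command is interpreted relative to the tree root (directory names as in the example above; "server" is a placeholder):
Code:
# Mounts /home/GRUPPE/dir0 on the server:
# mount -o nfsv4 server:/dir0 /mnt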
Create ZFS dataset and NFS share:
Code:
zfs create -o mountpoint=/home/GRUPPE rpool/home/GRUPPE
zfs set sharenfs="alldirs,network 192.168.2.0/24" rpool/home/GRUPPE
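On the server side, a quick sanity check (dataset name as above) is to look at the sharenfs property and at the exports file ZFS generates:
Code:
# zfs get sharenfs rpool/home/GRUPPE
# cat /etc/zfs/exports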
The differences between mounting NFSv3 and NFSv4 shares are the path and the "nfsv4" mount option (see mount_nfs(8)).
Mounting NFSv3 on client:
Code:
# mount server:/home/GRUPPE /mnt
Mounting NFSv4 on client:
Note the "path" in "server:path" is "/", which represents the root of the NFSv4 tree, defined in /etc/exports with the V4: line.
Code:
# mount -o nfsv4 server:/ /mnt
On a Linux client:
Code:
% sudo mount.nfs4 server:/ /mnt
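For a persistent mount on a FreeBSD client, an /etc/fstab line along these lines should work ("server" is a placeholder; see mount_nfs(8) for further options):
Code:
server:/  /mnt  nfs  rw,nfsv4  0  0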
Quote: Oh really?!

Sorry, my bad. It was when you mount ZFS onto NFS that things can go wrong.