ZFS - How to export the contents of ZFS recursively over NFS?

angelvg

Member

Reaction score: 7
Messages: 36

Hello friends,

I need to export /home over NFS to a remote server...

The creation process was as follows:

Create the pool called 'storage'
# zpool create storage da1

See the pool mounted
# df -h | egrep 'Filesystem|storage '
Code:
Filesystem                              Size    Used   Avail Capacity  Mounted on
storage                                 6.4T    628G    5.8T    10%    /storage

Creation of ZFS datasets (file systems)
# zfs create storage/home

See the ZFS dataset
# df -h | egrep 'Filesystem|storage |storage/home '
Code:
Filesystem                              Size    Used   Avail Capacity  Mounted on
storage                                 6.4T    628G    5.8T    10%    /storage
storage/home                            5.8T    1.6G    5.8T     0%    /storage/home

Creation of more ZFS datasets (file systems)
Code:
# zfs create storage/home/mvilla
# zfs create storage/home/vcordoba

See the ZFS dataset
# zfs list
Code:
NAME                                   USED  AVAIL  REFER  MOUNTPOINT
storage                               8.58T  7.17T   628G  /storage
storage/home                           549G  7.18T  1.65G  /storage/home
storage/home/vcordoba                   96K  10.0G    96K  /storage/home/vcordoba
storage/home/mvilla                   41.6G  7.18T  41.6G  /storage/home/mvilla

See the ZFS dataset mounted
# df -h | egrep 'Filesystem|storage |storage/home |storage/home/mvilla|vcordoba'
Code:
Filesystem                              Size    Used   Avail Capacity  Mounted on
storage                                 6.5T    628G    5.8T     9%    /storage
storage/home                            5.8T    1.6G    5.8T     0%    /storage/home
storage/home/vcordoba                    10G     96K     10G     0%    /storage/home/vcordoba
storage/home/mvilla                     5.9T     42G    5.8T     1%    /storage/home/mvilla

The reason for doing it that way was to be able to control the disk quota for every user.

I am trying to share it like this:
# cat /etc/exports
Code:
/storage          -alldirs -network 10.2.1.10/24

From a remote server, view the shares
# showmount -e alpha
Code:
Exports list on alpha:
/storage          10.2.1.10

Mount the share
# mount_nfs -o nfsv3 alpha:/storage /mnt

View the mounted data
# ls /mnt
Code:
home

View the data inside /home
# ls /mnt/home
Code:
(none here)

My question is: how would you go about exporting the content recursively over NFS?

Thanks.
 

sko

Aspiring Daemon

Reaction score: 400
Messages: 708

As with other filesystems and NFS, each filesystem has to be explicitly exported.

Also, with ZFS you usually don't use /etc/exports but rather just set the sharenfs property of the dataset.
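
A minimal sketch of that approach, using the dataset names from this thread (on FreeBSD the sharenfs value takes the same flags as exports(5); the network/mask values are from the original post):

```shell
# Share each dataset through its sharenfs property instead of /etc/exports;
# every child dataset that should be visible over NFSv3 needs its own share
zfs set sharenfs="-network 10.2.1.10 -mask 255.255.255.0" storage/home
zfs set sharenfs="-network 10.2.1.10 -mask 255.255.255.0" storage/home/mvilla
zfs set sharenfs="-network 10.2.1.10 -mask 255.255.255.0" storage/home/vcordoba
# Confirm what the NFS server now exports
showmount -e localhost
```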
 

mer

Aspiring Daemon

Reaction score: 334
Messages: 546

sko said: "As with other filesystems and NFS, each filesystem has to be explicitly exported. Also with ZFS you usually don't use /etc/exports but rather just set the sharenfs property of the dataset."

Since properties are inherited, you should be able to set the sharenfs property at the top level, perhaps on storage/home.
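
Sketched with the datasets from this thread; the children pick the value up through inheritance, which can be checked with `zfs get -r`:

```shell
# Set the property once at the top of the subtree...
zfs set sharenfs="-network 10.2.1.10 -mask 255.255.255.0" storage/home
# ...then verify: mvilla and vcordoba should report
# SOURCE "inherited from storage/home"
zfs get -r sharenfs storage/home
```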
 
OP
angelvg

Member

Reaction score: 7
Messages: 36

mer

I tried:

# cat /etc/exports
Code:
/storage/home                   -alldirs -network 10.2.1.10/24
# service mountd reload

# showmount -e alpha
Code:
Exports list on alpha:
/storage/home                      10.2.1.10

# mount_nfs -o nfsv3 alpha:/storage/home /mnt
# ls /mnt
Code:
vcordoba
mvilla

# ls /mnt/vcordoba
Code:
(none here)

# ls /mnt/mvilla
Code:
(none here)

Any idea?
 

mer

Aspiring Daemon

Reaction score: 334
Messages: 546

So getting to the output of ls /mnt, you see the directories that are exported; that's good as a first step.
I'm assuming that there are files in the vcordoba and mvilla directories (when you are logged into that host system).
On the client system are you logged in as root or as a normal user?
You may need to map UIDs/GIDs between the client and server. That should be reasonably standard NFS stuff.
 
OP
angelvg

Member

Reaction score: 7
Messages: 36

sko

I tried this:

On the ALPHA server
# zfs set sharenfs='ro=@10.2.1.10/24' storage/home

Note:
To undo 'zfs set sharenfs', the command is:
# zfs set sharenfs='off' storage/home
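
As an aside, `zfs inherit` is another way to clear a locally set property; it drops the local value and falls back to whatever the parent dataset (or the default, off) provides:

```shell
# Clear the local sharenfs setting on storage/home
zfs inherit sharenfs storage/home
```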

# showmount -e alpha
Code:
Exports list on localhost:
/storage/home/vcordoba             Everyone
/storage/home/mvilla               Everyone


Remotely
# showmount -e alpha
Code:
vcordoba
mvilla

# mount_nfs -o nfsv3 alpha:/storage/home /mnt
# ls /mnt/vcordoba
Code:
(none here)

Any idea?
 

T-Daemon

Daemon

Reaction score: 878
Messages: 1,753

storage
storage/home
storage/home/mvilla
storage/home/vcordoba
are separate file systems (datasets), as sko said. If using /etc/exports, all of them need to be exported one by one:
Code:
/storage/home/mvilla      -alldirs -network 10.2.1.10/24
/storage/home/vcordoba    -alldirs -network 10.2.1.10/24

They also have to be mounted separately:
Code:
mount alpha:/storage/home/mvilla  /mnt
mount alpha:/storage/home/vcordoba  /mnt2

If using ZFS sharenfs instead of /etc/exports, assuming child data sets of storage/home to be shared:
Code:
zfs set sharenfs="-alldirs -network 10.2.1.10/24" storage/home
All child datasets will inherit it from storage/home.

If you want everything under storage/home exported and mounted together, create a single dataset storage/home instead and use mkdir(1) to create the sub-directories /storage/home/mvilla and /storage/home/vcordoba. Per-user disk quotas won't be possible in that case.
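
With roughly 200 users, those exports(5) lines can be generated rather than typed; a sketch that only prints them into a preview file (the dataset list is hard-coded here; on the server it could come from `zfs list -H -o name -r storage/home`):

```shell
#!/bin/sh
# Print one /etc/exports line per user dataset; write a preview
# file instead of touching /etc/exports directly
for ds in storage/home/mvilla storage/home/vcordoba; do
    printf '/%s -alldirs -network 10.2.1.10/24\n' "$ds"
done > exports.preview
cat exports.preview
```

After reviewing the preview, it can be appended to /etc/exports and picked up with `service mountd reload`.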
 

mtu

Active Member

Reaction score: 115
Messages: 166

I solved this problem by using NFSv4. It exports all sub-datasets automatically, and my (Linux) clients also mount them automatically on access.
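
For reference, a minimal sketch of that NFSv4 setup on the FreeBSD side (paths follow this thread; per exports(5) the individual file systems still need export lines, but an NFSv4 client sees them under one mount):

```shell
# /etc/rc.conf on the server
nfs_server_enable="YES"
nfsv4_server_enable="YES"
mountd_enable="YES"

# /etc/exports: the V4: line declares the NFSv4 root,
# the remaining lines export the individual file systems
V4: /storage -network 10.2.1.10 -mask 255.255.255.0
/storage/home           -network 10.2.1.10 -mask 255.255.255.0
/storage/home/mvilla    -network 10.2.1.10 -mask 255.255.255.0
/storage/home/vcordoba  -network 10.2.1.10 -mask 255.255.255.0

# On the client: NFSv4 paths are relative to the V4: root,
# so /storage/home is mounted as alpha:/home
mount -t nfs -o nfsv4 alpha:/home /mnt
```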
 
OP
angelvg

Member

Reaction score: 7
Messages: 36

Hi friends...

I am testing another approach (Samba)

-------------------------------------------------------
ALPHA SERVER


# edit /usr/local/etc/smb4.conf

Code:
[...]
[storage]
   comment = All groups
   path = /storage      
   valid users = angel              
   read only = yes
[...]

-------------------------------------------------------
ALPHATMP SERVER


# mount_smbfs //angel@alpha/storage /mnt

Code:
Password: ******


# ls /mnt

Code:
[...]
home
[...]


# ls /mnt/home/mvilla

Code:
[...]
(user information can be viewed)
[...]


# umount /mnt



# edit ~/.nsmbrc

Code:
[ALPHA:ANGEL]
password=secretpasswordhere

I created a script


# edit ~/Rsync-FreeBSD-1-to-FreeBSD-2.sh


Code:
#!/bin/sh

# [...]

# Set variables:
 STARTING="$(date)"

# The path for each executable file, use the command 'which program-name'
 MOUNT_SMBFS='/usr/sbin/mount_smbfs'
 RSYNC='/usr/local/bin/rsync'
 UMOUNT='/sbin/umount'

# The paths for the source and target directories
 SOURCE1='/mnt/home'
 TARGET1='/storage1/home/CORP'
 
# Others
#RATE='1024'            #    1024 =   1MB/sec.
#RATE='10240'           #   10240 =   10MB/sec.
#RATE='102400'          #  102400 =  100MB/sec.
#RATE='1024000'         # 1024000 = 1000MB/sec.
 RATE='65536'           #   65536 =   64MB/sec. (64 Megabyte per second = 512 Megabit per second)

# Working...

# Mount remote server
 "${MOUNT_SMBFS}" //angel@alpha/storage /mnt

# Sync up users (about 200 users)
#"${RSYNC}" --verbose --archive --human-readable --progress --stats --bwlimit="${RATE}"  "${SOURCE1}"/mvilla/ "${TARGET1}"/miguel.villa/
#"${RSYNC}" --verbose --archive --human-readable --progress --stats --bwlimit="${RATE}"  "${SOURCE1}"/vcordoba/ "${TARGET1}"/victor.cordoba/
# we continue with the other users in a similar way...

 "${UMOUNT}" /mnt

# Some information
 echo "${STARTING}"
 echo "$(date)"

Make it executable

# chmod +x ~/Rsync-FreeBSD-1-to-FreeBSD-2.sh


Example to run the script

# ~/Rsync-FreeBSD-1-to-FreeBSD-2.sh


-------------------------------------------------------

Please note:
On the ALPHA server the users are named like mvilla, while on ALPHATMP they are named like miguel.villa.
A dataset needs to be created on the ALPHATMP server for each user, with a quota set, for example:

# zfs create storage1/home/CORP/miguel.villa
# zfs set quota=20G storage1/home/CORP/miguel.villa
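
With about 200 users that step can be scripted too; a dry-run sketch that only prints the commands (remove the `echo` to actually execute them; the two names are examples from this thread):

```shell
#!/bin/sh
# Dry run: print the zfs commands that would create a dataset
# with a 20G quota per user, without executing anything
for user in miguel.villa victor.cordoba; do
    echo zfs create "storage1/home/CORP/${user}"
    echo zfs set quota=20G "storage1/home/CORP/${user}"
done
```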


For now it is working this way!

Thank you all for your attention and interest.
 
OP
angelvg

Member

Reaction score: 7
Messages: 36

I have found that, apparently, using rsync over Samba causes the ALPHATMP server to crash.

I am testing using rsync over ssh.

I will comment on the result later!
 