PF: how can I set a rule to work around "keep state"?

I set a rule "pass out all keep state" so that outgoing packets from the local host are not blocked and the reply packets can pass back in.
For some reason this is not working for NFS. Is there a workaround to use instead of "keep state"? The outgoing packets have random source ports.
 
 
NFSv3 uses RPC, which opens random ports and is awful to firewall. Use NFSv4, which only uses TCP/2049 and no "dynamic" ports.
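If NFSv4 is an option, a minimal pf rule for it could look like the sketch below. The `int_if` and `nfs_server` macros are assumptions, not taken from the thread; adjust them to your own interface and server address.

```
# pf.conf sketch for NFSv4 (assumed macros, adjust to your network)
int_if     = "em0"
nfs_server = "192.168.1.10"

# NFSv4 multiplexes everything over a single TCP port, 2049
pass in on $int_if proto tcp from $int_if:network to $nfs_server port 2049 keep state
```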
 
Or keep NFS traffic local to the subnet it is used on. You usually don't want all file transfers between your clients and fileserver passing through your firewall...

(And I really REALLY hope OP isn't using NFS over the WAN/internet...)
 
Or keep NFS traffic local to the subnet it is used on. You usually don't want all file transfers between your clients and fileserver passing through your firewall...
Big networks typically have specific subnet(s) (One or more VLANs) for storage and if everything is firewalled between individual VLANs you're going to have to poke a hole in those firewalls. NFSv3 was a royal pain in the posterior due to the dynamic nature of the RPC ports (similar to FTP without the active/passive modes). NFSv4 'solved' this by doing everything on a single port in a single direction.

(And I really REALLY hope OP isn't using NFS over the WAN/internet...)
WAN isn't a problem as such, except that it typically has quite a bit of latency, making it fairly impractical. The internet? Yeah, don't do that.
 
It seems the TCP four-way FIN exchange completes and pf moves the state to FIN_WAIT_2.
When a "df" (or any other operation on the NFS mount) is performed, the NFS client initiates a new session (SYN) reusing the same source port.
pf drops/rejects that SYN because it doesn't match the existing state.
pf cleans up the FIN_WAIT_2 state only on the "tcp.closed" timeout, and that timer starts n seconds after the state entry (i.e. a new unique session) is created.
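One workaround, assuming the diagnosis above is correct, is to shorten pf's closing-state timeouts so the stale FIN_WAIT_2 state expires before the NFS client reuses the same source port. These are real pf.conf timeout options; the values below are only a suggestion:

```
# pf.conf: expire closing/closed TCP states faster (example values, in seconds)
set timeout tcp.finwait 10
set timeout tcp.closed  5
```

You can also kill the offending states by hand, e.g. `pfctl -k <nfs-server-address>`, to recover a hung mount without waiting for the timeout.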
 
NFS and PF
Add rc.conf entries to pin mountd to port 4046, rpc.lockd to 4045, and rpc.statd to 4047:

mountd_flags="-r -p 4046"
rpc_lockd_flags="-p 4045"
rpc_statd_flags="-p 4047"

These fixed ports can then be used in my packet filter rules, providing some protection to the NFS server.
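With the daemons pinned, rules along these lines could protect the server. The interface/network macros are assumptions; rpcbind on port 111 and nfsd on 2049 are included because NFSv3 still needs them:

```
# pf.conf sketch (assumed interface macro; ports match the rc.conf pins above)
int_if    = "em0"
nfs_ports = "{ 111, 2049, 4045, 4046, 4047 }"

# pf is last-match-wins: block by default, then allow the local network
block in on $int_if proto { tcp, udp } to ($int_if) port $nfs_ports
pass  in on $int_if proto { tcp, udp } from $int_if:network to ($int_if) port $nfs_ports keep state
```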

Important:
One drawback of using /etc/fstab is that the system must dedicate resources to keep each mounted file system in place. This is not a problem with one or two mounts, but when the system maintains mounts to many servers at once, overall system performance can suffer. An alternative to /etc/fstab is the kernel-based automount utility.

The automount utility mounts and unmounts NFS file systems on demand, saving system resources.
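For comparison, the static /etc/fstab entry that the automounter replaces would look something like this (hostname and paths are borrowed from the example that follows):

```
# /etc/fstab: a static NFS mount, kept in place by the system at all times
tormenta:/usr/backup/poolrecovery  /misc/poolrecovery  nfs  rw,soft,intr  0  0
```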
 
Something like this

Autofs consults the master map configuration file /etc/auto_master to determine which mount points are defined, then starts an automatic mounting process for each. Each line of the master map defines a mount point and a separate map file that describes the file systems to be mounted under that mount point. For example, /etc/auto.misc defines mount points in the /misc directory; this relationship is declared in /etc/auto_master.

Each entry in auto_master has two fields:
1. The mount point.
2. The location of the map file.

To mount the shared /usr/backup/poolrecovery directory of the tormenta server on the /misc/poolrecovery mount point, add the following line to the /etc/auto_master file:

/misc /etc/auto.misc

Then add this line to the /etc/auto.misc file:

poolrecovery -rw,soft,intr,tcp,rsize=8192,wsize=8192 \
tormenta:/usr/backup/poolrecovery

The first field in /etc/auto.misc is the name of the subdirectory under /misc, which is created dynamically by automount. The second field holds the mount options, and the third field is the NFS export location, including the hostname and directory.

The /misc directory must exist on the local file system, but it should contain no subdirectories; automount creates them dynamically.
Then we restart the service:

# service automountd restart
Stopping automountd.
Waiting for PIDS: 37558.
Starting automountd.

% mount | grep autofs
map -hosts on /net (autofs)
map /etc/auto.misc on /misc (autofs)

% ls /misc/poolrecovery
...
map -hosts on /net (autofs)
tormenta:/usr/backup/poolrecovery on /misc/poolrecovery (nfs, automounted)

Of course, you also need:

NFS over ZFS

# zfs create -o canmount=off zroot/usr/backup
# zfs get mounted zroot/usr/backup
NAME PROPERTY VALUE SOURCE
zroot/usr/backup mounted no -

# zfs create -o mountpoint=/usr/backup/poolrecovery zroot/usr/backup/poolrecovery

Regardless of the method chosen, /etc/exports must exist, so create it:

# touch /etc/exports

Start sharing by setting sharenfs=on:

# zfs set sharenfs=on zroot/usr/backup/poolrecovery

To stop sharing the data set, set sharenfs to off.

# zfs set sharenfs=off zroot/usr/backup/poolrecovery

Set a maproot user and restrict clients to the local network

# zfs set sharenfs="-maproot=0 192.168.88.51" zroot/usr/backup/poolrecovery

Using ZFS is less flexible for managing NFS exports because all allowed hosts get the same options.
ZFS automatically maintains the file /etc/zfs/exports:
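If you need per-host options, a hand-maintained /etc/exports gives you that flexibility. A hypothetical example, reusing the two hosts from this thread and granting the second one read-only access:

```
# /etc/exports: different options per client (second host is hypothetical)
/usr/backup/poolrecovery -maproot=0 192.168.88.51
/usr/backup/poolrecovery -ro 192.168.88.160
```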


# cat /etc/zfs/exports
# !!! DO NOT EDIT THIS FILE MANUALLY !!!

/usr/backup/poolrecovery -maproot=0 192.168.88.51


Enable NFS client (solaris)

/etc/hosts
...
192.168.88.160 solaris
192.168.88.51 tormenta
...

sysrc nfs_client_enable=YES

List all NFS exports available to a client

solaris:~ % showmount -e tormenta
Exports list on tormenta:
/usr/backup/poolrecovery 192.168.88.51
 