NATD: user connection speed becomes slow

Hello, friends!

I have a problem.

After a month of uptime, user connections become slow. When I restart natd, the connection speed returns to normal. There are no kernel messages or errors at all.

Here is my natd config:

Code:
deny_incoming yes                                                                                                                           
use_sockets yes                                                                                                                            
same_ports yes                                                                                                                             
dynamic yes
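
natd itself is enabled in /etc/rc.conf roughly like this (a sketch; the config-file path here is an assumption of the example):

Code:
natd_enable="YES"
natd_interface="bge0"
natd_flags="-f /etc/natd.conf"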

FreeBSD version: FreeBSD 7.0-RELEASE-p5

Here are the kernel options:

Code:
options         IPFIREWALL              #firewall                                                                                           
options         IPFIREWALL_VERBOSE      #enable logging to syslogd(8)                                                                       
options         IPFIREWALL_VERBOSE_LIMIT=100    #limit verbosity                                                                            
#options        IPFIREWALL_DEFAULT_TO_ACCEPT    #allow everything by default                                                                
options         IPFIREWALL_FORWARD      #packet destination changes                                                                         
options         IPFIREWALL_NAT          #ipfw kernel nat support                                                                            
options         IPDIVERT                #divert sockets                                                                                     
#options        IPFILTER                #ipfilter support                                                                                   
#options        IPFILTER_LOG            #ipfilter logging                                                                                   
#options        IPFILTER_LOOKUP         #ipfilter pools                                                                                     
#options        IPFILTER_DEFAULT_BLOCK  #block all packets by default                                                                       
options         IPSTEALTH               #support for stealth forwarding                                                                     
#options        TCPDEBUG
I have a 100 Mbit Fast Ethernet connection to my provider...

I also have dummynet rules for traffic shaping...

Here is additional info from top (this is typical):

Code:
last pid: 33724;  load averages:  0.85,  0.53,  0.52                                                               up 40+02:58:49  14:21:10
24 processes:  2 running, 22 sleeping
CPU states:  5.8% user,  0.0% nice,  5.8% system,  2.9% interrupt, 85.6% idle
Mem: 19M Active, 472M Inact, 165M Wired, 324K Cache, 112M Buf, 1346M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
33577 root        1  96    0  7444K  5892K CPU3   0  14:32 74.61% natd

The current speed:

Code:
fxp0          112177 kb/total            5823 pkts/sec         4024991 bytes/sec

Code:
Name    Mtu Network       Address              Ipkts Ierrs    Opkts Oerrs  Coll
fxp0   1500 <Link#1>      00:b0:d0:fe:ec:cf 3596638730     0 2395299449     2     0
fxp0   1500 10.252.9.0    firewall           7352382     -  7490433     -     -
bge0   1500 <Link#2>      00:b0:d0:fe:ec:d0 2675368980     0 3507647935     0     0
bge0   1500 1.1.1.1 firewall           6506038     - 3507645553     -     -
plip0  1500 <Link#3>                               0     0        0     0     0
lo0   16384 <Link#4>                            1809     0     1809     0     0
lo0   16384 fe80:4::1     fe80:4::1                0     -        0     -     -
lo0   16384 localhost     ::1                      0     -        0     -     -
lo0   16384 your-net      localhost             1809     -     1809     -     -

Code:
732/2733/3465 mbufs in use (current/cache/total)
705/1619/2324/65536 mbuf clusters in use (current/cache/total/max)
705/1599 mbuf+clusters out of packet secondary zone in use (current/cache)
0/135/135/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
1661K/4461K/6123K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/9/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines

If any additional info is needed, I can provide it.

It's so strange, because FreeBSD has been so stable for me... Where is the problem? Please help, or give some recommendations.
 
Hello boot0user,

First, I'm not sure whether your NAT setup works at all, because "natd" is not "ipfw nat".
See "man ipfw" for how to configure ipfw nat properly, and also read "man natd".
"natd" is a divert daemon, enabled by natd_enable="YES" in /etc/rc.conf.

Second, the fact that the connections become slow after a while might be due to a misconfigured dummynet.

So, how did you set up your dummynet? What do the pipes and queues look like?
And do they work?

Code:
root# ipfw pipe 1 show
00001: 320.000 Kbit/s    0 ms  100 sl. 0 queues (1 buckets)
           RED w_q 0.002991 min_th 45 max_th 95 max_p 0.099991
q00001: weight 50 pipe 1  100 sl. 0 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991
q00002: weight 50 pipe 1  100 sl. 0 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991

root# ipfw queue 1 show
q00001: weight 50 pipe 1  100 sl. 0 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991

root# ipfw pipe 6 show
00006:  96.000 Kbit/s    0 ms  100 sl. 1 queues (1 buckets)
           RED w_q 0.003998 min_th 35 max_th 95 max_p 0.099991
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 tcp    82.135.81.117/49245  194.126.158.74/22022 1248905 172549893  0    0   0

root# ipfw queue 12 show
q00012: weight 50 pipe 5  100 sl. 1 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 udp    82.135.81.117/12345   62.245.169.50/12345 1990494 575586089  0    0  66


bye
 
I use the natd daemon with a divert rule, not ipfw nat.

Here is my rc.firewall (some parts skipped):

Code:
${fwcmd} add 50 divert natd ip4 from any to any via ${natd_interface}

Code:
        ############                                                                                                                        
        # Flush out the pipes and queues before we begin.                                                                                   
        #                                                                                                                                   
        ${fwcmd} -f pipe flush                                                                                                              
        ${fwcmd} -f queue flush                                                                                                             
                                                                                                                                            
        ############                                                                                                                        
        # This is a prototype setup for a FIREWALL firewall.                                                                                
        #                                                                                                                                   
                                                                                                                                            
        ############                                                                                                                        
        # Interface and network definitions
        #                                                                                                                                   
                                                                                                                                            
        # Interfaces                                                                                                                        
        ifInt="fxp0"                                                                                                                        
        ipInt="$(ifconfig $ifInt | grep "inet " | awk '{printf $2};')"                                                                      
                                                                                                                                            
        ifExt="bge0"                                                                                                                        
        ipExt="$(ifconfig $ifExt | grep "inet " | awk '{printf $2};')"                                                                      
                                                                                                                                            
        # 75Mbit/s                                                                                                                          
                                                                                                                                            
        # Networks                                                                                                                          
        ipBadGays=""                                                                                                                        
                                                                                                                                            
        bwServers="5Mbit/s"                                                                                                                 
        ipServers="10.252.0.0/24"                                                                                                           
                                                                                                                                            
        bwAdmins="20Mbit/s"                                                                                                                 
        ipAdmins="10.252.10.2"

        bwManagers="20Mbit/s"
        ipManagers="10.252.10.0/24"                                                   
                                                                                                                                            
        bwEmployes="30Mbit/s"                                                                                                               
        ipEmployes="10.252.0.0/16, 10.1.4.0/24"                                                                                             
                                                                                                                                            
        qu="100"                # queue size in slots
        ms="0xffffffff"         # flow mask: one dynamic queue per host IP
        gr="0.002/10/50/0.1"    # gred parameters: w_q/min_th/max_th/max_p
                                                                                                                                            
        ############                                                                                                                        
        # Packet rules                                                                                                                      
        #                                                                                                                                   
                                                                                                                                            
        # Internal interface                                                                                                                
        ${fwcmd} add pass all from ${ipInt} to any via ${ifInt}                                                                             
        ${fwcmd} add pass all from any to ${ipInt} via ${ifInt}                                                                             
                                                                                                                                            
        # External interface                                                                                                                
        ${fwcmd} add pass all from ${ipExt} to any via ${ifExt}                                                                             
        ${fwcmd} add pass all from any to ${ipExt} via ${ifExt}                                                                             
                                                                                                                                            
        # Bad gays                                                                                                                          
        if [ "${ipBadGays}" != "" ]; then                                                                                                   
            ${fwcmd} add deny all from ${ipBadGays} to any in via ${ifInt}                                                                  
            ${fwcmd} add deny all from any to ${ipBadGays} out via ${ifInt}                                                                 
        fi
 
Code:
        # Servers pipe and queue                                                                                                            
        ${fwcmd} pipe 10 config bw ${bwServers} queue ${qu}                                                                                 
        ${fwcmd} queue 11 config weight 50 pipe 10 mask src-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} queue 12 config weight 50 pipe 10 mask dst-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} add queue 11 ip from ${ipServers} to any in via ${ifInt}                                                                   
        ${fwcmd} add queue 12 ip from any to ${ipServers} out via ${ifInt}

        # Admins pipe and queue
        ${fwcmd} pipe 20 config bw ${bwAdmins} queue ${qu}                                                                                  
        ${fwcmd} queue 21 config weight 50 pipe 20 mask src-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} queue 22 config weight 50 pipe 20 mask dst-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} add queue 21 ip from ${ipAdmins} to any in via ${ifInt}                                                                    
        ${fwcmd} add queue 22 ip from any to ${ipAdmins} out via ${ifInt}                                                                   
                                                                                                                                            
        # Managers pipe and queue                                                                                                           
        ${fwcmd} pipe 30 config bw ${bwManagers} queue ${qu}                                                                                
        ${fwcmd} queue 31 config weight 50 pipe 30 mask src-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} queue 32 config weight 50 pipe 30 mask dst-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} add queue 31 ip from ${ipManagers} to any in via ${ifInt}                                                                  
        ${fwcmd} add queue 32 ip from any to ${ipManagers} out via ${ifInt}                                                                 
                                                                                                                                            
        # Employes pipe and queue                                                                                                           
        ${fwcmd} pipe 40 config bw ${bwEmployes} queue ${qu}                                                                                
        ${fwcmd} queue 41 config weight 50 pipe 40 mask src-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} queue 42 config weight 50 pipe 40 mask dst-ip ${ms} queue ${qu} gred ${gr}                                                 
        ${fwcmd} add queue 41 ip from ${ipEmployes} to any in via ${ifInt}                                                                  
        ${fwcmd} add queue 42 ip from any to ${ipEmployes} out via ${ifInt}                                                                 
                                                                                                                                            
        # Employes rules                                                                                                                    
        ${fwcmd} add pass all from ${ipEmployes} to any                                                                                     
        ${fwcmd} add pass all from any to ${ipEmployes}
 
and continuing, the running ruleset with its counters (from ipfw show):

Code:
00050 2449493710 1677064235102 divert 8668 ip4 from any to any via bge0
00100        528         84350 allow ip from any to any via lo0
00200          0             0 deny ip from any to 127.0.0.0/8
00300          0             0 deny ip from 127.0.0.0/8 to any
00400    4081560     624415437 allow ip from 10.252.9.2 to any via fxp0
00500    3969003     209436864 allow ip from any to 10.252.9.2 via fxp0
00600 1218246096  715874361838 allow ip from 1.1.1.1 to any via bge0
00700        171        226042 allow ip from any to 1.1.1.1 via bge0
00800    1239210      92984008 queue 11 ip from 10.252.0.0/24 to any in via fxp0
00900     988052     375203222 queue 12 ip from any to 10.252.0.0/24 out via fxp0
01000   19579314    1081411595 queue 21 ip from 10.252.10.2 to any in via fxp0
01100   19539279   27281262726 queue 22 ip from any to 10.252.10.2 out via fxp0
01200  183248108  155689145508 queue 31 ip from 10.252.10.0/24 to any in via fxp0
01300  127028894   61279835296 queue 32 ip from any to 10.252.10.0/24 out via fxp0
01400 1045077395  573893360537 queue 41 ip from 10.252.0.0/16,10.1.4.0/24 to any in via fxp0
01500  966521737  795196661516 queue 42 ip from any to 10.252.0.0/16,10.1.4.0/24 out via fxp0
01600      77080       4324210 allow ip from 10.252.0.0/16,10.1.4.0/24 to any
01700 1114078024  884132965000 allow ip from any to 10.252.0.0/16,10.1.4.0/24
65535       6375        769973 deny ip from any to any
 
Here is part of the ipfw pipe show output:

Code:
00020:  20.000 Mbit/s    0 ms  100 sl. 0 queues (1 buckets) droptail
00040:  30.000 Mbit/s    0 ms  100 sl. 0 queues (1 buckets) droptail
00010:   5.000 Mbit/s    0 ms  100 sl. 0 queues (1 buckets) droptail
00030:  20.000 Mbit/s    0 ms  100 sl. 0 queues (1 buckets) droptail
q00032: weight 50 pipe 30  100 sl. 9 queues (64 buckets) 
	  GRED w_q 0.001999 min_th 10 max_th 50 max_p 0.099991

Code:
 13 ip           0.0.0.0/0       10.252.84.245/0     6543  7784305  0    0   0
 13 ip           0.0.0.0/0        10.252.46.53/0        7     5329  0    0   0
 14 ip           0.0.0.0/0       10.252.85.246/0     3115   181878  1  125   0
 14 ip           0.0.0.0/0       10.252.86.246/0     1489554 1978129547 100 140002 80645
 14 ip           0.0.0.0/0       10.252.11.246/0       72     9464  0    0   0
 15 ip           0.0.0.0/0       10.252.53.247/0      196   110218  0    0   0
 16 ip           0.0.0.0/0        10.252.12.40/0     224961 248515694  0    0   0
 16 ip           0.0.0.0/0        10.252.34.40/0        8      980  0    0   0
 17 ip           0.0.0.0/0        10.252.80.41/0     23522 34225050  5 7500   0
 17 ip           0.0.0.0/0       10.252.86.233/0       23    19824  0    0   0
 17 ip           0.0.0.0/0       10.252.81.105/0     1040   107111  0    0   0
 18 ip           0.0.0.0/0        10.252.34.42/0     1464   828084  0    0   0
 18 ip           0.0.0.0/0       10.252.85.106/0        1       40  0    0   0
 19 ip           0.0.0.0/0        10.252.11.43/0      125   140538  0    0   0

Shaping is working fine for my users...
 
I have nearly 200 active users with torrent clients, web, mail, ICQ...

I distribute bandwidth between the users: each of them has an identical weight and, as a result, identical speed.

P.S. Sorry for my English... :(
 
The idea is:

A group of users shares a common pipe. I want to distribute the bandwidth evenly between the users. Each user's traffic includes both that user's download and upload traffic. All users are equal in their rights (weights).
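
The mask in my queue configs is what implements this: dummynet expands one queue definition into a dynamic queue per user IP, all with the same weight. The relevant lines from my rc.firewall, simplified (gred omitted):

Code:
# one definition expands into one dynamic queue per /32 source address
${fwcmd} pipe 40 config bw 30Mbit/s queue 100
${fwcmd} queue 41 config weight 50 pipe 40 mask src-ip 0xffffffff queue 100
${fwcmd} add queue 41 ip from 10.252.0.0/16 to any in via fxp0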
 
ufff ... ok.

First, I would suggest that you use all of your queue slots:
You have configured the queues with 100 slots (qu="100"), but you start dropping at 10 packets in the queue and drop everything as soon as 50 slots are full (gr="0.002/10/50/0.1").
So try setting gr="0.002/55/95/0.1", for example.
Or try "red" instead of "gred". For me, "red" works fine.
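
With the variable names from your rc.firewall, that change would look like this (a sketch):

Code:
# gred parameters are w_q/min_th/max_th/max_p:
# start dropping at 55 of the 100 slots, drop everything above 95
gr="0.002/55/95/0.1"
${fwcmd} queue 41 config weight 50 pipe 40 mask src-ip ${ms} queue ${qu} gred ${gr}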

And as soon as the speed goes down, check whether the queues are full and whether the drop rate increases:

Code:
root# ipfw pipe 5 show
00005: 320.000 Kbit/s    0 ms  100 sl. 0 queues (1 buckets)
           RED w_q 0.002991 min_th 45 max_th 95 max_p 0.099991
q00011: weight 50 pipe 5  100 sl. 0 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991
q00012: weight 50 pipe 5  100 sl. 1 queues (1 buckets)
           RED w_q 0.001999 min_th 55 max_th 95 max_p 0.099991
    mask: 0x00 0x00000000/0x0000 -> 0x00000000/0x0000
BKT Prot ___Source IP/port____ ____Dest. IP/port____ Tot_pkt/bytes Pkt/Byte Drp
  0 udp    82.135.81.117/12345   62.245.169.50/12345 1993259 576145804  0    0  66
As you can see, 66 of 1993259 packets have been dropped.


For now I cannot say anything further; I need to read up on this again. But I will come back.
 
Oh, I will try your solution... Possibly this will help me... Thanks... I will come back again after some time...

I also disabled use_sockets & same_ports, and these are my gred parameters now: gr="0.002/50/90/0.1".
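
So the natd config from my first post now reads (with the two options set to no):

Code:
deny_incoming yes
use_sockets no
same_ports no
dynamic yes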
 
I also additionally increased some kernel variables:

Code:
net.inet.ip.dummynet.hash_size=2048                                                                                                         
net.inet.ip.dummynet.max_chain_len=1024                                                                                                     
net.inet.ip.fw.dyn_buckets=2048                                                                                                                                                                                                              
net.inet.ip.fw.dyn_max=32768
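
These are applied at runtime with sysctl(8) (and kept in /etc/sysctl.conf to survive reboots), for example:

Code:
sysctl net.inet.ip.fw.dyn_max=32768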

and still waiting... :)
 
boot0user said:
Hello, friends!
After a month of uptime, user connections become slow. When I restart natd, the connection speed returns to normal. There are no kernel messages or errors at all.
Sorry, no help.
FWIW, I can confirm that I have the same issue with natd. I run my own personal firewall, so I just restart natd after a few weeks (whenever the problem shows).
I run ipfw + natd, and have been thinking about trying out another combo to see if I could get rid of the problem, but for me the issue isn't very big so I haven't taken the time to test anything else yet.
 
Oh, I recommend you try the in-kernel NAT if you really plan to replace natd. But I will keep tuning kernel vars; I believe in natd.
 
tingo said:
Is it faster? Or better in any way?
Yeah! We use pf's NAT for 2500+ clients on only one dual-core box.
The simplest NAT with pf can be done in a single line:
Code:
nat on $if_ext from <grey_clients> to any -> ($if_ext)

pf also supports CIDR ranges with source-hash, mapping each client to a permanent IP within the range:
Code:
nat on $if_ext from <grey_clients> to any -> 62.*.*.0/24 source-hash
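
Here <grey_clients> is just a pf table holding the internal networks, e.g. (addresses taken from this thread as an example):

Code:
table <grey_clients> { 10.252.0.0/16, 10.1.4.0/24 }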
 
SaveTheRbtz said:
Yeah! We use pf's NAT for 2500+ clients on only one dual-core box.
The simplest NAT with pf can be done in a single line:
Code:
nat on $if_ext from <grey_clients> to any -> ($if_ext)

pf also supports CIDR ranges with source-hash, mapping each client to a permanent IP within the range:
Code:
nat on $if_ext from <grey_clients> to any -> 62.*.*.0/24 source-hash

PF doesn't do traffic shaping... You must use ALTQ for that with PF...

natd also accepts CIDR notation in its rules...
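
An ALTQ equivalent of my dummynet classes would look roughly like this (a sketch; queue names invented, only two of the classes shown):

Code:
altq on $if_ext cbq bandwidth 75Mb queue { q_servers, q_employes }
queue q_servers bandwidth 5Mb cbq(borrow)
queue q_employes bandwidth 30Mb cbq(default borrow)
pass out on $if_ext from 10.252.0.0/24 to any queue q_servers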
 
Yes, but you can shape traffic with ipfw's dummynet (and net.inet.ip.dummynet.io_fast=1) and then NAT it with pf. That's what we do.
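
A sketch of that split, assuming the stock rc scripts:

Code:
# /etc/rc.conf: ipfw does only the dummynet shaping, pf does only the NAT
firewall_enable="YES"
firewall_script="/etc/rc.firewall"   # pipes/queues only, no divert rule
pf_enable="YES"                      # pf.conf holds just the nat rule

# /etc/sysctl.conf: deliver packets immediately when pipes are not backlogged
net.inet.ip.dummynet.io_fast=1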
 
SaveTheRbtz said:
Yes, but you can shape traffic with ipfw's dummynet (and net.inet.ip.dummynet.io_fast=1) and then NAT it with pf. That's what we do.

Oh, this is a great idea... Thanks...
 
My tuning doesn't help. After a week, natd slowed the users' connection speed down again. Somewhere I read a recommendation to recompile natd with CPU and other optimization flags, but I didn't want to do that because it's a long way around.

I will try to switch to the in-kernel NAT; possibly that will help me.

P.S. An alternative is to use ipnat for masquerading, with the addition of editing the ip_nat.h file in the kernel tree to define LARGE_NAT there.
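
For ipnat that would be a one-line map rule along these lines (a sketch; 0/32 stands for the interface's own address):

Code:
map bge0 10.252.0.0/16 -> 0/32 portmap tcp/udp auto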
 
Hi again, friends!

Nothing has helped, with either natd or the in-kernel NAT. Are there any professionals here?

I really need your help. How do I make natd or the in-kernel NAT work? Where is the problem? How do I make my box stable?

P.S. I haven't tried ipnat; I want natd or the in-kernel NAT to work. Ipnat is the last resort.
 
SaveTheRbtz said:
Did you try the combination of ipfw (dummynet shaping via pipes) + pf (kernel NAT)?

Not yet... I want to make natd (or the in-kernel NAT) work first. Ipnat is the last resort, if nothing else helps...
 
I resolved the problem!!! I switched to new hardware and the problem disappeared!!!
The old hardware was a Dell PowerEdge 4600 with bge and fxp network interfaces (the new box has bge and em). There were a lot of errors on the old hardware; possibly it was a hardware malfunction.

Now I have no problems at all, still running FreeBSD 7.1 on the new hardware with the in-kernel NAT and ipfw. CPU load is around 2% with 75 Mbit/s of traffic.

Thanks a lot to the FreeBSD team for the really great and amazing OS!!!
 