PPPoE server (mpd): are 10,000 users possible?

Hi all,
I am a newbie to FreeBSD and I work at an ISP. I have two questions. I can't buy a Cisco ASR RAS because of some problems I can't explain right now. :D


1) Is it possible to have 10,000 users connected at the same time on a single server with FreeBSD + mpd + ipfw (for shaping), on any hardware? If it is possible, what hardware do you recommend?

2) I tested with an app I created that sends 10,000 PPPoE connections to the server at the same time, and it worked! All users connected in under 10 minutes :D So in my mini lab everything works fine, and CPU usage after all users are connected is about 90%~98% idle. Polling is enabled!
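
For anyone curious what the server side looks like, here is a minimal mpd.conf sketch for this kind of PPPoE server (the interface name, address ranges and pool size are placeholders, not my exact production config):
Code:
startup:
    # management console, local only
    set user admin somepassword admin
    set console self 127.0.0.1 5005
    set console open

default:
    load pppoe_server

pppoe_server:
    # one bundle template; addresses come from a local pool
    create bundle template B
    set ipcp ranges 172.16.0.1/32 ippool pool1
    set ippool add pool1 172.16.1.1 172.16.40.254

    # one link template bound to the LAN NIC, accepting any service name
    create link template L pppoe
    set link action bundle B
    set link enable multilink
    set link keep-alive 10 60
    set link mtu 1460
    set link max-children 10000
    set pppoe iface em0
    set pppoe service "*"
    set link enable incoming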

Server hardware details:
Code:
   HP ProLiant ML350 G5 (http://h10010.www1.hp.com/wwpc/ca/en/sm/WF05a/15351-15351-241434-241477-241477-1121586.html?dnr=1)
   2x quad-core Xeon CPUs, 2.4 GHz
   4 GB RAM
   FreeBSD 9.0
   NIC 1 (LAN): HP NC110T PCI Express Gigabit Server Adapter (em)
   NIC 2 (WAN): onboard Broadcom Gigabit adapter (bce)

So I tested in the real world with the same hardware, but the results were not good!

Differences in the real world:
  1. Bandwidth usage by the users (I used natd and ipfw nat)
  2. mpd listens on VLANs: I have a trunk on the em interface with 70 VLANs added in FreeBSD, and every VLAN serves users at a different location (see the sketch after this list)
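
Roughly, the VLAN side looks like the sketch below (made-up VLAN numbers). On the mpd side each VLAN then gets its own PPPoE link template, i.e. a "create link template ... pppoe" block with "set pppoe iface vlanX", all pointing at the same bundle template:
Code:
# /etc/rc.conf -- the trunk arrives on em0, one cloned VLAN interface per location
ifconfig_em0="up"
cloned_interfaces="vlan101 vlan102"             # ...and so on, up to 70 of them
ifconfig_vlan101="vlan 101 vlandev em0 up"
ifconfig_vlan102="vlan 102 vlandev em0 up"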
Results in the real world:

Connected users in the real world after 10 minutes: about 3,000, and the number is not increasing :(
CPU usage is 50% idle, and all of the load is from mpd, not natd. I also have about 3,000 more users on the links waiting for a connection who cannot connect.
Free RAM shown in top: 2.7 GB
There are no failures in vmstat -z.
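
For completeness, these figures come from the usual tools; roughly what I watch while users are connecting (the ngctl line is just an extra sanity check):
Code:
top -SHPI            # per-CPU idle/interrupt split and the busiest threads (mpd vs natd)
netstat -w1 -h       # packets and errors per second while sessions come up
vmstat -z            # kernel zone usage; the FAIL column shows exhausted zones
ngctl list | wc -l   # rough count of netgraph nodes (each PPPoE session creates several)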


rc.conf tunings:
Code:
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
openntpd_enable="NO"
ntpd_enable="NO"
devd_enable="NO"
postfix_enable="NO"
defaultrouter="x.x.x.x"
gateway_enable="YES" 
firewall_enable="YES"
firewall_type="OPEN" 
natd_enable="YES"
natd_interface="bce0"
hostname="pppoe"
ifconfig_bce0=" inet x.x.x.x netmask 0xffffff80"
ifconfig_em0=" inet 172.14.1.1 netmask 255.255.255.0 polling"
sshd_enable="YES"
mpd_enable="YES"
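
For the ipfw nat variant mentioned above, the in-kernel setup is roughly the following (rule numbers and the inside network are placeholders). Since natd is a userland daemon fed through a divert socket, the in-kernel nat is normally much cheaper per packet:
Code:
# in-kernel NAT on the WAN interface instead of natd
kldload ipfw_nat
ipfw nat 1 config if bce0 same_ports reset
ipfw add 100 nat 1 ip from 172.16.0.0/12 to any out via bce0
ipfw add 200 nat 1 ip from any to any in via bce0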

loader.conf
Code:
net.graph.maxdgram=2096000
net.graph.recvspace=2096000
net.graph.threads=100
net.graph.maxalloc=65536
net.graph.maxdata=65536
net.inet.tcp.delayed_ack=0
net.inet.ip.intr_queue_maxlen=1000
net.isr.maxthreads=7
net.isr.direct=1
net.isr.direct_force=1
net.isr.bindthreads=0
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=4096
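
To confirm these netgraph tunables are actually in effect and not being exhausted, I check roughly this at runtime:
Code:
sysctl net.graph               # the whole subtree, including the values set in loader.conf
vmstat -z | grep -i netgraph   # NetGraph item zones; a non-zero FAIL column means the limits are too low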

sysctl.conf
Code:
kern.ipc.maxsockbuf=16777216     
kern.ipc.nmbclusters=262144     
kern.ipc.somaxconn=32768         
kern.ipc.maxsockets=204800      
kern.randompid=348              
net.inet.icmp.icmplim=50          
net.inet.ip.process_options=0    
net.inet.ip.redirect=0          
net.inet.ip.rtexpire=2           
net.inet.ip.rtminexpire=2        
net.inet.ip.rtmaxcache=256       
net.inet.icmp.drop_redirect=1    
net.inet.tcp.blackhole=2         
net.inet.tcp.delayed_ack=0        
net.inet.tcp.drop_synfin=1     
net.inet.tcp.msl=5000          
net.inet.tcp.nolocaltimewait=1   
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=8192      
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=16384     
net.inet.udp.blackhole=1        
security.bsd.see_other_uids=0 
security.bsd.see_other_gids=0
Main question: where is the problem in the real world? Is it a VLAN or hardware (NIC) issue, or FreeBSD tuning?
 

Attachments

  • TP.jpg (26.8 KB)
Thanks for the replies. I removed the up-script line that I used for shaping from mpd.conf and disabled devd(8). Now everything works fine in my lab with 70 VLANs: 10,000 connections in 10 minutes. I am going to test it in the real world :)
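
For context, the kind of up-script I mean is a per-session shaping hook along these lines. This is only a rough sketch: the argument positions and the pipe numbering are assumptions, so check the mpd documentation for what your version actually passes to the script.
Code:
#!/bin/sh
# hooked in with: set iface up-script /usr/local/etc/mpd5/if-up.sh
# assumption: $1 is the ngN interface, $4 the peer (subscriber) address
IFACE=$1
PEER=$4
NUM=${IFACE#ng}                       # ng17 -> 17, used to key the pipe

/sbin/ipfw pipe $((2000 + NUM)) config bw 2048Kbit/s
/sbin/ipfw add $((2000 + NUM)) pipe $((2000 + NUM)) ip from any to ${PEER} out
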
I attached the output of pciconf -lvbc :)
 

Attachments

  • pciconf.txt (14.8 KB)
Results?

I'm going to be trying this as well, and I am wondering whether you have done any production testing of it yet.

I'm in a similar situation: I need to terminate just a few thousand PPPoE sessions, and I am not doing NAT or shaping/limiting.
 
I want to build this kind of device as well.
If you have more detailed information, such as which version of FreeBSD you used, the RADIUS setup, etc.,
I will be more than glad to get that information.
 
Which network card chipset are you using? Please show the peak load, netstat -w1 -h and top -SHPI. Thanks! Are there really 9,000 users online? I am using an HP ProLiant 120 G6 with a 2.4 GHz Xeon: 1,500 users online, 80% interrupt load, RX 600 / TX 400 Mbit/s.
 
Dear roysbike,
Sorry, because of the heavy workload I have right now, I stopped working on mpd for a while (about 2 or 3 months) :)
And yes, I really reached 9,000 users in one box. I used two old Intel interfaces (em) and an old IBM server with two quad-core 2.0 GHz Xeon CPUs, tested it for about 6 hours, and it worked without problems. CPU usage for 9,000 users without any NAT was about 70% interrupt load on 2 cores; with natd it was 100% CPU usage on 2 cores, with poor performance.

Dear ecazamir,
Maybe later I will make a port of the tester! :)

Dear gkontos,
It is not complete, and I am not sure how it behaves over a long period. Maybe after a full test I will send it to the Howtos & FAQs section. :)

Dear hack2003,
FreeBSD 9-STABLE, mpd 5.6, and IBSng or Abills for RADIUS.
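
In case it helps, the RADIUS side of mpd.conf is only a few lines; a sketch with a placeholder server address and secret, loaded from the bundle template with "load radius" (the per-user data then lives in IBSng or Abills):
Code:
radius:
    set radius server 10.0.0.5 sharedsecret 1812 1813
    set radius retries 3
    set radius timeout 5
    set auth acct-update 300
    set auth enable radius-auth
    set auth enable radius-acct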
 
s_265_925 said:
Dear gkontos,
It is not complete, and I am not sure how it behaves over a long period. Maybe after a full test I will send it to the Howtos & FAQs section. :)

I am looking forward to it ;)
 
Hello. You configured a PPPoE server, very interesting! Supporting up to 10,000 users is really good. Could you please share a document describing the installation method in detail:
  • the PPPoE server configuration files
  • the PPPoE per-user limits (for example: 512 Kb upload, 2048 Kb download)
  • the detailed RADIUS configuration

Regarding your question, I can make a suggestion:
  • a server architecture with two E5 CPUs
  • an Intel NIC such as the 82576 or the 82580, or, if conditions allow, a 10 Gb card

I have a question for you: you have 10,000 users, but you are only pushing about 300-400 Mbit/s, which is much less, right?

With the usage pattern we have here, I estimate about 3 Gbit/s of traffic would be needed to serve 10,000 users.

Here we basically limit single users to 4-6-10 Mbit/s downlink and 512 Kbit/s - 2 Mbit/s uplink.
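
To make the numbers concrete, in ipfw/dummynet terms a 512 Kb up / 2048 Kb down limit for one subscriber would be roughly the following (the address and rule numbers are placeholders; in practice these would be generated per session, for example from RADIUS data):
Code:
# one subscriber at 10.1.1.23: 512 Kbit/s up, 2048 Kbit/s down
ipfw pipe 101 config bw 512Kbit/s
ipfw pipe 102 config bw 2048Kbit/s
ipfw add 5000 pipe 101 ip from 10.1.1.23 to any in  recv ng*
ipfw add 5001 pipe 102 ip from any to 10.1.1.23 out xmit ng*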
 