hi all
Main question: where is the problem in the real world? Is it a VLAN/hardware issue (NICs), or FreeBSD tuning?
I am new to FreeBSD and work at an ISP. I have two questions. I can't buy a Cisco ASR RAS because of some problems I can't explain right now.
1) Is it possible to have 10,000 users connected at the same time to one server running FreeBSD + mpd + ipfw (for shaping), on any hardware? If so, what hardware do you recommend?
2) I tested with an app I wrote that opens 10,000 PPPoE connections to the server at the same time, and it worked! All users connected in under 10 minutes, so in my mini lab everything works fine, and CPU usage after all users are connected is about 90%-98% idle. Polling is enabled.
Server hardware detail:
Code:
ProLiant ML350 G5 (http://h10010.www1.hp.com/wwpc/ca/en/sm/WF05a/15351-15351-241434-241477-241477-1121586.html?dnr=1)
2x quad-core Xeon CPU, 2.4 GHz
4 GB RAM
FreeBSD 9.0
NIC 1 (LAN): HP NC110T PCI Express Gigabit Server Adapter (em)
NIC 2 (WAN): onboard Broadcom Gigabit Adapter (bce)
So I tested in the real world with the same hardware, but the results were not good!
Differences in the real world:
- bandwidth usage from real users (I used natd and ipfw nat)
- mpd listens on VLANs: I have a trunk (70 VLANs added in FreeBSD), and every VLAN is for a different location of users (on the em interface)
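For reference, the VLAN trunk is set up with rc.conf entries along these lines. The VLAN IDs here are examples, not my real ones, and I'm going from memory on the rc.conf syntax for child-interface variables (dots translated to underscores), so treat this as a sketch:

```sh
# /etc/rc.conf -- VLAN trunk on the em0 interface (example IDs only)
ifconfig_em0="up polling"
# one child interface per location; the real config lists 70 IDs
vlans_em0="101 102 103"
# the vlan interfaces carry PPPoE only, so they need no IP address;
# mpd binds its PPPoE server to em0.101, em0.102, ... directly
ifconfig_em0_101="up"
ifconfig_em0_102="up"
ifconfig_em0_103="up"
```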
Users connected in the real world within 10 minutes: about 3,000, and the number is not increasing.
CPU usage is 50% idle, and all of it goes to mpd, not natd. I also have about 3,000 users on the links waiting for a connection who cannot connect.
Free RAM shown in top: 2.7 GB.
There are no failures in vmstat -z.
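Besides vmstat -z, these are the counters I am watching for drops (the exact dev.em sysctl names below may vary with the driver version, so adjust as needed):

```sh
# input/output errors and drops on the trunk and WAN NICs
netstat -idn
# interrupt rates per device
vmstat -i
# mbuf usage and requests denied
netstat -m
# driver-level counters for the em trunk NIC (names vary by driver)
sysctl dev.em.0 | grep -i -E 'drop|miss|err'
```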
rc.conf tunings:
Code:
sendmail_enable="NO"
sendmail_submit_enable="NO"
sendmail_outbound_enable="NO"
sendmail_msp_queue_enable="NO"
openntpd_enable="NO"
ntpd_enable="NO"
devd_enable="NO"
postfix_enable="NO"
defaultrouter="x.x.x.x"
gateway_enable="YES"
firewall_enable="YES"
firewall_type="OPEN"
natd_enable="YES"
natd_interface="bce0"
hostname="pppoe"
ifconfig_bce0="inet x.x.x.x netmask 0xffffff80"
ifconfig_em0="inet 172.14.1.1 netmask 255.255.255.0 polling"
sshd_enable="YES"
mpd_enable="YES"
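One thing I am considering changing here: natd runs in userspace through a divert socket, which copies every packet out of the kernel and back, while ipfw's in-kernel nat avoids that. If I switch fully to in-kernel NAT, I believe the rc.conf part would look something like this (assuming the firewall_nat_* knobs behave this way on 9.0):

```sh
# replace userspace natd with ipfw in-kernel NAT
natd_enable="NO"
firewall_nat_enable="YES"
firewall_nat_interface="bce0"
```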
loader.conf:
Code:
net.graph.maxdgram=2096000
net.graph.recvspace=2096000
net.graph.threads=100
net.graph.maxalloc=65536
net.graph.maxdata=65536
net.inet.tcp.delayed_ack=0
net.inet.ip.intr_queue_maxlen=1000
net.isr.maxthreads=7
net.isr.direct=1
net.isr.direct_force=1
net.isr.bindthreads=0
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=4096
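Since net.isr.maxthreads is raised above, I also want to check whether the netisr queues themselves are dropping packets (as far as I know netstat -Q is available on 9.0):

```sh
# per-protocol netisr queue statistics; a non-zero Drops column
# would point at the dispatch path rather than at mpd itself
netstat -Q
```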
sysctl.conf:
Code:
kern.ipc.maxsockbuf=16777216
kern.ipc.nmbclusters=262144
kern.ipc.somaxconn=32768
kern.ipc.maxsockets=204800
kern.randompid=348
net.inet.icmp.icmplim=50
net.inet.ip.process_options=0
net.inet.ip.redirect=0
net.inet.ip.rtexpire=2
net.inet.ip.rtminexpire=2
net.inet.ip.rtmaxcache=256
net.inet.icmp.drop_redirect=1
net.inet.tcp.blackhole=2
net.inet.tcp.delayed_ack=0
net.inet.tcp.drop_synfin=1
net.inet.tcp.msl=5000
net.inet.tcp.nolocaltimewait=1
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.recvspace=8192
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.sendspace=16384
net.inet.udp.blackhole=1
security.bsd.see_other_uids=0
security.bsd.see_other_gids=0
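As a sanity check on kern.ipc.nmbclusters=262144: each standard mbuf cluster is 2048 bytes, so the cluster pool alone can take up to 512 MB of the 4 GB of RAM, which is a quick shell calculation:

```sh
# 262144 clusters x 2048 bytes per standard mbuf cluster
echo $((262144 * 2048))            # total bytes
echo $((262144 * 2048 / 1048576))  # in MB
```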