Hi list.
I recently installed FreeBSD 7.4 on my server. This is the machine:
Code:
CPU: Intel(R) Xeon(R) CPU E5504 @ 2.00GHz (2000.08-MHz 686-class CPU)
real memory = 4831834112 (4607 MB)
avail memory = 4180480000 (3986 MB)
ACPI APIC Table: <HP ProLiant>
FreeBSD/SMP: Multiprocessor System Detected: 4 CPUs
cpu0 (BSP): APIC ID: 0
cpu1 (AP): APIC ID: 2
cpu2 (AP): APIC ID: 4
cpu3 (AP): APIC ID: 6
The server has four BGP sessions and eight network cards, Intel and Broadcom.
Code:
bce0@pci0:14:0:0: class=0x020000 card=0x7059103c chip=0x163914e4 rev=0x20 hdr=0x00
vendor = 'Broadcom Corporation'
device = 'NetXtreme II Gigabit Ethernet (BCM5709)'
class = network
subclass = ethernet
bce1@pci0:14:0:1: class=0x020000 card=0x7059103c chip=0x163914e4 rev=0x20 hdr=0x00
vendor = 'Broadcom Corporation'
device = 'NetXtreme II Gigabit Ethernet (BCM5709)'
class = network
subclass = ethernet
bge0@pci0:3:4:0: class=0x020000 card=0x703e103c chip=0x167814e4 rev=0xa3 hdr=0x00
vendor = 'Broadcom Corporation'
device = 'BCM5715C 10/100/1000 PCIe Ethernet Controller'
class = network
subclass = ethernet
bge1@pci0:3:4:1: class=0x020000 card=0x703e103c chip=0x167814e4 rev=0xa3 hdr=0x00
vendor = 'Broadcom Corporation'
device = 'BCM5715C 10/100/1000 PCIe Ethernet Controller'
class = network
subclass = ethernet
The problem: the bce0 card is directly connected to my CMTS, and when I ping the CMTS I see high latency, like this:
Code:
PING 10.20.0.2 (10.20.0.2): 56 data bytes
64 bytes from 10.20.0.2: icmp_seq=0 ttl=255 time=0.439 ms
64 bytes from 10.20.0.2: icmp_seq=1 ttl=255 time=0.285 ms
64 bytes from 10.20.0.2: icmp_seq=2 ttl=255 time=0.280 ms
64 bytes from 10.20.0.2: icmp_seq=3 ttl=255 time=0.492 ms
64 bytes from 10.20.0.2: icmp_seq=4 ttl=255 time=0.257 ms
64 bytes from 10.20.0.2: icmp_seq=5 ttl=255 time=0.302 ms
64 bytes from 10.20.0.2: icmp_seq=6 ttl=255 time=0.342 ms
64 bytes from 10.20.0.2: icmp_seq=7 ttl=255 time=0.266 ms
[snip]
64 bytes from 10.20.0.2: icmp_seq=17 ttl=255 time=79.075 ms
64 bytes from 10.20.0.2: icmp_seq=18 ttl=255 time=12.466 ms
64 bytes from 10.20.0.2: icmp_seq=19 ttl=255 time=45.409 ms
64 bytes from 10.20.0.2: icmp_seq=20 ttl=255 time=45.705 ms
64 bytes from 10.20.0.2: icmp_seq=21 ttl=255 time=7.613 ms
64 bytes from 10.20.0.2: icmp_seq=22 ttl=255 time=7.436 ms
64 bytes from 10.20.0.2: icmp_seq=23 ttl=255 time=7.609 ms
64 bytes from 10.20.0.2: icmp_seq=24 ttl=255 time=7.541 ms
[snip]
64 bytes from 10.20.0.2: icmp_seq=28 ttl=255 time=113.203 ms
[snip]
64 bytes from 10.20.0.2: icmp_seq=36 ttl=255 time=8.471 ms
64 bytes from 10.20.0.2: icmp_seq=37 ttl=255 time=12.514 ms
64 bytes from 10.20.0.2: icmp_seq=38 ttl=255 time=24.049 ms
64 bytes from 10.20.0.2: icmp_seq=39 ttl=255 time=66.910 ms
64 bytes from 10.20.0.2: icmp_seq=40 ttl=255 time=88.233 ms
--- 10.20.0.2 ping statistics ---
41 packets transmitted, 41 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.226/18.730/113.203/27.405 ms
The response times are high and never settle back down, and sometimes there is packet loss too.
My kernel has these options:
Code:
device pf
device pflog
device pfsync
options ALTQ
options ALTQ_CBQ # Class Bases Queuing (CBQ)
options ALTQ_RED # Random Early Detection (RED)
options ALTQ_RIO # RED In/Out
options ALTQ_HFSC # Hierarchical Packet Scheduler (HFSC)
options ALTQ_PRIQ # Priority Queuing (PRIQ)
options ALTQ_NOPCC # Required for SMP build
options PAE
options TCPDEBUG
options IPSTEALTH
options HZ=1000
options ZERO_COPY_SOCKETS
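To confirm that HZ=1000 is actually in effect on the running kernel, kern.clockrate can be checked:
Code:
# shows the hz/stathz values the running kernel is using
sysctl kern.clockrate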
Some sysctls that I changed:
Code:
kern.ipc.maxsockbuf=8388608
net.inet.tcp.rfc1323=1
net.inet.tcp.sendspace=131072
net.inet.tcp.recvspace=131072
kern.random.sys.harvest.ethernet=0
kern.random.sys.harvest.interrupt=0
kern.ipc.somaxconn=1024
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.isr.direct=0
kern.ipc.nmbclusters=65535
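For reference, when testing, net.isr.direct was just toggled at runtime with plain sysctl(8), along these lines:
Code:
# show the current netisr settings
sysctl net.isr
# switch to direct dispatch for a test
sysctl net.isr.direct=1
# and back to queued dispatch
sysctl net.isr.direct=0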
When I enable net.isr.direct, I get even more packet loss and very poor performance.
Some more information:
Code:
gw-ija# netstat -m
6816/3429/10245 mbufs in use (current/cache/total)
6814/2926/9740/65536 mbuf clusters in use (current/cache/total/max)
2431/1281 mbuf+clusters out of packet secondary zone in use (current/cache)
0/0/0/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
15374K/6709K/22083K bytes allocated to network (current/cache/total)
0/0/0 requests for mbufs denied (mbufs/clusters/mbuf+clusters)
0/0/0 requests for jumbo clusters denied (4k/9k/16k)
0/6/6656 sfbufs in use (current/peak/max)
0 requests for sfbufs denied
0 requests for sfbufs delayed
0 requests for I/O initiated by sendfile
0 calls to protocol drain routines
mbuf_packet: 256, 0, 2390, 1322, 610806721, 0
mbuf: 256, 0, 4389, 2144, 2353625703, 0
mbuf_cluster: 2048, 65536, 8099, 1641, 1143052524, 0
mbuf_jumbo_pagesize: 4096, 12800, 0, 0, 0, 0
mbuf_jumbo_9k: 9216, 6400, 0, 0, 0, 0
mbuf_jumbo_16k: 16384, 3200, 0, 0, 0, 0
gw# netstat -I bce0 -w 1
input (bce0) output
packets errs bytes packets errs bytes colls
19221 0 5929169 25578 0 22801897 0
19063 0 6006409 23729 0 20472136 0
18764 0 5946431 22351 0 19233524 0
19289 0 6033689 25174 0 22177539 0
19314 0 6040090 24675 0 21935126 0
18844 0 5913801 22897 0 20083401 0
PID USERNAME PRI NICE SIZE RES STATE C TIME WCPU COMMAND
12 root 171 ki31 0K 8K RUN 2 651:31 98.68% idle: cpu2
13 root 171 ki31 0K 8K RUN 1 618:59 88.96% idle: cpu1
11 root 171 ki31 0K 8K CPU3 3 704:50 86.96% idle: cpu3
14 root 171 ki31 0K 8K CPU0 0 678:56 60.79% idle: cpu0
17 root -44 - 0K 8K CPU1 3 45:40 51.95% swi1: net
34 root -68 - 0K 8K WAIT 1 97:47 5.76% irq260: bce0
41 root -68 - 0K 8K WAIT 2 78:30 2.20% irq265: em3
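If it would help, I can also post interrupt rates and protocol counters gathered with the base tools while the ping is running, for example:
Code:
# interrupt rates per device (irq260 is bce0 here)
vmstat -i
# ICMP and IP statistics, looking for drops
netstat -s -p icmp
netstat -s -p ip
# link and media state of the interface towards the CMTS
ifconfig bce0
# per-thread view of the interrupt and netisr load
top -SH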
I hope someone can help me with this. I don't know what I can do to solve this problem.
Regards.