
Rules killing all TCP traffic

Discussion in 'Firewalls' started by georgetom19, Apr 30, 2012.

  1. georgetom19

    georgetom19 New Member

    Messages:
    1
    Likes Received:
    0
    Hi, I'm doing some R&D and I need a FreeBSD machine with two NIC interfaces sitting between a pair of data generators. The wiring looks like this:

    Code:
     ________           _________________         _______
    | Client | ------- | FreeBSD Machine | ----- | Server|
     --------           -----------------         -------
    
    Now on the FreeBSD machine, I created a bridge using "ifconfig bridge create" and added the interfaces as members.
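
    For reference, the bridge setup described above would look roughly like this (a sketch; em2 and em3 are the two data NICs shown in the ifconfig output further down):

    ```shell
    # Sketch of the bridge setup: create bridge0 and add both data NICs
    # as members (must be run as root on FreeBSD).
    ifconfig bridge create                 # creates bridge0
    ifconfig bridge0 addm em2 addm em3 up  # add members, bring the bridge up
    ```
    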

    Now I'm pumping TCP traffic between the client and the server, with 10 sessions. The client subnet is 4.0.0.0/8 and the server subnet is 5.0.0.0/8.

    Now when I apply the rules below, all TCP traffic through the FreeBSD machine is killed.

    Code:
    sudo ipfw pipe 1 config bw 35Mbytes delay 50ms queue 1024Kbytes;
    sudo ipfw add 50 pipe 1 tcp from 5.0.0.0/8 to any;
    sudo ipfw pipe 2 config bw 35Mbytes delay 50ms queue 1024Kbytes;
    sudo ipfw add 51 pipe 2 tcp from 4.0.0.0/8 to any;
    Note: the maximum throughput the data generators produce is 25 Mbit/s, so ideally with these rules in place I should not see any packet drops, right?
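
    For comparison, dummynet also accepts bandwidths in bits per second, so a variant of the rules sized against the 25 Mbit/s generators might look like this (a sketch only, with an assumed queue size, not a tested fix):

    ```shell
    # Sketch: same two pipes, with the bandwidth written in bits per
    # second so it is unambiguous relative to the generators' 25 Mbit/s.
    # The queue size here is an assumption, not a recommendation.
    sudo ipfw pipe 1 config bw 35Mbit/s delay 50ms queue 512KBytes
    sudo ipfw add 50 pipe 1 tcp from 5.0.0.0/8 to any
    sudo ipfw pipe 2 config bw 35Mbit/s delay 50ms queue 512KBytes
    sudo ipfw add 51 pipe 2 tcp from 4.0.0.0/8 to any
    ```
    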

    Logs:

    Code:
    wheel# ipfw show
    00050       543       660971 pipe 1 tcp from 5.0.0.0/8 to any
    00051       242        69256 pipe 2 tcp from 4.0.0.0/8 to any
    65535 899588300 107917270559 allow ip from any to any
    
    
    wheel# ifconfig
    em2: flags=20008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
            options=48<VLAN_MTU,POLLING>
            inet6 fe80::204:23ff:fe9e:f73c%em2 prefixlen 64 scopeid 0x3
            inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255
            ether 00:04:23:9e:f7:3c
            media: Ethernet autoselect (1000baseTX <full-duplex>)
            status: active
            port: 3
            name: DATA_1
    em3: flags=20008943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
            options=48<VLAN_MTU,POLLING>
            inet6 fe80::204:23ff:fe9e:f73d%em3 prefixlen 64 scopeid 0x4
            inet 0.0.0.0 netmask 0xff000000 broadcast 255.255.255.255
            ether 00:04:23:9e:f7:3d
            media: Ethernet autoselect (1000baseTX <full-duplex>)
            status: active
            port: 4
            name: DATA_2
    bridge0: flags=8043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
            inet 1.0.0.10 netmask 0xff000000 broadcast 1.255.255.255
            ether ac:de:48:2e:12:e0
            priority 32768 hellotime 2 fwddelay 15 maxage 20
            member: em3 flags=3<LEARNING,DISCOVER>
            member: em2 flags=3<LEARNING,DISCOVER>
     
  2. Uniballer

    Uniballer Member

    Messages:
    247
    Likes Received:
    1
    Sorry, my answer (below) was based on the expectation of 25 MiB/s (not 25 Mbit/s). If you in fact meant bits, then my reasoning is invalid.

    Without knowing anything about the performance of ipfw, I can tell you that the bandwidth-delay product at either 25 or 35 MiB/s over a 50 ms delay is greater than your 1 MiB queue. So you will probably drop packets, and your TCP window will never open to its maximum. That leaves open the question of whether your TCP window could grow large enough to keep the pipe full (25 MiB/s × 0.05 s = 1,310,720 bytes).
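
    To make that arithmetic concrete, here is a quick back-of-the-envelope check of the bandwidth-delay product against the configured queue (plain shell arithmetic; the 25 MiB/s rate is the assumption stated above, and the 50 ms delay and 1024 KB queue come from the pipe configuration in the question):

    ```shell
    #!/bin/sh
    # Back-of-the-envelope: bandwidth-delay product vs. the dummynet queue.
    BW=$((25 * 1024 * 1024))       # assumed rate: 25 MiB/s, in bytes/s
    DELAY_MS=50                    # delay configured on the pipe
    QUEUE=$((1024 * 1024))         # "queue 1024Kbytes" = 1 MiB

    BDP=$((BW * DELAY_MS / 1000))  # bytes in flight at full rate
    echo "BDP   = $BDP bytes"
    echo "queue = $QUEUE bytes"
    if [ "$BDP" -gt "$QUEUE" ]; then
        echo "queue is smaller than the BDP: expect drops before the window opens fully"
    fi
    ```
    
    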