Does mbuf suffer from the same problems as netcat?

Hi everyone,

I have read that if you start netcat in listening mode on a host, start nmap, and then try to send data from a netcat client, the connection is likely to fail, because netcat is very sensitive to unrelated packets from e.g. nmap.

Does mbuf also suffer from this?
 
A simple explanation... ;)

The mbuf(9) holding an input packet is handed to bpf(4) when you are capturing traffic with libpcap, and its contents are copied into BPF's own buffer, which is not made of mbufs. BPF was designed around ping-pong (double) buffers, and their size interacts with the CPU cache: an individual BPF buffer should not be close to the cache size if the application on top is to keep capturing packets. If the upper-layer program cannot drain the BPF buffer faster than the NIC I/O fills it, increasing the buffer size only buys you a short-lived cushion at the start; once both buffers are full, you start losing packets anyway.
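If you want to see whether the consumer is keeping up in practice, netstat -B (available on FreeBSD 8 and later, if I remember correctly) lists every process holding a BPF descriptor together with its received and dropped packet counters; a steadily growing drop counter means the buffers are being overrun:
Code:
# netstat -B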

Checking setting values:
Code:
# sysctl net.bpf
net.bpf.zerocopy_enable: 0
net.bpf.maxinsns: 512
net.bpf.maxbufsize: 524288
net.bpf.bufsize: 4096

I recommend that you take a look at the documentation for the zero-copy BPF functionality discussed at BSDCan 2007, which is not enabled by default. In the normal case, with NIC I/O, buffers are copied from the user process into the kernel on the send side, and from the kernel into the user process on the receive side. Zero-copy instead provides a shared memory buffer for the kernel to write into, avoiding the copy from kernel space to user space. The send-side zero-copy code should work with almost any network adapter. The receive-side code, however, requires an adapter with an MTU of at least a page size, due to the alignment restrictions for page substitution (page flipping).
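If you want to experiment with it, the switch is the net.bpf.zerocopy_enable sysctl shown above. As far as I know, enabling it only permits zero-copy sessions; the capturing application still has to request the zero-copy buffer mode itself, so a stock tcpdump will not automatically use it:
Code:
# sysctl net.bpf.zerocopy_enable=1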

Finally, to benchmark performance, try benchmarks/netperf or benchmarks/nttcp to determine the maximum throughput, and run several passes to get more reliable numbers.
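For example, with benchmarks/netperf installed on both ends and netserver already running on the remote side (192.168.1.2 is only a placeholder address), a 30-second TCP throughput test looks like this:
Code:
# netperf -H 192.168.1.2 -t TCP_STREAM -l 30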
 
@cpu82

Very interesting reply!

When you say

The mbuf(9) holding an input packet is handed to bpf(4) when you are capturing traffic with libpcap, and its contents are copied into BPF's own buffer, which is not made of mbufs.

what happens then?

When I test mbuf with a 2GB buffer, it is ~80% full for the entire transfer.

I haven't been able to make it refuse a connection so far, but is that just luck? =)
 
littlesandra88 said:
what happens then?

tcpdump's packets are stored in a BPF device's buffers, and the buffer into which the copy is made is not made of mbufs. bd_bufsize (a field in the per-descriptor bpf_d structure) records the size of the two buffers associated with the device; its default comes from net.bpf.bufsize, which is 4096 bytes. The default can be changed via that sysctl, or bd_bufsize can be changed for a particular BPF device with the BIOCSBLEN ioctl command. Only one process is allowed access to a BPF device at a time; if the bpf_d structure is already active, EBUSY is returned, and programs such as tcpdump simply try the next device when they get this error. See /usr/src/sys/net/bpf.c
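As an aside, you do not have to change the global default just for one capture: recent tcpdump versions accept a -B flag that issues BIOCSBLEN on their own descriptor (the size is given in KiB, if I remember the units correctly). For example, to capture on em0 (an example interface) with a 512 KiB buffer:
Code:
# tcpdump -B 512 -i em0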

littlesandra88 said:
When I test mbuf with a 2GB buffer, it is ~80% full for the entire transfer.

To avoid degrading performance, if you change the net.bpf.bufsize kernel variable, never set it above the hardware cache size; the optimal size is between 50% and 80% of the hardware cache size.
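For example, assuming (hypothetically) a CPU with a 2 MB L2 cache, 50%-80% of that is roughly 1.0-1.6 MB. Since net.bpf.maxbufsize caps what BIOCSBLEN will accept, you would raise both:
Code:
# sysctl net.bpf.maxbufsize=1572864
# sysctl net.bpf.bufsize=1048576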

Please show the output of:

# sysctl -a | grep vm.kmem && sysctl net.bpf


Reference:

[1] W. R. Stevens et al., TCP/IP Illustrated book series, Addison-Wesley.
 
@cpu82

To avoid degrading performance, if you change the net.bpf.bufsize kernel variable, never set it above the hardware cache size; the optimal size is between 50% and 80% of the hardware cache size.

Please show the output of:

# sysctl -a | grep vm.kmem && sysctl net.bpf

Very interesting. I get

Code:
vm.kmem_map_free: 66452250624
vm.kmem_map_size: 192598016
vm.kmem_size_scale: 1
vm.kmem_size_max: 329853485875
vm.kmem_size_min: 0
vm.kmem_size: 66644860928
net.bpf.zerocopy_enable: 0
net.bpf.maxinsns: 512
net.bpf.maxbufsize: 524288
net.bpf.bufsize: 4096
 
The FreeBSD VM system has very good auto-tuning of parameters and limits. FreeBSD 7.2 and later have an improved kernel memory allocation strategy, and no tuning may be necessary on systems with more than 2 GB of RAM. To see how the system calculates your specific values automatically, look at the macro defined in sys/amd64/include/vmparam.h, which is where the constant 329853485875 (~307 GB) comes from.
Code:
#define VM_KMEM_SIZE_MAX        ((VM_MAX_KERNEL_ADDRESS - \
        VM_MIN_KERNEL_ADDRESS + 1) * 3 / 5)

So the resulting value is:
Code:
((1<<39) * 3 / 5) = 329853488332

The amd64 kernel address range is indeed limited to 512 GB; to be exact it is one page (4096 bytes) short of 2^39 bytes, which is why sysctl reports 329853485875 rather than the round 329853488332. You can, however, manually set vm.kmem_size_max and vm.kmem_size in /boot/loader.conf (see src/sys/kern/kern_malloc.c and sys/boot/common/loader.8).

Add to /boot/loader.conf:
Code:
vm.kmem_size_max="329853488332"
vm.kmem_size="329853488332"

To increase networking performance, add to /etc/sysctl.conf:
Code:
# Increase send/receive buffer maximums to 16MB.
# FreeBSD 7.x and later will auto-tune the size, but only up to the max.
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_max=16777216

# Double the default send/receive TCP buffer allocations.
# This defines the amount of memory taken up by default *per socket*.
net.inet.tcp.sendspace=65536
net.inet.tcp.recvspace=131072

# Enlarge buffers for BPF device.
net.bpf.bufsize=65536
net.bpf.maxbufsize=524288
 