AWS EC2 ENA: Poor network performance, low PPS

Hi,

I am using FreeBSD-based servers on AWS EC2 to process UDP traffic, and I am having major trouble achieving anything close to decent network performance with UDP (no issues with TCP).

I set up the test environment detailed below and used iperf to send and receive data between a firing station on an external host and a server on AWS EC2. See the test results below.
Both servers run iperf v2.0.13.

  • Firing-Station1: VMware External - FreeBSD 11.4-R-p3; 4 cores, 4GB RAM; no tweaks; NIC: 10Gbps vmxnet3, default offloading; [iperf -ue -c x.x.x.x -X -t10 -b <pps cap> pps -r -l512 ]
  • Receiver: AWS EC2 - FreeBSD 11.4-R-p3; c5n.xlarge; no tweaks; NIC x 2: 10Gbps ENA 2.2.0, default offloading; [iperf -b10g -eus] (only one NIC with a public IP)

Results:

datagram sz | pps cap | emitting bandwidth | emitting total/pps | in-transit losses | received bandwidth | received total | reverse bandwidth | reverse total/pps | in-transit losses
512 | 700k | 1.88 Gbps | 4636865/459497 | 83% | 317 Mbps | 780787 | 354 Mbps | 865873/86547 | 0%
512 | 700k | 1.86 Gbps | 4575497/453265 | 83% | 314 Mbps | 773108 | 374 Mbps | 913491/91309 | 0%
512 | 500k | 1.84 Gbps | 4524000/448361 | 83% | 307 Mbps | 776176 | 341 Mbps | 832814/83282 | 0%
512 | 500k | 1.85 Gbps | 4569716/452768 | 83% | 317 Mbps | 781555 | 373 Mbps | 910784/91079 | 0%
512 | 250k | 1.01 Gbps | 2500001/247770 | 69% | 314 Mbps | 774888 | 362 Mbps | 884735/88476 | 0%
512 | 250k | 1.01 Gbps | 2500001/247764 | 69% | 314 Mbps | 775337 | 347 Mbps | 847032/84707 | 0%
512 | 80k | 325 Mbps | 800001/79279 | 3% | 315 Mbps | 776158 | 369 Mbps | 901813/90164 | 0.02%

The FreeBSD OS has not been altered; it is a vanilla install.

Can anyone provide any insight into these results? Why is the reverse bandwidth so low when using a 10Gbps NIC? What settings could be altered to increase bandwidth/PPS?
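For reference, I have not changed any kernel tuning from the defaults; I assume any advice would start with knobs along these lines (read-only checks shown, and which of these actually matter here is an open question):

  sysctl kern.ipc.maxsockbuf              # ceiling for socket buffer sizes
  sysctl net.inet.udp.recvspace           # default UDP receive buffer
  sysctl net.inet.ip.intr_queue_maxlen    # IP input queue depth
  sysctl net.inet.ip.intr_queue_drops     # packets dropped from a full input queue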

Is anyone else experiencing similar issues?

Any and all help greatly appreciated.

Regards,
Stevo.
 
Hey, I'm so sorry nobody ever replied. That must be really frustrating.

I suspect the problem has still not been resolved now, months later. And the problem is old; it was reported back in 2016 and 2018.

I have pretty much committed to using FreeBSD on AWS anyway, but it's getting urgent that we address this. What can be done? How can I help?

Did you fix your problem?

It might be interesting to run this test between two EC2 instances talking over their local IP addresses; that removes all external effects. You could even test over loopback. I used to have my own UDP bandwidth test; I don't know what you use, but I should dig mine out and try it myself.
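Something along these lines should do it, assuming iperf 2 on both ends (10.0.0.2 is a made-up stand-in for the receiver's private VPC address; datagram size and pps cap mirror the tests above):

  # receiver
  iperf -s -u -e
  # sender, over the VPC-internal address
  iperf -c 10.0.0.2 -u -e -t 10 -b 500kpps -l 512
  # loopback sanity check on a single instance
  iperf -s -u -e & iperf -c 127.0.0.1 -u -e -t 10 -b 500kpps -l 512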
 
Hi,

Thanks for replying :)

I did not resolve the issue, but worked around it by adding more servers. AWS were not able to assist, and as you can see I got no assistance here. In its current state, FreeBSD simply cannot perform at the level of other modern OSes when processing UDP traffic in the AWS environment. I suspect this is an issue that both AWS and the FreeBSD devs are aware of, considering how long it has been around, but they are either unable or unwilling to fix it.

Unfortunately, after using FreeBSD since 1999, this issue has resulted in the decision to redesign the product and base it on a Linux distribution instead. It was the only way to ensure that the product could continue to scale up without having to deploy an excessive number of servers.

Stevo
 
I whipped out my udpblast testing tool, and I hit its own limit. I developed that tool over 15 years ago to test home "broadband" connections for video conferencing. The default throughput is 1 Mbit/s, which is obviously no problem. What I noticed is that if I pushed UDP packets faster than whatever is needed for 100 Mbit/s, I would get

sendto: No buffer space available

So ultimately I was either unable to push the system to its limits, or I hit a hard limit.
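If I understand the stack right, that ENOBUFS usually means the outbound interface queue or the mbuf pool filled up, rather than the socket buffer itself; I'm hedging there, but next run I'll capture the kernel-side counters alongside the errors:

  netstat -m                      # mbuf usage and allocation failures
  netstat -s -p udp               # UDP error/drop counters
  sysctl kern.ipc.maxsockbuf      # ceiling for socket buffer sizes
  sysctl net.inet.udp.maxdgram    # maximum UDP datagram size accepted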

Still, I find it very strange that FreeBSD would have fallen so far behind on performance, which was once our biggest strength. And why doesn't anybody care?

Is there any other BSD system that does a better job? NetBSD? OpenBSD? Do you know?
 
How are you testing? Have you installed FreeBSD on hardware? VM? If VM what hypervisor or cloud service?

I've seen the "sendto: No buffer space available" issue on Hyper-V, though I don't think it's related to the issue I am encountering; I could be wrong. Run "netstat -s -p udp" and check the UDP statistics as you test; that will show any buffer errors.
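Something like this running next to the test makes drops easy to spot; if memory serves, the "dropped due to full socket buffers" line is the one to watch on FreeBSD:

  while :; do
    date
    netstat -s -p udp | grep -i dropped
    sleep 1
  done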
 