Hello, folks!
I am using FreeBSD 12 under KVM for vnet jail and PF testing, and I am seeing very poor performance for any kind of traffic (TCP or UDP) between the host and the guest's vnet jail.
My setup:
Ubuntu 18.04 as host.
FreeBSD 12 as guest, with one VirtIO network card talking to the host and the Internet (the problem occurs with the emulated Intel em NIC too), and gateway_enable=YES in rc.conf.
Two interconnected netgraph ng_eiface interfaces, one for the FreeBSD guest, the other for the vnet jail (wired roughly as sketched after this list).
There are no bridges, because my goal is to test FreeBSD guest as a router between the outside world and the vnet jail.
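For reference, the ng_eiface pair is created along these lines (the ngethN names are what ng_eiface assigns by default; the jail name "testjail" is made up):

kldload ng_eiface
# each ngctl run talks through a transient socket node, so the new
# node's ether hook is free again once ngctl exits
ngctl mkpeer . eiface e0 ether    # creates ngeth0
ngctl mkpeer . eiface e0 ether    # creates ngeth1
# cross-connect the two ether hooks so the pair acts like a patch cable
ngctl connect ngeth0: ngeth1: ether ether
# hand one end to the running vnet jail
ifconfig ngeth1 vnet testjail
# (on some FreeBSD versions a MAC must be set manually,
#  e.g. ifconfig ngeth0 ether 00:11:22:33:44:55)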
My understanding is that the FreeBSD guest should simply route traffic arriving on the VirtIO interface over to the netgraph side whenever the destination is the far end of the interconnected ng_eiface pair. Indeed, that is what happens once the routing tables are configured correctly: the host can ping the guest's vnet jail and vice versa.
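Concretely, the moving parts look like this (addresses are invented for illustration; 192.168.122.50 stands in for the guest's VirtIO address as seen from the host):

# on the FreeBSD guest: gateway_enable=YES in rc.conf sets this at boot
sysctl net.inet.ip.forwarding=1
# on the Ubuntu host: reach the jail subnet via the guest
ip route add 10.0.0.0/24 via 192.168.122.50
# inside the jail: route back out through the guest's ng_eiface address
route add default 10.0.0.1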
But when I run:

nc -l 500 < /dev/zero

inside the guest's vnet jail, and from the host I issue:

nc 10.0.0.2 500 > /dev/null

(assuming that 10.0.0.0/24 is routed from the host via the guest's VirtIO interface address, and that the vnet jail has the return route set up too), I cannot get much beyond 10 Mbps. I was expecting at least 1000 Mbps.
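For a more controlled measurement than raw nc, something like iperf3 (installable from packages on both sides) reports per-second throughput; a minimal run, assuming the jail answers at 10.0.0.2:

iperf3 -s            # inside the jail
iperf3 -c 10.0.0.2   # on the host; add -R to test the reverse direction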
But if I issue the same client command inside the guest itself instead, I can achieve 12000 Mbps. Yes, 12 Gbps! That is just amazing. Or, if I set up two nc "servers" inside the jail, I get somewhere near 7 Gbps downstream and upstream simultaneously.
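By "two nc servers" I mean something along these lines: one stream in each direction at the same time, with the clients run from the guest.

# inside the jail
nc -l 500 < /dev/zero &
nc -l 501 > /dev/null &
# from the guest
nc 10.0.0.2 500 > /dev/null &
nc 10.0.0.2 501 < /dev/zero &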
Any clue as to what could be going wrong with this setup?
Thank you all!