vtnet throughput on bhyve

Ofloo

Well-Known Member

Reaction score: 8
Messages: 284

After upgrading the host system to FreeBSD 12, throughput on a bhyve VM running FreeBSD 11 dropped to about 3 Mbit/s. Toggling checksum offload on and off makes no difference.
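For reference, toggling the offloads inside the guest looks roughly like this (vtnet0 is assumed; drop the leading minus to re-enable them):
Code:
# disable hardware offloads on the virtio NIC inside the guest
ifconfig vtnet0 -txcsum -rxcsum -tso -lro
# verify in the options= line that the offloads are gone
ifconfig vtnet0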
 
OP

Ofloo

Well-Known Member

Reaction score: 8
Messages: 284

Between bhyve VMs I'm getting blazing speeds of 5 kb/s.
 

Phishfry

Son of Beastie

Reaction score: 1,602
Messages: 4,595

What interface is this with? The Intel quad-port ix from your other post? Is it copper or fiber, an X520 or an X540?

Bridges in general are bad. Here is a chart showing that they can take a 55% hit on bandwidth:
https://github.com/ocochard/netbenches/tree/master/Atom_C2758_8Cores-Chelsio_T540-CR/bridge/results/fbsd11.1-yandex

I use passthrough mode for NICs. On top of the bridge you add the overhead of the vtnet driver, and the tap interface probably eats some bandwidth too.
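For anyone following along, NIC passthrough is roughly this; the PCI address 2/0/0 and slot 6 are only examples, check pciconf -lv for the real bus/slot/function on your host:
Code:
# /boot/loader.conf on the host: load vmm and reserve the NIC at boot
vmm_load="YES"
pptdevs="2/0/0"
# then hand the device to the guest with a passthru slot
bhyve ... -s 6,passthru,2/0/0 ... vmname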

That said, 5 kb/s is sad.
I am still swapping hardware around in search of the perfect virtualization rig.

Seeing as you have been doing this much longer than I have, why haven't you tried passthrough?
I switched from the VirtIO block driver to AHCI-HD and gained speed; that is the only area I benchmarked, using diskinfo -t.
When I turn on AHCI-HD for my VMs I get darn near bare-metal drive speeds in diskinfo -t.
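The switch is just a different device emulation on the disk slot; paths and slot numbers below are examples:
Code:
# VirtIO block device (shows up as vtbd0 in the guest)
bhyve ... -s 4,virtio-blk,/vm/guest/disk0.img ...
# AHCI emulation instead (shows up as ada0 in the guest)
bhyve ... -s 4,ahci-hd,/vm/guest/disk0.img ...
# inside the guest, compare the two with
diskinfo -t ada0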
I've been meaning to get benchmarks/iperf3 running and test VM networking speeds.
I'm also starting to figure out scp so I can copy files into my VMs without Gigolo (my GUI SCP client).
 
OP

Ofloo

Well-Known Member

Reaction score: 8
Messages: 284

I did passthrough and it's a lot faster. I virtualized pfSense and did passthrough for that, but I still need to give some small VMs an interface, and that used to be fast. I mean, I only have 8 NICs on that system :p, so I figured I would create some virtual passthrough NICs to solve the problem.
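By that I mean something like SR-IOV virtual functions, if the NIC and driver support it; ix0 and the VF count below are just examples:
Code:
# /etc/iov/ix0.conf -- carve virtual functions out of the physical NIC
PF {
        device : "ix0";
        num_vfs : 4;
}
DEFAULT {
        # expose the VFs as ppt devices so bhyve can pass them through
        passthrough : true;
}
# then create the VFs on the host
iovctl -C -f /etc/iov/ix0.conf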

It's like virtio went downhill after FreeBSD 10. I was able to get at least 2 Gbit/s on virtio on FreeBSD 11.2; after I upgraded to 12 it was horrible.

Managed to get it back up, though.

Same subnet
Code:
% iperf -c loki
------------------------------------------------------------
Client connecting to loki, TCP port 5001
TCP window size: 80.8 KByte (default)
------------------------------------------------------------
[  3] local 10.13.17.10 port 63647 connected with 10.13.17.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   963 MBytes   808 Mbits/sec
Same bridge
Code:
# iperf -c loki
------------------------------------------------------------
Client connecting to loki, TCP port 5001
TCP window size: 80.8 KByte (default)
------------------------------------------------------------
[  3] local 10.13.17.14 port 41722 connected with 10.13.17.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec   408 MBytes   341 Mbits/sec
Yesterday it was 56xxBps :p
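For context, the "same bridge" case is just both guests' tap interfaces hanging off one bridge on the host, roughly like this (interface names are examples):
Code:
ifconfig bridge0 create
ifconfig tap0 create
ifconfig tap1 create
ifconfig bridge0 addm ix0 addm tap0 addm tap1 up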
 