MTU on bridge, tap and Bhyve guests (vtnet)

IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

I've set MTU to 3000 using the following commands.

On hypervisor:
Code:
ifconfig bridge0 mtu 3000
ifconfig tap0 mtu 3000
ifconfig tap1 mtu 3000
etc.
On guests:
Code:
ifconfig vtnet0 mtu 3000
etc.
I've captured the traffic and the MSS is 1358, so the value is nowhere near 3000.
Code:
ethertype IPv4 (0x0800), length 66: 10.10.10.131.57199 > 10.12.12.12.22:[mss 1358]
tap0 - OpenVPN server (tun0 with MTU=3000 as well)
tap1 - Mail server

Where is the problem?
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

No issue after all. I'm connected to OpenVPN over the Internet, where the default MTU is 1500.
I've extended the MSS to its maximum of 1460.

In OpenVPN client:
Code:
tun-mtu 3000
mssfix 2960
In OpenVPN server:
Code:
tun-mtu 3000
mssfix 2960
Because Internet network equipment along the path supports an MTU of at most 1500, only an MSS of up to 1460 works.
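A quick way to double-check what MSS the clamp actually produces is to watch the TCP SYNs on the tap (a sketch; picking tap1 here because it is the mail server's tap is my assumption):
Code:
# print only SYN packets so the negotiated MSS option is visible
tcpdump -nvi tap1 'tcp[tcpflags] & tcp-syn != 0'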
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

By the way, how can I set MTU in rc.conf?
Is this correct?
Code:
cloned_interfaces="bridge0 tap0 tap1 bridge1 tap10 tap11 mtu 3000"
Or should I use options like the ones below?
Code:
ifconfig_bridge0="mtu 3000"
ifconfig_tap0="mtu 3000"
 

PacketMan

Aspiring Daemon

Reaction score: 164
Messages: 955

Because Internet network equipment along the path supports an MTU of at most 1500, only an MSS of up to 1460 works.

Correct. Networking equipment (routers, switches, some kinds of transport equipment) has its own MTU.

Today we are seeing a lot of variation there. One new device can have an MTU of 1500 and another 9000. One device can have a global chassis MTU of 9000 while some interface types (or configuration constructs) have an MTU of 1546. The thing to understand is that the lowest MTU found along the end-to-end path becomes the 'path MTU'. Devices can send larger frames, but they will get fragmented down at the device with the lower MTU, which can cause performance loss. Some 'end station' devices (PCs, servers, etc.) can perform path MTU discovery and set their MTU accordingly.
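On FreeBSD you can probe the path MTU by hand with ping(8): -D sets the don't-fragment bit and -s sets the ICMP payload size, on top of which 28 bytes of IP and ICMP headers are added. A sketch, reusing the 10.12.12.12 host from the first post as an example destination:
Code:
# 1472 + 28 = 1500, so this should pass on a standard 1500-byte path
ping -D -s 1472 10.12.12.12
# ...while this should fail (e.g. "frag needed and DF set") if the path MTU is 1500
ping -D -s 1473 10.12.12.12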

Methinks this thread should be moved to the Networking category.
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 11,612
Messages: 37,948

Note that you can only use jumbo frames on Ethernet interfaces if the switch also supports jumbo frames. The bridge(4) interface will "inherit" the MTU setting from its member interfaces.
Code:
     The MTU of the first member interface to be added is used as the bridge
     MTU.  All additional members are required to have exactly the same value.
So something like this should work:
Code:
cloned_interfaces="bridge0"
ifconfig_igb0="up mtu 9000"
ifconfig_igb1="up mtu 9000"
ifconfig_bridge0="addm igb0 addm igb1 up"

This will create a bridge(4) interface with MTU 9000.
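If in doubt, it is easy to verify after boot that the bridge really inherited the jumbo MTU (assuming the interface names above):
Code:
# should report mtu 9000 once igb0 and igb1 are members
ifconfig bridge0 | grep mtu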
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

Thanks. I've tested it manually and the bridge does not pick up the new MTU automatically.
After running ifconfig tapX mtu 3000 on the members I still have to use
Code:
ifconfig bridge0 mtu 3000
to change this value.

Is this better
Code:
ifconfig_tap0="up"
ifconfig_tap1="up"
than
Code:
cloned_interfaces="tap0 tap1"
or doesn't it matter?
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 11,612
Messages: 37,948

You need both. A cloned interface is down by default.
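For example, something like this (a sketch; the mtu 3000 is simply carried over from earlier in the thread):
Code:
# create the taps at boot, then bring them up with the larger MTU
cloned_interfaces="tap0 tap1"
ifconfig_tap0="up mtu 3000"
ifconfig_tap1="up mtu 3000"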
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

I use the taps for bhyve guests and don't bring them up at startup. I've never used "up" to start an interface.
Code:
cloned_interfaces="bridge0 tap0 tap1 tap2 tap3 tap4 tap5"
ifconfig_bridge0="addm tap0 addm tap1 addm tap2 addm tap3 addm tap4 addm tap5"
So I think bhyve itself brings the tap interfaces up.
Should I somehow change the startup config in rc.conf, or put something like ifconfig tap0 mtu 3000 in my own rc.local?
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 11,612
Messages: 37,948

If you use vm-bhyve you can remove all the bridge and tap configurations from rc.conf, sysutils/vm-bhyve takes care of this automatically.
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

I don't use it. I just have my own script to run and control bhyve.
I'll try rc.local.
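Something along these lines in /etc/rc.local, perhaps (a sketch; the interface list is just an example subset of my taps and bridges):
Code:
# set the larger MTU once the cloned interfaces exist
for ifn in bridge0 tap0 tap1; do
        ifconfig ${ifn} mtu 3000
done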
 
OP
IPTRACE

Well-Known Member

Reaction score: 24
Messages: 321

I'm back.
I still have the MTU problem between host<>VMs and VMs<>VMs.
I cannot use frames larger than 4084 bytes.
Code:
ethertype IPv4 (0x0800), length 4084: truncated-ip - 8 bytes missing! 10.0.0.20 > 10.0.1.15: ICMP echo request, id 60022, seq 22, length 4058
ethertype IPv4 (0x0800), length 4084: truncated-ip - 8 bytes missing! 10.0.0.20 > 10.0.1.15: ICMP echo request, id 60022, seq 23, length 4058
ethertype IPv4 (0x0800), length 4084: 10.0.0.20 > 10.0.1.15: ICMP echo request, id 15479, seq 0, length 4050
ethertype IPv4 (0x0800), length 4084: 10.0.1.15 > 10.0.0.20: ICMP echo reply, id 15479, seq 0, length 4050
Is it a bug in bhyve or in the vtnet interface?
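To narrow it down, it would probably help to compare what each side reports (a sketch; vtnet0 on the guest and tap0/bridge0 on the host are assumptions about which interfaces carry this traffic):
Code:
# on the guest
ifconfig vtnet0 | grep mtu
# on the host
ifconfig tap0 | grep mtu
ifconfig bridge0 | grep mtu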
 