Is it recommended to set the default route MTU when using Jumbo frames in a LAN?

Greetings, All!

Even before I started experimenting with Jumbo frames on the local network, I had been wondering whether the default route's MTU should be explicitly changed. Correct me if I'm wrong, but my understanding is that it is fine to use Jumbo frames within a local network; however, Jumbo frames are not supported beyond the router/gateway towards the Internet. When I run route get freebsd.org, this is what I get:
Code:
# route get freebsd.org
   route to: freebsd.org
destination: default
       mask: default
    gateway: 192.168.1.1
        fib: 0
  interface: lagg0
      flags: <UP,GATEWAY,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    mtu        weight    expire
       0         0         0         0      9000         1         0
Here 192.168.1.1 is some cheap gigabit Linksys WAP/router doing NAT. There is also a FreeBSD router with multiple exit points, doing NAT as well.

So, forgive me the question (I do development, not network administration): should I limit the default route's MTU to 1500 or not?

Thank you!

PS: I did this already with defaultrouter="192.168.1.1 -mtu 1500" but am asking anyway.
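For reference, here is a minimal sketch of how that looks in /etc/rc.conf. The interface name lagg0 comes from the route get output above; the laggport members are hypothetical placeholders, so adjust everything to your own setup:

```sh
# /etc/rc.conf -- sketch only; em0/em1 lagg members are placeholders
ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 mtu 9000"  # jumbo frames on the LAN side
defaultrouter="192.168.1.1 -mtu 1500"  # cap the default route at the standard MTU
```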
 
It all depends on your router: if it supports Path MTU Discovery, you won't have to change the MTU, as it will be lowered automatically depending on the destination host.

Can you ping your router with frames above 1500 bytes?

# ping -D -s 1500 192.168.1.1
 
Well, ping -s 1505 192.168.1.1 (even without -D) doesn't work. However, fetch -o /dev/null -vvv https://freebsd.org does work. Yet the question stands...
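A note on the sizes involved here: the -s argument is only the ICMP payload, and the IPv4 header (20 bytes, no options) plus the ICMP header (8 bytes) come on top of it, so the largest payload that fits a given MTU is MTU - 28. A quick sketch of the arithmetic:

```shell
# Largest `ping -s` payload that fits a given MTU without fragmenting:
# IPv4 header (20 bytes) + ICMP header (8 bytes) = 28 bytes of overhead.
overhead=$((20 + 8))
for mtu in 1500 9000; do
    echo "MTU $mtu -> max ping -s payload: $((mtu - overhead))"
done
# MTU 1500 -> max ping -s payload: 1472
# MTU 9000 -> max ping -s payload: 8972
```

So on a 1500-byte link, ping -D -s 1472 should succeed while ping -D -s 1473 should fail, which makes those two convenient probe sizes.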
 
Then your router's interface MTU is 1500; check whether your router supports Jumbo frames.

PS:
You need to run ping as root to be able to send payloads larger than the default 56 bytes.
 
Then your router's interface MTU is 1500; check whether your router supports Jumbo frames.
Yes. But that was not the point. Even if the router does support Jumbo frames on the LAN side, they will not be supported on the WAN side.

Hence the question: should I limit default route to 1500 bytes, and why?
 
It should detect the maximum MTU using Path MTU Discovery, so you don't have to set a per-route MTU. To verify your MTU sizes, use
netstat -rW
 
I've resorted to using Jumbo Frames only between infrastructure hardware (mainly trunks between switches) and on carefully/specifically selected interfaces on some hosts (e.g. Storage- and Backup servers).
Enabling Jumbo Frames for everything in the network (especially on the access layer where clients are connected) usually only causes headaches with the multitude of broken or non-existent path-MTU-discovery implementations. Printers, Android and Windows clients are absolutely notorious for generating the weirdest problems when confronted with an MTU >1500.

One-way traffic *can* often be a sign of broken PMTU discovery or a mangled MTU, e.g. due to tunneling and/or virtualization overhead. Debugging this is a huge PITA - so unless you can really dedicate some time to testing and debugging everything, just stick with the default MTU on all endpoint connections.

Also be careful with the assumption that the WAN MTU is 1500 bytes - PPPoE has a lower MTU, but this is usually detected correctly. Even if the last link happily takes an MTU of 1500 bytes, there might be fragmentation happening underneath your L2 link. We had massive problems with high jitter (great for VoIP :rolleyes:) on an ISP link at a branch office - it turned out they also used the default MTU for their L2 tunneling to the endpoint, which caused a lot of fragmentation that went undetected within the tunnel. Reducing the MTU on our side for that link resolved the problem.
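If you run into that situation, the fix on the FreeBSD side is simply to lower the MTU on the WAN-facing interface. A hedged sketch - the interface name em0 and the value 1492 are hypothetical placeholders, since the right value depends on whatever overhead your ISP's tunneling adds:

```sh
# /etc/rc.conf -- sketch; em0 and 1492 are placeholders, not measured values
ifconfig_em0="DHCP mtu 1492"  # leave headroom for the ISP's L2 tunnel overhead
```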
 
I've resorted to using Jumbo Frames only between infrastructure hardware (mainly trunks between switches)...
Out of curiosity, would there be any difference if the packets over the trunk are with standard MTU?
 
Out of curiosity, would there be any difference if the packets over the trunk are with standard MTU?

There was a _slight_ difference in CPU load and latency/packet loss while totally beating the sh*t out of the two Catalyst 3750Gs I used for testing.
I doubt it makes any measurable difference in production (unless you constantly run your switches completely saturated/congested), but Cisco also suggests in some documents setting the maximum MTU on inter-switch trunks to reduce load and get the last few percent of bandwidth out of the links (and we do have a highly utilized 2x1Gbit uplink between buildings...), so I just settled on that config. It hasn't caused any issues for several years, so at least it's not harmful ;)
 