MTU of jail vimage with netgraph

Hi All

I have a question about the MTU of a jail vimage connected with netgraph.
Please see the illustration of the running system below.
[diagram of the running system: 1661823234958.png]

The host runs in routing mode with gateway_enable="YES" in /etc/rc.conf.
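For reference, the relevant rc.conf lines look roughly like this; the IPv6 line is only my assumption of what belongs there, since the host <-> jail traffic is IPv6:
Code:
# /etc/rc.conf on the host: forward packets between msk0 and the jail interfaces
gateway_enable="YES"
# IPv6 forwarding (assumed; host <-> jail traffic is IPv6 only)
ipv6_gateway_enable="YES"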

I connect to the ssh jail from the external interface msk0 (internet) via ssh.
Logging in works, but the connection drops unexpectedly under heavy data traffic (for example, listing a directory with ls).

Everything works fine when every MTU is set to 1492.

Output of ifconfig ng_vimage0:
Code:
ng_vimage0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1492
        options=28<VLAN_MTU,JUMBO_MTU>
And the output of ifconfig ng0_sshd:
Code:
ng0_sshd: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1492
        options=28<VLAN_MTU,JUMBO_MTU>
Both have the JUMBO_MTU option; it is added automatically whether the MTU is 1492 or 9000.
I suppose that means jumbo frames are supported.

My questions:
  • Does netgraph not support jumbo frames?
  • If so, is the problem caused by packet fragmentation/de-fragmentation, and is extra configuration required, or something else?
  • Since host <-> jail traffic is IPv6 only, and IPv6 routers do not fragment IPv6 packets, does that mean jumbo frames cannot be used with IPv6?
  • Am I missing something else?

Additional information about bridge MTU:
In https://freebsdfoundation.org/wp-content/uploads/2020/03/Jail-vnet-by-Examples.pdf it says "MTU increased to 9000, allowing large numbers of neighbors..." and gives this example:
ifconfig bridge create name vnetdemobridge mtu 9000 up
That case uses if_bridge(4); I could not find a comparable example for ng_bridge(4) for my case (my own guess follows below).
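Since ng_bridge(4) is a netgraph node with no ifconfig-visible interface of its own, my guess is that the MTU would have to be raised on each netgraph Ethernet endpoint attached to the bridge instead. A sketch with my interface names (the jail name sshd is only assumed from the interface name):
Code:
# assumption: raise the MTU on the netgraph Ethernet endpoints, not on the bridge node
ifconfig ng_vimage0 mtu 9000                # host-side endpoint
jexec sshd ifconfig ng0_sshd mtu 9000       # jail-side endpoint (jail name assumed)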


Sorry for the many questions.
Thanks a lot.
 
epopen,
You can't use jumbo frames with interfaces that aren't suited for them; that is wrong and can cause network trouble. You can't build a bridge with a jumbo MTU out of interfaces that aren't configured for jumbo frames.
Jumbo frames are only used within a LAN, or over links that carry plain Ethernet; they are not suitable for use over PPTP, L2TP, etc.
Also remember that jumbo frames MUST be configured on every device that handles the traffic: switches, NICs, routers, ...
  • Does netgraph not support jumbo frames? === Netgraph is used to connect interfaces, and perhaps to forward packets; it does not handle jumbo frames itself.
Maybe you misunderstand what jumbo frames are for. They are used on high-speed networks when you want to increase throughput. Jumbo frames assume that packets are not fragmented on the way from one host to another as long as they are smaller than the jumbo size. But if some transit host does not accept jumbo frames, your packets will be fragmented down to that host's MTU. So if you try to run jumbo frames over an MTU 1492 link, your 9000-byte packets will be chunked into packets of about 1470 bytes (the MTU minus protocol overhead).
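If you want to see where the smaller MTU bites, you can for example ping with the don't-fragment bit set and a payload close to the limit (the address below is only a placeholder for one of your jail addresses):
Code:
# 1464-byte payload + 8 bytes ICMP + 20 bytes IP = 1492 bytes on the wire: should pass
ping -D -s 1464 10.0.3.1
# anything bigger no longer fits into MTU 1492; with -D it fails instead of fragmenting
ping -D -s 1500 10.0.3.1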
 
Thank you very much, your explanation makes it clear.

There is data exchange between the jails, and my idea was that using jumbo frames between them would reduce the fragmentation/de-fragmentation overhead.
So jumbo frames would only be used between the jails.

My assumptions (I try to check them with the route lookup below):
  • A virtual interface behaves like a physical interface.
  • Packets are fragmented on the host when they travel from the jail interface ng_vimage0 (MTU=9000) to the internet interface ng0 (MTU=1492).
  • All jail interfaces (ng_vimage0/ng0_sshd) have the JUMBO_MTU option, so they should be able to carry jumbo frames.
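To double-check which MTU actually applies per destination, I plan to ask the routing table directly; the jail name and the jail-side address below are guesses, only fd00::ffff:a00:3fe (the host) is taken from my ifconfig output:
Code:
# host side: MTU reported on the route toward the jail (jail address is a guess)
route get -inet6 fd00::ffff:a00:301
# jail side: MTU reported on the route back to the host
jexec sshd route get -inet6 fd00::ffff:a00:3fe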
 
I'm a newbie with jails, but if communication between the jails can go over the loopback interface, that would be the fastest way. Why? Loopback supports jumbo frames, and loopback doesn't involve routing.

If loopback is not possible and it is acceptable for you, you can switch off checksum offloading with ifconfig em0 -rxcsum (or -txcsum; note that if you disable checksums you also need to disable TSO4/TSO6 on the interface). Additionally, you can tune your NICs via loader tunables and sysctls, for example the queue sizes (example below for Intel "em" cards):

Code:
/boot/loader.conf:
# larger RX/TX descriptor rings for the em(4) NIC
hw.em.rxd=4096
hw.em.txd=4096
# cap on interrupts per second
hw.em.max_interrupt_rate=32000

/etc/sysctl.conf:
# interrupt moderation delays (RX/TX, plus their absolute limits)
dev.em.0.rx_int_delay=200
dev.em.0.tx_int_delay=200
dev.em.0.rx_abs_int_delay=4000
dev.em.0.tx_abs_int_delay=4000
# maximum number of packets processed per RX interrupt
dev.em.0.rx_processing_limit=4096
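For the checksum/TSO part mentioned above, a minimal sketch (em0 is only an example name, use your own NIC):
Code:
# disable RX/TX checksum offload together with TCP segmentation offload
ifconfig em0 -rxcsum -txcsum -tso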
 
Thanks a lot.

In fact, I had already migrated from the conventional setup (cloned lo1) to vimage because of network isolation. :)

About checksums, I executed the following:
ifconfig ng_vimage0 rxcsum
and
ifconfig ng_vimage0 -rxcsum
Neither gives an error, but neither has any effect (the options never change), as shown below:
Code:
ng_vimage0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1492
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 02:ac:95:e3:83:d2
        hwaddr 58:9c:fc:00:17:0a
        inet6 fd00::ffff:a00:3fe prefixlen 119
        inet6 fe80::ac:95ff:fee3:83d2%ng_vimage0 prefixlen 64 scopeid 0x6
        inet 10.0.3.254 netmask 0xfffffe00 broadcast 10.0.3.255
        fib: 1
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=61<PERFORMNUD,AUTO_LINKLOCAL,NO_RADR>

It looks like those options only affect physical interfaces.
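One thing I still want to check is whether the virtual interface advertises RXCSUM at all; if I read ifconfig(8) correctly, -m lists the capabilities the driver supports:
Code:
# show the supported media and capability list of the netgraph interface
ifconfig -m ng_vimage0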
 