vnet jail not letting ping -6 through from jails on same host

dvl@

I have a vnet jail (pkg01) on my FreeBSD 13.2 host (r730-01). It is the only vnet jail on this host. The main problem: My monitoring jail cannot ping6 the pkg01 jail.

ping via IP4 is not an issue anywhere. The rest of this discussion is about IPv6 unless otherwise mentioned.

* some jails on that same host can ping pkg01, some can't
* other hosts and jails on other hosts can ping pkg01

Failed pings, from the monitoring jail, are accompanied by these messages:

Code:
19:42:26.224666 IP6 2001:470:8abf:7055:c348:9dc1:0:29 > ff02::1:ff22:ea2d: ICMP6, neighbor solicitation, who has 2001:470:8abf:7055:b6f9:d572:6622:ea2d, length 32
19:42:27.241219 IP6 2001:470:8abf:7055:c348:9dc1:0:29 > ff02::1:ff22:ea2d: ICMP6, neighbor solicitation, who has 2001:470:8abf:7055:b6f9:d572:6622:ea2d, length 32

To me, that says the ping is reaching pkg01, but it cannot reply because its neighbor solicitation for the monitoring jail's address goes unanswered.

* 2001:470:8abf:7055:c348:9dc1:0:29 is pkg01
* 2001:470:8abf:7055:b6f9:d572:6622:ea2d is the monitoring jail
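
To verify that theory, the neighbor table and the ND traffic can be inspected directly. A sketch of the checks (addresses are the ones above; `igb0` is the host NIC from the configs below):

```shell
# Inside the monitoring jail: look for an incomplete neighbor entry for pkg01
ndp -an | grep 2001:470:8abf:7055:c348:9dc1:0:29

# On the host: watch all ICMPv6 neighbor discovery traffic on igb0
tcpdump -n -i igb0 icmp6
```

An entry stuck without a MAC address, or repeated solicitations with no advertisement in the tcpdump output, would confirm that neighbor discovery, not ICMP echo itself, is what fails.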

This is the pkg01 jail configuration:

Code:
[19:45 r730-01 dvl /etc/jail.conf.d] % cat pkg01.conf
pkg01 {

  #
  # start of standard settings for each jail
  #

#  exec.start  = "/bin/sleep 5";
  exec.start += "/bin/sh /etc/rc";
  exec.stop  = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  path = /jails/$name;

  allow.raw_sockets;
  #securelevel = 2;
 
  exec.prestart  = "logger trying to start jail $name...";
  exec.poststart = "logger jail $name has started";
  exec.prestop   = "logger shutting down jail $name";
  exec.poststop  = "logger jail $name has shut down";
 
  host.hostname = "$name.int.unixathome.org";
  exec.consolelog="/var/tmp/jail-console-$name.log";
 
  persist;

  #
  # end of standard settings for each jail
  #

  allow.chflags;

  allow.mount.devfs;
  allow.mount.fdescfs;
  allow.mount.linprocfs;
  allow.mount.nullfs;
  allow.mount.procfs;
  allow.mount.tmpfs;
  allow.mount.zfs=true;
  allow.mount=true;

  allow.raw_sockets;
  allow.socket_af;

  children.max=200;

  enforce_statfs=1;

  exec.created+="zfs jail $name  data03/poudriere";
  exec.created+="zfs set jailed=on data03/poudriere";

  exec.poststart  += "jail -m allow.mount.linprocfs=1 name=$name";

  exec.poststop   += "/usr/local/sbin/jib destroy $name";

  exec.prestart   += "/usr/local/sbin/jib addm  $name igb0";

  host.domainname=none;

  sysvmsg=new;
  sysvsem=new;
  sysvshm=new;

  vnet.interface   = "e0b_$name";
  vnet;
}
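
As a side note on the `jib` lines above: `jib` (shipped in /usr/share/examples/jails/) creates an epair per jail and an if_bridge per physical NIC. Assuming its default naming convention, the result can be inspected on the host like this (a sketch, not output from this thread):

```shell
# After "jib addm pkg01 igb0" there should be:
#   igb0bridge - an if_bridge with igb0 and e0a_pkg01 as members
#   e0a_pkg01  - host-side half of the epair
#   e0b_pkg01  - jail-side half, the vnet.interface used above
ifconfig igb0bridge              # bridge and its member list
ifconfig e0a_pkg01               # host side of the epair
jexec pkg01 ifconfig e0b_pkg01   # jail side, inside the vnet
```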

This is the monitoring jail configuration:

Code:
[19:45 r730-01 dvl /etc/jail.conf.d] % cat webserver.conf
webserver {

  #
  # start of standard settings for each jail
  #

  exec.start = "/bin/sh /etc/rc";
  exec.stop  = "/bin/sh /etc/rc.shutdown";
  exec.clean;
  mount.devfs;
  path = /jails/$name;

  allow.raw_sockets;
  #securelevel = 2;
 
  exec.prestart  = "logger trying to start jail $name...";
  exec.poststart = "logger jail $name has started";
  exec.prestop   = "logger shutting down jail $name";
  exec.poststop  = "logger jail $name has shut down";
 
  host.hostname = "$name.int.unixathome.org";
  exec.consolelog="/var/tmp/jail-console-$name.log";
 
  persist;

  #
  # end of standard settings for each jail
  #

    ip4.addr = "igb0|10.55.0.3";
    ip6.addr = "igb0|2001:470:8abf:7055:b6f9:d572:6622:ea2d";
}

Any ideas?

Code:
[19:43 webserver dan ~] % ping pkg01
PING6(56=40+8+8 bytes) 2001:470:8abf:7055:b6f9:d572:6622:ea2d --> 2001:470:8abf:7055:c348:9dc1:0:29
^C
--- pkg01.int.unixathome.org ping6 statistics ---
3 packets transmitted, 0 packets received, 100.0% packet loss
[19:47 webserver dan ~] % ping -4 pkg01
PING pkg01.int.unixathome.org (10.55.0.29): 56 data bytes
64 bytes from 10.55.0.29: icmp_seq=0 ttl=64 time=0.105 ms
64 bytes from 10.55.0.29: icmp_seq=1 ttl=64 time=0.067 ms
64 bytes from 10.55.0.29: icmp_seq=2 ttl=64 time=0.060 ms
64 bytes from 10.55.0.29: icmp_seq=3 ttl=64 time=0.069 ms
^C
--- pkg01.int.unixathome.org ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.060/0.075/0.105/0.018 ms
[19:47 webserver dan ~] %
 
It looks like you are pinging from a global IPv6 address to a link-local address. A link-local address is not routable; could that be the problem?
 
Do you use a bridge for your vnet jail? Does the bridge have an IPv4 and an IPv6 address?

On big routers you always have to set the source interface for ping6. Not sure if you need to do this here, but it helps to narrow down the problem.
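
For reference, FreeBSD's ping can pin the source address with `-S`, which makes this kind of test reproducible (the address below is the monitoring jail's, from above):

```shell
# Ping pkg01 over IPv6, forcing the monitoring jail's global address as source
ping -6 -S 2001:470:8abf:7055:b6f9:d572:6622:ea2d pkg01
```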
 
Yes. I don't recall now.

The solution I am using now: move all IP addresses from the primary NIC to the bridge.
That should not be necessary; a bridge should be unaware of IP addresses.
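
For anyone wanting to try that workaround, an rc.conf sketch of the "addresses on the bridge, NIC as bare member" layout might look like this (bridge0 and the addresses are illustrative, not taken from this thread):

```shell
# /etc/rc.conf fragment: IP configuration lives on bridge0, igb0 only bridges
cloned_interfaces="bridge0"
ifconfig_bridge0="addm igb0 up"
ifconfig_bridge0_alias0="inet 10.55.0.1/24"
ifconfig_bridge0_ipv6="inet6 2001:db8::1 prefixlen 64"
ifconfig_igb0="up"
```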

Overall, I found IPv6 to work very reliably (at least until we get to the evil things, like filtering fragmented IPv6, or port forwarding. ;) )

But, I am using only vnet jails, and I am using only netgraph bridges.

From the blog post I might assume the problem comes from using an ifconfig bridge. I avoided that approach from the beginning, because I didn't like those bridges sitting in the list of ifconfig devices and acting there as network devices in their own right.

So I went for netgraph eifaces and bridges, which gives me a separate environment where I can independently wire my VNETs together like on a breadboard, and not spam my ifconfig device lists with strange interdependencies (it would otherwise get completely unintelligible when implementing ipfw with interface-based recv/xmit rules, which is my preferred logic for auto-creating gateway rulesets).
In the outcome, my VNET jails and the host can be treated just like independent systems, carrying only the ifaces that they actually use.

Now in the light of this matter, it seems my gut feeling was correct and ifconfig bridges do indeed create additional issues. Anyway, configuring the bridge as layer-3 aware (and the regular interfaces not) seems like putting the world upside down.
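
For comparison, the classic ng_bridge recipe (roughly as in the ng_bridge(4) man page, with igb0 substituted; node names are illustrative) looks like:

```shell
# Attach an ng_bridge to igb0's lower hook and loop the upper hook back in
ngctl mkpeer igb0: bridge lower link0
ngctl name igb0:lower igb0bridge
ngctl connect igb0: igb0bridge: upper link1
ngctl msg igb0: setpromisc 1
ngctl msg igb0: setautosrc 0

# Add a virtual Ethernet interface (ngethN) on the next bridge link,
# ready to be handed to a vnet jail
ngctl mkpeer igb0bridge: eiface link2 ether
```

The bridge node never appears in the ifconfig device list; only the ngeth interfaces do, which is the "separate breadboard" property described above.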
 
I'm not well versed in this. I'm piecing together what I have been told by others.

A true bridge is layer 2, and an `ifconfig bridge` is definitely more than that. To quote someone else:

* "a historically grown mess".
* "There is no universal solution".
* "This situation is clearly against the description 'zones of the same scope cannot overlap' in Section 5, RFC 4007."
Netgraph has issues too (I'm not familiar with them), just perhaps not this one. If I were already using netgraph, great.

I'm told OpenBSD's (relatively new) veb (virtual Ethernet bridge) avoids many of both sets of problems; see https://undeadly.org/cgi?action=article;sid=20210223111210
 
Well, the main issue with netgraph seems to be that it does things differently and therefore implies a bit of a learning curve. And it is a Berkeley thing, rarely found elsewhere. ;)
It certainly has issues too, but I happened not to encounter them; it was just fun to implement, and now there is a little rc.d script setting up the fabric as described in rc.conf, and I have mostly forgotten its contents. I had a lot of trouble with ipfw, with tunnelling+MTU, with congestion control, but almost none with this piece.
But then, use cases differ, and "there is no universal solution" ...
 
I have been using this setup for 7+ years, and I was never able to ping the jail from the host system. I don't know why, but at some point I created a bug ticket and a dev told me to move the IP to the bridge. Not sure how it works in Linux, but from a networking point of view the bridge should not have an IP address. It should work out of the box once the NIC is attached to the bridge.

Additionally, I had to tweak my rc.conf, which is also not documented in the handbook, but it does not seem to be a popular setup...


To me it looks like an ARP issue, but I could not find any wrong IP-to-MAC mapping.

Due to a bug in FreeBSD 12 or so, I also tested netgraph bridges. They worked well, but for my setup it was too advanced and it was difficult to get my head around the whole framework.
 
I'm also interested in trying this. Any chance of a write up?
Hm... what about this?
 
Hm... what about this?
Thanks for that. Would you foresee any issues using ngbridge purely with IPv6?
 
I would highly recommend the tool from freebsdfrau

https://twitter.com/freebsdfrau/status/1229466379708846081


It worked very well for my use case. The only reason I gave up on it is that there aren't a lot of people who can help you if you run into problems. From freebsdfrau's postings, she is very knowledgeable in this framework and is using it in a big production environment.

PS: Do you still have issues with vnet jails? Sadly I am not able to use IPv6, because DT AG is cycling my prefix on a regular basis.
 
So what? That works here:
Using IPv6 Dynamic GU Addresses in Nested Subnets

I still can't believe it... hey, there's somebody on Twitter who is very knowledgeable... well, my system doesn't work, because it can't. Wow.

Thanks for bringing this up. It's a nice tutorial; I will have to take a closer look. Compared to v4 it is a joke that this needs a multi-page tutorial. If I just wanted v6 for outgoing connections I would use SLAAC, but I got problems after activating it. Sometimes the v6 gateway in my jails was not reachable anymore, and Happy Eyeballs was broken too. For example, pkg was not able to reach its mirror; normally it should just fall back to v4, but it did not. I never found the root cause, but maybe I will try it again on 14.0.
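
For the SLAAC experiment mentioned here, the host-side rc.conf knobs are small (the interface name is illustrative):

```shell
# /etc/rc.conf fragment: accept router advertisements (SLAAC) on igb0
ifconfig_igb0_ipv6="inet6 accept_rtadv"
rtsold_enable="YES"
```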
 