Hi!
We're trying to use the reply-to statement in pf rules with net.inet.tcp.tso enabled. It works great for IPv4, but when we try the same for IPv6 we get either page-fault kernel crashes or very slow speeds, depending on whether nginx uses sendfile. We first noticed it on physical servers and then reproduced it in VMs, on completely fresh installs from the latest install ISO. Setting up the TCP session and sending small amounts of data works, but larger transfers (we've used 1 MB for our tests) don't. Let me start by showing our environment.

For this test we have two virtual servers on the same subnet.
1881: Acting as server in our tests, running pf. IP: 10.250.59.117/24, fd00:1ab::1881/64
1882: Acting as client in our tests. IP: 10.250.59.118/24, fd00:1ab::1882/64
Code:
[root@dev-freebsdtest-arn1-1881 /usr/local/www/nginx]# uname -a
FreeBSD dev-freebsdtest-arn1-1881 13.2-RELEASE FreeBSD 13.2-RELEASE releng/13.2-n254617-525ecfdad597 GENERIC amd64
[root@dev-freebsdtest-arn1-1881 /usr/local/www/nginx]# cat /etc/pf.conf
pass in all
pass out all keep state
pass in quick on vtnet0 reply-to (vtnet0 fd00:1ab::1882) proto tcp to fd00:1ab::1881 port { 80 8888 }
pass in quick on vtnet0 reply-to (vtnet0 10.250.59.118) proto tcp to 10.250.59.117 port { 80 8888 }
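For comparison, dropping only the reply-to option from the IPv6 rule (the one change that restores normal IPv6 speeds for us) leaves the ruleset like this:

```
pass in all
pass out all keep state
pass in quick on vtnet0 proto tcp to fd00:1ab::1881 port { 80 8888 }
pass in quick on vtnet0 reply-to (vtnet0 10.250.59.118) proto tcp to 10.250.59.117 port { 80 8888 }
```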
In this scenario the reply-to doesn't really do anything, since traffic would be routed over that interface and to that destination MAC anyway, but it's enough to demonstrate the problem. IPv6 works perfectly if we remove reply-to (vtnet0 fd00:1ab::1882) from the pf config.

For the test we're using nginx with a default config; the only things we've changed are adding listen [::]:80 default_server; and changing server_name to _. We're only including the IPv6 tests here, since IPv4 works as intended.
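Spelled out, the nginx change amounts to something like this in the default server block (a sketch; the surrounding directives are whatever the stock nginx.conf ships with):

```
server {
    listen       80;
    listen       [::]:80 default_server;   # added
    server_name  _;                        # changed (default is "localhost")
    ...
}
```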
Code:
[root@dev-freebsdtest-arn1-1882 ~]# curl [fd00:1ab::1881]:80/1MB.A.txt -o /dev/null
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 976k 100 976k 0 0 4836 0 0:03:26 0:03:26 --:--:-- 4916
Note the speed above, 4916 bytes/second; this speed is very consistent, while IPv4 reaches hundreds of MB/s. tcpdump on the server side shows that the reply packets take over 250 ms each.

Sending data using netcat on both sides shows the same slow transfer speeds, so it has nothing to do with nginx or curl.
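As a back-of-the-envelope sanity check (our own arithmetic, not from any tool): if every data segment effectively waits ~250 ms for its reply, throughput is capped near one MSS per round trip, which lands in the same ballpark as the 4916 bytes/s curl reports:

```shell
# One MSS per 250 ms delay. Assumed MSS: 1500-byte MTU minus 40 bytes of
# IPv6 header and 20 bytes of TCP header = 1440 bytes.
awk 'BEGIN { mss = 1500 - 40 - 20; delay = 0.25; printf "%d bytes/s\n", mss / delay }'
```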
Disabling net.inet.tcp.tso on the server solves both the slow speeds and the crashes.

Enabling sendfile in nginx instead causes an instant kernel panic on the server; see the attached core.txt file. This is probably a larger problem in general, since it could open the door to DoS attacks.

We're grateful for any pointers or ideas on how to troubleshoot this, or where to report the bug.
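PS: for anyone hitting the same thing, the interim workaround we're running with is disabling TSO on the server (standard FreeBSD sysctl knob, shown as a sketch):

```
# At runtime:
#   sysctl net.inet.tcp.tso=0
# Persistently, in /etc/sysctl.conf:
net.inet.tcp.tso=0
```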