Wireguard vpn slow speed

Vovas

Member

Reaction score: 2
Messages: 66

Hi experts!
I have a problem with slow speed over a WireGuard VPN. FreeBSD 12.0 is installed on a VPS.
Information about the server:
Bash:
# cat /var/run/dmesg.boot | grep CPU
CPU: QEMU Virtual CPU version 1.5.3 (2400.20-MHz K8-class CPU)
cpu0: <ACPI CPU> on acpi0
CPU: QEMU Virtual CPU version 1.5.3 (2400.22-MHz K8-class CPU)
cpu0: <ACPI CPU> on acpi0
Bash:
# cat /var/run/dmesg.boot | grep memory
real memory  = 2147483648 (2048 MB)
avail memory = 2043375616 (1948 MB)
Network:
Bash:
# ifconfig
vtnet0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=6c07bb<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,JUMBO_MTU,VLAN_HWCSUM,TSO4,TSO6,LRO,VLAN_HWTSO,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        ether 52:54:00:c9:7e:b4
        inet 212.0.0.2 netmask 0xffffff00 broadcast 212.0.0.255
        media: Ethernet 10Gbase-T <full-duplex>
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
wg0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
        options=80000<LINKSTATE>
        inet 10.0.1.1 --> 10.0.1.1 netmask 0xff000000
        groups: tun
        nd6 options=101<PERFORMNUD,NO_DAD>
        Opened by PID 575
My pf.conf
Bash:
ext_if="vtnet0"
int_if="wg0"
set skip on lo0
scrub in all
nat on $ext_if from $int_if:network to any -> ($ext_if)
pass all
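If the tunnel MTU turns out to be the culprit, a common tweak (a sketch, not taken from the poster's config; 1400 is an assumed value) is to clamp the TCP MSS in the scrub rule so packets fit inside the WireGuard tunnel:
Code:
# clamp TCP MSS so encapsulated packets fit within the tunnel MTU
scrub in all max-mss 1400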
Without VPN connection:

[attached screenshot: 1577374055165.png]


With VPN connection:

[attached screenshot: 1577374083063.png]

Any suggestions?
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 8,302
Messages: 32,149

Code:
wg0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> metric 0 mtu 1500
  options=80000<LINKSTATE> inet 10.0.1.1 --> 10.0.1.1
  netmask 0xff000000 groups: tun nd6
  options=101<PERFORMNUD,NO_DAD>
  Opened by PID 575
This is a tunnel to itself?
 
OP
Vovas

Member

Reaction score: 2
Messages: 66

This is a tunnel to itself?
Yes. This IP is set up during system boot.
/etc/rc.conf
Bash:
wireguard_enable="YES"
wireguard_interfaces="wg0"
ifconfig_wg0="inet 10.0.1.1 netmask 255.255.255.0"
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 8,302
Messages: 32,149

Doesn't WireGuard work similarly to OpenVPN? With OpenVPN you don't configure the interface in rc.conf; it's created dynamically when the OpenVPN service is started. Can you post your WireGuard config (make sure to obfuscate your passwords/public IP addresses)?
 
OP
Vovas

Member

Reaction score: 2
Messages: 66

Doesn't WireGuard work similarly to OpenVPN? With OpenVPN you don't configure the interface in rc.conf; it's created dynamically when the OpenVPN service is started.
I don't know. Maybe!
wg0.conf
Bash:
# cat /usr/local/etc/wireguard/wg0.conf
[Interface]
PrivateKey = <...>
ListenPort = 51820

[Peer]
PublicKey = <...>
AllowedIPs = 10.0.1.2/32
Endpoint = 212.0.0.2:51820

[Peer]
PublicKey = <...>
AllowedIPs = 10.0.1.3/32
Endpoint = 212.0.0.2:51820
And config for my phone:
Bash:
# cat /usr/local/etc/wireguard/ios.conf
[Interface]
Address = 10.0.1.3/32
PrivateKey = <...>
DNS = 9.28.15.8, 212.4.1.11

[Peer]
PublicKey = <...>
AllowedIPs = 0.0.0.0/0
Endpoint = 212.0.0.2:51820
 
OP
Vovas

Member

Reaction score: 2
Messages: 66

So, I've removed ifconfig_wg0 from /etc/rc.conf and added the IP address to
/usr/local/etc/wireguard/wg0.conf
Code:
[Interface]
Address = 10.0.1.1/24
PrivateKey = <...>
ListenPort = 51820
I restarted the daemon and got the same slow incoming speed; outgoing speed is around 10-20 Mbps. I've changed the MTU of the wg0 interface to 1500, like vtnet0, because every time the daemon restarts the system sets the MTU to 16304 by default.
Code:
[#] rm -f /var/run/wireguard/wg0.sock
[#] wireguard-go wg0
INFO: (wg0) 2019/12/30 12:54:59 Starting wireguard-go version 0.0.20191012
[#] wg setconf wg0 /tmp/tmp.fhpTOKz2/sh-np.2zOzEv
[#] ifconfig wg0 inet 10.0.1.1/24 10.0.1.1 alias
[#] ifconfig wg0 mtu 16304
[#] ifconfig wg0 up
[#] route -q -n add -inet 10.0.1.3/32 -interface wg0
[#] route -q -n add -inet 10.0.1.2/32 -interface wg0
[+] Backgrounding route monitor
netstat -r
Code:
Routing tables

Internet:
Destination        Gateway            Flags     Netif Expire
default            212.0.0.1      UGS      vtnet0
10.0.1.1           link#3             UH          wg0
10.0.1.2/32        wg0                US          wg0
10.0.1.3/32        wg0                US          wg0
localhost          link#2             UH          lo0
212.0.0.0/24       link#1             U        vtnet0
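Instead of overriding the MTU by hand after each restart, it can also be pinned in the config file; a sketch of the [Interface] section, assuming wg-quick's MTU option (key material elided, 1420 is an assumed value):
Code:
[Interface]
Address = 10.0.1.1/24
PrivateKey = <...>
ListenPort = 51820
# wg-quick applies this via ifconfig instead of the 16304 default
MTU = 1420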
 

zer69

New Member


Messages: 2

Dear all,

I have a very similar issue too, running 11.3-RELEASE.

Tried things which didn't help:
- MTU change
- PF NAT vs IPFW NAT
- tuning of OS network stack

My VPS uplink is 10 Gbps, and I am able to achieve 300 Mbps from my Windows 10 machine when running over SSH tunnels. When using the WireGuard VPN it's only ~10 Mbps.

Any ideas would be very welcome!

Best wishes,

-Robert
 

acheron

Aspiring Daemon
Developer

Reaction score: 239
Messages: 627

The WireGuard implementation on FreeBSD is userspace-only; what kind of performance do you expect?
 

zer69

New Member


Messages: 2

I have found out that my ISP is somehow throttling UDP connections, and WireGuard is UDP-only... Going to try OpenVPN now. Thanks for the support.
 

rf10

Member

Reaction score: 1
Messages: 20

I am actually surprised WireGuard works on FreeBSD. I tried it a few months ago and it was a no-go (aside from its userspace implementation on FreeBSD and the associated performance). I may dust it off again to run some perf tests on my LAN.
 

ctaranotte

Active Member

Reaction score: 24
Messages: 126

I am actually surprised WireGuard works on FreeBSD. I tried it a few months ago and it was a no-go (aside from its userspace implementation on FreeBSD and the associated performance). I may dust it off again to run some perf tests on my LAN.
I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.
 

Alexander Huemeyer

Member

Reaction score: 5
Messages: 32

I just tried it on my home network: Linux to FreeBSD, 55 MByte/s with and without WireGuard, and no noticeable CPU utilization on the FreeBSD server. I use the latest wireguard package from the latest repository.
I don't think it's a WireGuard problem.
 

rf10

Member

Reaction score: 1
Messages: 20

I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.
I did some performance testing on OpenVPN, and its speed was heavily affected by the encryption algorithm used. The default Blowfish was faster than AES, but I suppose it depends on whether AES hardware acceleration is present in the CPU. Wireguard is using ChaCha20, which is supposed to be fast, especially on older CPUs, but I couldn't do direct performance measurements at the time because I couldn't get Wireguard to work.
 
OP
Vovas

Member

Reaction score: 2
Messages: 66

I am using Wireguard on FreeBSD and Debian peers. Speed seems to be as good as with OpenVPN.
Could you post your PC's specifications? I use WireGuard on a VPS with 1 GB RAM and a single-core processor. Maybe my VPS is too slow :rolleyes:
 

mwest

New Member


Messages: 1

Have you tried using iperf or similar tool to remove Wireguard from the equation while testing?

On the server: iperf --server --port 9898 --udp
On the client: iperf --port 9898 --udp --client <your.server.IP>

Should reveal if the slowness is due to Wireguard, or due to something else affecting UDP traffic.
 

ctaranotte

Active Member

Reaction score: 24
Messages: 126

Have you tried using iperf or similar tool to remove Wireguard from the equation while testing?

On the server: iperf --server --port 9898 --udp
On the client: iperf --port 9898 --udp --client <your.server.IP>

Should reveal if the slowness is due to Wireguard, or due to something else affecting UDP traffic.
I have run iperf as per your suggestion, with iperf on my VPS bound to the server's public IP and wg off.

Code:
# iperf --port 9898 --udp --client "server public IP"
------------------------------------------------------------
Client connecting to server public IP, UDP port 9898
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] client local IP port 30420 connected with server public IP port 9898
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 892 datagrams
I have run iperf as per your suggestion with wg on and iperf on my VPS bound to the server's wg0 IP.

Code:
# iperf --port 9898 --udp --client "server wg0 IP"
------------------------------------------------------------
Client connecting to server wg0 IP, UDP port 9898
Sending 1470 byte datagrams, IPG target: 11215.21 us (kalman adjust)
UDP buffer size: 9.00 KByte (default)
------------------------------------------------------------
[  3] client wg0 IP port 63883 connected with server wg0 IP port 9898
[  3] WARNING: did not receive ack of last datagram after 10 tries.
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  1.25 MBytes  1.05 Mbits/sec
[  3] Sent 892 datagrams
The same with TCP packets.

Code:
# iperf --port 9898 --client "server public IP"
------------------------------------------------------------
Client connecting to server public IP, TCP port 9898
TCP window size: 64.8 KByte (default)
------------------------------------------------------------
[  3] client local IP port 58401 connected with server public IP port 9898
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  15.2 MBytes  12.7 Mbits/sec
Code:
# iperf --port 9898 --client "server wg0 IP"
------------------------------------------------------------
Client connecting to server wg0 IP, TCP port 9898
TCP window size: 64.3 KByte (default)
------------------------------------------------------------
[  3] client wg0 IP port 32814 connected with server wg0 IP port 9898
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.1 sec  13.5 MBytes  11.3 Mbits/sec
I have not installed OpenVPN on my VPS, but tell me if you need me to run the same test with OpenVPN.
I hope this helps.

EDIT:
Server: 4 cores, 8 GB, Debian 10 (amd64), wireguard-dkms (>= 0.0.20200121-2), wireguard-modules (>= 0.0.20191219), wireguard-tools (>= 1.0.20200121-2)
Client: Dell XPS L322x i5, 4 GB, FreeBSD 12.1 (amd64), wireguard 1.0.20200121
 
OP
Vovas

Member

Reaction score: 2
Messages: 66

So, could somebody explain to me why the incoming speed is so slow (around 1 Mbit/s)? The outgoing speed is normal (10 Mbit/s) for my slow VPS.
 

Void

New Member


Messages: 2

// I have a problem with slow speed with wireguard vpn. FreeBSD 12.0 installed on VPS.
Same for me.

wireguard-1.0.20200206
Upload speed is good, but download is at 1.5 Mbit/s.
wireguard-go CPU usage is only 5-7% during download.
 

Alexander Huemeyer

Member

Reaction score: 5
Messages: 32

Both of you are using FreeBSD 12.0. Perhaps try upgrading to 12.1? 12.0 is soon EoL anyway.

Perhaps it's also a problem with your VPS provider, some sort of throttling of outgoing UDP packets? To test this, you can try the speed with netcat over TCP and UDP.
 

Void

New Member


Messages: 2

I use 12.1. Tested with 3 different ISPs, same slow ingress speed. 2 different VPSes: one hosted by DO and one at a hosting provider in Germany; clients: Android and Windows. Will try to test with Linux soon.

Update:
Tested with Linux on the server side. Ingress speed is about 70 Mbit/s on the same VPS. I think the WG port on FreeBSD is bugged.
 

Futura

New Member


Messages: 1

Hi Vova,

I also faced this issue, not only with WireGuard but also with OpenVPN. Everything worked fine using different Linux distributions. In my case I could nail it down to the virtio network driver when using KVM-based virtualization: there are some weird UDP packet drops with it. The provider I use is Netcup in Germany. Switching to the e1000 driver solved the performance issue.

Maybe this information will help you.
 

SirDice

Administrator
Staff member
Moderator

Reaction score: 8,302
Messages: 32,149

I suspect the original issue may be due to MTU settings. From a quick glance through the WireGuard documentation, it seems to depend heavily on working Path MTU Discovery. Since a lot of people blindly block everything, including all ICMP, PMTUD doesn't work.
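For reference, a minimal pf rule that keeps PMTUD working passes the ICMP "fragmentation needed" messages; a sketch to adapt to an existing ruleset:
Code:
# let Path MTU Discovery work: pass ICMP "destination unreachable,
# fragmentation needed" (type 3, code 4)
pass in inet proto icmp icmp-type unreach code needfrag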
 