Solved: Strange issue with scp performance

We have a weird situation here. We just made a clone of a virtual image running Linux and sent it over to a cloud provider. The problem is that scp performance from it to *many* hosts in our office has worsened considerably, including one FreeBSD box.

On this FreeBSD box I also run a jail, and an Ubuntu Linux image under bhyve. The crazy thing here is that:

1) scp from the original (local office) Linux image to the FreeBSD host, its FreeBSD jail and its bhyve Linux guest runs at ~2.7 MB/s
2) scp from the cloud Linux image to the FreeBSD host is slow (~300 KB/s)
3) scp from the cloud Linux image to that host's FreeBSD jail is slow (~300 KB/s)
4) scp from the cloud Linux image to that host's bhyve Ubuntu guest is fast (~1-1.5 MB/s)

What can cause this behavior?

The FreeBSD host and the jail are 10.2-RELEASE-p7; the bhyve guest runs Ubuntu 16.04.1 LTS.
 
What can cause this behavior?

Too many variables could affect your results: bandwidth, traffic congestion, CPU load, disk I/O load, ssh configuration (e.g. use of compression). Try to shrink the domain of the issue.

Besides, even 2.7 MB/s seems slow to me ... unless MB/s actually means Mbit/s.

Does a different protocol work any better? Try wget, or fetch from the jails ...
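
For example, here is one way to separate the raw TCP path from ssh and from disk I/O (just a sketch; it assumes iperf3 is installed on both ends, and the host name is a placeholder):
Code:
# raw TCP throughput, independent of ssh and of the disks
# on the FreeBSD box:
iperf3 -s
# on the cloud Linux VM:
iperf3 -c office-freebsd-host -t 10

# ssh throughput with no disk I/O on either side (run from the cloud VM)
dd if=/dev/zero bs=1M count=100 | ssh achill@office-freebsd-host 'cat > /dev/null'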
 
The file is

-rw-r--r-- 1 root root 2842464 Oct 30 2014 /boot/vmlinuz-3.2.0-4-amd64


Code:
achill@itdevel:~$ sftp achill@office-freebsd-host
Password for achill@xx.xx.xx.xx:
Connected to xx.xx.xx.xx.
sftp> put /boot/vmlinuz-3.2.0-4-amd64
Uploading /boot/vmlinuz-3.2.0-4-amd64 to /usr/home/achill/vmlinuz-3.2.0-4-amd64
/boot/vmlinuz-3.2.0-4-amd64                           100% 2776KB 462.6KB/s   00:06
sftp> ^D
achill@itdevel:~$
achill@itdevel:~$
achill@itdevel:~$ sftp achill@office-freebsd-bhyve-linux
achill@xx.xx.xx.xx's password:
Connected to xx.xx.xx.xx.
sftp> put /boot/vmlinuz-3.2.0-4-amd64
Uploading /boot/vmlinuz-3.2.0-4-amd64 to /home/achill/vmlinuz-3.2.0-4-amd64
/boot/vmlinuz-3.2.0-4-amd64                           100% 2776KB   2.7MB/s   00:01
sftp>
sftp> ^D
achill@itdevel:~$

Why on earth would scp to the bhyve'd Linux guest run so much faster than to its FreeBSD host? They share the network, bandwidth, traffic congestion, NIC, CPU and memory (same bare iron), and both the host and the Linux guest are mostly idle. If anything, the virtualization layer should add overhead for the Linux guest.
 
Could be different MTU values or encryption. Are you using a different sftp server on FreeBSD? Can you post your sshd_config?
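
A quick way to check the encryption side (a sketch; the host names are the same placeholders used elsewhere in the thread) is to look at what each server negotiates, and to retry with an explicitly chosen cipher:
Code:
# show the negotiated kex/cipher/MAC for each target
ssh -v achill@office-freebsd-host true 2>&1 | grep 'kex:'
ssh -v achill@office-freebsd-bhyve-linux true 2>&1 | grep 'kex:'

# retry the transfer forcing a cheap cipher
scp -c aes128-ctr /boot/vmlinuz-3.2.0-4-amd64 achill@office-freebsd-host:/tmp/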
 
FreeBSD host ifconfig:
Code:
achill@smadev:~> ifconfig
re0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=82099<RXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
        ether 94:de:80:28:b8:1c
        inet 10.9.200.131 netmask 0xffffff00 broadcast 10.9.200.255
        inet 10.9.200.216 netmask 0xffffffff broadcast 10.9.200.216
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=600003<RXCSUM,TXCSUM,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
tap0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=80000<LINKSTATE>
        ether 00:bd:05:15:db:00
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
        media: Ethernet autoselect
        status: active
        Opened by PID 8228
bridge0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        ether 02:14:fa:5d:94:00
        nd6 options=9<PERFORMNUD,IFDISABLED>
        id 00:00:00:00:00:00 priority 32768 hellotime 2 fwddelay 15
        maxage 20 holdcnt 6 proto rstp maxaddr 2000 timeout 1200
        root id 00:00:00:00:00:00 priority 32768 ifcost 0 port 0
        member: tap0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 3 priority 128 path cost 2000000
        member: re0 flags=143<LEARNING,DISCOVER,AUTOEDGE,AUTOPTP>
                ifmaxaddr 0 port 1 priority 128 path cost 20000

bhyve Linux guest ifconfig:
Code:
achill@ubuntu-achill:~$ /sbin/ifconfig
enp0s2    Link encap:Ethernet  HWaddr 00:a0:98:d2:16:22  
          inet addr:10.9.200.206  Bcast:10.9.200.255  Mask:255.255.255.0
          inet6 addr: fe80::2a0:98ff:fed2:1622/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2069671 errors:0 dropped:18 overruns:0 frame:0
          TX packets:13193 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:279585515 (279.5 MB)  TX bytes:3857447 (3.8 MB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:160 errors:0 dropped:0 overruns:0 frame:0
          TX packets:160 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:11840 (11.8 KB)  TX bytes:11840 (11.8 KB)

MTUs are the same.

FreeBSD's sshd_config:

cat /etc/ssh/sshd_config | egrep -v -e '(^#)|(^$)'

Code:
ListenAddress 10.9.200.131
PermitRootLogin yes
Subsystem       sftp    /usr/libexec/sftp-server

bhyve Linux guest:
Code:
Port 22
Protocol 2
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_dsa_key
HostKey /etc/ssh/ssh_host_ecdsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
UsePrivilegeSeparation yes
KeyRegenerationInterval 3600
ServerKeyBits 1024
SyslogFacility AUTH
LogLevel INFO
LoginGraceTime 120
PermitRootLogin prohibit-password
StrictModes yes
RSAAuthentication yes
PubkeyAuthentication yes
IgnoreRhosts yes
RhostsRSAAuthentication no
HostbasedAuthentication no
PermitEmptyPasswords no
ChallengeResponseAuthentication no
X11Forwarding yes
X11DisplayOffset 10
PrintMotd no
PrintLastLog yes
TCPKeepAlive yes
AcceptEnv LANG LC_*
Subsystem sftp /usr/lib/openssh/sftp-server
UsePAM yes
 
I'm unable to see anything relevant ...

Please ssh into your machines (host, jail, bhyve guest) and try an scp/sftp to localhost
(preferably with a large file, 200 MB or more):
Code:
$ sftp asx@localhost
sftp> cd /tmp
sftp> put FreeBSD-10.3-RELEASE-amd64-memstick.img
Uploading FreeBSD-10.3-RELEASE-amd64-memstick.img to /tmp/FreeBSD-10.3-RELEASE-amd64-memstick.img
FreeBSD-10.3-RELEASE-amd64-memstick.img            100%  744MB  53.1MB/s   00:14  
sftp> quit
This is a laptop, and my results look to be bound by the underlying disk I/O speed; scp doesn't appear to add any significant overhead.
 
Thanx ASX. scp to localhost is very fast (as expected).
Maybe the problem is not with scp but rather with the network stack.
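
One way to take ssh out of the picture completely (a sketch, assuming nc is available on both sides and port 5001 is free) is to push the same file over a plain TCP connection:
Code:
# on the FreeBSD host (receiver):
nc -l 5001 > /dev/null
# on the cloud Linux VM (sender):
# (some nc flavours need -N or -q 0 to exit at end of file)
time nc office-freebsd-host 5001 < /boot/vmlinuz-3.2.0-4-amd64
If that is also slow, the problem is in the network path or TCP tuning, not in scp/sftp itself.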
 
Those are deprecated. How much of a speed gain did you get?

300 KB/s -> 2.7 MB/s

I saw they were marked deprecated, but that was a management decision. So in cases like this, to solve similar problems, one should install OpenSSH from ports.
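
Roughly like this on FreeBSD 10.x (a sketch; it assumes the HPN knob is ticked in the port's config dialog):
Code:
# build OpenSSH with the HPN patches from ports
cd /usr/ports/security/openssh-portable
make config        # enable the HPN option
make install clean

# switch from the base sshd to the ports one
sysrc sshd_enable=NO
sysrc openssh_enable=YES
service sshd stop
service openssh start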
 
Glad to see there are improvements; however, I suspect there is something else going on.

I don't use bhyve, but I sometimes use VirtualBox, so I made a quick test: sftp from a vbox guest to the host (both FreeBSD 10.3) achieves 7 MB/s, or 3.9 MB/s with compression, which in my opinion are low speeds, because I know that running Linux on Linux on the same laptop the speed was around 20 MB/s.

Another quick test, sftp from the guest to itself, shows approximately 19 MB/s ...

Your case was more extreme (300 KB/s); in my case I hadn't noticed it until now, but I still think something is not right.
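
If someone wants to repeat the comparison, toggling compression per transfer is enough (sketch; "vbox-host" and the image file are just placeholders):
Code:
scp -o Compression=no  FreeBSD-10.3-RELEASE-amd64-memstick.img asx@vbox-host:/tmp/
scp -o Compression=yes FreeBSD-10.3-RELEASE-amd64-memstick.img asx@vbox-host:/tmp/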
 
My case was scp from a high-latency, newish remote Linux VM to various hosts on our local net. Anyway, scp seems to run faster on Linux. I also tried the HPN settings in my local bhyve Linux guest and it didn't recognize any of them, so I guess ssh on Linux has a better way to auto-tune.
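
For reference, these are the knobs Linux auto-tunes out of the box (read-only check, nothing to change):
Code:
# min / default / max socket buffer sizes, in bytes
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
# receive buffer auto-tuning (1 = enabled, the default)
sysctl net.ipv4.tcp_moderate_rcvbuf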
 
Re-solved. Playing with the following sysctls also helped a lot:
Code:
sysctl net.inet.ip.intr_queue_maxlen=2048
sysctl net.inet.tcp.cc.algorithm=chd
sysctl net.inet.tcp.cc.algorithm=cubic
sysctl net.inet.tcp.cc.algorithm=htcp
sysctl net.inet.tcp.cc.algorithm=newreno
sysctl net.inet.tcp.recvbuf_auto=1
sysctl net.inet.tcp.recvbuf_inc=524288
sysctl net.inet.tcp.recvbuf_max=16777216
sysctl net.inet.tcp.recvspace=419430
sysctl net.inet.tcp.sendbuf_auto=1
sysctl net.inet.tcp.sendbuf_max=16777216
sysctl net.inet.tcp.sendspace=209715
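To keep the winning combination across reboots (a sketch; the non-default congestion control algorithms live in kernel modules that must be loaded before the sysctl can select them):
Code:
# /boot/loader.conf -- load the chosen congestion control module at boot
cc_htcp_load="YES"

# /etc/sysctl.conf -- applied at boot
net.inet.tcp.cc.algorithm=htcp
net.inet.tcp.recvbuf_max=16777216
net.inet.tcp.sendbuf_max=16777216
net.inet.tcp.recvbuf_inc=524288
net.inet.ip.intr_queue_maxlen=2048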
 
A simple and surprising workaround (I have not dived into the technical analysis): if you set the MTU on the vmx interface to a higher value (e.g. 1600), it seems to work without problems.
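
To make that stick across reboots on FreeBSD (sketch; the interface name and address are examples, adjust to your setup):
Code:
# /etc/rc.conf
ifconfig_vmx0="inet 192.0.2.10 netmask 255.255.255.0 mtu 1600"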
 