NMAP OS Scan extremely slow

When running an nmap -O scan on FreeBSD, it is quite a bit slower than running it on any other OS. And when I say slower, I mean minutes rather than seconds.
nmap -O on Microsoft Windows 7 Pro (Nmap 6.40)
Code:
Nmap done: 1 IP address (1 host up) scanned in 2.30 seconds
nmap -O on FreeBSD 10.0 (Nmap 6.40)
Code:
Nmap done: 1 IP address (1 host up) scanned in 244.47 seconds
That's a difference of 4 minutes. Both the Windows box and the FreeBSD box are on the same subnet, and I am scanning the same IP address. Technically the FreeBSD server is connected to a 40Gb switch on the back end, while Windows is connected to a regular 1Gb port.

I have a Python script that I had written on a BusyBox server a while ago that I am attempting to port over to FreeBSD. The script will scan a couple hundred hosts and dump the results into a database table for storage. The BusyBox server was able to complete the full network scan in around 15 minutes. After 30 minutes on FreeBSD I decided to kill the script.
The script uses the following nmap switches:
Code:
nmap 10.1.2.3 -O -n -oX OS.xml
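For reference, here is roughly what the script does (a trimmed-down sketch, not the actual script; the function names and table layout are stand-ins, and I'm using sqlite3 here instead of the real database just to keep the example self-contained):

```python
import sqlite3
import subprocess
import xml.etree.ElementTree as ET

def scan_host(ip):
    """Run nmap with OS detection and return its XML output (assumes nmap is on PATH)."""
    result = subprocess.run(
        ["nmap", ip, "-O", "-n", "-oX", "-"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def parse_os_matches(xml_text):
    """Pull (address, os_name, accuracy) tuples out of nmap's XML output."""
    root = ET.fromstring(xml_text)
    rows = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for match in host.iter("osmatch"):
            rows.append((addr, match.get("name"), int(match.get("accuracy"))))
    return rows

def store(rows, db_path="scan.db"):
    """Dump the results into a table for storage."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS os_scan (ip TEXT, os TEXT, accuracy INTEGER)")
    con.executemany("INSERT INTO os_scan VALUES (?, ?, ?)", rows)
    con.commit()
    con.close()
```

The loop over a couple hundred hosts just calls these three functions per IP, which is why the per-scan slowdown multiplies into hours.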
I did a search through the forums and I found a couple of other people running into the same problem.
viewtopic.php?&t=3279
This one described exactly the problem that I am having. But -A -T5 only cuts the run time to about a third (81 seconds compared to 244 seconds) on FreeBSD, and it actually slowed Windows down to the same scan speed that FreeBSD has :OOO
I also attempted to use --osscan-limit instead of -O, and it was just as slow, taking > 200 seconds to run.

I'm not really sure how to continue to troubleshoot this problem. I think my next step would be to do a packet comparison between the Windows box and the FreeBSD box. But even if I do that, I'm not sure how I would go about resolving any issues.

Any suggestions?
 
I am also dealing with the same issue. Running an nmap scan is so slow that I'm just using a different OS whenever I need to run a scan, so I really haven't had the time to investigate any solutions. However, I am also curious to see any solutions to this.
 
I have collected a packet capture from FreeBSD and one from my Windows box. Towards the end of the scan, the FreeBSD box appears to have some sequencing issues with the FINs and ACKs, but I need to dig some more.
There are a bunch of TCP Retransmissions, TCP Previous segment not captured, and TCP ACKed unseen segment warnings. I'll have to look at the sequence numbers more closely to tell, though.

Overall everything appears to be far slower with FreeBSD than with Windows. I was examining the sunrpc packets that were sent to the test box. Here are the numbers for the round trip time of packet sent, and reply packet received:
Code:
FreeBSD:  1.149739000 seconds
Windows:  .000379000 seconds
I think that I need to build a standalone test environment to run through different scenarios. Basically, I need to compare 4 packet sniffs rather than 2. The next time I run the test I will run a packet sniffer on both the sender and the receiver to try to determine whether it is a network latency issue. But I doubt that is the case, as I can run the same test from a server that is on the same host as the receiver, and the numbers come out pretty much the same on Windows.

I'm not really sure what the proper troubleshooting steps should be to narrow down the issue. But I'll try to continue with the packet captures to see if I can find something. If I can't find anything there, then I may try setting up DTrace to see if I get lucky and find the cause.

Any advice or direction would be greatly appreciated.
 
Do you have a packet filter running on the FreeBSD host? Things like traffic normalization (PF's scrub for example) can have a big effect on how nmap performs. I would also try another NIC (one that uses a different driver) if possible to rule out any driver problems.
 
kpa said:
Do you have a packet filter running on the FreeBSD host? Things like traffic normalization (PF's scrub for example) can have a big effect on how nmap performs. I would also try another NIC (one that uses a different driver) if possible to rule out any driver problems.

This FreeBSD box doesn't have a firewall or a packet filter enabled, as far as I know.
It has the following installed:
Apache2
PHP55
Python33
NMAP
Git
MariaDB
Sudo
WGet

Everything else on the box is standard. Also, this is running as a virtual machine inside of VMware. Tomorrow I will install nmap on one of my physical FreeBSD servers just to eliminate a possible problem with it being a virtual machine.
 
Looks like you and I actually have a very similar setup. I'm also running FreeBSD on a virtual machine.

I am actually running my FreeBSD server in a virtual machine under KVM. I read somewhere a couple of weeks ago that the default FreeBSD network drivers can have more latency on virtual machines compared to other OSes such as Linux or Windows. One 'solution' is to switch to the VirtIO network drivers, which should help with the latency. I'm in the process of switching my FreeBSD VM from the default network drivers to the VirtIO drivers. Once I do that, I'll run a scan again to see if it's still taking long. That should at least rule out potential latency issues.
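For anyone following along, the guest-side switch to VirtIO amounts to loading the drivers at boot and pointing rc.conf at the new interface. This is just a sketch of what I'm doing, with module names taken from the virtio(4) and vtnet(4) man pages; on 10.0 the drivers may already be compiled into GENERIC, in which case the loader.conf lines are unnecessary:

```
# /boot/loader.conf -- load the VirtIO transport and network drivers at boot
virtio_load="YES"
virtio_pci_load="YES"
if_vtnet_load="YES"
```

Then in /etc/rc.conf the interface name changes from the emulated NIC (em0 in my case) to vtnet0, e.g. ifconfig_vtnet0="DHCP".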
 
Alright, upon arriving at work I powered on my test FreeBSD box that is on my desk and connected it back to my network.
Code:
nmap -O 10.1.2.3
Has taken a little longer than it did on my virtual machine. As far as network goes, this physical box is plugged into the same switch that my Windows PC is that I have been running the tests against.

The physical server is just an old dell pizza box. It has a handful more things running on it, as it is my test server for migrating from Windows DHCP / DNS to FreeBSD BIND. It has a BCM5750 NIC.

Just as another test, I plugged that physical FreeBSD box back into my test network and ran the same nmap command (different IP) and it responded in just under a minute (56 seconds). This is a huge improvement over what I was getting before, although it is still 53 seconds slower than Windows.
My test network consists of 2 Windows domain controllers and this 1 physical FreeBSD box, plugged into a Cisco 2960 switch with no configs.

I should clarify my test scenarios a bit so that they make sense (sorry I have just been testing random things at random times).
Code:
FreeBSD Virtual Server will now be known as FreeBSD VM
FreeBSD Physical Server will now be known as FreeBSD PS
Original destination IP address is 10.1.2.3 running Windows Server 2008 R2 Datacenter with Exchange 2010 running in a VM
Secondary destination IP address 10.1.2.81 running Windows Server 2008 R2 Standard as a Domain Controller (DHCP / DNS) in a VM
Secondary destination IP address 10.1.2.82 running Windows Server 2008 R2 Standard as a Domain Controller (DHCP / DNS) as a physical server
Test Windows server in standalone test environment 10.2.2.81 running Windows Server 2008 R2 Standard as a Domain Controller (DHCP / DNS) as a physical server
Windows Server 2003 IP Address 10.1.2.18 (Nothing special running on it) in a VM
Windows Desktop running Windows 7 as a physical box
All systems have the same version of nmap (6.40) except the Windows Server 2003 (running nmap 5.0)
All production network physical servers are plugged into the same switch which is the same switch the Virtual Host plugs into
All test network servers are plugged into the same switch
Here are my results thus far using nmap -O:
Code:
FreeBSD VM to 10.1.2.3 (production network) takes around 220 seconds
Windows 7 to 10.1.2.3 (production network) takes around 2.3 seconds
FreeBSD PS to 10.1.2.3 (production network) takes around 239 seconds
Windows 2003 to 10.1.2.3 (production network) takes around 4.33 seconds

FreeBSD PS to 10.2.2.81 (test network) takes around 47 seconds
10.2.2.81 to 10.2.2.82 (test network) takes around 8.13 seconds
10.2.2.82 to 10.2.2.81 (test network) takes around 7.64 seconds
10.2.2.81 to FreeBSD PS (test network) takes around 11.12 seconds

FreeBSD VM to 10.1.2.81 (production network) takes around 228 seconds
Windows 7 to 10.1.2.81 (production network) takes around 3.32 seconds
FreeBSD PS to 10.1.2.81 (production network) takes around 224 seconds
Server 2003 to 10.1.2.81 (production network) takes around 3.3 seconds

Windows 7 to 10.1.2.82 (production network) takes around 2.32 seconds
FreeBSD VM to 10.1.2.82 (production network) takes around 146 seconds
FreeBSD PS to 10.1.2.82 (production network) takes around 338 seconds
Server 2003 to 10.1.2.82 (production network) takes around 2.98 seconds
All of these tests were run in the same 30-minute time frame, and no two tests were run at the same time. The strange thing that I find in those numbers is that a VM to a VM is quite a bit slower than a VM to a physical server. It appears that I may have a bit of a network issue that I need to track down, as there is a slowdown of around 33% when scanning a VM, whether the scan comes from a physical box or from another VM.

But even if I have a network issue with my VMs, FreeBSD is still around two orders of magnitude slower than the Windows nmap scans.
Hopefully this clarifies some stuff.
 
I've switched my FreeBSD VM to use the VirtIO interface drivers, but still no luck. A simple nmap scan (with no flags) takes over 3 minutes, compared to 1.03 seconds on other VMs (different OS) on the same network. I'm out of ideas here. :(
 
So I have made a little bit more progress. I found my 33% network slowdown with VMware, and I will be coming in this weekend to fix it. Basically, all of my servers are using the legacy E1000 network interface. Changing the network interface to VMXNET3 makes the slowdown go away. I made this change on a couple of servers and verified up to a 25% increase in speed.
But this still did not solve the problem with nmap; it did make it slightly faster though, which is good.

I did however come up with some better Google searches, and it appears that this nmap problem goes all the way back to November 2003.
The culprit appears to be bpf and how it does not use BIOCIMMEDIATE. If I get this wrong then please feel free to correct me. bpf uses buffers and timeouts to process packets. So what is happening is that nmap's packets are all being buffered by bpf, and bpf is not releasing those buffered packets until either the buffer is full or the timeout expires.
What BIOCIMMEDIATE does is make incoming packets readable immediately: there is no waiting for a buffer to fill or a timeout to expire. This is how Windows and Linux process the packets; they handle them as soon as they arrive rather than waiting on buffers.
So in theory if BIOCIMMEDIATE was enabled in bpf/nmap our problems would magically disappear.
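To illustrate what immediate mode looks like from userland, here is a rough sketch of enabling it on a bpf descriptor. To be clear, this is not nmap's or libpcap's actual code; the device path and the hand-expanded _IOW() encoding are my own reading of bpf(4) and sys/ioccom.h:

```python
import fcntl
import struct

# _IOW('B', 112, u_int) expanded by hand, per FreeBSD's sys/ioccom.h:
# IOC_IN | ((sizeof(u_int) & IOCPARM_MASK) << 16) | ('B' << 8) | 112
IOC_IN = 0x80000000
BIOCIMMEDIATE = IOC_IN | ((4 & 0x1FFF) << 16) | (ord("B") << 8) | 112  # 0x80044270

def enable_immediate_mode(bpf_path="/dev/bpf0"):
    """Open a bpf device and turn on immediate mode, so that reads return
    as soon as a packet arrives instead of waiting on the buffer/timeout."""
    fd = open(bpf_path, "rb")
    fcntl.ioctl(fd, BIOCIMMEDIATE, struct.pack("I", 1))  # 1 = enable
    return fd
```

If libpcap does the equivalent of this ioctl on the descriptor nmap reads from, immediate mode should already be on; running the scan under truss(1) and grepping for ioctl calls might show whether the request is ever actually issued.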

Not sure where to go from here, but I feel that I have made a little bit of progress, as in theory this is where the problem lies.
 
At this point I am over my head, but I am doing my best to try and figure things out.

The reason that I started researching bpf.h is that I found something online saying that nmap includes pcap.h, and that pcap.h in turn includes bpf.h. I am currently basing my troubleshooting on the assumption that this is correct.

From what I can tell bpf does have BIOCIMMEDIATE enabled.
/usr/include/net/bpf.h
Code:
#define BIOCIMMEDIATE   _IOW('B', 112, u_int)
After reading through the bpf man page, it appears that a u_int has to be passed to bpf to enable or disable 'immediate mode'. Where I am stuck now is that I do not know how or where to check whether that u_int is being passed to bpf or not. If it is not being passed, then that is probably the problem. If it is being passed, then I am back to square one as to what the problem could be.

Any suggestions on where I could check to see if the BIOCIMMEDIATE switch is being passed to bpf?
Also, should this thread be moved to the Installation and Maintenance of FreeBSD Ports or Packages forum? This appears to be more of a package problem than an actual network problem. I've vaguely been playing with the idea of opening another thread there, to see if I could get some help troubleshooting the package itself to figure out how it works and how it links with everything else.
 
Can you try with this sysctl: sysctl net.inet.tcp.delayed_ack=0
 
Thanks for the reply acheron.

I tried running sysctl net.inet.tcp.delayed_ack=0 and then ran nmap -O 10.1.2.3, and it still took around 230 seconds. I tried rebooting the server and running it again, and it still takes the same amount of time.

I posted this question on the mailing list as well, and I haven't received any responses yet.

I've actually been pulled off onto another project at work which has been consuming my life for the last 2 weeks, and will probably do so for another week or two. After I complete that project I will start working on this again to try and narrow down the problem and see if I can get it fixed.
 
It takes 10s on my 11-CURRENT machine. What version are you running?
 
I was panicking there for a moment as I didn't know that 11 had been released. But after looking, it hasn't been released yet; it is still CURRENT. It kind of scared me, as I just finished updating all of my servers to 10.0-RELEASE a couple of weeks ago.

uname -a
Code:
FreeBSD mon01.contoso.com 10.0-RELEASE FreeBSD 10.0-RELEASE #0 r260789: Thu Jan 16 22:34:59 UTC 2014     root@snap.freebsd.org:/usr/obj/usr/src/sys/GENERIC  amd64

While 10s is a huge improvement over what I am currently getting on FreeBSD, it is still 5 times slower than Windows or Linux. But I could so live with 10s.
 
So I have been perusing the FreeBSD 11 release notes, and it appears that they are replacing bpf with netmap.

This may be why 11 runs quicker than 10. Which means there is a problem with the bpf BIOCIMMEDIATE.
Now the question is, do I wait for FreeBSD 11 to hit release status, install FreeBSD 11-CURRENT, or do I continue to beat the dead horse and troubleshoot bpf?
 