PF Trying to Fight SYN Flood

I've been getting hit a lot lately with SYN flood attacks. I've started looking for ways to fight back, and the single best way appears to be the SYN proxy feature in pf. And it looks like I just have to use one word to enable it. There doesn't seem to be a hell of a lot written about the topic on the web, but what I have been able to find makes me think this is a great primary defense mechanism.

So I opened up pf.conf and added `synproxy` to my rule that allows web and email traffic in. Here is the rule (I can post all my rules if that would be helpful; it's a pretty short file):
Code:
pass in quick proto tcp from any to any port { 25 80 443 587 993 } flags S/SA synproxy state

Unfortunately, this did not have the desired effect. All traffic on those ports was getting blocked. From what I've seen in examples, the syntax looks right. Any ideas what is causing the problem here? (As I type this, I'm setting up a dummy server I can use to play around some more)
 
Just a couple of general suggestions: look into syncookies, and limit retransmission of SYN/ACK packets. Unfortunately I wasn't able to locate any obvious setting to do the latter. It's likely adjustable with sysctl, so maybe someone else can point you to the right variable, which by default is probably way too high (I think it's something like 5 retries by default on Linux, and I guess FreeBSD is similar; because of the backoff involved in retransmitting packets, that can take MINUTES to finally time out). Unless you are dealing with a particularly flaky connection, 1 retry should be more than enough. Even 0 retries should just work on any reasonably reliable connection (little or no packet loss).
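For what it's worth, on FreeBSD the SYN/ACK retransmit limit lives in the syncache. A sketch of what the /etc/sysctl.conf entry might look like (the value of 1 just follows the reasoning above; it's my assumption, not a tested recommendation, so check `sysctl -d` on your box first):

```
# /etc/sysctl.conf
# retransmit the SYN/ACK at most once before giving up on the half-open entry
net.inet.tcp.syncache.rexmtlimit=1
```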

Edit: mark_j ninja'd me on the syncookies.

One more observation that might or might not apply in your case: over the last few years I've seen some weird traffic that would regularly result in small-scale SYN floods. I never really found out what it's about; there are not enough SYNs to cause any kind of damage, but still enough to be highly annoying. Maybe it's some tool trying to use TCP for amplification and flood some other poor guy with SYN/ACK packets (yes, that's actually a thing, and considering the massive number of retries many systems attempt before giving up, it's likely not even that ineffective), but I am just guessing here. Anyway, those packets had a pretty noticeable pattern: they all have low source ports (i.e. < 1024, mostly/always? even < 512), which is something you won't see in organic packets (well, at least not from any even semi-common OS), so that's a pretty good criterion for dropping them. Why something would generate such obviously out-of-line packets is completely beyond me, but I won't complain about there being such an easy "opt out".
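If that pattern matches what you're seeing, a pf rule along these lines could drop such packets before they ever create state (a sketch; the 1024 cutoff is taken from the observation above, so adjust to taste):

```
# drop inbound TCP whose source port is below 1024; organic clients use
# ephemeral (high) source ports, so this should only hit the weird flood traffic
block in quick proto tcp from any port <= 1023 to any
```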

Another thing to watch out for is suspiciously low MSS values (if I remember correctly, those often show up in packets related to port scans). Off the top of my head I don't know where exactly I set the cutoff, but I am also dropping packets based on this; if it seems applicable in your case, I can dig up the relevant values.
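On FreeBSD the TCP stack itself can reject segments that advertise a too-small MSS via net.inet.tcp.minmss. A sketch (the 1300 cutoff is only an example value, not a researched recommendation; note that a high floor may break legitimate clients behind tunnels with small MTUs):

```
# /etc/sysctl.conf
# refuse TCP connections advertising an MSS below this value
net.inet.tcp.minmss=1300
```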
 
Have a look at syncache(4), incidentally from work done by Lemon, who is mentioned in getopt's reference.
Syncache sounds exactly like what the synproxy keyword in pf is supposed to do, no?

I just set up a copy of my server here at home, to play around with. The exact same rule, in the exact same conf file, that seemed to cause plenty of problems on my production server, worked perfectly on my home dummy server. I guess I need to play around with this a little more when I have the time.
 
Here's another question. These countermeasures sound like they will work great once they are properly configured. Given that my flood attacks are very random and sporadic, how can I know if it's actually working? Clearly I need to run my own SYN flood on my home dummy server first, and then on my colocated server after notifying them that I'm going to run a test.
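For generating that test flood against the dummy box, hping3 (available in ports) is one common option; a sketch, using a placeholder address:

```shell
# flood SYNs at port 80 of the test server from randomized source addresses
# (192.0.2.10 is a placeholder for the dummy server's IP, not a real target)
hping3 -S --flood --rand-source -p 80 192.0.2.10
```

Run it from a second machine while watching state counts on the server; stop it with Ctrl-C, since --flood sends as fast as it can.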
 
Syncache sounds exactly like what the synproxy keyword in pf is supposed to do, no?

I am not exactly a FreeBSD expert (yet), but syncookies are something that's usually turned on by the OS itself (for FreeBSD, that means via sysctl). From a quick read-up, synproxy seems to do something similar (as in saving resources until it's known that the SYN packet comes from a genuine client), but implemented at a higher level, while syncookies are implemented in the TCP stack itself. I really don't know which is the better approach here, or whether having both makes any sense, but I'd tend towards synproxy only: that way you also save resources at the firewall level, and it doesn't seem to be of any further help to also cache the SYN packets at the OS level after the firewall has already verified them. I don't have any experience here though, so I am just guessing.
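For completeness, a sketch of how syncookies would be toggled on FreeBSD (whether combining this with synproxy is worthwhile is exactly the open question; the variable name is the standard FreeBSD one):

```
# /etc/sysctl.conf
# fall back to SYN cookies when the syncache fills up under a flood
net.inet.tcp.syncookies=1
```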

Here's another question. These countermeasures sound like they will work great once they are properly configured. Given that my flood attacks are very random and sporadic,

That sounds suspiciously like what I was experiencing. Basically a bunch of SYN packets out of nowhere with no obvious relation to anything. Usually just enough to keep maybe 20-40 connections in half-open state, continuing for somewhere between a couple of hours and a day at most.

how can I know if it's actually working? Clearly I need to run my own syn flood on my home dummy server first, and then on my colocated server after notifying them that I'm going to run a test.

At least with the synproxy approach, it's very likely that you will stop seeing half-open connections in netstat, since the lone SYN packets never reach the OS. With syncookies I am not entirely sure, but I think you'd still see them. If I remember correctly, syncookies are on by default in Debian's Linux kernels, and I very much saw that mess in netstat; FreeBSD's implementation might behave differently though. If the packets you are seeing follow some pattern (for example, the one I described in my last post) and you block on that, the connections will of course be gone, as again the SYN packets never reach the OS.
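A quick way to keep an eye on this is counting the SYN_RCVD entries netstat reports; a small sketch:

```shell
# count half-open (SYN_RCVD) TCP connections visible to the OS;
# with synproxy in front, this should stay near zero even during a flood
count_syn_rcvd() {
    netstat -an | grep -c 'SYN_RCVD'
}
```

Call it periodically (e.g. from a cron job or a simple loop) and log the result; a sudden jump means lone SYNs are getting through to the stack.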
 
I spoke too soon in my last post. The `synproxy` keyword does seem to be causing strange behavior that I don't get without that keyword. Here are all of my rules:


Code:
table <VPN> const { 172.16.5.1/24 }
table <badhosts_a> persist
table <badhosts_b> persist

block in quick from <badhosts_a> to any
block in quick from <badhosts_b> to any

block in all

pass in quick proto tcp from any to any port { 25 80 443 587 993 } flags S/SA keep state
#pass in quick proto tcp from any to any port { 25 80 443 587 993 } flags S/SA synproxy state
pass in quick proto { esp icmp } from any to any keep state
pass in quick from <VPN> to any flags S/SA keep state
pass in quick proto udp from any to any port { 500 1701 4500 } keep state
pass out proto { tcp, udp, icmp } from any to any keep state

With the `synproxy` rule swapped in, things get flaky: sometimes connections work, sometimes they don't.
When I swap that rule back to `keep state`, everything runs normally, but of course I'm still vulnerable to this attack.
 
Syncookies are something that's usually turned on by the OS itself (for FreeBSD, that means via sysctl). [...] I'd tend towards synproxy only, as this way you also save resources at the firewall level. [...]

That sounds suspiciously like what I was experiencing. Basically a bunch of SYN packets out of nowhere with no obvious relation to anything. [...]

At least with the synproxy approach, it's very likely that you will stop seeing half-open connections in netstat, since the lone SYN packets never reach the OS. [...]

I would think that doing the proxying at the firewall level would save overall resources since less is being done. Also I'd prefer to do it within `pf` if I can reliably since it's already running anyway, and in theory (though not in practice) I can turn it on by adding one simple keyword (hackers hate this one simple trick 😂)

Regarding the nature of the attack, I'm not sure what I'm getting is what you are getting. I mean, "SYN packets out of nowhere" describes every SYN attack ever. But I've very often seen 512 half-open connections; that's when one of my network interfaces goes down entirely. What I'm saying is that it might happen in 5 minutes, or it might be fine for days. So if no half-open connections show up in `netstat`, I don't really have any way to know whether things aren't happening because I protected myself, or because nothing is happening. That's why, at least on my dummy home server, I should run some attacks on myself to make sure it is protected.
 
I would think that doing the proxying at the firewall level would save overall resources since less is being done. Also I'd prefer to do it within `pf` if I can reliably since it's already running anyway, and in theory (though not in practice) I can turn it on by adding one simple keyword (hackers hate this one simple trick 😂)

You are also already running your TCP stack ;) I understand what you mean though.

Regarding the nature of the attack, I'm not sure what I'm getting is what you are getting. I mean, SYN packets out of nowhere describes every SYN attack ever.

Yes, in general it does. The thing is, I've seen this on quite random servers where an attack would make zero sense (nothing public on them), and since I've used the IPs in question for years, I think it can also be ruled out that it's supposed to hit some previous owner. Besides, just opening 20-40 connections would be the saddest "attack" ever. They kept slowly accumulating, so if I hadn't manually blocked them maybe there would have been more eventually, but it would still be sad to the point where even mentioning the recurring source IP seems superfluous. That's why I am so puzzled about the traffic. It makes no sense, besides maybe being part of a larger amplification attempt.

I spoke too soon in my last post. The `synproxy` keyword does seem to be causing strange behavior that I don't get without that keyword. [...] With the `synproxy` rule swapped in, things run funky. Sometimes connections work, sometimes they don't. When I swap that rule back to `keep state`, everything runs normally, but of course I'm still vulnerable to this attack.

I sadly don't have enough experience with pf to help you with the actual rules, but maybe you could try to capture one (or a couple) of those SYN packets with tcpdump -vni ifX (ifX obviously being the network interface where you are seeing the traffic)? I think it would be interesting to take a closer look; maybe there is something about them that sticks out.
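To narrow the capture to just the lone SYNs, a BPF filter like this might help (a sketch; ifX is the placeholder interface name from above):

```shell
# show only packets with SYN set and ACK clear, i.e. fresh connection attempts
tcpdump -vni ifX 'tcp[tcpflags] & (tcp-syn|tcp-ack) == tcp-syn'
```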
 
One way to fight SYN floods is through SYN proxying, but another way appears to be tweaking the TCP settings for timeouts and retries in certain situations. But I'm having trouble finding a list of all of the TCP settings that explains in plain English what each setting does. Any tips on where to go on this front?
 
Check these settings:

Code:
net.inet.tcp.blackhole=2
net.inet.tcp.drop_synfin=1
net.inet.tcp.fast_finwait2_recycle=1
net.inet.tcp.fastopen.client_enable=0
net.inet.tcp.fastopen.server_enable=0
net.inet.tcp.finwait2_timeout=1000
net.inet.tcp.icmp_may_rst=0
net.inet.tcp.keepcnt=2
net.inet.tcp.keepidle=62000
net.inet.tcp.keepinit=5000
net.inet.tcp.msl=2500
net.inet.tcp.path_mtu_discovery=0
net.inet.tcp.delayed_ack=0
net.inet.tcp.recvbuf_inc=65536
net.inet.tcp.recvbuf_max=4194304
net.inet.tcp.recvspace=65536
net.inet.tcp.sendbuf_inc=65536
net.inet.tcp.sendbuf_max=4194304
net.inet.tcp.sendspace=65536
net.inet.tcp.syncache.rexmtlimit=0

Use this command to check what each setting does:

sysctl -d net.inet.tcp.keepinit
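If -d produces descriptions on your build, something like this should dump the whole subtree in one go (a sketch; it assumes sysctl's -N and -d flags behave as documented on stock FreeBSD):

```shell
# print every net.inet.tcp variable followed by its one-line description
sysctl -aN | grep '^net\.inet\.tcp\.' | while read -r v; do
    sysctl -d "$v"
done
```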
 
Unfortunately the -d flag is not giving me anything. It just repeats the key, no description :/
 
The pf.conf(5) man page has some clues:
Rules with synproxy will not work if pf(4) operates on a bridge(4).
That would be a simple cause, if that's your network topology.

This is interesting
Because flags S/SA is applied by default (unless no state is specified), only the initial SYN packet of a TCP handshake will create a state for a TCP connection. It is possible to be less restrictive, and allow state creation from intermediate (non-SYN) packets, by specifying flags any. This will cause pf(4) to synchronize to existing connections, for instance if one flushes the state table. However, states created from such intermediate packets may be missing connection details such as the TCP window scaling factor. States which modify the packet flow, such as those affected by nat, binat or rdr rules, modulate or synproxy state options, or scrubbed with reassemble tcp will also not be recoverable from intermediate packets. Such connections will stall and time out.
The key part is that already-established flows will not be recovered when you add a synproxy rule; they "will stall and time out." I think that's the effect you're seeing: flows that were established before you added the synproxy rule time out when you add it and do a pfctl -f /etc/pf.conf.

You could flush the state table with pfctl -F states after you apply the rule (careful with pfctl -F all; that would also flush the rules you just loaded). This will drop all existing connections and force everyone to reconnect.
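Concretely, the reload-then-flush sequence might look like this (a sketch; -F states clears only the state table, leaving rules and tables intact):

```shell
pfctl -f /etc/pf.conf   # load the ruleset with the synproxy rule in place
pfctl -F states         # drop existing states so every flow re-establishes via synproxy
```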
 
Code:
# set to 0 to allow client retries, or 1 for DDoS protection or if behind a load balancer
net.inet.tcp.syncache.rst_on_sock_fail=0
# harden SYN flood protection, don't retry SYN/ACKs: 0 for DDoS, 1 for normal
net.inet.tcp.syncache.rexmtlimit=1
# enable syncookies when under SYN flood
net.inet.tcp.syncookies=1
# raise minmss to harden against small-packet attacks
net.inet.tcp.minmss=1300
# drop packets with both SYN and FIN set
net.inet.tcp.drop_synfin=1

Also, PF can do rate limiting; if you're still stuck tomorrow, I can see if I can give you some example rules.

Essentially you need to do the following:

Increase the socket queue.
Decrease the number of SYN/ACKs sent.
Decrease timeouts.
Increase socket capacity on the server.
Expire sockets quickly.
Rate limit accepted SYNs per IP.
Disable SACK.
Decrease the TCP buffer size.
Enable interrupt moderation on the NIC if supported; if you have an ancient NIC, the old polling feature can help you here.
Restrict logging.
Use MSI-X on the NIC.

Some of the sysctls posted above my post (by cybercreep) will help with some of this. https://forums.freebsd.org/threads/trying-to-fight-syn-flood.76879/post-479260
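As for the rate limiting mentioned above, here's roughly what it looks like in pf, as a sketch built from the state options documented in pf.conf(5) (the thresholds are placeholders, and I've reused the <badhosts_a> table from your ruleset as the overload target):

```
pass in quick proto tcp from any to any port { 25 80 443 587 993 } \
    flags S/SA synproxy state \
    (max-src-conn 100, max-src-conn-rate 15/5, \
     overload <badhosts_a> flush global)
```

A source that holds more than 100 simultaneous connections, or opens more than 15 new ones within 5 seconds, gets added to <badhosts_a> (which your block rules already drop), and flush global tears down its existing states too.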
 
Also, PF can do rate limiting; if you're still stuck tomorrow, I can see if I can give you some example rules.
Much has transpired since this thread began. Things were OK for a while, but then they got bad again. I'm thinking this rate limiting might be something that can help me keep things reasonably sane. 10 months later, can you show me some examples? There is a surprisingly small amount of info about pf around the web. Or, what may be more likely: because it's a two-letter name, Google indexes it very poorly. I dunno; either way, examples would be much appreciated.
 
According to Wikipedia, source of eternal knowledge, there are a number of well-known countermeasures listed in RFC 4987, including:

  1. Filtering
  2. Increasing backlog
  3. Reducing SYN-RECEIVED timer
  4. Recycling the oldest half-open TCP connection
  5. SYN cache
  6. SYN cookies
  7. Hybrid approaches
  8. Firewalls and proxies

I would first try some well-chosen sysctl values...
Then limit the number of requests from one IP address. How you do that, I don't know. And hope the address is not spoofed.
 