PF submission stuck waiting for nearly 4 years

I consider "demonstrating a use case for the patch" to be lower priority, because simply throwing in the terms VoIP, SIP, and WebRTC triggers enough attention?
Actually, no. SIP is what is used for VoIP, and the fact that it has problems with NAT is because it was never designed to work through NAT in the first place. SIP exchanges IP address information within the protocol, and that is obviously a problem if this IP address is from a private address block. That does not imply that there is anything wrong with NAT that needs to be fixed; moreover, solutions to this particular problem (SBCs, B2BUAs, ALGs, SIP/RTP proxies) have been widely available for quite some time now.

As for WebRTC and other stuff that employs techniques like STUN/TURN/ICE etc., I consider any such technique that leverages aspects of certain kinds of NAT implementations to make things work that were never intended to work this way in the first place as broken by design. Reading a term like 'UDP hole punching' alone certainly rings my alarm bell. State created by an outbound connection should never be abused to create something like a server/listening socket.
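For readers unfamiliar with the technique being criticized here: the essence of UDP hole punching is that both peers send an outbound datagram first, so that the NAT state created by that outbound packet can later carry inbound traffic from the other peer. A minimal sketch of that send-first pattern (plain Python over localhost only, no actual NAT in the path; all addresses are illustrative):

```python
import socket

# Two sockets on localhost stand in for peers behind different NATs.
# (No NAT is actually involved here; this only shows the send-first
# pattern that hole punching relies on.)
a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.bind(("127.0.0.1", 0))
b.bind(("127.0.0.1", 0))
a.settimeout(2)
b.settimeout(2)

# Step 1: each peer sends first. On a real NAT, this outbound datagram
# is what creates the translation state.
a.sendto(b"punch", b.getsockname())
b.sendto(b"punch", a.getsockname())

# Step 2: each peer can now receive from the other, effectively turning
# outbound-connection state into something like a listening socket,
# which is exactly the objection raised above.
msg_at_b, _ = b.recvfrom(64)
msg_at_a, _ = a.recvfrom(64)
```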

IIUC the manpage pf.conf(5) is unambiguous here: "A stateful connection is automatically created [...] as long as [the packets] are not blocked by the filtering section of pf.conf." I.e., the blocking filter rules overrule NAT's packet translation. Again, we could/should create an external test (or set of tests) to ensure this? EDIT Yes, and additionally this could go into a code-internal assertion. My understanding is that the filters are applied first, then the NAT rules, then maybe additional filter rules? Are there such? In ipfw(4), the rules are numbered. How is the ordering handled in pf(4)? Or does the admin have to use netgraph(4) to apply stacking of rules?
Actually it's the other way around as stated in pf.conf(5):
Code:
     Since translation occurs before filtering the filter engine will see
     packets as they look after any addresses and ports have been translated.
     Filter rules will therefore have to filter based on the translated
     address and port number.  Packets that match a translation rule are only
     automatically passed if the pass modifier is given, otherwise they are
     still subject to block and pass rules.
The question is: consider the scenario that a packet arrives from a so far unknown remote host on a port that HAS a mapping. Does the patch do the sane thing here: rewrite this packet, but still evaluate the rules (because, with the unknown remote host, it doesn't really match a state entry)? If yes, it would still be blocked if you have e.g. a "block all" rule, but would be routed and rewritten correctly if no filter rule blocks it.
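To make that scenario concrete, here is a toy model (plain Python, nothing to do with pf's actual internals; the name handle_inbound and all addresses are made up) of the ordering quoted above from pf.conf(5): translate first, then match state, then evaluate filter rules with last-match semantics:

```python
def handle_inbound(pkt, nat_map, states, rules, default="pass"):
    """Toy model of pf's inbound path: NAT translation happens first,
    so the filter engine only ever sees the rewritten packet."""
    # 1. Translation first, as pf.conf(5) describes.
    if pkt["dst"] in nat_map:
        pkt = dict(pkt, dst=nat_map[pkt["dst"]])
    # 2. A packet matching an existing state entry bypasses the rules.
    if (pkt["src"], pkt["dst"]) in states:
        return pkt, "pass"
    # 3. No state: ordinary last-matching-rule evaluation.
    verdict = default
    for action, match in rules:
        if match(pkt):
            verdict = action
    return pkt, verdict

# The questioned scenario: a mapping exists for the port, but the
# sender is unknown, and the ruleset is a plain "block all".
nat_map = {("192.0.2.1", 4000): ("10.0.0.5", 4000)}
rules = [("block", lambda p: True)]
pkt = {"src": ("203.0.113.7", 1234), "dst": ("192.0.2.1", 4000)}
rewritten, verdict = handle_inbound(pkt, nat_map, states=set(), rules=rules)
# rewritten["dst"] is now the internal address, yet the verdict is
# "block": translated, but still subject to the filter rules.
```

Under this model the answer to the question would be yes: the packet is rewritten, but a "block all" rule still stops it.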

Sure, this is one thing you must review when reviewing the patch. It should NOT introduce a hole in the stateful filtering mechanisms, and I'd say if it does, it is broken.
Talking about the behavioural characteristics of the so-called 'full cone NAT', you will probably find the same information in many places on the internet, including here, and the key aspect that just doesn't fly with me is:
Any external host can send packets to iAddr:iPort by sending packets to eAddr:ePort.
This is what RFC 4787 terms endpoint-independent filtering. While this will certainly help applications that rely on broken-by-design techniques such as STUN/TURN/ICE to function, from a security standpoint such behaviour is just not tolerable.
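The three filtering behaviours RFC 4787 distinguishes can be stated in a few lines. A toy decision function (my own naming, not pf code), where sent_to is the set of remote endpoints the internal host has already contacted through the mapping:

```python
def inbound_allowed(behaviour, sent_to, remote):
    """Decide whether a NAT with the given RFC 4787 filtering behaviour
    admits an inbound packet from `remote` through an existing mapping.
    `sent_to` holds the (ip, port) endpoints the internal host has
    already sent to through that mapping."""
    if behaviour == "endpoint-independent":
        return True  # 'full cone': any external host may use the mapping
    if behaviour == "address-dependent":
        return remote[0] in {ip for ip, _port in sent_to}
    if behaviour == "address-and-port-dependent":
        return remote in sent_to
    raise ValueError(behaviour)

sent_to = {("198.51.100.1", 5060)}   # the one peer we contacted
stranger = ("203.0.113.9", 9999)     # never contacted

# Only endpoint-independent filtering lets the stranger in, which is
# exactly the property objected to above.
ok_eif = inbound_allowed("endpoint-independent", sent_to, stranger)
ok_adf = inbound_allowed("address-dependent", sent_to, stranger)
ok_apdf = inbound_allowed("address-and-port-dependent", sent_to, stranger)
```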

As I see it, there are two parts to this: the actual translation, and the filtering, which can be further subdivided into state matching and filter rule evaluation; then again, these two parts seem to be tightly coupled. I believe the only solution is to make each and every aspect of NAT user-configurable, so that everyone who badly needs to make applications work and understands the security implications that come with it may decide to do so, letting everyone else use whatever they deem appropriate for their particular use case. But that's just what I am not seeing here: how this could be sufficiently configured by means of pf.conf(5). Forcing full cone NAT down everyone's throat surely isn't going to work for me and probably some other people out there.
 
This is what RFC 4787 terms endpoint-independent filtering. While this will certainly help applications that rely on broken-by-design techniques such as STUN/TURN/ICE to function, from a security standpoint such behaviour is just not tolerable.
The whole problem with this discussion is that filtering is an unwanted necessity for NAT, but intended behavior for firewalling. If these aspects aren't clearly separated then yes, you might open up unwanted security holes.

What I would expect (after a patch improving NAT as suggested here) with a pf.conf only containing
Code:
nat on $ext_if from $internal_net to any -> ($ext_if)
would be a best effort to also route incoming packets that don't match a "tracked" connection yet, while I'd expect the old behavior as soon as I add e.g.
Code:
block in on $ext_if
pass out on $ext_if

But given the amount of discussion and the explanations of the work needed to make SURE this patch is safe and secure, it might not be worthwhile. As explained earlier, you'd definitely want that functionality for CGNAT, but it's very unlikely ISPs would ever use pf on FreeBSD for that.
 
This is what RFC 4787 terms endpoint-independent filtering. While this will certainly help applications that rely on broken-by-design techniques such as STUN/TURN/ICE to function, from a security standpoint such behaviour is just not tolerable.
Stateful filtering is trivially defeated when you have a coordinator like a STUN server in the middle. Since UDP is not really stateful, all the endpoints have to do is start sending packets to each other to establish a "state". Sure there'll be a little packet loss at the beginning, but it's easy to deal with that.

Once you've established a peer-to-peer botnet, any node can act as a STUN server for newly infected machines. And your firewall will be blissfully unaware, because it will be keeping state for connections that look legit, since they appear to have been initiated locally.

This is not a side effect. This is the way this NAT traversal strategy is designed.

Edit: Nodes can't really be a STUN server for new infections; there has to be some external vector for infecting new machines. Any node can be the coordinator machine that sends marching orders to the botnet, though. One of the main strategies for taking down a botnet is to take down its coordinator, so botnets go to great pains to hide the IP addresses of their control servers. Imagine a botnet where any node can be the coordinator.

Also, having the infection vector move around is no big hurdle. It's something that's commonly done already using banner ads, for example.
 
Thank you all for discussing this in depth.
This seems to be a good example where it is probably the better alternative not to blindly follow the standard.
The risk-gain ratio does not seem very good, especially when ported software already exists to add this functionality if it is actually needed.

Also consider the necessary test effort to make sure that nothing gets unintentionally leakier than the RFC proposes.
I guess this won't be trivial, and will require a sizable test setup.

Maybe it could be a good idea to add a comment to the PR, linking to this thread for detailed reasoning about this patch not being added, so this discussion doesn't unnecessarily repeat.
 
This seems to be a good example where it is probably the better alternative not to blindly follow the standard.
This is IMHO still the wrong conclusion, but it depends on your definition of "better". The standard in question aims to improve NAT (and *only* NAT). This is of course desirable. When there is a risk attached, it's for implementation reasons (when the same code is used for stateful filtering in general).

So, the conclusion would not be that it's "better" to not follow this standard, but it might be that it's not worth the effort, given you have to make sure stateful filtering used for firewalling purposes isn't affected. This could be a conscious decision, also taking into account that usefulness of NAT in general is decreasing with more and more IPv6 used. I'd also say this could always change if there is someone who desperately wants this and is willing to supply all the test cases necessary and so on.

Therefore, a comment on the review might be useful; it should just list what is necessary and what the possible risks of doing it "wrong" are. For the simplest test case I could think of, see my post above; of course this wouldn't be enough, but it could be a starting point. Yes, I doubt anyone will have a need for this strong enough to go all the way.
 
Maybe it could be a good idea to add a comment to the PR, linking to this thread for detailed reasoning about this patch not being added, so this discussion doesn't unnecessarily repeat.
Done. 1st activity of my Phabricator account. Let's see what else I can do. If only I could manage to waste less time hanging around here in the forum... Please stop posting interesting topics that distract my attention!
 
Apologies again for OT posting. Please skip tl;dr
Done. 1st activity of my Phabricator account.
At first my impression was: why is that RFC not implemented, even though it would be beneficial for VoIP and much more?
Then the discussion spun up. Learning what this unsolicited patch does, what implications it has for the unsuspecting user who doesn't know the deep technical details, what damage potential it has, and how difficult it is to test thoroughly.
Learning the core team's viewpoint from Kristof Provost's insightful comments, I can see that it has good justification IMHO, and personally I believe this patch does not have a real chance of being integrated.


The worst thing that can happen/be done is to push off willing potential contributors by frustrating them.
I have been through that experience, and my personal conclusion is that some grassroots means of integrating patches that fix bugs or add functionality is needed, one that is practically usable for people whose daily work is not system programming/building.

Then you described the approach of using unionfs to achieve this.
Having a framework making this easy (a few scripts to set up, build/install and the like) would be essential for this.
With such a thing it would be far easier for "normal mortals" to voluntarily participate in field testing of patches and giving potentially useful feedback in, for example, phabricator.



As you correctly found out, I am a smurf, and I am quite sure I am not the only smurf on this forum. Smurfs are very individualistic, usually highly intelligent and quite anarchic; they hate being commanded as in a strongly organized and regulated bureaucracy, and usually have high respect for the unique skills and capabilities of every other smurf.

What I am dreaming of is finding successful ways to make smurf cooperation productive. Because of your Kommunity thread I know you have similar thoughts. A project like ohmyzsh is a good example: 1800+ smurfs have contributed to it. They seem to do it in a smurf-compatible way, and this is also the way I want to go with the postinstaller I am working on. Long ago an individual made the first step with ohmyzsh, and it became a big, successful cooperative effort. For this reason I am collecting every contribution (e.g. all who helped, information, advice, suggestions, code) together with a link as proof, so everybody can see it is not a one-man show but intended as a cooperative effort open to all smurfs who want to contribute. I can only say that without the help of these (currently ~20) contributors, the result wouldn't be nearly as good. For this reason alone I feel I can no longer say "my postinstaller", as it actually grew from the input of every contributor, and the credits list grows longer and longer.

I think the suggestion you made in post #7 is extremely helpful for enabling more smurfs to join in working on FreeBSD. Not only for kernel/core stuff, but in particular useful also for assisting in improving the KDE port. What you described is what many people would love to have, but lack time and motivation to individually figure out how to set up and operate. Such a framework could be a basic foundation for successful Kommunity cooperation, helping improve FreeBSD without putting additional strain on the core team.

Please don't be offended... I believe you are a smurf, too 🍻
I'd be glad if you could write up a how-to, maybe also explaining its use in a concrete use case, like showing how to work on the KDE taskbar code, make changes, build them, and produce patch files to share.
As you correctly stated, it is not appropriate to demand that other smurfs implement what one wants but is too lazy, or lacks the time/know-how, to start oneself, so I feel unable to request that you write such a howto. But what I definitely can say is that I guess quite some people would tremendously appreciate such a guide...
 
Learning what this unsolicited patch does, what implications it has for the unsuspecting user who doesn't know the deep technical details, what damage potential it has, and how difficult it is to test thoroughly.
Just to make this as clear as possible: This patch "done right"*) wouldn't have ANY implications for "unsuspecting users" except for those that ONLY have a "nat" rule and somehow expect that to do "firewalling". You won't find such a configuration anywhere; even the handbook adds the most basic firewalling rules in its examples, so someone having ONLY nat in his config should probably know what he is doing.

The difficulty here is "only";) to make SURE this patch won't affect anything other than plain NAT.

---
*) I didn't read the whole patch and even if I did, I wouldn't be able to judge, knowing nothing about the current pf codebase. It's perfectly *possible* the patch is already "done right", and we just can't know as long as nobody did a deep review. Just mentioned for fairness towards the original author.
 
Strong disagree here. The first thing to accomplish with any patch is to convince people that what it tries to do makes things better. Once that's established we can argue about how to get there.
Throwing around terms doesn't accomplish that at all, unless you mean to claim that VoIP/SIP/WebRTC currently don't work (they do...).
I'm gonna butt in here real fast to try making a convincing argument for this.

The proposed feature (full cone NAT and its siblings) is used extensively in the game industry for high-performance peer-to-peer links. I can go into more details, but I assume you don't need the technical details. One game that makes use of this is Fantasy Strike, which I acknowledge is not a high-profile game; it's just the one I ran into recently, so it's the one at the forefront of my mind.

Without some way of punching through the firewall, Fantasy Strike just refuses to connect you to friends. There isn't a fallback, there isn't a low-performance option; you get an error message that's somewhat confusing. Hooked up to my generic cable modem, it works just fine; hooked up to the same modem behind OPNsense, it doesn't work at all; hooked up to the same modem behind IPFire, it works seamlessly.

I'll acknowledge that a possible fix for this is to manually add a firewall rule (Fantasy Strike does support playing through a static firewall hole), and I could have done that. But this is a bad fix for a few reasons.

* It requires that the user have significant technical knowledge when what they maybe really want is to just play the game.
* It requires that the user have the ability to manually change the firewall; what happens if, say, my kids want to play Fantasy Strike? What happens if I'm in a group home and the sysadmin isn't around?
* It has to be manually changed to point at a different computer whenever someone else wants to play.
* It is absolutely incapable of supporting two computers at once behind a single IP, regardless of how much firewall tweaking you do, because manual port assignment can only assign one computer per port, and Fantasy Strike has no option to choose a port manually. (I suspect there are at least a few games that do, but it's vanishingly rare in my experience.)

It is reasonable to say "well, that's just one game, how common is that?". It is, in my experience, unfortunately common. Many games use this technique. But that's not even the biggest problem, because game *consoles* often use this technique, sometimes with a single port per console type.

So, I admit I haven't tested this due to not having the hardware, but I suspect that without firewall support for some kind of full-cone-NAT, you can't have two Nintendo Switches in the same household, playing p2p games online simultaneously, possibly even if they're different games. Not "without manual intervention"; you just can't do it, full stop. (And heaven help you if they want to play *with each other* - yes, in theory they could just connect directly, but in practice they're likely going to use full-cone-NAT with hairpinning to negotiate a connection through the modem itself, which I acknowledge is silly but is still common and still requires firewall support.)

Whereas it works just fine on my cheap budget cable modem, and it works just fine with a Linux-based firewall, both with zero manual configuration required.

Now, this is, as far as I know, *mostly* an issue with games, and yes, games should be able to just do UPnP requests to set up holes, and many of them don't, and that's kind of the fault of games. I don't know where you fall on the "games are a relevant target audience" spectrum and I don't know where you fall on the "they could fix it on their own, not our responsibility to support weird hacks" spectrum (and this is definitely a weird hack.)

But that's the feature missing, and that's why I spent a few hours trying to figure out the state of this functionality on BSD :)
 
Now, this is, as far as I know, *mostly* an issue with games, and yes, games should be able to just do UPnP requests to set up holes, and many of them don't, and that's kind of the fault of games.
IMHO, the actual "fault" is NAT 😈

These games mostly just behave as if there weren't any NAT, with the least intrusive little modification of sending out a packet first on a socket before expecting other packets to be received there. I think that's pretty sane, compared to things like UPnP.

I still don't like this picture of "poking holes". Unfortunately, a lot of "firewalls" mix up NAT with filtering, but that's IMHO the wrong approach. NAT can be modified to work correctly with applications behaving as described above. An administrator could still forbid this communication using filtering rules.
 
The current NAT behavior of FreeBSD (and therefore pfSense and OPNsense) is really a pain in the ass for SIP (and other things as well). What is commonly recommended for SIP clients behind pfSense/OPNsense is the "static port" option, but this can only be used for one client behind NAT. Once you've got more than one per public IP, you're screwed.

The current "Address and Port-Dependent Mapping", versus the "Endpoint-Independent Mapping" which most other systems do, really does not increase security in any meaningful way, but it creates problems when two systems that are both behind NAT want to communicate without a central relay server in between. I guess "big relay" doesn't want FreeBSD to have a "normal" NAT implementation that allows machines to talk to each other as the IP protocol always intended?

In the end, this will become obsolete with IPv6 anyway, but why not make v4 work as it's intended to? The current behavior really serves no purpose, and a normal, less restrictive NAT behavior should at the very least be an option! By the way, there is an AMAZING introduction to NAT and its problems regarding port/machine dependent/independent mapping available here: https://tailscale.com/blog/how-nat-traversal-works/ - it's a long article, but really worth the time to read and understand.
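The mapping-behaviour difference described here is easy to model. A toy external-port allocator (illustrative names and port numbers, not pf's actual algorithm), showing why endpoint-independent mapping gives a SIP client one stable, predictable external port while address-and-port-dependent mapping hands out a new port per peer:

```python
def make_nat(endpoint_independent):
    """Toy port allocator for the two RFC 4787 mapping behaviours.
    The port range and allocation scheme are purely illustrative."""
    mappings = {}
    next_port = [50000]

    def translate(src, dst):
        # EIM: one external port per internal endpoint, reused for
        # every destination. ADPM: a fresh port per (src, dst) pair.
        key = src if endpoint_independent else (src, dst)
        if key not in mappings:
            mappings[key] = next_port[0]
            next_port[0] += 1
        return mappings[key]

    return translate

eim = make_nat(endpoint_independent=True)    # what "most other systems" do
adpm = make_nat(endpoint_independent=False)  # the behaviour complained about

src = ("10.0.0.5", 5060)                     # a SIP client behind the NAT
peer1 = ("198.51.100.1", 5060)
peer2 = ("203.0.113.9", 5060)

# EIM reuses one external port for both peers; ADPM does not, so the
# port a STUN server observed is useless for reaching the client.
same_port = eim(src, peer1) == eim(src, peer2)       # True
fresh_ports = adpm(src, peer1) != adpm(src, peer2)   # True
```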
 
Oh, so you're starting that "security argument" again, right? Then again, NAT was never a security measure. Public addresses for communication on the internet are how IP was designed from the very beginning.

The first "bad idea" in the 90s was developing networking software without security in mind. Actually, that bad idea is much older. It was a long and painful learning process.

The second "bad idea" back then was assuming privately operated (Windows) boxes wouldn't need any kind of firewall on the internet.

The answers to both bad ideas should be pretty obvious.

NAT is an answer to a very different problem, namely the shortage of the IPv4 address space. NAT tries to do its job while imposing as few restrictions as possible on communication, whereas a firewall is meant to impose such restrictions. Not improving NAT because people rely on restrictions it introduces as a side effect is just bollocks. All these restrictions can be imposed explicitly and deliberately by appropriate firewall rules if they are needed.

NAT in itself is a bad idea, a horrible hack, an unfortunate necessity.

Of course, you're invited to look for services on nexus.home.palmen-it.de (my desktop box). Spoiler: you won't find any, and that's not because sockstat -l6n would be empty on that box, it's because my firewall rejects any TCP/UDP connection attempt from the internet to any box in the LAN. There's no reason at all why some consumer plastic router couldn't do the same for IPv6 while delegating a public prefix to your LAN.
 
Of course, you're invited to look for services on nexus.home.palmen-it.de (my desktop box). Spoiler: you won't find any, and that's not because sockstat -l6n would be empty on that box, it's because my firewall rejects any TCP/UDP connection attempt from the internet to any box in the LAN.
One box on the Internet can be secured, therefore all boxes on the Internet can be secured. I can't decide if this is a hasty generalization or a garden-variety non sequitur.
 
So, seriously? Just because a simple thing any consumer plastic router could do (and probably does) makes less sense to you than relying on the side effect of something that was never meant to secure anything?
 
There are no source committers on the forums. Ask on the mailing lists.
 
Bluntly, no.

Not without a much better documented use case for this patch, along with tests and some sort of indications that the author (or someone...) will maintain it. Right now it is abandoned, and doesn't even apply any more.

This patch makes fairly deep changes to the NAT code, changes which I currently do not understand and do not have the motivation or energy to study. If it gets committed and breaks something I'm going to be the one who has to fix it, so ... no, not unless someone can present a compelling case that this actually improves anything, that it is correct and that if there are issues they will work on them.
 
And with every year going by, the effort needed to do this makes less sense, as IPv4 finally is (albeit still quite slowly) dying.
 
Bluntly, no.

Not without a much better documented use case for this patch, along with tests and some sort of indications that the author (or someone...) will maintain it. Right now it is abandoned, and doesn't even apply any more.

This patch makes fairly deep changes to the NAT code, changes which I currently do not understand and do not have the motivation or energy to study. If it gets committed and breaks something I'm going to be the one who has to fix it, so ... no, not unless someone can present a compelling case that this actually improves anything, that it is correct and that if there are issues they will work on them.
How about the pfSense guys? :)
 
IPv4 is still the most important.
Most private customers (mobile and landline) don't even get a public IPv4 address any more. They often don't notice it in typical consumer usage scenarios because they get tunneled IPv4 with provider-side NAT (CGNAT) instead.

So, this statement is very questionable. It at least depends on what you're looking at.
 