FreeBSD polling issue

I would like to revive this thread because I don't seem to be able to know when polling on an interface is enabled.

So, I enable polling with "ifconfig eth0 polling", but I see nothing change in the ifconfig output afterwards.

How does one know whether polling on an interface is enabled?
 
I would like to revive this thread because I don't seem to be able to know when polling on an interface is enabled.
Just ask your question; there's no need to necropost a 12-year-old thread (Thread help-freebsd-81-polling-issue.17240).

So, I enable polling with "ifconfig eth0 polling", but I see nothing change in the ifconfig output afterwards.
Not all drivers support polling. It also needs to be enabled in the kernel (GENERIC doesn't have it). See ifconfig(8) and polling(4).
Code:
     polling
             Turn on polling(4) feature and disable interrupts on the
             interface, if driver supports this mode.
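A quick way to check, as a sketch (em0 is just a placeholder here; use your actual interface name, and this only does anything on a kernel built with DEVICE_POLLING and a driver that implements it): once it takes effect, POLLING should appear in the options=<...> line of the ifconfig output.
Code:
ifconfig em0 polling   # ask the driver to switch the interface to polling mode
ifconfig -m em0        # -m additionally prints the capabilities=<...> line
# Look for POLLING in options=<...>; if it never shows up under
# capabilities=<...>, the driver (or kernel) most likely doesn't support it.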
 
It looks like you don't have to enable polling in the kernel anymore?
That was a way to enable/disable polling on interfaces. You can now do that with ifconfig <interface> polling on an individual interface instead of an "all-or-nothing" sysctl.
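For completeness, some global tuning knobs still live under sysctl on a DEVICE_POLLING kernel; which OIDs you actually get depends on your FreeBSD version, so treat this as a sketch and check polling(4):
Code:
sysctl kern.polling             # list whatever polling knobs this kernel exposes
sysctl kern.polling.user_frac   # per polling(4): percentage of CPU time reserved for userland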

Wait, I have to compile a custom kernel?
Yes.

Or can I somehow add a kernel option to be turned on during boot?
No. The option enables a bunch of code in various different places at compile time.
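For reference, a minimal sketch of such a custom kernel config (MYKERNEL is a made-up name; DEVICE_POLLING and the HZ hint come from polling(4)):
Code:
# /usr/src/sys/amd64/conf/MYKERNEL (hypothetical file name)
include GENERIC
ident   MYKERNEL
options DEVICE_POLLING
options HZ=1000   # polling(4) suggests 1000 or 2000; 1000 is already the amd64 default
Then build and install it the usual way:
Code:
cd /usr/src
make buildkernel KERNCONF=MYKERNEL
make installkernel KERNCONF=MYKERNEL
shutdown -r now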

I'm pretty sure an actual FreeBSD developer has read it. Didn't you notice the "developer" label under his profile? There's a reason why it's not compiled in by default. You have to have a specific reason why you want to enable polling. So his question is more than valid.
 
I'm pretty sure an actual FreeBSD developer has read it. Didn't you notice the "developer" label under his profile? There's a reason why it's not compiled in by default. You have to have a specific reason why you want to enable polling. So his question is more than valid.
It's the same reason FreeBSD developers created it many years ago ;)

But I feel this should be compiled into the kernel by default. It's not like it's a feature no security-conscious person would ever want to at least play with. Nobody wants the tail to wag the dog ;) ;) ;)
 
But I feel this should be compiled into the kernel by default.
Most installs will never need it. What's your argument for having the option compiled in by default? Gut feelings are not a valid argument. You need to put actual use-cases and test results on the table.

It's not like it's a feature no security-conscious person would ever want to at least play with.
Sure. If you want to play with it, go ahead. Compiling a custom kernel is a valuable experience in and of itself, so you get that too.
 
I was just curious whether you actually have an overload of interrupt time or starved userland applications. For typical gear it seems unlikely given the lame network interfaces and fast CPUs.
 
I was just curious whether you actually have an overload of interrupt time or starved userland applications. For typical gear it seems unlikely given the lame network interfaces and fast CPUs.
@WORK I've been involved in drivers for Ethernet devices. As interface speeds go up, one can get flooded with interrupts, so if you do the bare minimum at interrupt level, you wind up (effectively) interrupt-bound. As speeds go from 10M to 100M to 1G to 2.5G to 10G, I think a lot of drivers wind up doing implicit polling: take the interrupt, then pull in packets and refresh DMA rings for more than one packet, trying to balance the number of interrupts against the packets received and processed.

So at 10G with many small packets I can see the potential for an interrupt storm, but without observable data I'd be hesitant to change anything (except out of curiosity, and that darn cat).
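If you want that observable data, the stock tools already provide it (nothing here assumes more than a running FreeBSD box):
Code:
vmstat -i        # per-device interrupt counts and rates
top -SH          # -S shows system processes, -H shows threads; watch the intr threads
systat -vmstat   # live view of interrupt rates and CPU time
If the intr threads and the interrupt column stay near idle, there's probably nothing for polling to win.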

In this context, does polling mode on an interface really mean "no interrupts, so something has to periodically receive and process data so we don't run out of driver resources (like DMA ring buffers)"?
 
I was just curious whether you actually have an overload of interrupt time or starved userland applications. For typical gear it seems unlikely given the lame network interfaces and fast CPUs.
I don't have an overload of interrupt time or anything related to that. I am thinking entirely from the standpoint of security hardening.

Delegating any, any bit of control over CPU instruction flow to an externally connected network device is a security issue and can be exploited.
What's your argument for having the option compiled in by default?
So I feel it's something that would be more valuable to have at everyone's fingertips than not.
 
Delegating any, any bit of control over CPU instruction flow to an externally connected network device is a security issue and can be exploited.
Holy crap, 99% of the internet is exploitable 😂

Seriously, that's nonsense. In the general case, there's a reason CPUs have always had interrupt lines and peripheral hardware has always signalled them: polling a whole zoo of peripherals for any sort of I/O event is just wasteful as hell. That's true for any peripheral hardware as long as it doesn't have those events "all the time" anyway; when it does (and only then), polling might be more efficient, especially considering the extra "context switching" penalty on "modern" systems (with virtual address spaces and privilege rings).

Still, the "livelock" scenario described there is impossible to reach on your typical machine. It might be something to consider when your server has a whole bunch of high-speed NICs.
 
The most I was thinking of was a denial of service attack: slam the interfaces, cause enough interrupts that the system becomes interrupt-bound.
Now most "attacks" are in the payload of the packet, not the mere presence of the packet. If the system is interrupt-bound, what happens to the actual processing of the packets?
"Sorry, can't hand you off to a process and let the process run because I have to handle another interrupt".
 

This is likely quite obsolete, having been written over 20 years ago (and it was probably only partially right even back then -- probably more useful for Rizzo's Netmap).

If you poll (with HZ set to 1000), you can have up to a millisecond's worth of packets to process per poll. At 1Gbps that's about 81K packets per second if all are max MTU (not jumbo frames) -- far more if they're small -- so you need very large buffer queues. If you up the HZ to 10K or higher, most of those timer interrupts will be wasted.

When I was writing drivers, a long time ago, the idea was to process as many packets as possible once you take an interrupt (but up to some limit, so as not to let one interface hog the CPU). There are a bunch of competing interests here, so I suspect this area has seen a few wheels of reincarnation. We used to use polling when there was a possibility that a (poorly implemented) controller might not interrupt you even when there was an event needing service!
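For anyone who wants to sanity-check those numbers, the back-of-the-envelope arithmetic (assuming standard Ethernet framing: 18 bytes of header+FCS plus 20 bytes of preamble and inter-frame gap per frame):
Code:
# packets per second = line rate / bits per frame on the wire
echo "1000000000 / ((1500 + 18 + 20) * 8)" | bc   # max-MTU frames  -> ~81274 pps
echo "1000000000 / ((64 + 20) * 8)" | bc          # minimum frames  -> ~1488095 pps
# With HZ=1000, that is roughly 81 max-MTU packets, or ~1488 minimum-size
# packets, arriving per poll interval.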
 
I think this is an exploit, a backdoor, for this specific reason.
Your mission, should you choose to accept it, is to investigate and write a PoC that exploits this apparent weakness. As always, should you or any of your accomplices be caught or killed, the FreeBSD forum staff will disavow any knowledge of your actions. This post will self-destruct in 10 seconds.... 9... 8...
 
If you poll (with HZ set to 1000), you can have up to a millisecond's worth of packets to process per poll. At 1Gbps that's about 81K packets per second if all are max MTU (not jumbo frames) -- far more if they're small -- so you need very large buffer queues.
That's another interesting aspect. Of course the theoretical issue of "clogging" your machine with IRQs persists anyway, and maybe disabling IRQ sources and increasing buffer space for them could help to "survive" that for some limited time, but in the end only a CPU able to serve all I/O at the maximum rate would really help (and normally you have that with plenty of headroom).

Yet another thing that caught my eye: is context switching really as big an additional burden as that manpage suggests? I didn't check the code to verify, but my expectation would be that when there's another pending IRQ after finishing some ISR, the next ISR is executed without any additional switches? :-/

In any case, IRQ-driven I/O is certainly NOT a security hole, but just how I/O works most efficiently in almost every scenario.

edit: To understand that, you just have to look at what your typical dumb (flooding-based) DoS attack achieves: Either some secondary resource exhaustion (file descriptors, buffers in the OS or the application, etc), or, if that can't be achieved, simply clogging the network bandwidth with crap. Exhausting the target system's CPU time through excessive IRQs is, at least as far as I know, completely unheard of, although practically every OS out there serves IRQs from NICs.
 
Well, getting an IRQ for every single Ethernet frame (max 1500 bytes without non-standard "jumbo frames") would indeed quickly create issues at today's network speeds. But that won't happen in reality; modern NICs can do DMA and "interrupt coalescing".
 
But you understand that the interrupt doesn't execute instructions that come from the network packet, right?
As you noticed, thinking at a very high level it holds true no matter what: an externally connected network device should not interrupt your CPU instruction flow, for top security. Yes, you can try to secure what happens in the scenario where you do interrupt, and manage the risks, but you open up a world of possibilities for someone and create security jobs for they-who-shall-not-be-named.

You overflow the stack, overwrite the pointer used to return to the interrupted instruction, and you have a problem. Without an interrupt, there are no instruction pointers that a buffer overflow could affect predictably.
 