If you poll (with HZ set to 1000), you can have up to a millisecond's worth of packets to process. That is about 81 packets at 1Gbps if all are max-MTU frames (not jumbo frames), and well over a thousand if they are minimum-size. So you need sizable buffer queues.
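To make the arithmetic concrete, here's the back-of-the-envelope calculation (assuming a 1 Gbps line rate and 1538 bytes per max-MTU frame on the wire, i.e. 1500-byte payload plus Ethernet preamble, headers, FCS, and inter-frame gap):

```python
# One poll per millisecond (HZ=1000) at 1 Gbps line rate.
LINE_RATE_BPS = 1_000_000_000
POLL_INTERVAL_S = 0.001
WIRE_BYTES_PER_FRAME = 1538   # 1500-byte MTU + ~38 bytes of on-wire overhead

bits_per_poll = LINE_RATE_BPS * POLL_INTERVAL_S
frames_per_poll = bits_per_poll / (WIRE_BYTES_PER_FRAME * 8)
print(round(frames_per_poll))  # ~81 max-MTU frames per poll interval
```

With minimum-size frames (84 bytes on the wire) the same millisecond holds roughly 1,488 packets, which is why the buffers have to be sized for the worst case, not the MTU case.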
That's another interesting aspect. Of course, the theoretical problem of "clogging" your machine with IRQs persists regardless; temporarily disabling IRQ sources and increasing their buffer space can help you survive such a burst for a limited time, but in the end only a CPU able to service all I/O at the maximum rate actually solves it (and normally there is more than enough CPU for that).
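The "disable the IRQ source and fall back to buffered polling" idea is essentially what NAPI-style drivers do. Here is a toy sketch in plain Python (not kernel code; the `BUDGET` value and class names are made up for illustration): the RX interrupt masks itself, a poll routine processes at most a fixed budget of packets per pass, and the interrupt is only re-armed once the ring drains.

```python
from collections import deque

BUDGET = 64  # max packets handled per poll pass (hypothetical value)

class ToyNic:
    def __init__(self):
        self.ring = deque()       # stand-in for the RX descriptor ring
        self.irq_enabled = True

    def rx_interrupt(self):
        # A NAPI driver masks further RX interrupts here and schedules
        # polling, so a packet flood cannot become an interrupt storm.
        self.irq_enabled = False

    def poll(self):
        done = 0
        while self.ring and done < BUDGET:
            self.ring.popleft()   # "process" one packet
            done += 1
        if not self.ring:
            self.irq_enabled = True  # ring drained: re-arm the interrupt
        return done

nic = ToyNic()
nic.ring.extend(range(200))       # a burst of 200 packets arrives
nic.rx_interrupt()                # one interrupt for the whole burst
passes = 0
while not nic.irq_enabled:
    nic.poll()
    passes += 1
print(passes)                     # 4 poll passes: 64 + 64 + 64 + 8
```

The point is that the per-interrupt work is bounded: no matter how fast packets arrive, the CPU cost per poll pass is capped by the budget, and interrupts stay off while there is backlog.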
Yet another thing that caught my eye: Is context switching really such an additional burden as that manpage suggests? I didn't check the code to verify, but my expectation would be that when another IRQ is already pending after an ISR finishes, the next ISR runs without any additional context switch?
In any case, IRQ-driven I/O is certainly NOT a security hole, but just how I/O works most efficiently in almost every scenario.
edit: To understand that, you just have to look at what your typical dumb (flooding-based) DoS attack achieves: Either some secondary resource exhaustion (file descriptors, buffers in the OS or the application, etc), or, if that can't be achieved, simply clogging the network bandwidth with crap. Exhausting the target system's CPU time through excessive IRQs is, at least as far as I know, completely unheard of, although practically every OS out there serves IRQs from NICs.