Looking for opinions on dyntick/tickless timer interrupt system

FreeBSD does not have this feature, as far as I know. I don't want FreeBSD to have it. Dynticks (dynamic ticks) is basically a kernel patch to the timer interrupt handling. In a regular tick kernel, the timer interrupt fires every "N" milliseconds at a fixed rate to check for pending work, but with dynticks, the kernel schedules ticks based on an algorithm and handles timer events that way. The number of ticks over a period of time varies with load, while in a tick kernel the tick rate is constant, set at compile time.
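The difference can be sketched with a toy model (pure illustration, not kernel code; the tick rate and event times are made-up numbers):

```python
# Toy model: timer interrupts taken over one second of mostly-idle time
# by a periodic-tick kernel vs. a tickless one. Not kernel code.
HZ = 1000                       # assumed periodic tick rate (interrupts/sec)
event_times_ms = [5, 250, 990]  # hypothetical moments when work is actually due

# Periodic: the timer fires every 1000/HZ ms whether or not work exists.
periodic_interrupts = HZ

# Tickless: the hardware timer is reprogrammed (one-shot) to fire only
# at the next pending event, so idle stretches take no interrupts at all.
tickless_interrupts = len(event_times_ms)

print(periodic_interrupts, tickless_interrupts)  # 1000 vs. 3
```

Under load the two converge, since a busy system has events due on nearly every tick anyway; the gap only opens up when idle.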

I think dynticks are a bad idea:
  • it puts (or at least allows) the CPU into sleep more often than a regular tick kernel does
  • it does save power, since the CPU sleeps when idle, but there is an associated wake-up latency when the system sleeps that often. I'm a performance guy; I want performance as high as I can afford to have. The higher the better. I don't want my processor sleeping. CPUs are not supposed to sleep. They are supposed to do work: crunching floats, swapping integers, ALU work. A sleeping CPU is me losing money.
  • I don't have any benchmarks, but in theory this could decrease processor performance, since the algorithm puts the processor to sleep so many times. Yes, modern CPUs can wake up fast, but there is still a hit in having to wake up versus already being awake.
I primarily use Linux, but I came on here to see what other opinions might be.
 
FreeBSD 9 does have a tickless timer interrupt system. The layers above are still tick-oriented, but if the CPU is idle and there are no events to handle, ticks are skipped. There is now an ongoing GSoC project by davide@ to refactor callout(9) and the layers above it to make them tickless as well.

The things you've said about performance are very questionable. Performance is tied to power consumption and heat dissipation these days. By consuming more power when idle, you are reducing your performance where and when it is really needed. If you are a performance guy, you must know about the Intel Turbo Boost and AMD Turbo Core technologies.
 
I disagree.

CPUs are not supposed to sleep - not even for a nanosecond. They should always be running. Scaling one core up and turning the others off because one application needs one fast core? In my opinion, it's bad software design to let one program direct resources at the expense of others simply because it's non-parallel. If 3.0 GHz is not good enough for the app, then the app should be redesigned to work on four cores, or at least not rob other processes of the other cores (by clocking them down).

A busy, loaded server running an http service, a database service, an rsync service, and number-crunching daemons should not be sleeping, skipping ticks, or underclocking cores. This dyntick system came from the mobile platforms anyway; it doesn't belong on a server. That's like putting a mouse in a monkey cage.

I don't think that reduced power is the only endgame planned on this road of tickless systems.
 
There is certainly a performance benefit to reducing the timer interrupt load. 1000 interrupts per second per core costs something, and the new structure gives most of those cycles back to the applications.
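A back-of-the-envelope calculation shows roughly what that costs (the per-interrupt cycle count and clock speed below are assumptions for illustration, not measurements):

```python
# Rough cost of a 1000 Hz tick on one core. All figures are assumed,
# not measured: actual per-interrupt cost varies widely by hardware.
hz = 1000                       # timer interrupts per second
cycles_per_interrupt = 15_000   # assumed entry/exit + handler + cache effects
cpu_hz = 3_000_000_000          # a 3 GHz core

cycles_lost_per_sec = hz * cycles_per_interrupt   # 15,000,000 cycles
overhead = cycles_lost_per_sec / cpu_hz
print(f"{overhead:.1%}")        # 0.5% of the core, per second
```

A fraction of a percent per core sounds small, but it is paid continuously on every core, idle or not, which is exactly what the tickless structure gives back.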
 
My systems run at 100 ticks per second, which is the typical server setting. The usual desktop recommendation is 250. 1000 used to be the norm, but the current recommendation is 100 for servers (throughput) and 250 for desktops (responsiveness to user applications). Like most things, there is a sweet spot.
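On FreeBSD, that tick rate can be set without recompiling, via the kern.hz loader tunable (a sketch of the usual approach; takes effect on the next reboot):

```shell
# /boot/loader.conf -- override the default tick rate at boot
kern.hz="100"     # throughput-oriented server, per the rule of thumb above
# kern.hz="250"   # interactive desktop
```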
 
You seem to be missing some fundamental thing about the sleep mode of processors.
They sleep only if there is nothing to do, and when they do have work, there is no point in interrupting them with a tick-based timer whose interrupts serve no purpose. Sure, this allows the CPU to sleep more often and longer, but it does not mandate it.

You may get some insight into this by using the processor performance counters to check what your processors are actually doing, and then decide whether your assumption rests on a sound basis.
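On Linux (which the OP says he primarily uses), a quick way to do that check is perf; the workload name below is a placeholder for whatever you actually run:

```shell
# Count what "full load" really consists of for a given workload.
# ./my_workload is a hypothetical placeholder for your own program.
perf stat -e cycles,instructions,cache-misses,branch-misses -- ./my_workload
# An instructions-per-cycle figure well below the core's issue width
# means the cycles are going to stalls, not to useful work.
```

FreeBSD's hwpmc(4)/pmcstat(8) play the same role there.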

I would expect your cores to be at most 50% utilized even under full load, since bad branch prediction, cache misses, et al. will take the rest of the cycles. Inserting more cache misses, more TLB walks, more pipeline flushes, and more instruction bytes into the workload for no purpose - that would be making things slow without benefit.

When interested in this topic, I would recommend reading this:
"Hennessy, John L.; Patterson, David A.. Computer Architecture: A Quantitative Approach. Morgan Kaufmann. ISBN 0-12-370490-1."

A volume of great weight, in more than one sense ;)
 
It will be easy to benchmark if/when FreeBSD gets tickless interrupts. Until then, it's kind of moot.
 
I didn't do any benchmarks.

There is this one guy who ran a benchmark on the Linux kernel to see whether battery life is helped by dynticks or not. He said it did not help, but it did not hurt either. I'm too lazy to look for the link now.

As usual, it was fun to have a discussion on an issue that uses the brain.
 