ULE Scheduler Internals

I am in an Operating Systems class at my university, and I am working on a presentation about Unix process scheduling. I decided to focus on the ULE scheduler because I have been looking to move to FreeBSD for a while.

This is a deep dive into the scheduler internals. As such, I have a few questions.

I have the two relevant man pages I could find (sched_ule(4) and scheduler(9)) in front of me. Both give a high-level overview, and I need more detail. I am also trying to understand the source, but that may take me a while.

I also have the 2003 ULE paper. Is it still relevant, or is it out-of-date?

On top of that, I have been thinking about buying The Design and Implementation of the FreeBSD Operating System. Does it have a section about ULE's internals? Or is there another good guide to how ULE works?

Here are some general questions:
  • What is the overall Big O complexity? And if you have the figures readily available, what is the Big O complexity of each function?
  • What does the interactivity hint change? What does the scheduler do differently?
  • Is it capable of soft real-time scheduling? Hard real-time scheduling? And if so, how is it accessed?
  • How does it keep processor affinity? How does it change the assigned CPU in the cases when it is necessary to do so?
Thanks for any help you can give me.
 


As you probably know, all schedulers use basically the same flow of control:

The timer interrupt fires.
The interrupt vectors into the scheduler.
The CPU traps into the kernel and the scheduler runs in kernel context.
The scheduler picks a new thread/process to run.
The scheduler performs a full context switch to the new thread.
Return from the interrupt.

That is the basic gist of it.
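If it helps to see that flow as code for your presentation, here is a rough userland sketch of the pick-and-switch step. It only illustrates the general idea; names like pick_next_thread, context_switch, and scheduler_tick are made up for this example, not FreeBSD functions.

#include <stdio.h>

/* Toy thread table for illustration; a real kernel keeps per-CPU run queues. */
struct toy_thread { int id; int priority; int runnable; };

static struct toy_thread threads[] = {
    { 1, 120, 1 }, { 2, 80, 1 }, { 3, 100, 0 },
};
static int current = 0;

/* Pick the runnable thread with the best (numerically lowest) priority. */
static int pick_next_thread(void)
{
    int best = -1;
    for (int i = 0; i < 3; i++)
        if (threads[i].runnable &&
            (best < 0 || threads[i].priority < threads[best].priority))
            best = i;
    return best;
}

/* Stand-in for the real context switch, which saves and restores CPU state. */
static void context_switch(int from, int to)
{
    printf("switch: thread %d -> thread %d\n", threads[from].id, threads[to].id);
}

/* One "timer tick": re-evaluate and switch if a better thread is runnable. */
static void scheduler_tick(void)
{
    int next = pick_next_thread();
    if (next >= 0 && next != current) {
        context_switch(current, next);
        current = next;
    }
}

int main(void)
{
    scheduler_tick();   /* prints: switch: thread 1 -> thread 2 */
    return 0;
}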

The best documentation is the source code, which you can read yourself in /usr/src/sys/kern/sched_ule.c. I found it mostly self-explanatory, but YMMV.

What is the overall Big O complexity? And if you have the figures readily available, what is the Big O complexity of each function?
I haven't seen anything indicating runtime complexity in any documentation.

What does the interactivity hint change? What does the scheduler do differently?
From what I gather looking at the source code, there is no user-supplied "hint": ULE computes an interactivity score for each thread itself, from statistics it keeps on how long the thread voluntarily sleeps versus how long it runs. A thread that spends most of its time sleeping (waiting on I/O or user input) and only runs briefly scores as interactive; it is placed on the realtime queue and is essentially exempt from nice restrictions. Threads that score worse than the interactivity threshold are treated as timeshare threads: their priority is set by past behavior (recent CPU usage) and is further adjusted by the nice value. So a thread that sleeps voluntarily accumulates sleep time, which improves its score and raises its run priority, while a thread that burns CPU sees its priority drop. Realtime-class threads have the highest priority, and nice values do not apply to them.
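To make the scoring idea concrete, here is a rough userland sketch of how a sleep-versus-run ratio can be folded into a 0..100 score. The constants and the threshold value are simplified stand-ins for illustration, not the exact numbers or code from sched_ule.c.

#include <stdio.h>

#define INTERACT_MAX    100  /* score range 0..100; lower = more interactive */
#define INTERACT_HALF   (INTERACT_MAX / 2)
#define INTERACT_THRESH 30   /* assumed cutoff below which a thread counts as interactive */

/*
 * Sketch of the idea: a thread that sleeps more than it runs gets a low
 * score (interactive); a thread that runs more than it sleeps gets a
 * high score (CPU-bound).
 */
static int interact_score(long run_ticks, long sleep_ticks)
{
    if (run_ticks > sleep_ticks)
        return INTERACT_HALF +
            (INTERACT_HALF - (int)(sleep_ticks * INTERACT_HALF / run_ticks));
    if (sleep_ticks > run_ticks)
        return (int)(run_ticks * INTERACT_HALF / sleep_ticks);
    return run_ticks ? INTERACT_HALF : 0;
}

int main(void)
{
    /* An editor blocked on input most of the time vs. a compile job. */
    printf("editor:   %d\n", interact_score(5, 95));   /* low score  */
    printf("compiler: %d\n", interact_score(95, 5));   /* high score */
    printf("interactive if score <= %d\n", INTERACT_THRESH);
    return 0;
}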


Is it capable of soft real-time scheduling? Hard real-time scheduling? And if so, how is it accessed?
Looking at the code, there is a lot of machinery for load balancing between CPUs, and, as already mentioned, there is a realtime queue. As for whether that amounts to soft or hard realtime scheduling, I do not know. There are utilities you can use to put a process into the realtime priority class; rtprio(1) is the one that comes to mind.
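If you want to show the programmatic side in your presentation, rtprio(2) is the system call behind rtprio(1). A minimal sketch of moving the calling process into the realtime class (needs root, error handling kept minimal) might look like this:

#include <sys/types.h>
#include <sys/rtprio.h>
#include <stdio.h>

int main(void)
{
    /* Request the realtime class at priority 0 (the highest) for this process. */
    struct rtprio rtp = { .type = RTP_PRIO_REALTIME, .prio = 0 };

    if (rtprio(RTP_SET, 0, &rtp) == -1) {  /* pid 0 means the calling process */
        perror("rtprio");
        return 1;
    }
    printf("now in the realtime class\n");
    return 0;
}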

How does it keep processor affinity? How does it change the assigned CPU in the cases when it is necessary to do so?
It is actually quite simple. There is a field in struct td_sched called ts_cpu that records the thread's assigned CPU. To change the assigned CPU, the scheduler takes the thread off the run queue of one CPU, puts it on the run queue of a different CPU, and updates the thread's bookkeeping accordingly. There are also flags that mark a thread as pinned or bound so that it cannot be moved to another CPU when that matters.
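As a user-visible counterpart to that (the scheduler's ts_cpu is a preference it maintains itself), FreeBSD also lets you pin a process to a CPU yourself with cpuset(1) or cpuset_setaffinity(2). A minimal sketch, pinning the calling process to CPU 0:

#include <sys/param.h>
#include <sys/cpuset.h>
#include <stdio.h>

int main(void)
{
    cpuset_t mask;

    CPU_ZERO(&mask);
    CPU_SET(0, &mask);  /* allow CPU 0 only */

    /* id -1 with CPU_WHICH_PID means the calling process. */
    if (cpuset_setaffinity(CPU_LEVEL_WHICH, CPU_WHICH_PID, -1,
            sizeof(mask), &mask) == -1) {
        perror("cpuset_setaffinity");
        return 1;
    }
    printf("pinned to CPU 0\n");
    return 0;
}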

Remember that stats are collected on each thread/process to model its load behavior, and the scheduler tries to balance the load across all CPUs as evenly as it can. If a CPU goes idle, it will try to steal a runnable thread from a busier CPU, and that will usually be a CPU-hungry thread. As you look through the scheduler source code, you will see functions that return the most idle CPU, the most loaded CPU, and the hungriest thread. Threads that eat a lot of CPU time have their priorities lowered based on that accumulated usage (on top of whatever nice value they carry), unless they are in the realtime class.
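The balancing idea itself is easy to sketch: compare per-CPU loads, find the busiest and the idlest CPU, and move load from one to the other. This is only an illustration of the concept; the function names are made up and it is not how sched_ule.c structures the code.

#include <stdio.h>

#define NCPU 4

/* Toy per-CPU load figures (think "number of runnable threads"). */
static int cpu_load[NCPU] = { 5, 1, 3, 0 };

static int busiest_cpu(void)
{
    int best = 0;
    for (int i = 1; i < NCPU; i++)
        if (cpu_load[i] > cpu_load[best])
            best = i;
    return best;
}

static int idlest_cpu(void)
{
    int best = 0;
    for (int i = 1; i < NCPU; i++)
        if (cpu_load[i] < cpu_load[best])
            best = i;
    return best;
}

int main(void)
{
    int from = busiest_cpu(), to = idlest_cpu();

    /* Only migrate if the imbalance is worth the cache cost of a move. */
    if (cpu_load[from] - cpu_load[to] > 1) {
        cpu_load[from]--;
        cpu_load[to]++;
        printf("moved one thread from CPU %d to CPU %d\n", from, to);
    }
    return 0;
}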

I was not involved in the development of the ULE scheduler.
 
What is the overall Big O complexity? And if you have the figures readily available, what is the Big O complexity of each function?

To be able to answer this you'd first have to define the size of the input: is it the number of active processes or something else? I'm pretty sure, though, that you won't find anything in such time-critical sections that runs in greater than linear time. I'd also bet that anything that can be optimized for run time by sacrificing some space has been optimized with lookup tables, hash tables, etc.
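One concrete example of that kind of constant-time trick, sketched from the general run-queue idea rather than copied from the FreeBSD sources: keep one bit per priority queue saying whether it is non-empty, and finding the best queue becomes a single find-first-set instead of a scan over all threads.

#include <stdio.h>
#include <strings.h>  /* ffs() */

int main(void)
{
    /* One bit per priority queue; a set bit means that queue is non-empty. */
    unsigned int status = 0;

    status |= 1u << 7;   /* something runnable at priority level 7 */
    status |= 1u << 20;  /* and at priority level 20 */

    /*
     * ffs() returns the 1-based index of the lowest set bit, so picking
     * the highest-priority (lowest-numbered) non-empty queue is O(1) in
     * the number of runnable threads.
     */
    int best = ffs((int)status) - 1;
    printf("best queue: %d\n", best);  /* prints 7 */
    return 0;
}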
 