Server vs Desktop difference and optimisation options

I am coming from Linux to FreeBSD.

In Linux distros there are some differences in Server version and desktop version.

For example, in Ubuntu Server the kernel timer interrupt is 100 Hz, whereas in Ubuntu Desktop it's 250 Hz, and many distros also use 300 or 1000 Hz.

Is there any such differentiation in the BSD world, or best practices or optimizations for server use vs. desktop use? Any links to such resources or articles?

Could FreeBSD, going forward, offer Server and Desktop as separate downloads?
 
The FreeBSD base OS is mostly automatically tuned according to the amount of memory, CPU etc. But this is based on a fairly 'generic' use. It's up to you, as a user of that system, to tune it where necessary according to your use-case.

And for heaven's sake, don't tune for the sake of tuning(7). In most cases FreeBSD does a great job tuning itself.
 
I recall that it was recommended to change the kern.hz sysctl to a lower value within emulators or for power consumption but as SirDice suggested, unless you have a specific need, it is best to just ignore it.

A thread relating to kern.hz here.
 
Agreed, yes, it's a fact: don't optimize or tune for the sake of it.

But since I am coming from the Linux world, where at the very least there is this choice of timer frequency, I wanted to know if there is something similar in BSD, and whether it's tunable or a kernel recompilation is required.

In fact, two other very basic things are the CPU scheduler and the I/O scheduler, which I also haven't been able to find much info on.

Though I have seen powerd in action and it works well.
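For reference, enabling it only takes a couple of rc.conf lines; the mode choices below are purely illustrative, not a recommendation for every machine:

Code:
# /etc/rc.conf -- example powerd setup; mode choices are illustrative only
powerd_enable="YES"
# -a: mode while on AC power, -b: mode while on battery
powerd_flags="-a hiadaptive -b adaptive"

Then service powerd start (or a reboot) picks it up.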
 
In most cases FreeBSD does a great job tuning itself.
I "upvote this 100".
Following are my opinions only (technical details are simplified), agree, disagree, no skin off my back.

One thing that may help in a desktop configuration is to limit the amount of memory that ZFS uses for ARC.
Why? By default, ZFS will attempt to use almost all free RAM, which can lead to memory pressure when opening applications. Now, ARC is mostly a "read cache", which is more of a server thing than a desktop thing, so you may not hit it. ZFS ARC will also flush itself out under OOM conditions, but it takes time to free the RAM, so you may get intermittent failures when starting new applications. A solution is to put a hard limit on the maximum amount of RAM that can be used for ARC.
I've been using FreeBSD as a desktop for quite a long time, and the only thing I tune is the maximum amount for ZFS ARC. I've played around with some sysctls for the schedulers, but came to the conclusion "I can't tell a difference".
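As a minimal sketch, capping the ARC is a single loader tunable; the value below (4 GiB, in bytes) is purely an example and should be sized to your own RAM:

Code:
# /boot/loader.conf -- cap the ZFS ARC; value is in bytes (4 GiB here, example only)
vfs.zfs.arc_max="4294967296"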
 
I recall that it was recommended to change the kern.hz sysctl to a lower value within emulators or for power consumption but as SirDice suggested, unless you have a specific need, it is best to just ignore it.

A thread relating to kern.hz here.
Thanks a ton

Loved that it can be changed without a full kernel recompile, as is needed in Linux.
 
I recall that it was recommended to change the kern.hz sysctl to a lower value within emulators or for power consumption but as SirDice suggested, unless you have a specific need, it is best to just ignore it.

A thread relating to kern.hz here.
/boot/loader.conf is empty in my installation.
 
Loved that it can be changed without a full kernel recompile, as is needed in Linux.
There's a huge number of settings you can change "on the fly". Some settings can only be changed at boot, but that's still a lot better than having to recompile the kernel.

Just have a look through sysctl -a. Some are always read-only (they're the 'informational' type), some can only be set at boot, but most can be tweaked while the system is running.
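A quick sketch of poking at this (kern.hz serves as the boot-only example here):

Code:
# Read a single value and its description
sysctl kern.hz
sysctl -d kern.hz
# List only the loader-settable (boot-time) tunables
sysctl -aT | less
# List only the variables writable at runtime
sysctl -aW | less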
 
I recall that it was recommended to change the kern.hz sysctl to a lower value within emulators or for power consumption but as SirDice suggested, unless you have a specific need, it is best to just ignore it.

A thread relating to kern.hz here.

Some hypervisors will take a lot of CPU on the host with hz=1000. But only some; e.g. Parallels on Mac does not.

Always check top(1) every now and then. And i7z if your platform supports it.

In Linux you can switch the preemption model, and they say that no preemption is "for servers". I disagree (on limited data, admittedly); a network server should be preemptible. Compute servers, aka things that just provide CPU power, should probably be on no preemption. That kind of switch is not in FreeBSD, but I don't recall right now whether it is a big deal in Linux and really changes performance.

Likewise, Linux has a choice of I/O scheduler, but I don't know whether that is a clear win. Benchmarks welcome.
 
Why? By default, ZFS will attempt to use almost all free RAM, which can lead to memory pressure when opening applications. Now, ARC is mostly a "read cache", which is more of a server thing than a desktop thing, so you may not hit it.
On what data are you making that assumption about ARC?

Based on that statement, which output do you think is from a desktop, and which from a live storage server?
Code:
ARC Efficiency:                                 8.17    m
        Cache Hit Ratio:                96.43%  7.88    m
        Cache Miss Ratio:               3.57%   291.26  k
        Actual Hit Ratio:               96.29%  7.87    m

Code:
ARC Efficiency:                                 47.77   b
        Cache Hit Ratio:                88.81%  42.43   b
        Cache Miss Ratio:               11.19%  5.34    b
        Actual Hit Ratio:               88.78%  42.41   b

The second one is the server, so yes, ARC is usually very effective on desktops too.

In fact, ZFS scales very well for desktop AND server use nowadays. The old horror story (i.e. FUD) about "ZFS eats all your RAM" is completely bogus - I'm running ZFS even on smaller VMs with 2 GB RAM and less. True, ZFS uses memory if it's available; that's what it is supposed to do and what *any* filesystem with proper caching will do. But it will scale down automatically on such systems, and you wouldn't run any heavy loads like package builds or a fully fledged DE on such systems anyway...
I'm running ZFS on all my desktops and would never want to go back to any other FS. Memory pressure never was an issue because of ZFS - in fact, the most reckless and annoying pieces of garb.. software in terms of memory use nowadays are browsers (and especially firefox...), and they wouldn't even care or step back if anything vital needs memory. So if you are worried about memory pressure that leads to an almost unusable system: don't use web browsers.


As for the original topic:
All my systems start out equally, without any special 'tuning' or changing of defaults - regardless of the use case. There are some small differences, like using gpart labels as the primary disk identification on desktops, or on servers with borked SES devices that won't show the actual backplane. Also, on jail hosts I sometimes use more FIBs than the default, but none of those are major changes or "tuning" to the base system or kernel. Why? Because it 'just works'™ the way it is. The small adjustments I may make come from special use cases or special problems I encounter, not because every system intended as a server or desktop "needs" them.
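For the gpart label part, a minimal sketch (disk, partition index and label name are made up for illustration):

Code:
# Attach a GPT label to an existing partition (index 2 on ada0, example only)
gpart modify -i 2 -l mydata ada0
# The partition then also shows up under a stable, human-readable name
ls -l /dev/gpt/mydata
# ...which can be used in /etc/fstab instead of the raw device name:
# /dev/gpt/mydata   /data   ufs   rw   2   2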
 
cracauer@ it always comes down to the actual workload and definitions.
"preemption" can mean a lot:
Timeslicing userland (as in batch jobs) typically means "everything progresses a little at a time, so the aggregate time of everything is a little longer".

"run to completion/explicit yield of CPU" was an option in Linux schedulers at one point in time, which could lead to odd behavior. If you're a server, sure, "finish this job then start the next" is fine, but on a desktop, that hurts the interactive feel.
 
"run to completion/explicit yield of CPU" was an option in Linux schedulers at one point in time, which could lead to odd behavior. If you're a server, sure, "finish this job then start the next" is fine, but on a desktop, that hurts the interactive feel.


Well, I think it will hurt NFS or HTTP servers, too. That's a guess, though; I don't recall numbers being gathered about it.
 
On what data are you making that assumption about ARC?
Because that is pretty much the definition? Adaptive Replacement Cache? Pretty much the definition of "a read requested from disk winds up in ARC, so reads of the same data complete quicker"?

Perhaps OpenZFS behaves differently, but by default ARC could potentially use a large portion of free RAM. It all depended on usage patterns; it wasn't guaranteed to use it, but it could.

Perhaps my references are out of date, but vfs.zfs.arc_max is typically the high-water mark for ARC, so ARC may grow to this before it starts aggressively dumping.

You disagree with my opinion, cool. I disagree with your antagonism. I said "may", and as is typical for a "cache" of any kind, it depends on usage patterns. If a system is continually serving the same 4 MB of data in a read capacity, the cache will likely grow to 4 MB (plus/minus) and stay there. A system that does just writes? Probably a very low read cache value. I don't think I ever said "it will"; I said "it could".

And yes, I know ZFS can be used in low-memory systems; heck, pfSense has gone ZFS-based on 4 GB systems because "ZFS BEs are an awesome way to roll back failed upgrades".
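On stock FreeBSD the boot-environment workflow alluded to there looks roughly like this (a sketch; the environment name is made up, and pfSense wraps it in its own tooling):

Code:
# Snapshot the current system into a new boot environment before upgrading
bectl create pre-upgrade
bectl list
# If the upgrade misbehaves, fall back to the old environment and reboot
bectl activate pre-upgrade
shutdown -r now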
 
Because that is pretty much the definition? Adaptive Replacement Cache? Pretty much the definition of "a read requested from disk winds up in ARC, so reads of the same data complete quicker"?
I was referring to your statement that ARC only makes sense on servers.
In fact, I just pulled that data from the desktop I was currently sitting at and some random server I had an ssh session open to, because I knew I would get that data. I've always seen much better ARC performance on desktops than on servers, especially "multi-purpose" servers. My best guess is that the access patterns on a desktop are much narrower and cover a comparably small set of data, whereas on a server data is read much more randomly. So on most servers I've usually seen ~80-90% ARC hit rates, whereas on absolutely any desktop I always have ~95% and higher hit rates after a decent amount of uptime. Of course, if you overrun the cache with huge files this will drop - but for normal day-to-day usage the ARC absolutely shines on desktops, especially because the workload on a typical desktop is usually much more read-oriented than on most servers.
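For reference, if you want to watch your own hit rate: the raw counters behind summaries like the ones above (typically produced by a tool such as sysutils/zfs-stats) are plain sysctls. A quick sketch:

Code:
# Raw ARC hit/miss counters
sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses
# Current ARC size and the configured maximum, in bytes
sysctl kstat.zfs.misc.arcstats.size vfs.zfs.arc_max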
 
Likewise, Linux has a choice of I/O scheduler, but I don't know whether that is a clear win. Benchmarks welcome.
In a previous job, I had such benchmarks. They showed that the choice of IO scheduler has some influence when A SERVER is executing EXTREME WORKLOADS - for example, a server with several hundred disk drives attached, each disk drive having dozens to hundreds of pending IOs. For light workloads (servers with a small number of disks, total queue depths in the single digits, a small number of processes generating IO), the IO schedulers made very little difference, usually not measurable at all.

On the other hand, attempting to tune a lightly loaded system by changing the IO scheduler is a great way to shake loose all the bugs in the IO subsystem, causing lots of outages. In my (not humble) opinion, most performance tuning done by non-experts simply leads to self-inflicted injuries, not to performance improvement. This leads me to one of my favorite jokes: To administer a computer, you need a man and a dog. The man is there to feed the dog. The dog's job is to bite the man if he tries to touch the computer.
 
The FreeBSD base OS is mostly automatically tuned according to the amount of memory, CPU etc. But this is based on a fairly 'generic' use. It's up to you, as a user of that system, to tune it where necessary according to your use-case.

And for heaven's sake, don't tune for the sake of tuning(7). In most cases FreeBSD does a great job tuning itself.
Does the auto-tune happen at install time only?

What if RAM is added or removed?

What if an HDD is replaced by an SSD?

What if an old peripheral is replaced by a new peripheral on a newer bus (PCI to PCIe)?

What if the intended use of the entire hardware changes, from, say, a desktop to a small home server?

For the auto-tune, I'd appreciate a pointer as to who or what does it. The kernel at each boot?

The kernel on hotplug or removal of a component?
 
OK, so that explains a slight startup delay, compared to Linux, until the login prompt or GUI login.
 
Where exactly is the default kern.hz set in FreeBSD 13.1?

In my installation, /boot/loader.conf is an empty file,
and /boot/defaults/loader.conf does not have kern.hz or kern.sched.*,

but sysctl -a lists all of them.
 
If I recall correctly, it is read-only at runtime, i.e. sysctl kern.hz=100 (or an entry in /etc/sysctl.conf) won't work.

Instead it should go in /boot/loader.conf

So if I added it to mine, the file would look like:

Code:
kern.hz="100"
beastie_disable="YES"
autoboot_delay="-1"
 
If I recall correctly, it is read-only at runtime, i.e. sysctl kern.hz=100 (or an entry in /etc/sysctl.conf) won't work.

Instead it should go in /boot/loader.conf

So if I added it to mine, the file would look like:

Code:
kern.hz="100"
beastie_disable="YES"
autoboot_delay="-1"


kern.hz worked for me after I added it to

/boot/defaults/loader.conf
 
In Linux you can switch the preemption model and they say that no preemption is "for servers". I disagree (on limited data, admittedly), a network server should be preemptable. Compute servers, aka things that just provide CPU power should probably be on no preemption. That kind of switch is not in FreeBSD
What would happen when you pull kern.sched.preempt_thresh down to 0? I didn't try, but shouldn't this effectively disable preemption?

Anyway, I agree that with your typical interactive services nowadays, completely disabling preemption doesn't sound like a good idea for servers; it will surely hurt latency. You might consider lowering the threshold if your server is very busy, to improve throughput of CPU work.

Likewise, you might consider tuning the threshold up for a desktop, to gain lower latency (which is arguably much more important on a desktop).
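For experimenting: it's an ordinary runtime sysctl, so it can be flipped back and forth without a reboot. A sketch, with purely illustrative values:

Code:
# Check the current threshold (typically 80 by default with SCHED_ULE)
sysctl kern.sched.preempt_thresh
# Lower it on a pure compute box so fewer threads can preempt (example value)
sysctl kern.sched.preempt_thresh=40
# Raise it on a desktop so more threads can preempt (example value)
sysctl kern.sched.preempt_thresh=200
# To persist a choice across reboots, put the same line in /etc/sysctl.conf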
 