2024: The year of desktop FreeBSD?

I installed SDDM and Plasma.



Comparable; however, packages in the GhostBSD area may differ from the packages (for the same ports) provided by the FreeBSD Project.
Thanks for your help on several of my issues.
May I ask for a few pointers on how to install Plasma on GhostBSD?
I've already installed MATE, and to be honest, I don't like it much - it seems clunky - or maybe I've grown too used to Plasma.
Could I switch my current MATE for Plasma?
Thanks,
 
There’s a problem because not every application can be statically linked, especially desktop ones. Statically linked applications also lack desktop integration.
Mac OS X has been doing it since the times of NeXTSTEP. And it is doing fine. Way better than Linux, actually.
The problem is not a lack of desktop integration, since there was always something like FreeDesktop to define a proper way to integrate applications; but even after establishing it, the two bloated and crap desktop environments (KDE and GNOME) still prefer to do things on their own.
The problem was:
  1. ideology (see Ulrich Drepper);
  2. libc (GNU libc was designed to be static-linking unfriendly for ideology reasons. See #1).
And do you know what is ironic? Ulrich Drepper, who always talked static linking down, is now doing research on the UKL project, to build UNIKERNELS based on Linux. And unikernels are just the next evolution of static linking. LOL

Additionally, if a user needs to install GIMP and Firefox statically linked, they’ll download the same versions of libraries for both apps.
Static linking is NOT about linking the entire library into your binary, but about linking only the object code that contains the routine(s) you need from that library.
It's dynamic linking that forces you to reference the *entire* library. Always.
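To make that concrete, here is a minimal sketch (the libdemo library and all file names are made up for illustration): with a static archive, the linker copies in only the archive members whose symbols are actually referenced, while linking against the shared build records a dependency on the library as a whole.

  /* used.c - the only routine main() actually calls */
  int demo_used(int x) { return x * 2; }

  /* unused.c - never referenced by main() */
  int demo_unused(int x) { return x + 1; }

  /* main.c */
  int demo_used(int x);
  int main(void) { return demo_used(21); }

  /*
   * Hypothetical build with cc (clang or gcc):
   *
   *   cc -c used.c unused.c main.c
   *   ar rcs libdemo.a used.o unused.o        # static archive
   *   cc -static -o app_static main.o libdemo.a
   *       # only used.o is pulled out of the archive; unused.o is dropped
   *
   *   cc -shared -fPIC -o libdemo.so used.c unused.c
   *   cc -o app_dynamic main.o -L. -ldemo
   *       # records a NEEDED entry for the whole libdemo.so, which the
   *       # run-time loader must find and map at startup (via rpath or
   *       # LD_LIBRARY_PATH in this toy setup)
   */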

Unlike Flatpak, where all applications share SDK dependencies,
There's no problem with static linking, as I said.
Also, library sharing is the EXACT reason why Linux sucks at desktop. I don't want to update the entire OS just to update a stupid application.
The Flatpak approach is insane. It's the SAME horrible approach as WinSxS/Visual C++ runtimes on Windows.
And it sucks. It always sucked.

provide desktop integration
As I said before, this is not related to dynamic or static linking.
It's a matter of API and ABI lifecycle. Something that no one in Linux land seems to understand.
After all, they find it funny to link against a bloated libc with versioned symbols.

, and offer easy updates (via both GUI stores and CLI).
With a statically linked binary you could just binary-patch it, you know? Or even replace it entirely if you are lazy.
Are you saying that downloading ONE binary is more difficult than downloading a package, extracting it and doing something else?
And look, since statically linked binaries do not depend on the loader, you wouldn't even see your application suddenly crash if you upgrade it while it's running.
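A minimal sketch of the "replace it entirely" case (all paths are hypothetical): on Unix-like systems rename(2) swaps the new file in atomically, and a process that is already running keeps executing its old, already-mapped executable image, which is the property the post is leaning on.

  /* replace.c - install a new copy of a single-file program by writing it
   * next to the old one and renaming it into place. rename(2) is atomic,
   * and an instance that is already running keeps using the old, mapped
   * image. Paths are made up for this sketch. */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void) {
      /* step 1 (not shown): download/copy the new version to app.new */

      /* step 2: atomically swap it in */
      if (rename("/usr/local/bin/app.new", "/usr/local/bin/app") != 0) {
          perror("rename");
          return EXIT_FAILURE;
      }
      puts("upgraded: the next start runs the new binary");
      return EXIT_SUCCESS;
  }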

The tar package approach has the same issue. One would need to extract 10+ tar packages, which would take up more than 10 GB, assuming they’re also available linked against musl libc, as used by distributions like Void and Alpine.
Linux distributions nowadays are as huge as Mac OS X.
And Mac OS X forces you to statically link your applications, or mixed-link them if you're sharing code when distributing a lot of them in a single package, like an application suite.

Upgrading each package individually would be a hassle.
How so? I already answered you about that.
In fact, it's way simpler than with dynamically linked binaries, since you don't have to deal with things like dependency hell and circular dependencies.

One thing to understand is that Linux distributions, despite having a command-line interface, are essentially a set of bin/coreutils glued together to mimic MINIX (a Unix-like OS for Intel). Over time, GUIs have been added, but the system remains fragmented across distributions, leading to duplicated workloads. Everything is dumped into /bin, even in Gentoo (by default). And there are also disagreements on what UNIX is. What the Open Group considers UNIX isn't sufficient for a lot of people (some want everything to be a file, including USB devices). I'm sure a lot of people won't consider macOS a real UNIX, but will consider AIX and HP-UX. I know a Chinese Red Hat fork has UNIX 03 certification too.
This has nothing to do with statically linked binaries.
In fact, it has to do with the shitty decision to dynamically link them, since a statically linked binary can be distributed WITHOUT any worries on any Linux-based operating system out there, without any regard for dependencies or the libc it uses.
This is not valid for dynamically linked binaries, since they are TIED to the environment they were built in.
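A small sketch of that portability claim (nothing project-specific; the build flags are just the usual ones): the same hello.c produces either a self-contained binary or one tied to the loader and libc of the system it was built on.

  /* hello.c */
  #include <stdio.h>

  int main(void) {
      puts("hello");
      return 0;
  }

  /*
   * Hypothetical builds:
   *
   *   cc -static -o hello_static hello.c
   *       carries its own copy of the libc code it uses, so it runs on any
   *       Linux system with a recent-enough kernel, whatever libc the
   *       distribution ships
   *
   *   cc -o hello_dynamic hello.c
   *       records a program interpreter (the run-time loader) and a NEEDED
   *       entry for the system libc, so it only runs where a compatible
   *       loader and libc are installed
   */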

Now application port wise, in BSDs, or even HaikuOS/RedoxOS, there’s a clear separation between the base system and userland.
This does not solve the problem.
In fact, it could be considered a weakness.
Think about having two copies of LLVM, one in base and another one as a dependency of Mesa.
Are you saying that this kind of bloating is better than having a smaller and more compact statically linked binary?

The purpose of Flatpak and Snaps is to run GUI applications on top of a predictable, unified base system. Sure, it has pitfalls and bugs for now, but the last time I checked Silverblue, it was a solid system.
The purpose of Flatpak and Snap, just like the purpose of Docker, was to circumvent the dynamic linking problem.
And they are circumventing it in a HORRIBLE way, since packaging an ENTIRE environment just to be sure that your application won't break is INSANE, and defeats the point of dynamically linking it in the first place.

Dynamic linking has only ONE sane use case: plugins. STOP.
And that's why mixed linking exists: it helps you deal with exactly that case.
Other than that, it's just a technically inferior, insecure and slower solution.
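For the plugin case mentioned above, a minimal sketch of what run-time loading looks like with the POSIX dlopen() interface (the plugin path and the plugin_run symbol are placeholder names); the rest of the host program does not need to share any libraries with the plugin, which is the mixed-linking setup being described.

  /* host.c - load a plugin at run time and call one entry point.
   * "./plugin.so" and "plugin_run" are placeholders for this sketch.
   * Build: cc -o host host.c -ldl   (the -ldl is needed on glibc Linux;
   * on FreeBSD dlopen() lives in libc) */
  #include <dlfcn.h>
  #include <stdio.h>

  int main(void) {
      void *handle = dlopen("./plugin.so", RTLD_NOW | RTLD_LOCAL);
      if (handle == NULL) {
          fprintf(stderr, "dlopen: %s\n", dlerror());
          return 1;
      }

      /* Look up the entry point exported by the plugin. */
      int (*plugin_run)(void) = (int (*)(void))dlsym(handle, "plugin_run");
      if (plugin_run == NULL) {
          fprintf(stderr, "dlsym: %s\n", dlerror());
          dlclose(handle);
          return 1;
      }

      int rc = plugin_run();
      dlclose(handle);
      return rc;
  }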

Proprietary applications were never the target, though it became a side effect.
Ideology is the main reason for these idiotic "solutions", like Flatpak.
Thank God Rust and Go are reversing this stupid trend.


 
macOS apps are not generally statically linked. They are dynamically linked, but Apple almost never breaks binary compatibility of the public APIs, and apps usually link only against Apple-provided libraries, as far as I know. It's not like every app installs a dozen new .dll/.so files on your system, and that's what makes the big difference. It's what makes Mac apps (at least traditionally) standalone and executable from anywhere.

I fear, however, that Apple is currently destroying this design philosophy with the sandboxing. At least the application data folders have become way more complex now, and I wouldn't like to troubleshoot them anymore, something that has always been super easy. Strong versioning is key for Apple: only allow links to be redirected to bug fixes, never feature enhancements. If you wrote your code for 1.0, you'll continue to link against 1.0, barring some bug fix (1.0.1). Your code should never be silently upgraded to link against 1.1. Microsoft also gets this right with the global assembly cache. DLL hell was created by Szyperski, who still works for Microsoft.
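For comparison, the ELF world expresses that same "bug fixes only, never feature upgrades" policy through sonames; a hypothetical libfoo sketch (names and version numbers made up):

  /* foo.c - hypothetical library used in the sketch below */
  int foo_answer(void) { return 42; }

  /*
   * Hypothetical build:
   *
   *   cc -shared -fPIC -Wl,-soname,libfoo.so.1 -o libfoo.so.1.0.1 foo.c
   *   ln -s libfoo.so.1.0.1 libfoo.so.1   # what the run-time loader resolves
   *   ln -s libfoo.so.1     libfoo.so     # what "cc ... -lfoo" uses at build time
   *
   * A program linked with -lfoo records only "libfoo.so.1" as NEEDED, so a
   * drop-in 1.0.2 (bug fix, same soname) is picked up transparently, while an
   * incompatible 2.x release ships as libfoo.so.2 and existing binaries never
   * silently start linking against it.
   */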

For my personal taste, I've only three options: use a Linux distribution that throws everything into /bin and /lib and hope the dependencies resolve, use Silverblue (or any immutable distribution), or use FreeBSD if the hardware is well supported and run apps on top of the base system without tampering with it, with snapshots. I chose the latter.
 
macOS apps are not generally statically linked. They are dynamically linked, but Apple almost never breaks binary compatibility of the public APIs, and apps usually link only against Apple-provided libraries, as far as I know. It's not like every app installs a dozen new .dll/.so files on your system, and that's what makes the big difference. It's what makes Mac apps (at least traditionally) standalone and executable from anywhere.
Mac OS X applications are statically linked or mixed-linked (mostly static, with dynamic-linking exceptions).
When you have to distribute shared libraries on Mac OS X there is the Framework bundle, but as I said, this was designed for multiple applications distributed in the same pkg/mpkg, like a suite of products.

Here's the reference:

Fully dynamically linked executables were never the way to go on Mac OS X.

I fear, however, that Apple is currently destroying this design philosophy with the sandboxing. At least the application data folders have become way more complex now, and I wouldn't like to troubleshoot them anymore, something that has always been super easy. Strong versioning is key for Apple: only allow links to be redirected to bug fixes, never feature enhancements. If you wrote your code for 1.0, you'll continue to link against 1.0, barring some bug fix (1.0.1). Your code should never be silently upgraded to link against 1.1.
These kinds of restrictions are the reason for dynamic linking's failure.
If I have to avoid upgrading a library, why leave it externally referenced in the first place?

Microsoft also gets this right with the global assembly cache. DLL hell was created by Szyperski, who still works for Microsoft.
Archiving a copy of *every* shared library in use is just what portupgrade did when you upgraded a port that required a shared library, by saving it in /usr/local/lib/compat.
This is still dependency hell. But with even more complexity.
The real solution is to stop deploying libraries. Just deploy executables. Libraries, like headers, should only be installed when you want to develop/compile something.

For my personal taste, I've only three options: use a Linux distribution that throws everything into /bin and /lib and hope the dependencies resolve, use Silverblue (or any immutable distribution), or use FreeBSD if the hardware is well supported and run apps on top of the base system without tampering with it, with snapshots. I chose the latter.
Silverblue is another poor attempt to solve this problem. A problem created by the same people who are now developing crap and selling it as a "solution".
 
"Windows are for desktops and unixies are for servers. People who use windows in servers are idiots, and people who use unixies in desktops are geeks."
IDK who said it, but I like it.
'Twas I. The exact line was:

"FreeBSD and Linux are Servers. Windows and OSX are desktop operating systems. If you use windows or OSX as a server, you're a fool. If you use FreeBSD or Linux as a desktop, you're a geek"
 
One thing to understand is that Linux distributions, despite having a command-line interface, are essentially a set of bin/coreutils glued together to mimic MINIX (a Unix-like OS for Intel).
Actually MINIX wasn't for Intel. It was freeware and Intel grabbed it for their spying issues.
But that's not the point.
Now, what were you actually trying to say? Or maybe you got triggered because FreeBSD's only priority is servers and everything else is second class? I'm not triggered; it is what it is. The server is a second-class citizen in macOS.
I can't read that stuff anymore. You're not actually talking about desktop and server. Desktop on Unix was fine already back in 1992; it was called X-terminal, and there was no problem with desktop vs. server, because both blended together: the X-terminal was actually the server, and the application server was the client.

So, what You're actually talking about is not desktop (because desktop is still as fine as it was back then), but Laptop.

Because Laptop, to the contrary, comes with a lot of separate issues, before and behind the screen.
Behind the screen we talk about hibernation, bluetooth, touchpads, cameras, i.e. very specialized hardware demands. Furthermore, while standard computer hardware (disk, network, etc.) evolves comparatively slowly and follows standards that build upon each other, this laptop hardware is continuously changing and doesn't seriously follow standards; in fact, every manufacturer hacks their own thing mc-cheapo time-to-market wise, writes a device driver and throws that at Microsoft.
Then also, standard computer operators who happen to run unix will usually configure their system - depending on skill, from config files via writing scripts (rc.d) to patching the actual code. This is the difference before the screen: people who complain about desktop (and actually mean laptop) expect "integration" instead of doing it themselves.

So there are two major differences, and both are practically unmanageable (unless you have a staff like Apple and aim at making big money).

And honestly, I do not want such "integration", because it will almost necessarily change the system into a heap of unintelligible haywire, trying to support all kinds of strange hardware and at the same time trying to think for the user (there is already too much of that).
 
Actually MINIX wasn't for Intel. It was freeware and Intel grabbed it for their spying issues.
But that's not the point.

I can't read that stuff anymore. You're not actually talking about desktop and server. Desktop on Unix was fine already back in 1992; it was called X-terminal, and there was no problem with desktop vs. server, because both blended together: the X-terminal was actually the server, and the application server was the client.

So, what You're actually talking about is not desktop (because desktop is still as fine as it was back then), but Laptop.

Because Laptop, to the contrary, comes with a lot of separate issues, before and behind the screen.
Behind the screen we talk about hibernation, bluetooth, touchpads, cameras, i.e. very specialized hardware demands. Furthermore, while standard computer hardware (disk, network, etc.) evolves comparatively slowly and follows standards that build upon each other, this laptop hardware is continuously changing and doesn't seriously follow standards; in fact, every manufacturer hacks their own thing mc-cheapo time-to-market wise, writes a device driver and throws that at Microsoft.
Then also, standard computer operators who happen to run unix will usually configure their system - depending on skill, from config files via writing scripts (rc.d) to patching the actual code. This is the difference before the screen: people who complain about desktop (and actually mean laptop) expect "integration" instead of doing it themselves.

So there are two major differences, and both are practically unmanageable (unless you have a staff like Apple and aim at making big money).

And honestly, I do not want such "integration", because it will almost necessarily change the system into a heap of unintelligible haywire, trying to support all kinds of strange hardware and at the same time trying to think for the user (there is already too much of that).
I’m not referring to desktop as in desktop computers (the physical, typically non-portable personal computers with standard components like monitors, keyboards, and towers, where the focus is on hardware form factor and performance for general computing tasks), but rather in the sense of general-purpose consumer use cases beyond just running a browser. This includes industry-standard tools like Adobe Premiere, Final Cut Pro, Pro Tools with professional audio interfaces, control surfaces, and plugins, found in macOS or Windows (consumer operating systems). These operating systems cover advanced applications like Microsoft Office (e.g., macros, complex Excel functions), engineering tools like ANSYS, COMSOL Multiphysics, or CATIA (for simulations such as finite element analysis and computational fluid dynamics), financial software like Bloomberg Terminal, Thomson Reuters Eikon, or FactSet (used in quantitative finance), and scientific tools like Schrödinger Suite, Gaussian, or proprietary molecular modeling and drug discovery software, including CUDA-based applications. These users are far from tech-illiterate; many run Alpine, Arch, Void, or Gentoo in virtual machines alongside their Windows or macOS systems, build their own applications with Python, Rust, and C/C++, and bring these laptops into work environments. They choose these platforms for their broad software support, development tools, familiarity with Unix utilities like `ls`, `cd`, `grep`, `sed`, `awk`, `vi`, etc., and out-of-the-box hardware integration. How do I know? Because I used to be one of them, and I know plenty of such people at my workplace.

Personally, I stick with BSD systems mainly to avoid GPL licensing issues interfering with making technically sane decisions, as tools like ZFS on root work out of the box, and I don’t need extra repositories for media codecs or non-free firmware - everything is available in ports. I also prefer the separation of userland from the base system, something that Linux distributions are trying to replicate with immutable models. My point was: not everyone confuses personal choices with technical superiority. Not supporting commercial software is fine but not having laptop drivers that other BSDs have is a sign of something else. And I don't use BSDs on servers, because I don't have one - I use them on bare metal, personal computers, with no dual boot, no secondary operating system. I am not religious about operating systems, programming languages or choice of software - the value of a tool comes from its usability and ease of use.
 
No. I was asking what you were trying to say. You have some unverifiable claims, and some are partially true.

Intel incorporated MINIX into their Management Engine (ME) for specific tasks, including security and remote management of hardware. This decision drew criticism due to potential privacy concerns, as MINIX runs at a low level on Intel processors, with the capability for remote access and control, which some feared as a backdoor.

There are industry standards for laptop hardware development, but they are guidelines rather than strict requirements.

It's debatable whether "integration" can lead to a complex and unmanageable system. It's not a scientific fact.

And none of these justify the inferior hardware support, especially when OpenBSD, Linux-libre and NetBSD have support for the same hardware and FreeBSD lags behind. And none of them disprove the claim that "Server is a tier 1 priority for FreeBSD." Every inefficiency gets handled with "Look at Open/Net/some Linux distributions, they are doing it worse."

I also don't know why people get triggered by the fact that Server is top priority for FreeBSD (both developer and community alike).

For me, true value lies not in the dogma of operating systems, programming languages, or software choices, but in the usability and ease that a tool brings to its user. After all, debating the merits of obscurity is much more productive than, say, actually solving a problem
 
Server is top priority for FreeBSD (both developer and community alike).
You have repeated this a number of times. Many of us agree (this is the way it should be) so we don't feel the need to comment. Loads of consumer platforms exist for desktop users; but relatively few are suitable for servers.

That said, many of us here also have enough experience (and buy the correct hardware) to allow FreeBSD to "serve" us as a decent workstation operating system too. This is how it always has been, and always will be. We are happy to let Linux experiment with trying to break through into a saturated (and dying) market, and we are also happy to let it act as a driver dump from which we can borrow some of the most common hardware support. Everyone's a winner.
 
Intel incorporated MINIX into their Management Engine (ME) for specific tasks, including security and remote management of hardware. This decision drew criticism due to potential privacy concerns, as MINIX runs at a low level on Intel processors, with the capability for remote access and control, which some feared as a backdoor.
I'm failing to see how running an entire operating system in an area like the CPU's chipset, which is way more privileged than ring 0 (where the kernel of the OS you installed is running), can be considered a security feature.
 
I’m not referring to desktop as in desktop computers (the physical, typically non-portable personal computers with standard components like monitors, keyboards, and towers, where the focus is on hardware form factor and performance for general computing tasks), but rather in the sense of general-purpose consumer use cases beyond just running a browser. This includes industry-standard tools like Adobe Premiere, Final Cut Pro, Pro Tools with professional audio interfaces, control surfaces, and plugins, found in macOS or Windows (consumer operating systems).
Ahh yeah! Now these things have a name, they're called Workstations. And they are a traditional domain of Unix - from times when the Windows/PC stuff had neither the compute power nor the graphics to run them.

I am certain FreeBSD does nothing to push these use-cases away. The problem is with the providers of the software: as soon as Windows became powerful enough, and omnipresent anyway, they went the easy way.
For Digidesign (ProTools, nowadays Avid) it gets obvious from the history of the shop - they just didn't have a Unix, and neither did their customers.

There is also a certain design issue: a Unix system already manages itself; it is a real OS, similar to a mainframe. Windows, OTOH, comes from a tradition of simple program loaders; it doesn't manage much, and most is left for the application to manage. So if you develop something delicate, you have to do it all on your own - which can be an advantage, because you do not need to cooperate with an OS that has its own ideas of systems management.

Not supporting commercial software is fine
How would you support commercial software? Make the OS a closed shop where users are not allowed to become root?

but not having laptop drivers that other BSDs have is a sign of something else.
Which ones?
On my laptop I don't have drivers for the camera (didn't bother to figure it out yet), bluetooth (ditto) and the internal mic (sadly that one doesn't seem to exist). Everything else works. And it is not one of the laptops usually recommended here.
 
No. I was asking what you were trying to say. You have some unverifiable claims, and some are partially true.
That is commonly called "educated guesses". ;)

Intel incorporated MINIX into their Management Engine (ME) for specific tasks, including security and remote management of hardware. This decision drew criticism due to potential privacy concerns, as MINIX runs at a low level on Intel processors, with the capability for remote access and control, which some feared as a backdoor.
If this were serious, the thing could be switched off by those who do not need it. Fact is, it cannot.

So, educated guess again: the wet dream behind all this stuff is to take the user out of the control loop. In other words, have the hardware manufacturers, software manufacturers and content providers work together and the user have no influence on what the machine does (think "digital rights management" etc.)

There are industry standards for laptop hardware development, but they are guidelines rather than strict requirements.
Having made my own laptop operational (the patches are in the bug tracker), I know firsthand what these standards are worth.

It's debatable whether "integration" can lead to a complex and unmanageable system. It's not a scientific fact.
I see - nowadays people need "scientific facts" first to prove that they get their hands dirty when playing in the mud. Science is the religion of today, and you can't do anything that your priest has not condoned.

There were times when I happened to know every file on the system and its purpose - and I consider that fact a valid measure of manageability.
 
Intel incorporated MINIX into their Management Engine (ME) for specific tasks, including security and remote management of hardware. This decision drew criticism due to potential privacy concerns, as MINIX runs at a low level on Intel processors, with the capability for remote access and control, which some feared as a backdoor.
you got any links to back THAT up?


I also don't know why people get triggered by the fact that Server is top priority for FreeBSD (both developer and community alike).
I've said this before: BSDs are a DIY thing. If you want, you can set it up as a server. FreeBSD will run the software, and be limited only by the available hardware. If you want, you can set it up as a desktop - same thing. Yeah, it takes time and effort to set either one up. But no, that doesn't mean that the 'server' role is a priority at all. If you bother to read what the Foundation publishes, you'll discover that 'server' as such is NOT a priority there - but flexibility is. Yeah, 'Power to serve' is FreeBSD's motto - but 'serve' does NOT necessarily mean 'Web Server' or 'File Server'. If a DE serves my needs, it serves my needs - and that's what 'serve' in FreeBSD's 'Power to serve' motto is about.
 
Not supporting commercial software is fine but not having laptop drivers that other BSDs have is a sign of something else.

Which drivers are those? I am only aware of a wifi driver in OpenBSD.

WiFi is being pushed by the FreeBSD Foundation now, so there is progress expected there.
 
If you bother to read what the Foundation publishes, you'll discover that 'server' as such is NOT a priority there

FAQ: "FreeBSD is a versatile operating system that excels in various use cases. It is particularly well-suited for server environments, where its stability and performance make it a popular choice for web hosting, databases, and networking applications. FreeBSD’s robust security features also position it as a strong candidate for firewall and security appliance deployments. Beyond servers, FreeBSD can be tailored to function in specialized environments, including embedded systems and game console devices. Its adaptability, reliability, and open-source nature make FreeBSD a compelling choice for a wide range of computing needs."

Please note that there is no "desktop" in this paragraph.
 
And none of these justify the inferior hardware support, especially when OpenBSD, Linux-libre and NetBSD have support for the same hardware and FreeBSD lags behind.
I understand that you're frustrated by the lack of support from FreeBSD for some of your desktop hardware. That's fine, I'm also frustrated sometimes by the lack of software or hardware support by various developers and manufacturers.

Now, to put things in a broader context, please consider the following:

1. I use FreeBSD primarily as a desktop operating system, and have been using it as my platform of choice for years. Not OpenBSD, not NetBSD. Why? Not because I don't like them (okay, I don't really enjoy using OpenBSD, but would happily use NetBSD more often if I could). Mostly because both hardware and software support, for the machines I use and the software I want to run, have proven to be way better on FreeBSD than on any other BSD. I've tried many times to run OpenBSD, NetBSD, even DragonFly, and still give them a spin from time to time out of curiosity and to keep an eye on their evolution. None of them gave me a satisfactory desktop experience so far: broken or missing graphics drivers, higher CPU load and battery drain on laptops, lacking audio capabilities and drivers, a noticeably slower system, and about half the (free) software I'd like to run not available. Sure, there is the occasional piece of hardware that doesn't work on FreeBSD but works elsewhere (for example, OpenBSD and NetBSD support my laptop's SD card reader, but FreeBSD doesn't) and I'd really like to have OpenBSD's suspend to disk; however, the opposite situations are common and the advantages don't outweigh the disadvantages at all for me. So, I really wouldn't say that FreeBSD, as a whole, lags behind other BSDs. It lags behind Linux for sure, but you can't really compete when you only have a fraction of the human and financial resources available.

2. The FreeBSD Foundation is committed to improving desktop usability. They fund developers working on wireless and graphics drivers, for example. Just have a look at their Projects page:
The FreeBSD Foundation said:
  - Initiative to develop a graphical installation interface for FreeBSD
  - Enhancing FreeBSD's audio stack to improve the support for modern audio hardware and software applications
  - WiFi update - Intel drivers and 802.11ac
... etc. As you can see, pretending that the FreeBSD community doesn't care about desktop computing is just false.
 
I'm afraid I already know the answer, but please enlighten me: How do you think a CPU works if not with an ‘entire operating system’?
I think you should carefully read what you are quoting, and try to understand the meaning of the individual words, before writing $RANDOM posts. It's not that difficult, you know. School should have already taught you that.
I'm not saying that there shouldn't be any software at CPU level; I'm arguing that this can HARDLY be considered a security feature.

If you read carefully the post you're quoting, you'll notice I enlarged only ONE specific word from the text I quoted.
 
Ahh yeah! Now these things have a name, they're called Workstations. And they are a traditional domain of Unix - from times when the Windows/PC stuff had neither the compute power nor the graphics to run them.

I am certain FreeBSD does nothing to push these use-cases away. The problem is with the providers of the software: as soon as Windows became powerful enough, and omnipresent anyway, they went the easy way.
For Digidesign (ProTools, nowadays Avid) it gets obvious from the history of the shop - they just didn't have a Unix, and neither did their customers.

There is also a certain design issue: a Unix system already manages itself; it is a real OS, similar to a mainframe. Windows, OTOH, comes from a tradition of simple program loaders; it doesn't manage much, and most is left for the application to manage. So if you develop something delicate, you have to do it all on your own - which can be an advantage, because you do not need to cooperate with an OS that has its own ideas of systems management.


How would you support commercial software? Make the OS a closed shop where users are not allowed to become root?


Which ones?
On my laptop I don't have drivers for the camera (didn't bother to figure it out yet), bluetooth (ditto) and the internal mic (sadly that one doesn't seem to exist). Everything else works. And it is not one of the laptops usually recommended here.
Well, both Windows and macOS can handle desktop use cases (browsing, listening to music, watching videos, etc.) and workstation demands (resource-demanding tasks). And there are a lot of commonalities between workstations and desktops - both are targeted at individual users (as opposed to servers and mainframes).

I know this. I just think this gatekeeping is non-technical
 
That is commonly called "educated guesses". ;)


If this were serious, the thing could be switched off by those who do not need it. Fact is, it cannot.

So, educated guess again: the wet dream behind all this stuff is to take the user out of the control loop. In other words, have the hardware manufacturers, software manufacturers and content providers work together and the user have no influence on what the machine does (think "digital rights management" etc.)


Having made my own laptop operational (the patches are in the bug tracker), I know firsthand what these standards are worth.


I see - nowadays people need "scientific facts" first to prove that they get their hands dirty when playing in the mud. Science is the religion of today, and you can't do anything that your priest has not condoned.

There were times when I happened to know every file on the system and its purpose - and I consider that fact a valid measure of manageability.
Sorry, I won't continue any discussion this way, if asking for reliable evidence is what counts as "discussion" and an educated guess is considered more reliable.
 
I know this. I just think this gatekeeping is non-technical
What gate-keeping? Windows and Apple certainly are guilty of non-technical gate-keeping... Back in the 2000s, it was impossible to install any kind of database application (other than MS Access) on consumer-level Windows or Mac. But install Linux or FreeBSD on the same hardware - and wow, you can install the whole A(pache)M(ysql)P(hp) stack on it (actually running the stuff was limited by hardware specs).

As things stand now, an up-to-date UNIX-based DE can provide the exact same functionality as Windows and Mac - browsing the web (even YouTube), listening to music, watching videos, and more. When it comes to doing the same basic tasks and providing convenience, Open Source reached feature parity with Win/Apple a while ago.

BSDs are small, volunteer-driven projects, so they don't have the resources necessary to support 'every piece of hardware imaginable'. Yeah, even Bluetooth and wi-fi did not get the level of attention needed to make them viable - there were other things on the devs' plates to worry about, while not exactly getting paid for the effort.
 
Which drivers are those? I am only aware of a wifi driver in OpenBSD.

It's strange: FreeBSD now has a much more sophisticated "layer" to allow for modern WiFi hardware at the highest speeds.

However, OpenBSD has a "quick n dirty" driver that is working *now*, but only at 802.11g speeds.

Which approach is best? Still can't decide!
 