2024: The year of desktop FreeBSD?

Yeah, try finding SPARC- and POWER-based stuff... Those architectures have been off the market for a while. Even Xeons and Epycs are x86_64...
It's not (yet) that difficult to find them, actually. Walk into the nearest bank. They still have them.
The market has shrunk considerably, but it's not like they don't exist anymore.
SPARC is dead due to Oracle, but IBM still invests big bucks in POWER.

And aarch64, which is more efficient than SPARC/POWER, is so readily available that it's even in your phone, as well as in Ampere servers.
Aarch64? In servers? These days?
Dude, stop wasting time with Steam and go see a real data center, because you really don't know what you are talking about. Seriously.

If you run the exact same benchmark tools on SPARC/POWER (provided you can find the stuff at all) and on aarch64, the former would quickly say uncle.
ROTFLMAO
 
However, the opposite happened. Linux has faded into the tail end of my bootloader and I daily drive FreeBSD 14 with i3wm for all my general computing needs. I've found the experience extremely smooth and the performance snappy. And FreeBSD uses way fewer resources as well. I am naturally lucky in that all the requisite drivers came with the kernel so everything on the laptop worked out of the box.
you got lucky w/ all your hardware being supported out of the box, because that wasn't my experience at all...
Despite this, I've noticed that a lot of folks don't bother with FreeBSD as a general desktop OS. I would be very interested in hearing your reasons why people do or don't daily-drive FreeBSD. This question is both out of general curiosity and as background for the article. Are you dual-booting with something else, for example?
my guess is it's due to a lot of software not being up to date..
 
you got lucky w/ all your hardware being supported out of the box, because that wasn't my experience at all...

my guess is it's due to a lot of software not being up to date..
I've had a few machines where everything worked "out of the box" with FreeBSD. I have one laptop where everything works, even suspend and resume. I've also got external audio, web cameras, HDMI capture cards, gamepads, and other hardware that works nicely. The biggest issue has been wifi, so I just ordered a bunch of supported cards, pulled out the unsupported ones, and replaced them. My R1 Alienware Steam Machine had supported wifi, which was cool. :D
 
It's not (yet) that difficult to find them, actually. Walk into the nearest bank. They still have them.
The market has shrunk considerably, but it's not like they don't exist anymore.
yeah, fat chance a bank will actually sell that to you for your toy benchmarking project. They will want adequate compensation, such as your mortgage, your inheritance, and the rights to your firstborn, and even that won't be enough to get a sniff at their asking price.

Aarch64? In servers? These days?
yep: Gigabyte has some nice new ones for sale, and RedHat/IBM do support those: https://www.gigabyte.com/us/Enterprise/ARM-Server
Dude, stop wasting time with Steam and go see a real data center, because you really don't know what you are talking about. Seriously.

ROTFLMAO
What cave have you been living in? Did you ever peek out? The real world has moved on since SPARC/POWER were the hot stuff a generation or so ago. Oh, and I don't play on a Steam Deck – there's enough other stuff to keep me entertained. :P
 
yeah, fat chance a bank will actually sell that to you for your toy benchmarking project. They will want adequate compensation, such as your mortgage, your inheritance, and the rights to your firstborn, and even that won't be enough to get a sniff at their asking price.
My toy benchmarking?
Look, man, the one talking about benchmarking while running low-end hardware is you, not me.
I don't need any benchmark to know you're talking about something you don't even understand.

yep: Gigabyte has some nice new ones for sale, and RedHat/IBM do support those: https://www.gigabyte.com/us/Enterprise/ARM-Server
Sure, sure. When you see them in production running something as serious as what POWER servers currently run – like the entire stack behind most banks' account services – let me know.

What cave have you been living in? Did you ever peek out? The real world has moved on since SPARC/POWER were the hot stuff a generation or so ago. Oh, and I don't play on a Steam Deck – there's enough other stuff to keep me entertained. :p
I live in the real world, and I've already proven that I know what modern infrastructure runs and how. You're the one thinking that your phone, which can't even run a proper Java application without draining the battery, is faster than some of the most scalable hardware on the market.
You're the one suffering from serious delusions, not me. Projecting your delusions onto others is what Jung called transference.


That said, I'm plonking you.
I've had enough of reading your crap, since you've never set foot in a data center and you clearly don't have any real experience with mission-critical workloads.
Go get a real job in IT and then you can talk about how the world works.
Until then, you'll be just another armchair expert who talks big because he managed to install Arch Linux by following the wiki.
 
Sure, sure. When you see them in production running something as serious as what POWER servers currently run – like the entire stack behind most banks' account services – let me know.
Amazon Web Services, for starters: they have whole farms dedicated to Ampere/ARM64/aarch64 servers.
I live in the real world, and I've already proven that I know what modern infrastructure runs and how. You're the one thinking that your phone, which can't even run a proper Java application without draining the battery, is faster than some of the most scalable hardware on the market.
Older SPARC processors don't even run at more than 500 MHz; my phone's Exynos 9611 runs at around 2 GHz... oh, and take a look at the amount of RAM supported by the SPARC machines you remember from the stone age, when they were the hot new stuff. My phone has the same amount today, and it's faster RAM, too.
 
… Right now, if someone tries FreeBSD on their laptop and struggles with something basic like Wi-Fi or sound, they'll move on to Linux or something else. …

– or graphics.

Where there's a wish for quick, relatively non-complicated discovery of whether FreeBSD will suit someone's hardware, I have begun recommending:
  • 15.0-CURRENT.

A person who is pleased by CURRENT might choose to move down – to STABLE or RELEASE, with their relative complexities.
 
– or graphics.

Where there's a wish for quick, relatively non-complicated discovery of whether FreeBSD will suit someone's hardware, I have begun recommending:
  • 15.0-CURRENT.


A person who is pleased by CURRENT might choose to move down – to STABLE or RELEASE, with their relative complexities.
I would not recommend -CURRENT to any developers who are trying FreeBSD for the first time. -CURRENT is a development branch that can contain unexpected bugs and missing features, such as freebsd-update, which may create a negative impression for new users. Many system developers are accustomed to macOS or Windows, where things function as expected.

If a first-time user with a strong understanding of systems programming encounters hardware issues, they are likely to seek alternatives like Linux or other options. If they request assistance, I would advise them that the development branch is suitable only for checking hardware compatibility, not for daily use, as it may present unforeseen challenges.

But it also depends. Some developers are passionate about UNIX/BSD roots and may be willing to burn their feet on lava. I know, because that's what many did for Plan 9 (the everything-is-a-file philosophy), Haiku (BeOS lovers), and Redox OS (Rust enthusiasts).
 
All this excitement makes a person impatient.

FreeBSD evangelism

27th March:

… The whole thing will be available for free with a 9 month delay, at which point I'll provide a translation …

I'm quite certain that 9 is Finnish for 6, which would mean lifting the embargo tomorrow.

Not the entire issue; just your four pages?
 
… -CURRENT … funny bugs …

For quick, relatively non-complicated discovery of whether FreeBSD will suit someone's hardware:
  • in my experience, CURRENT is better.


For the use case above, it no longer makes sense for me to recommend RELEASE. Absence of packages from quarterly (waits of up to three months); provision of GPU-related packages that are officially not compatible with the officially recommended version of the OS; … from the point of view of a newcomer, it's quite bonkers.

… bad impression to the first time users. …

For the use case above, far too frequently:
  • the bad first impression is from the combination of RELEASE + quarterly.
Reality check?
 
For quick, relatively non-complicated discovery of whether FreeBSD will suit someone's hardware:
  • in my experience, CURRENT is better.


For the use case above, it no longer makes sense for me to recommend RELEASE. Absence of packages from quarterly (waits of up to three months); provision of GPU-related packages that are officially not compatible with the officially recommended version of the OS; … from the point of view of a newcomer, it's quite bonkers.



For the use case above, far too frequently:
  • the bad first impression is from the combination of RELEASE + quarterly.
Reality check?
That's just anecdotal. According to FreeBSD, a -CURRENT user is on his own (not supported in the FreeBSD forums either). Besides, -CURRENT snapshots are built with a lot of debugging stuff enabled that will hurt performance a lot. That is suitable for anyone doing active development on base. It's not suitable for actually using the system. One can turn it off during a world build, but seriously, compiling the operating system from source to upgrade doesn't sound like a welcoming experience. It rarely makes any sense to use -CURRENT. GhostBSD uses -STABLE, but then again, their forums support it (and even provide binary updates).

Pre-RELEASE versions of FreeBSD, not intended for use in production environments:

CURRENT – the main branch, the core of development

STABLE – branched from CURRENT, long-term preparations for release engineering

release engineering – ALPHA, BETA, release candidates (RC) – branched from STABLE.

Uppercase has special meaning. For example:

a first beta release is not a (production) RELEASE.

The word CURRENT is sometimes a source of confusion:

if you are looking for the current version of FreeBSD, you most likely want a RELEASE version (see above) – not CURRENT – CURRENT has special meaning in the development process.
 
… GhostBSD uses -STABLE, but then again, their forums support it (and even provide binary updates).

GhostBSD began using packages for the OS long ago.

More recently, GhostBSD switched to pkgbase. <https://forums.freebsd.org/profile-posts/5557/>

compiling the operating system from source to upgrade doesn't sound like a welcoming experience.

Compiling is unnecessary.

CURRENT is packaged. Officially.

… debugging stuff enabled that will hurt performance a lot. … It's not suitable for actually using the system. …

I actually do use GENERIC, very frequently, with no noticeable drop in performance (compared to the packaged GENERIC-NODEBUG kernel).

I'll write more about this at a later date. Interim stuff will be elsewhere.

… According to FreeBSD, a -CURRENT user is on his own …

Not really.
 
yeah, fat chance a bank will actually sell that to you for your toy benchmarking project. They will want adequate compensation, such as your mortgage, your inheritance, and the rights to your firstborn, and even that won't be enough to get a sniff at their asking price.
astyle: behave. Sam The Ripper has a point here, although he pushes it over the top. As long as I worked with POWER machines – and I did so for twenty years – they were at least ten years ahead of what Intel does, hardware-wise.

Most people working with PC-based technology nowadays have never had a look into that kind of tech, which came down from the original mainframes, combined with what Seymour Cray did, i.e. supercomputers. Supercomputers then slowly moved to Linux arrays with extreme parallelization – but that is not exactly what banks need.

And I was working on migrating the big banks from their mainframes to client/server tech (POWER/SPARC), so that they could go onto the Internet. I have seen both worlds – and chances were that when I found a bug in AIX, the same bug existed in FreeBSD too, usually already fixed. But with the hardware it was the other way round – e.g. they had hotplug PCI and fully dynamic VMs long ago already.
How do you add/remove CPUs from your bhyve guests in-flight, on demand? It doesn't work yet. It did work on POWER some 15 years ago.
 
The problem was:
  1. ideology (see Ulrich Drepper);
  2. libc (glibc was designed to be unfriendly to static linking, for ideological reasons. See #1).
And do you know what's ironic? Ulrich Drepper, who always talked static linking down, is now doing research on the UKL project, building unikernels based on Linux. And unikernels are just the next evolution of static linking. LOL


Static linking is NOT about linking the entire library into your binary; it's about linking only the object files that contain the routine(s) you actually need from the library...
Would you statically link libc?
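That selective extraction is easy to check for yourself. A minimal sketch (all file and symbol names here are made up for illustration): the linker pulls from a static archive only the object files that resolve an undefined symbol, so the unused object never lands in the binary.

```shell
# Build a two-object static archive and link against it.
cat > used.c   <<'EOF'
int used(void) { return 42; }
EOF
cat > unused.c <<'EOF'
int unused(void) { return 7; }
EOF
cat > main.c <<'EOF'
int used(void);
int main(void) { return used() == 42 ? 0 : 1; }
EOF
cc -c used.c unused.c
ar rcs libdemo.a used.o unused.o

# The linker extracts used.o (it resolves the undefined 'used'),
# but never touches unused.o.
cc -o demo main.c libdemo.a

nm demo | grep -w used                                # present
nm demo | grep -qw unused || echo "unused() not linked in"
```

The same mechanism is why linking `libc.a` doesn't mean dragging all of libc into every binary – only the members you reference get pulled in.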
 
GhostBSD began using packages for the OS long ago.

More recently, GhostBSD switched to pkgbase. <https://forums.freebsd.org/profile-posts/5557/>



Compiling is unnecessary.

CURRENT is packaged. Officially.



I actually do use GENERIC, very frequently, with no noticeable drop in performance (compared to the packaged GENERIC-NODEBUG kernel).

I'll write more about this at a later date. Interim stuff will be elsewhere.



Not really.
If I go -CURRENT, how can I stay up to date and which port to use? Latest?

  • Simple installations of FreeBSD-CURRENT and FreeBSD-STABLE can be updated with pkg – there's no longer a requirement to build from source.
Okay, this is new information. I think I might go for it myself on my desktop too. Probably a terrible idea, but okay.
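For reference, a pkg-based update of a pkgbase install looks roughly like this. This is a sketch, not a verified procedure: the repository name "FreeBSD-base" and the bectl snapshot step are conventions I'd expect, so check the handbook and your /usr/local/etc/pkg/repos/ configuration before relying on it.

```shell
# Sketch: updating a pkgbase install of -CURRENT with pkg(8).
# Assumes ZFS (for bectl boot environments) and a base repo
# named "FreeBSD-base" -- verify both on your system.
bectl create pre-upgrade        # boot environment to roll back to
pkg update                      # refresh repository catalogues
pkg upgrade -r FreeBSD-base     # upgrade base-system packages
pkg upgrade                     # upgrade third-party packages
shutdown -r now                 # reboot into the updated base
```

If the new base misbehaves, the pre-upgrade boot environment can be activated from the loader menu.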
 
And I was working on migrating the big banks from their mainframes to client/server tech (POWER/SPARC), so that they could go onto the Internet. I have seen both worlds – and chances were that when I found a bug in AIX, the same bug existed in FreeBSD too, usually already fixed. But with the hardware it was the other way round – e.g. they had hotplug PCI and fully dynamic VMs long ago already.
How do you add/remove CPUs from your bhyve guests in-flight, on demand? It doesn't work yet. It did work on POWER some 15 years ago.
Now that is actually interesting... but y'know, I think that swapping hardware CPUs in a hotplug architecture is only possible in a dual-socket setup... I know there are some SuperMicro boards on the market these days with this capability. Yeah, they cost at least $500 USD, more than twice what you'd pay for a single-socket board (unless you're building around a Threadripper), but if hot-swapping is a required capability, be prepared to pay for it.

Well, if nobody knows how to handle the hot-swapping right, the guy with the money just might decide to spend it on what's actually available and will have a decent ROI, even if technically it's different from how things were done in the past.
 
Sure. In fact, libc should be the first library to be statically linked.
I tend to static link in projects by default. Though I came across this the other day:

Warning
Applications linking static libraries from the GNU C library (glibc) still require glibc to be present on the system as a dynamic library. Furthermore, the dynamic library variant of glibc available at the application’s run time must be a bitwise identical version of the one present while linking the application. As a result, static linking is guaranteed to work only on the system where the executable file was built.
I just put it down to one of the many philosophical "fibs" that accompany a lot of open-source software.

Same with this section from the same link:

Static linking appears to provide executable files independent of the versions of libraries provided by the operating system. However, most libraries depend on other libraries. With static linking, this dependency becomes inflexible and as a result, both forward and backward compatibility is lost. Static linking is guaranteed to work only on the system where the executable file was built.
This is trying to suggest that if you statically link against a library that itself has versioned dynamic dependencies, your binary will only ever work with those exact versions. That is not at all how static linking works. Another slight "fib".

The only library I tend to dynamically link for closed source projects is SDL2, so that users can replace it with "hacked" versions if they have any particularly novel hardware. It also allows them to replace it with newer versions as Wayland support continues to mature or if Windows users want that terrible Windows game overlay stuff.
 
In many ways, that kind of is the deciding factor for popularity (on the desktop at least). Hardware support doesn't really matter quite so much so long as a platform has a good catalog of programs and *some* hardware that is known to work.

The driver treadmill that some people expect FreeBSD to sprint on is counter-productive. But I do understand that this is likely a symptom of people new to open source carrying over expectations from commercial products (particularly Windows).

Agreed. It's unfortunate, because FreeBSD has so many good features that I believe application developers can leverage. Despite all the Apple Silicon goodness, I do prefer horizontal integration with hardware. Like having a ZFS-based workstation connected to a Thunderbolt DAS, also based on ZFS... man. Also, should an M-series SoC's GPU become outdated, I can't replace it. Inherent to ARM. The pros outweigh the cons (i.e. less flexibility) though, IMO.
 
I tend to static link in projects by default. Though I came across this the other day:


I just put it down to one of the many philosophical "fibs" that accompany a lot of open-source software.
Yes, it's because of dlopen(3), which is forced by glibc even in statically-linked binaries. GLIBC is really a cancer.

Read the comment of vadaszi at the bottom of the following page about this toxic behavior.

It's one of the reasons I said to use a saner libc than GNU's. If you're running Linux, Musl is the way to go.

Same with this section from the same link:


This is trying to suggest that if you statically link against a library that itself has versioned dynamic dependencies, your binary will only ever work with those exact versions. That is not at all how static linking works. Another slight "fib".
LOL That was a massive lie, made just to mask the real problem that is GLIBC. Good work, Red Hat!
A statically linked binary has only one requirement: the kernel ABI.
And Linux hasn't changed the binary format since 2.6.32, so there's something like 15 years of guaranteed binary compatibility.

UPDATE: let me explain better what I mean.
When I say guaranteed, I mean that you can run, without any issue, a binary compiled for kernel versions >= 2.6.32.
But that doesn't mean that a binary compiled for an earlier version won't work. The game Uplink, for example, still works on modern distros.


The only library I tend to dynamically link for closed source projects is SDL2, so that users can replace it with "hacked" versions if they have any particularly novel hardware. It also allows them to replace it with newer versions as Wayland support continues to mature or if Windows users want that terrible Windows game overlay stuff.
Yes, mixed linking can be a valid approach. And it's way saner than fully dynamically linked binaries.
 
Agreed. It's unfortunate, because FreeBSD has so many good features that I believe application developers can leverage. Despite all the Apple Silicon goodness, I do prefer horizontal integration with hardware. Like having a ZFS-based workstation connected to a Thunderbolt DAS, also based on ZFS... man. Also, should an M-series SoC's GPU become outdated, I can't replace it. Inherent to ARM. The pros outweigh the cons (i.e. less flexibility) though, IMO.
Yes, but professionals, and especially companies, don't care about replacing a single part, since in many countries there are tax breaks and vendor discounts when they buy brand-new workstations and servers.
 
Yes, but professionals, and especially companies, don't care about replacing a single part, since in many countries there are tax breaks and vendor discounts when they buy brand-new workstations and servers.

Completely different user demographic. Regular users care. I long for modular Apple Silicon, but their unified memory architecture doesn’t allow for that flexibility.
 
Yes, it's because of dlopen(3), which is forced by glibc even in statically-linked binaries. GLIBC is really a cancer.

Read the comment of vadaszi at the bottom of the following page about this toxic behavior.

It's one of the reasons I said to use a saner libc than GNU's. If you're running Linux, Musl is the way to go.


LOL That was a massive lie, made just to mask the real problem that is GLIBC. Good work, Red Hat!
A statically linked binary has only one requirement: the kernel ABI.
And Linux hasn't changed the binary format since 2.6.32, so there's something like 15 years of guaranteed binary compatibility.

UPDATE: let me explain better what I mean.
When I say guaranteed, I mean that you can run, without any issue, a binary compiled for kernel versions >= 2.6.32.
But that doesn't mean that a binary compiled for an earlier version won't work. The game Uplink, for example, still works on modern distros.



Yes, mixed linking can be a valid approach. And it's way saner than fully dynamically linked binaries.

I ran Alpine Linux on my desktop. It offered a great OpenBSD-like experience with the benefits of Linux. But many large open-source projects don’t compile on musl without custom patches. Rust with aarch64 on musl throws a bunch of strange errors.

I dislike the entire Rust ecosystem as much as I appreciate the design of the language. I made the mistake of following their recommended installation method. They provide a script that installs rustup, which polluted my environment variables in over 10 places. After a failed rustup installation, my shell kept showing "file not found" errors from the $PATH entries. I had to manually locate these files and remove the lines it had added to .bashrc, .profile, .config/fish, etc. Rust doesn't even support aarch64 properly. I can't just use rustc by itself (maybe there's a way); I'm forced to use Cargo.toml and configure everything there just to compile a simple program with a basic syscall. Then there's the linker issue with LLVM's lld. I ended up having to use the libgcc runtime and the GNU linker and be done with it. It was a f*ckin ridiculous amount of work, or maybe, just maybe, I suck at it.

Note: Apologies for my words, but I really needed to vent. I despise __GLIBC__ with a passion, but it's probably too late.
 