2024: The year of desktop FreeBSD?


Then even better. Why would I bother with a gaming PC if I don't care about gaming? I use a ThinkPad X220 and a T480—both run Plan 9, no problem.

My problem isn’t that "X doesn’t support Y for Z reason." It’s the people who act like these issues are features and refuse to engage in any conversation about changes that could improve the FreeBSD desktop experience (and servers, for that matter), let alone actually making any changes in code. Whether my solutions are subpar or wrong is a different conversation entirely.
 
I wouldn't be so sure. One-off pots of money like this have happened in the past. Whilst they are absolutely great, they can only go so far. And then finding developers with the specific skills to give the money to is the *actual* challenge. :/

The focus of FreeBSD is not based on philosophy (we would all love a great desktop experience to be available to users if it were possible). Judging by past observations, decisions tend to be based purely on technical and resource limitations.
 
Why would I bother with a gaming PC if I don't care about gaming?
For the hardware specs.

Just because a machine is marketed towards gamers doesn't mean a thing if you can use the power for other kinds of computing. GPUs are popular for mathematical computation and 3D modeling, CPUs need to be beefy if you want to compile stuff like www/firefox, and having a ton of RAM doesn't hurt in those scenarios either.

Hell, I have a gaming machine (an Asus ROG Zephyrus AMD Advantage), and it did great at all that stuff (even though I had to run FreeBSD inside a VM to make it happen on the Zephyrus metal!)
 
For the hardware specs.

Just because a machine is marketed towards gamers doesn't mean a thing if you can use the power for other kinds of computing. GPUs are popular for mathematical computation and 3D modeling, CPUs need to be beefy if you want to compile stuff like www/firefox, and having a ton of RAM doesn't hurt in those scenarios either.

Hell, I have a gaming machine (an Asus ROG Zephyrus AMD Advantage), and it did great at all that stuff (even though I had to run FreeBSD inside a VM to make it happen on the Zephyrus metal!)
I was replying to the other person about some other points. I'm the one here advocating hardware support for an enhanced user experience ;))
 
For the hardware specs.

Just because a machine is marketed towards gamers doesn't mean a thing if you can use the power for other kinds of computing. GPUs are popular for mathematical computation and 3D modeling, CPUs need to be beefy if you want to compile stuff like www/firefox, and having a ton of RAM doesn't hurt in those scenarios either.

Hell, I have a gaming machine (an Asus ROG Zephyrus AMD Advantage), and it did great at all that stuff (even though I had to run FreeBSD inside a VM to make it happen on the Zephyrus metal!)
I'd never buy a gaming machine if I didn't need to play.
Desktop hardware is not reliable enough for serious workloads.

It's better to buy a workstation, which has a server-class motherboard and CPU(s), with far more reliable storage, GPU (an NVIDIA Quadro is way better than a GeForce for GPU-accelerated computing), and memory.
 
I'd never buy a gaming machine if I didn't need to play.
Desktop hardware is not reliable enough for serious workloads.

It's better to buy a workstation, which has a server-class motherboard and CPU(s), with far more reliable storage, GPU (an NVIDIA Quadro is way better than a GeForce for GPU-accelerated computing), and memory.
This is where you kind of have to know enough to be able to justify the high cost of an Epyc/Xeon when compared to a high-end desktop/laptop. How do you know that an extra PCIe lane or two will make a difference for the render speeds of projects you routinely do? Is your job performance review riding on whether that render completes on time? Or do your paying customers want to switch to a public host for your code repo because of your bandwidth limitations? Then yeah, paying 5x more for enterprise-grade hardware with appropriate specs jacked up as much as practical - that can be a good investment, because you'll get a good ROI.

These days, an AMD Instinct card can be had for about as much as a high-end Radeon, and actually has similar measured performance. If you want better performance for a specific task, do your research. That high-end Instinct card can very well turn out to be a waste of time and money if you don't use it right.

I've seen machines with very similar specs get marketed towards both gamers and devs. For admins, real compute power is elsewhere anyway.
 
This is where you kind of have to know enough to be able to justify the high cost of an Epyc/Xeon when compared to a high-end desktop/laptop. How do you know that an extra PCIe lane or two will make a difference for the render speeds of projects you routinely do? Is your job performance review riding on whether that render completes on time? Or do your paying customers want to switch to a public host for your code repo because of your bandwidth limitations? Then yeah, paying 5x more for enterprise-grade hardware with appropriate specs jacked up as much as practical - that can be a good investment, because you'll get a good ROI.

These days, an AMD Instinct card can be had for about as much as a high-end Radeon, and actually has similar measured performance. If you want better performance for a specific task, do your research. That high-end Instinct card can very well turn out to be a waste of time and money if you don't use it right.

I've seen machines with very similar specs get marketed towards both gamers and devs. For admins, real compute power is elsewhere anyway.
That's what I was saying. Supporting better laptop tech like wireless, Bluetooth, etc. on FreeBSD would pull in more developers who want to use FreeBSD on the go, not just on servers. Right now, if someone tries FreeBSD on their laptop and struggles with something basic like Wi-Fi or sound, they'll move on to Linux or something else. By improving these areas, FreeBSD would become a real option for devs using laptops, and those devs could bring their skills and contributions back to the project. More hardware support = more people willing to stick around and help out.

I'm planning to rewrite some of the OpenBSD and NetBSD driver stacks in Rust this winter and see how it goes, haha. Memory management isn't exactly my strong suit.
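(For anyone wondering why Rust for that: the toy sketch below is roughly the appeal. It's not real OpenBSD/NetBSD driver code - the DmaBuffer/Device names are made up - it just shows the compiler refusing to let you touch a buffer once you've handed it off to the "device", which is exactly the kind of use-after-handoff mistake that's easy to make when you manage memory by hand in C.)

```rust
// Toy sketch only: made-up types, no real kernel APIs.
struct DmaBuffer {
    data: Vec<u8>,
}

struct Device {
    in_flight: Option<DmaBuffer>,
}

impl Device {
    // The buffer is *moved* into the device; the caller no longer owns it.
    fn submit(&mut self, buf: DmaBuffer) {
        self.in_flight = Some(buf);
    }

    // Ownership only comes back once the "transfer" is done.
    fn complete(&mut self) -> Option<DmaBuffer> {
        self.in_flight.take()
    }
}

fn main() {
    let mut dev = Device { in_flight: None };
    let buf = DmaBuffer { data: vec![0u8; 512] };

    dev.submit(buf);
    // buf.data[0] = 1; // <- compile error: `buf` was moved into the device
    if let Some(done) = dev.complete() {
        println!("got the buffer back, {} bytes", done.data.len());
    }
}
```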
 
More hardware support = more people willing to stick around and help out.
Or more people to put additional stress on existing developers and generate fatigue.

de Raadt from OpenBSD even said in one of his past interviews that the project doesn't want users; it only cares about developers. To an extent, most non-consumer operating system projects are in the same boat.

But of course I don't disagree with you. The more hardware support, the better!

Then even better. Why would I bother with a gaming PC if I don't care about gaming? I use a ThinkPad X220 and a T480—both run Plan 9, no problem.
Slightly unrelated, but ironically my X220 is my "gaming" machine (the X230 onwards have crap keyboards). Though granted, I only really play a bit of Half-Life these days.
Part of my thesis was to develop a distributed implementation of OpenGL on Plan 9. Having a "weak bean" laptop was actually quite a good test bed.
 
This is where you kind of have to know enough to be able to justify the high cost of an Epyc/Xeon when compared to a high-end desktop/laptop.
This is not a cost. It's an investment.
If you're doing serious business, buying the appropriate hardware is the natural thing to do.

How do you know that an extra PCIe lane or two will make a difference for the render speeds of projects you routinely do?
If you're talking about rendering, the difference in speed and reliability between a Quadro and a GeForce is night and day.

Is your job performance review riding on whether that render completes on time? Or do your paying customers want to switch to a public host for your code repo because of your bandwidth limitations? Then yeah, paying 5x more for enterprise-grade hardware with appropriate specs jacked up as much as practical - that can be a good investment, because you'll get a good ROI.
It's always a good ROI if you're doing serious work. The initial cost of that workstation will pay for itself.

These days, an AMD Instinct card can be had for about as much as a high-end Radeon, and actually has similar measured performance.
Sorry, but that's not the case. And look, workstation-class graphics cards also have ECC capabilities in their VRAM, so they are not prone to corruption. And corruption costs money.
Also, high-end products certify their tools for workstation-grade graphics cards. See Autodesk, for example.

If you want better performance for a specific task, do your research. That high-end Instinct card can very well turn out to be a waste of time and money if you don't use it right.
The waste of money is running consumer hardware for workloads designed for workstations.
And if you think your high-end gaming desktop is as powerful and reliable as a workstation, you're seriously deluded.

I've seen machines with very similar specs get marketed towards both gamers and devs. For admins, real compute power is elsewhere anyway.
Clock speed is not the only metric. When you're working, reliability and optimization matter as much as clock speed.
 
The waste of money is running consumer hardware for workloads designed for workstations.
And if you think your high-end gaming desktop is as powerful and reliable as a workstation, you're seriously deluded.
That only matters if you have any money riding on the outcome.

Especially if you're not spending your own money on the high-end hardware.
 
de Raadt from OpenBSD even said in one of his past interviews that the project doesn't want users; it only cares about developers. To an extent, most non-consumer operating system projects are in the same boat.
In my experience this means "more laptop hardware support needed", though.
 
Or more people to put additional stress on existing developers and generate fatigue.

de Raadt from OpenBSD even said in one of his past interviews that the project doesn't want users; it only cares about developers. To an extent, most non-consumer operating system projects are in the same boat.
Hm. That might raise the question: for what purpose do the developers actually develop? Certainly they can develop for their personal satisfaction; that is perfectly legal, and irrelevant to anybody else.

There is then an - entirely unrelated - problem: some people need a working OS to do whatever job is required. And certainly these people can then buy AIX or Solaris or HPUX - only they don't get the source code to adapt the system to their needs. But that is their problem alone.

Once upon a time, after a few guys had grabbed the NET/2 tapes and turned them into a functional OS for the PC, we were a community. Then at some point the developers locked themselves up in an ivory tower and declared a distinction between developers and users.
But that is much like a general social development. I remember a time when handling a computer was something exotic and exceptional that only a few people were involved with, to varying degrees. Nowadays everybody has a computer, but only a small group decides what these computers are supposed to do, while the majority of people need a computer for the sole purpose of having money squeezed out of them.
 
This is not a cost. It's an investment.
If you're doing serious business, buying the appropriate hardware is the natural thing to do.
That is true if you are a businessman with no clue about how the hardware works, nor what your requirements precisely are. Then you have to listen to the marketing babble and believe that they provide the "solutions" they promise.

That is a big difference from other businesses. When you buy a wood harvester or a printing press, you know your demands exactly, and how such a machine is supposed to work.

Once upon a time, before the separation, we Berkeley users shared the knowledge of how the hardware works.
 
Hm. That might raise the question: for what purpose do the developers actually develop? Certainly they can develop for their personal satisfaction; that is perfectly legal, and irrelevant to anybody else.
Potentially just to scratch a personal itch. E.g. most of my hobby software I write for myself, not for random users. I think that this is fairly common. Though certainly some developers may also write free software to prove a point that the commercial alternative is crap. These projects are likely to be more "user-friendly".

Once upon a time, after a few guys had grabbed the NET/2 tapes and turned them into a functional OS for the PC, we were a community. Then at some point the developers locked themselves up in an ivory tower and declared a distinction between developers and users.
A "user" 20 years ago was very different from the general masses that use a computer today. I would want my tower to be solid iron to keep out the mass population of mouth-breathing flat-earth twits you find online these days.

But that is much like a general social development. I remember a time when handling a computer was something exotic and exceptional that only a few people were involved with, to varying degrees. Nowadays everybody has a computer, but only a small group decides what these computers are supposed to do, while the majority of people need a computer for the sole purpose of having money squeezed out of them.
Indeed. I live by the rule: if I wouldn't engage with a certain type of person in real life, why the hell would I online? Yes, extracting money from them is one reason the internet "appears" to cater to them. But free-software communities don't particularly have to play ball with that idea.

(There are two main companies involved in the FreeBSD space that would benefit from monetizing "the majority". I imagine this is swaying the foundation's focus somewhat. Is this healthy and benign? Time will tell)
 
That only matters if you have any money riding on the outcome.

Especially if you're not spending your own money on the high-end hardware.
It matters when you have to decide if putting your business at risk is worth the money you believe you’re saving by buying something not suitable for the task.
 
Potentially just to scratch a personal itch. E.g. most of my hobby software I write for myself, not for random users. I think that this is fairly common.
Yeah, and that is totally valid and can be real fun.
But then also, FreeBSD is a project that has some history (and if you add the Berkeley prequel, it gets a really relevant tradition), so my question is indeed meant on a literal level: for what purpose is it done?

My strong personal opinion (but that is certainly arguable) is this: supporting complex professional installations, with large storage arrays, high availability, single sign-on, elaborate security realms, and hierarchical networking on the one hand, and the vast and ever-evolving variety of personal-device gadgets and the accompanying ease-of-use demands on the other hand - trying to do both in the same codebase will not turn out well.

That is the practical consideration here. The other ...

(There are two main companies involved in the FreeBSD space that would benefit from monetizing "the majority". I imagine this is swaying the foundation's focus somewhat. Is this healthy and benign? Time will tell)
... goes more in this direction. But that is more difficult to articulate.

And then there is a third aspect, rather independent of this current strand of discussion:

A "user" 20 years ago was very different from the general masses that use a computer today. I would want my tower to be solid iron to keep out the mass population of mouth-breathing flat-earth twits you find online these days.

Besides the skillful ones and the crazy ones, there is probably a third group of people who are reasonably bright, willing, and able to learn, but have had no specific reason or personal interest yet to engage with computers. These are now basically left alone somewhere in between the flat-earth babble and what can be obtained from reddit (at best).

Having participated in the build-up of all this, from back somewhere around 1986 (in my case), one might feel some kind of responsibility for them. But I'm lacking an idea of how one might put that into action...
 
That is true if you are a businessman with no clue about how the hardware works, nor what your requirements precisely are. Then you have to listen to the marketing babble and believe that they provide the "solutions" they promise.

That is a big difference from other businesses. When you buy a wood harvester or a printing press, you know your demands exactly, and how such a machine is supposed to work.

Once upon a time, before the separation, we Berkeley users shared the knowledge of how the hardware works.
Even if you know how the hardware works, you still have to choose what really matters to you: saving money in the short term or running the business properly.
That choice can greatly impact the health of your company.

Think about how IT works: nobody buys big iron nowadays.
The SPARC and POWER server market is shrinking more every day, and everyone's just buying x86 servers that are way inferior but cost a fraction.
Now we're seeing an increasing number of VMs under a load balancer to cope with the fact that they are not that reliable or performant.
We're running Kubernetes to cut costs, but we need to put it on top of VMware because running it on bare metal is not as cheap as running it in VMs. Servers are so unreliable that no one bothers to debug them anymore. They are considered cattle, not pets, so they are replaceable just like your smartphone, and their uptime is even shorter than my laptop's.
Something is wrong? OK, destroy that server and reprovision it.

But, hey, we're saving money, right?
Well, no. Now we have a stack that is 3x/4x more complex than before, just to run the same workloads that yesterday could run on an IBM POWER server with a couple of LPARs managed by the VIOS.
And companies are spending money on consultancy because the stack is so complicated that upskilling people and keeping their knowledge up to date costs a lot.
You're dealing with latency and slow traffic? Well, lucky you. You just have to check the physical network, the virtualized network, the container network, and the service mesh, and maybe (and I said MAYBE) you can find the problem.
And you know what? Your workloads are so modern and fantastic that you can't even choose to configure multiple network interfaces, because CNIs just don't support it. So every kind of traffic will go through the management network.

This is what happens when you think you're saving money by running something cheap.
There is no free lunch out there.
 
Think about how IT works: nobody buys big iron nowadays.
The SPARC and POWER server market is shrinking more every day, and everyone's just buying x86 servers that are way inferior but cost a fraction.
Now we're seeing an increasing number of VMs under a load balancer to cope with the fact that they are not that reliable or performant.
We're running Kubernetes to cut costs, but we need to put it on top of VMware because running it on bare metal is not as cheap as running it in VMs. Servers are so unreliable that no one bothers to debug them anymore. They are considered cattle, not pets, so they are replaceable just like your smartphone, and their uptime is even shorter than my laptop's.
Something is wrong? OK, destroy that server and reprovision it.

But, hey, we're saving money, right?
Yes, we're saving money.

Well, no. Now we have a stack that is 3x/4x more complex than before, just to run the same workloads that yesterday could run on an IBM POWER server with a couple of LPARs managed by the VIOS.
Yes, I did run these. And then they were replaced by Linux. It did not just cost a fraction, it cost a good order of magnitude less.

And companies are spending money on consultancy because the stack is so complicated that upskilling people and keeping their knowledge up to date costs a lot.
First there was consulting. Then came outsourcing. And now it's the cloud. Nobody knows what's going on anymore? Well, then we can just roll out better methods. DevOps etc. Big market.

Concluding: Your arguments are totally valid, and nobody gives a damn.

But I think you're conflating two things here. Cheap redundant stuff can be quite cool. Think RAID: redundant array of inexpensive disks. If a disk fails, just replace it, no harm done. And this is not really something new either: it was done on airplanes long before, there with excellent hardware, but even that was not good enough when lives are at stake.
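Just to put rough numbers on the RAID point (completely made-up failure rates, assuming independent failures and no correlated batch issues, which real arrays never quite give you):

```rust
fn main() {
    // Made-up annual failure rates, purely for illustration.
    let cheap_disk = 0.05_f64;     // 5% per year for an inexpensive disk
    let expensive_disk = 0.01_f64; // 1% per year for a premium disk

    // A two-way mirror of cheap disks only loses data if both members
    // fail in the same year (ignoring rebuild windows, which is generous).
    let mirror = cheap_disk * cheap_disk;

    println!("single premium disk : {:.2}% / year", expensive_disk * 100.0);
    println!("mirror of cheap ones: {:.2}% / year", mirror * 100.0);
}
```

So even with pessimistic per-disk numbers, the cheap redundant pair comes out ahead of the single premium disk.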

The other aspect: yes, we're going to end up sitting on a stack where nobody knows anymore how it all fits together. Now we can create a dark apocalyptic vision about how this will all fall apart. Might bring along some death toll, might then result in something akin to Butler's Jihad - that's the fun part (if you like prophetic visions).

But this cannot be stopped, anyway. There is a simple rule: if it can be done, it will be done.

You're dealing with latency and slow traffic? Well, lucky you. You just have to check the physical network, the virtualized network, the container network, and the service mesh, and maybe (and I said MAYBE) you can find the problem.
And you know what? Your workloads are so modern and fantastic that you can't even choose to configure multiple network interfaces, because CNIs just don't support it. So every kind of traffic will go through the management network.
This is the old story: we engineers could create something really good and well thought through, but upon seeing the price tag, management will cancel it with the simple argument: the cheap stuff also works.
And you are right: until it doesn't.
 
It matters when you have to decide if putting your business at risk is worth the money you believe you’re saving by buying something not suitable for the task.
ooh, ooh:


Think about how IT works: nobody buys big iron nowadays.
The SPARC and POWER server market is shrinking more every day, and everyone's just buying x86 servers that are way inferior but cost a fraction.

Reconcile THAT for me, please! ;)

My phone today can run stuff that would make an UltraSPARC workstation crash and emit smoke from the PSU. A $500 computer today can run stuff that was a pipe dream for the $5000 workstations of the 1980s and 1990s.
 
Yes, we're saving money.
No, you're not. That's the problem.
You think you are, but in reality you're spending even more money than before.

Yes, I did run these. And then they were replaced by Linux. It did not only cost a fraction, but a good magnitude less.
Yes. Now calculate how much you are spending on VMware, OpenShift, APMs, etc., and then come back here to tell me how much you're saving. I'll tell you: you're not saving anything.

First there was consulting. Then came outsourcing. And now it's the cloud. Nobody knows what's going on anymore? Well, then we can just roll out better methods. DevOps etc. Big market.
Unfortunately no, companies are still paying consultants because porting things to the cloud requires knowledge, especially since technology nowadays is so complicated that staying up to date is becoming a full-time job. Then they pay outsourcing companies to manage the day-to-day operations in their tenants.
They pay twice: for the project and for the run.
And on top of that, they pay the costs of resource sprawl, the costs of workloads that cannot be fully optimized for scale-to-zero behavior, and the costs of cloud storage, which naturally grow every day. And finally, they buy FinOps solutions just to ask AI why they're spending so much.
But hey, what matters most is believing that all of this costs less than big iron, even when it doesn't.

Concluding: Your arguments are totally valid, and nobody gives a damn.
I know and it's fine.
My job is selling this crap, after all.

But I think you're conflating two things here. Cheap redundant stuff can be quite cool. Think RAID: redundant array of inexpensive disks. If a disk fails, just replace it, no harm done. And this is not really something new either: it was done on airplanes long before, there with excellent hardware, but even that was not good enough when lives are at stake.
Well, enterprise disks are not that inexpensive, though. Companies spend big bucks on storage.
I've never seen a big company buying crappy disks, since most of the time they have to store data for a long time to comply with regulations.

The other aspect: yes, we're going to end up sitting on a stack where nobody knows anymore how it all fits together. Now we can create a dark apocalyptic vision about how this will all fall apart. Might bring along some death toll, might then result in something akin to Butler's Jihad - that's the fun part (if you like prophetic visions).

But this cannot be stopped, anyway. There is a simple rule: if it can be done, it will be done.

This is the old story: we engineers could create something really good and well thought through, but upon seeing the price tag, management will cancel it with the simple argument: the cheap stuff also works.
And you are right: until it doesn't.
Right.
 
Reconcile THAT for me, please! ;)
My phone today can run stuff that would make an UltraSPARC workstation crash and emit smoke from the PSU. A $500 computer today can run stuff that was a pipe dream for the $5000 workstations of the 1980s and 1990s.
Are you really comparing modern-era hardware with a workstation from the '90s? Are you serious? It's like boasting that your modern city car is faster than a carriage from the 1800s.
Try comparing the "gaming" crap you're buying with a current SPARC or POWER machine, and then come back here to tell me which machine is emitting smoke after a day of operations.
 
Are you really comparing modern-era hardware with a workstation from the '90s? Are you serious? It's like boasting that your modern city car is faster than a carriage from the 1800s.
Try comparing the "gaming" crap you're buying with a current SPARC or POWER machine, and then come back here to tell me which machine is emitting smoke after a day of operations.
Yeah, try finding SPARC- and POWER-based stuff... Those architectures have been off the market for a while. Even Xeons and Epycs are x86_64... and aarch64, which is more efficient than SPARC/POWER, is so easily available that it's even in your phone, as well as in Ampere servers.

If you run the exact same benchmark tools on SPARC/POWER (provided you can find the stuff at all) and on aarch64, the former would quickly cry uncle.
 