Software Bloat


Thanks, I guess I must build with X11MON=on; however, I suspect it will not work as expected …

<https://www.freshports.org/deskutils/recoll#config>

 
The magic of NPM and sugarcoating bullshit strikes again, this time:

GitHub's commitment to npm ecosystem security

You'll find plenty of buzzwordy stuff there about commitments and rolling out 2FA.

The juicy tidbits though are these:

Second, on November 2 we received a report to our security bug bounty program of a vulnerability that would allow an attacker to publish new versions of any npm package using an account without proper authorization.

Now this sounds like major fun! Publishing new versions of any existing package from an account without proper authorization — who wouldn't like to abuse that?

This vulnerability existed in the npm registry beyond the timeframe for which we have telemetry to determine whether it has ever been exploited maliciously.

Even more fun! In other words: they cannot be sure that nobody exploited this before September 2020! The sensible thing would be to shut down the repository immediately and check for breaches. Are they doing that? No. What they are doing is rolling out 2FA as a lame excuse, framing crisis management as achievement.

 
I'd like to counter that... Today the approach to building a 'small' (in terms of functionality) program is sadly, more often than not: "let's use this framework, which needs that ecosystem with this interpreter, drags in those few hundred libraries and dependencies, needs exactly *this* version of that graphical framework and exactly *that* version of this obscure library someone abandoned in 2005."

From what I have seen it is down to middleware.
I'll agree with both of these and posit they are actually almost the same thing.

Frameworks are nice, can be helpful, but you get locked into doing things their way. That may not fit your use cases, so you contort things to fit the framework.

Another fun thing about frameworks that I've run into? Writing abstraction layers around and over them "because we may want to change the framework in the future" or "yes we hired good people but we think they aren't smart enough to actually use the framework correctly so we write abstractions that become a framework for the framework".

Never once have I seen a framework change.
 
Another fun thing about frameworks that I've run into? Writing abstraction layers around and over them
That is a good point. Abstraction layers over abstraction layers is quite wasteful (and messy) and yet seems to be very common.

In fact one example is Bjarne Stroustrup in his book, where he writes a weird incomplete abstraction layer over FLTK rather than using it directly (or safely).
 
20 years ago the most advanced games console I can think of was the PlayStation 2, which ran on a remarkable 32 MB of RAM (plus a small amount of other RAM, and when I say small I mean less than 10 MB). Today it is common to see a single program use that amount for an extraordinarily small task. This was a machine that could play sound and moving video, accept input, and do networking, all in a remarkably small amount of RAM.

How is it that hardware has become so sophisticated, yet software has grown so large that it requires all of it?

To put it into perspective, the worst machine I can find on Amazon right now has 2 GB of RAM. That's 64x the amount the PS2 had.
It is the virtualization levels. People started programming simple things in JavaScript on Node inside a VM inside a Docker container inside a VM on top of a framework inside a runtime inside a VM...
So to calculate 1+1 you have to boot up a thousand abstraction layers first.
Today I installed VSCodium and noticed at least 5 different package systems layered upon each other during installation. I hate when people invent and encapsulate their own package management systems, sidestepping the OS; examples are plentiful:
- node
- perl
- python
- eclipse
are some of which I have encountered.
Any product that has its own "marketplace" or "app store" or "updater service" or "download/update/installation manager" bloats the package management of your system and should be boycotted.

And while virtualization is good as a tool in our toolbelt, people have misused it for so long to write inefficient software that could be way faster and consume way less memory if it were implemented on a lower level.

That said, I am guilty of this sin myself: I program in Java.
It's the good tooling support and the productivity that seduce us into taking a few more MB of your memory for the sake of bringing the app to the user faster.

A college professor of mine said: you can write efficient (or inefficient) code in any language (we had asked about Java being so much slower than native code back then). So first, he said, make sure your algorithm is correct and efficiently implemented. Then, if need be, optimize performance via native methods, assembler, or whatever.
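The professor's point can be sketched in a few lines of Python (an illustrative example, not from the thread): both versions of the membership test are correct, but picking the right data structure changes the lookup from O(n) to O(1) on average, long before any assembler comes into play.

```python
import timeit

n = 100_000
as_list = list(range(n))  # lookups scan the whole list: O(n)
as_set = set(as_list)     # lookups hash the key: O(1) on average

# Both are correct; only the data structure differs.
assert (n - 1) in as_list and (n - 1) in as_set

# Timing 200 lookups of the worst-case element in each structure.
t_list = timeit.timeit(lambda: (n - 1) in as_list, number=200)
t_set = timeit.timeit(lambda: (n - 1) in as_set, number=200)
print(t_set < t_list)
```

On any reasonable machine the set wins by several orders of magnitude, which is the professor's "fix the algorithm first" in miniature.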
 
That is a good point. Abstraction layers over abstraction layers is quite wasteful (and messy) and yet seems to be very common.

In fact one example is Bjarne Stroustrup in his book, where he writes a weird incomplete abstraction layer over FLTK rather than using it directly (or safely).
That's probably because programs are written mostly for people to read. So sacrificing performance for the sake of much better maintainability in the future could be worth it.
 
I recently came across the following: to compile this program you have to use the IntelliJ editor ...
An editor becomes a build system.
 
Even FreeBSD's own Ports tree?
Of course not, that's my point exactly.
The OS should be managing the packages, that's its job. The ports tree should be the only package manager. What I dislike is when I install node.js via the ports, and then start using npm to manage a complete marketplace of scripts the OS knows nothing about.
This opens a big security hole for all kinds of malicious code, by the way.
 
Of course not, that's my point exactly.
The OS should be managing the packages, that's its job. The ports tree should be the only package manager. What I dislike is when I install node.js via the ports, and then start using npm to manage a complete marketplace of scripts the OS knows nothing about.
This opens a big security hole for all kinds of malicious code, by the way.
Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?
 
Ahh, one of my complaints about the ports tree is actually the proliferation of perl, php, python, ruby, r-cran ports. What's in the ports tree is only a subset of the language's repo. The language's repo may have fresher stuff than the ports tree - think the port maintainers have the time to keep up?
Good point! Still, doesn't any other lib do the same? Take x11/libinput as an example. It installs its own headers into /usr/local/include, shared objects into /usr/local/lib, etc.
Should C/C++ as a language platform maintain its own package manager independently from FreeBSD? Why should Python or Perl be treated differently?
The libraries are building blocks; they are software packages like anything else. So in that sense, the port maintainer for Python should not necessarily be responsible for all packages developed in Python, true. But each Python library should have its own port maintainer, and the user should not be corralled into using several independent package managers that may create conflicts with each other.
Also, another monstrosity: when products start baking their versioning scheme into the package names. For instance: php6, php7, php8... and then you install stuff like nextcloud-php7-mysql105 and figure out... oh sh*t, I also need that other package that uses a different MySQL version, and I can't have both simultaneously... And there you have it: one big mess.
Versioning I do like: just name your package "php" and then have an adequate versioning and dependency strategy.
Or is it a shortcoming of the FreeBSD packaging system? I don't know. Maybe the architects could have thought of a streamlined way to reflect multiple supported branches of a package, instead of merging the version into the package name.
Portage in Gentoo, for example, has the concept of "slots". For PHP you have a 7 slot and an 8 slot, and you can have multiple versions installed at the same time. I find this neat. You then need a means of switching between them at runtime.
 
I set the default version in make.conf. And should I need conflicting versions, I would run them in a different jail. There is, IMHO, no real need for slots.
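For reference, the knob being alluded to is the ports tree's DEFAULT_VERSIONS mechanism; a make.conf fragment along these lines (the version numbers here are just examples) pins one default branch per product:

```
# /etc/make.conf -- pick one default branch per language/product;
# ports are then built and resolved against these versions.
DEFAULT_VERSIONS+= php=8.2 python=3.9 mysql=8.0
```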
 
Portage in Gentoo, for example, has the concept of "slots". For PHP you have a 7 slot and an 8 slot, and you can have multiple versions installed at the same time. I find this neat. You then need a means of switching between them at runtime.
FreeBSD does allow different versions of Python to coexist: py27, py36, py37, py38, py39... And they have something similar for Ruby (ruby27, ruby30). Dunno if they have something similar for PHP.
 
I would point out a contradiction...

Hardware is cheaper; software is expensive.

This is not actually true. Making hardware is more expensive than programming; it requires more workers, professionals, energy, materials, etc., whereas software can be written even on very old hardware.

However, bloated software helps to sell newer hardware, hence the deal is made. For me the real problem, especially with closed software, is that because programming time costs more than computing time, commercial software is burdened by legacy code that is hidden under the mat; end users cannot see it, but they can feel the side effects.
 
I would point out a contradiction...



This is not actually true. Making hardware is more expensive than programming; it requires more workers, professionals, energy, materials, etc., whereas software can be written even on very old hardware.

However, bloated software helps to sell newer hardware, hence the deal is made. For me the real problem, especially with closed software, is that because programming time costs more than computing time, commercial software is burdened by legacy code that is hidden under the mat; end users cannot see it, but they can feel the side effects.
Depends on where you look. SSDs and RAM are actually dropping in price, while Adobe Creative Cloud is getting more and more expensive.
 
Depends on where you look. SSDs and RAM are actually dropping in price, while Adobe Creative Cloud is getting more and more expensive.
Depends on how you look at it. Making one piece of hardware is terribly expensive, so it must be mass-produced in the millions of units just in order for manufacturers to recover their cost of goods sold.

Per unit price to consumers, on the other hand, is relatively inexpensive, but only because the cost of both development and manufacturing is spread over millions of consumers, facilitated by assembly line cost savings, automation, and exploitation of unprotected foreign nationals as laborers.
 
FreeBSD does allow different versions of Python to coexist: py27, py36, py37, py38, py39... And they have something similar for Ruby (ruby27, ruby30). Dunno if they have something similar for PHP.
Yes, but by means of putting the version number inside the package name. I don't like the idea of versioning the same product in multiple packages like that.
Not bashing the porters here; I understand that they don't have a choice. Just wondering what a clean solution would look like.
 
The problem I see with everything having its own package manager isn't so much the package manager itself. It's more that the various package managers don't limit themselves to just their own stuff. A prime example would be pip/PyPI. Through pip you can easily install meson, cmake and ninja. Those three programs aren't Python libraries, nor bindings, but the build tools themselves. That by itself wouldn't be so bad, but the package manager defaults to installing system-wide, overwriting what the system had installed. I know some upstreams are resorting to using pip and other language package managers over the system one, mostly because it's the only way to have a common base between various platforms/distros.
 
The problem I see with everything having its own package manager isn't so much the package manager itself. It's more that the various package managers don't limit themselves to just their own stuff. A prime example would be pip/PyPI. Through pip you can easily install meson, cmake and ninja. Those three programs aren't Python libraries, nor bindings, but the build tools themselves. That by itself wouldn't be so bad, but the package manager defaults to installing system-wide, overwriting what the system had installed. I know some upstreams are resorting to using pip and other language package managers over the system one, mostly because it's the only way to have a common base between various platforms/distros.
Yeah... Shouldn't the FreeBSD version of pip/PyPI check for the presence of stuff like ninja, cmake and meson on the system? I would think that the ports system (not pkg) would be smart enough to pull them in as deps, instead of letting pip pull them in from Python repos. Sometimes, those language package managers really reinvent the wheel.
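The usual workaround on the Python side is to keep pip's installs inside a virtual environment so they never shadow what pkg put in /usr/local. A minimal sketch using only the standard library (the directory name is a throwaway example):

```python
import tempfile
import venv
from pathlib import Path

# Create a throwaway virtual environment. Anything later installed with
# its pip lands under this directory, not system-wide, so the files the
# OS package manager owns are never overwritten.
target = Path(tempfile.mkdtemp()) / "demo-venv"
venv.create(target, with_pip=False)  # with_pip=False keeps this quick and offline

# The environment carries its own interpreter (bin/ on Unix, Scripts/ on Windows).
has_python = (target / "bin" / "python3").exists() or \
             (target / "Scripts" / "python.exe").exists()
print(has_python)
```

It doesn't solve the "two package managers" problem, but at least the language's manager stays in its own sandbox.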
 
"sometimes"...
Indeed. Their train of thought is always the same.

1) I have a nifty, easy to use language
2) Ah, to do anything useful I need to call into a native binary
3) Ah, calling into native binaries isn't easy because my language doesn't have the ability to parse header files unlike C, C++ and Obj-C
4) I'll create a binding generator
5) Ah, turns out this is a big faff to use as part of a workflow
6) I'll create some tooling that fetches pre-generated bindings as dependencies
7) Holy shite, there are millions of GB worth of these things! Turns out my language really just glues together C libraries.
8) I'll develop a package manager to deal with them all.

Rust is doomed to fail in the same way unless they bolt on a small C compiler frontend and properly solve the issue around #3.
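Step 2 of that list is easy to demonstrate: even "pure" Python reaches into a C library for real work. A minimal ctypes sketch (on ELF platforms such as FreeBSD and Linux, CDLL(None) exposes the symbols of the already-loaded libc):

```python
import ctypes

# Load symbols already present in the running process (libc on ELF systems).
libc = ctypes.CDLL(None)

# Without declared types, ctypes has to guess. C headers carry exactly this
# information, and they are what the language cannot parse (point 3 above),
# so we restate the prototype of libc's abs() by hand.
libc.abs.restype = ctypes.c_int
libc.abs.argtypes = [ctypes.c_int]

print(libc.abs(-42))  # prints 42
```

Multiply this hand-written restating across every function of every C library, and the binding generators and package managers of points 4 to 8 follow naturally.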
 
Indeed. Their train of thought is always the same.

1) I have a nifty, easy to use language
2) Ah, to do anything useful I need to call into a native binary
3) Ah, calling into native binaries isn't easy because my language doesn't have the ability to parse header files unlike C, C++ and Obj-C
4) I'll create a binding generator
5) Ah, turns out this is a big faff to use as part of a workflow
6) I'll create some tooling that fetches pre-generated bindings as dependencies
7) Holy shite, there are millions of GB worth of these things! Turns out my language really just glues together C libraries.
8) I'll develop a package manager to deal with them all.

Rust is doomed to fail in the same way unless they bolt on a small C compiler frontend and properly solve the issue around #3.
Old Fart C Programmer may have a point here (here's looking at you, Geezer ;) )
 