Software Bloat

As a programmer of various application software, I can assure you that talking with the users of that software is very useful. Think of design flaws such as implicit assumptions that turn out to be incomplete or even completely wrong. Users can tell you. It will usually lead to substantial improvements.
That's not the point. In software development, if you're not talking to the user, you're doing everything wrong. But that's about requirements, UX, etc. A "user" telling you he knows better about technical matters? Ridiculous.
 
Software bloat is irrelevant to me. Having said that, I always have new hardware and that hardware is almost always high end, because I can afford it. That’s not everyone’s situation and I understand that. I do see plenty of use cases where older hardware can be used so in those cases, using FOSS and a limited set of applications can work.

The web is terrible in my opinion. JavaScript is useful for devs and “can” make web applications more useful but can also ruin the experience and bring new hardware to its knees.
I hope WASM will ease this; it is increasingly common to see transpilers targeting JS too, but yes, the web is a monster.
 
20 years ago the most advanced games console I can think of was the PlayStation 2, which ran on a remarkable 32 MB of RAM (plus a small amount of other RAM, and when I say small I mean less than 10 MB). Today it is common to see a single program use that amount for an extraordinarily small task. This was a machine that could play sound, show moving video, accept input and do networking, all in a remarkably small amount of RAM.

How is it that hardware has become so sophisticated, yet software has grown so large that it requires the utilisation of all of it?

To put it into perspective, the worst machine I can find on Amazon right now has 2 GB of RAM. That's 64x the amount the PS2 had.
Forgive me if this has been raised before (I've not read all the other replies), but CPU architecture has something to do with it, as well as how some operating systems handle different architectures, even 'simultaneously': for example, universal binaries on Apple and even WOW64 on Windows. This leads to hefty sizes.

With the architecture, the transition from 16-bit to the now-standard 64-bit PC platform CPU has of course inevitably led to larger binary executables. Even so, a lot is hidden from the user when dynamic/shared libraries are called. So a program might look small(ish) in a directory listing but be very large once running, after pulling in massive libraries.

Finally, a lot of programmers are lazy. This is primarily the fault of OO languages and the indiscriminate pulling in of libraries to perform a simple task. Yes kpedersen, I read your comment and you are 100% correct in using the "correct wheel" rather than just the "wheel" provided by a large library.

More programmers should probably take a course on embedded programming... ;)
 
The thing is: would you prefer to pay someone for weeks of "optimizing" that they could otherwise spend actually satisfying business needs?
This is my experience - the customers want to pay as little as possible for as much "working" code as possible and they want it as soon as possible.

Do they want to pay me for the smallest, fastest, cleverest code? If NASA or medical equipment then probably yes. For knocking up some website functionality, then no. If it works and performs well enough, it's good enough - for them (paying the bills).

I think maybe the argument overall is too simple - there's different types of programming, different types of programmers, vastly different sets of requirements (embedded, gaming, graphics, desktop, server, OS, internet-facing, websites, short-lived, long-lived, etc, etc).

One size doesn't fit all, not every job needs a hammer.

For the issue of e.g. why do we have three browsers? It's usually backwards compatibility. The tax department site of country Z won't work with browser Y.Y but works with browser X.X. Which is easier, ship X.X and Y.Y or get the tax department to change their website?
 
vastly different sets of requirements
Note you're talking about non-functional requirements here. Typically, there aren't many of them. Maybe you have a few about security or performance. Most of the time, NOT about memory consumption (because, yep, often that just wouldn't make sense. You pay for every requirement, and NFRs tend to be especially expensive).

Of course there are scenarios where an NFR about memory consumption makes sense.
 
For the web part, I am surprised by all these React/Angular/Next/... developers who use tons of JS instead of using HTML/CSS.

JavaScript is "transpiled" (via Babel or Typescript) just because web developers refuse to write correct JavaScript for browsers...

I try to defend the "progressive enhancement" philosophy, but this is hard.

So I hope Svelte will be the next fashionable framework. At least Svelte leaves the end user's CPU cool.
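Progressive enhancement mostly comes down to feature detection: check that a capability exists before relying on it, and let the plain HTML carry on working when it doesn't. A minimal sketch; the `supports` helper here is hypothetical, not from any real library:

```javascript
// Hypothetical helper: returns true only when every named method
// actually exists on the given object, so the enhanced code path
// is safe to run.
function supports(object, ...methodNames) {
  return methodNames.every(name => typeof object[name] === "function");
}

// In a browser you would guard an enhancement like this:
//   if (supports(document, "querySelector", "addEventListener")) {
//     ... attach the fancy behaviour ...
//   }
// and the unenhanced page still works when the check fails.

supports(Array.prototype, "map", "filter"); // true
supports(Array.prototype, "flatMapAsync");  // false
```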
 
because web developers refuse to write correct JavaScript for browsers...
Were you around in the old Internet Explorer/Netscape/Opera days?

"Correct" JS was next to impossible - so many rules, broken things, version dependancies. The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue. You were a lot more productive, and time is money.

It's gone a bit bonkers the other way with e.g. USB support in browsers.
 
Were you around in the old Internet Explorer/Netscape/Opera days?

"Correct" JS was next to impossible - so many rules, broken things, version dependancies. The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue. You were a lot more productive, and time is money.

It's gone a bit bonkers the other way with e.g. USB support in browsers.
I wrote JS back in those old days. And with some really simple rules it was possible. Today, that approach is called "progressive enhancement".
But at the same time I was also a backend PHP dev and a CMS (with framework) dev. And I learned some best practices in high school with the Eiffel language.
Today, I am a full-stack lead dev + software architect. But it is the same job. The main difference is the resistance to testing whether a function exists before using it, always having an acceptable fallback, and so on (checking arguments and results inside functions, as I learned it in Eiffel's "design by contract"; now it is called defensive programming...).
Even UI/UX refuses fallbacks. You must have the same website on a 34" screen with 1440 px of height and on a 13" with 760 px, and (yesterday's case) no scrollbar on the 13" if there is no scrollbar on the 34"...
The complexity is really much easier to handle today, but developers make more crap. Why?
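The "test before use, always have a fallback" style described above can be sketched as a small contract-flavoured function. The names here are purely illustrative, not from any real codebase:

```javascript
// Precondition checks up front, feature test before calling the
// optional dependency, and a plain fallback when it is absent -
// loosely in the spirit of Eiffel's design by contract.
function formatPrice(value, formatter) {
  // Precondition: fail early on bad input instead of deep inside.
  if (typeof value !== "number" || !Number.isFinite(value)) {
    throw new TypeError("formatPrice: value must be a finite number");
  }
  // Feature test: only use the fancy formatter if it actually exists.
  if (typeof formatter === "function") {
    return formatter(value);
  }
  return value.toFixed(2); // acceptable fallback
}

formatPrice(3.5);                          // "3.50"
formatPrice(3.5, v => "$" + v.toFixed(2)); // "$3.50"
```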
 
Just now read this comment on Hacker News:
The one thing you should understand is the average programmer is not a good programmer.
The reason is simple.

The average programmer is a newbie with little to no experience.

It's like a pyramid or a triangle. There's more area or volume at the bottom than at the top.

Every year, more new people arrive at the scene.

New programmers are easily fooled by "shiny" objects.

So, whatever place you join, there's a high likelihood that the culture at the place is dominated by what is considered popular, with little to no regard for what actually works well.

It's different at each place, but almost every company I've been to has things set up in a way that's very painful and frustrating to work with. Everything takes many steps. "Compiling" javascript takes 3 minutes. "Hot Module Reloading" takes 30 seconds and refreshes the page 5 times. You have to jump between 4 different repositories to fix a small bug. Etc etc etc.

If you are experienced and notice that things at your company are broken, you either try to advocate for fixing things or just leave out of desperation. So the organizational culture continues to be dominated by people with very little experience.

If you are not experienced, you may just think that the "suck" you have to deal with on a day-to-day basis is just what programming is like, and you might well decide to quit programming. It's hard not to think so when you have never seen a better version of how things can work.
 
Were you around in the old Internet Explorer/Netscape/Opera days?

"Correct" JS was next to impossible - so many rules, broken things, version dependancies. The frameworks were a way out of that hell - you could concentrate on making the site "do stuff" instead of trying to fix some IE4 issue. You were a lot more productive, and time is money.

It's gone a bit bonkers the other way with e.g. USB support in browsers.
UNIX philosophy has very much gone out the window with respect to browsers... They are basically operating systems within operating systems at this point.
 
Hm, "still" is not the correct word. Yes, browsers nowadays are "application platforms". You could argue whether this is a good approach. It makes deployment super simple, the user doesn't have to "install" anything but can use the application right away, but complexity of the browser itself is the price. Chromium takes a lot longer to build than the whole FreeBSD system. There are, of course, security implications.

I personally still prefer the "old" approach, at least for your typical "desktop app": Build it for every target platform, use toolkits/frameworks (e.g. Qt) so you don't need platform-specific code paths. IMHO, "web applications" are best where they were coming from: form-based and backend-driven.
 
Hm, "still" is not the correct word. Yes, browsers nowadays are "application platforms". You could argue whether this is a good approach. It makes deployment super simple, the user doesn't have to "install" anything but can use the application right away, but complexity of the browser itself is the price. Chromium takes a lot longer to build than the whole FreeBSD system. There are, of course, security implications.

I personally still prefer the "old" approach, at least for your typical "desktop app": Build it for every target platform, use toolkits/frameworks (e.g. Qt) so you don't need platform-specific code paths. IMHO, "web applications" are best where they were coming from: form-based and backend-driven.
I meant "still" in reference to my personal awareness, rather than to any absolute judgment of what is the best, or "most cross-platform compatible" methodology for deploying software. In my retirement, I don't do enough research to justify making any such absolute judgments. I don't really know what the best way is, but still nevertheless am curious, and have opinions about it.

This is "still" the best method I employ, and am aware of. I'm curious about any, potentially newer, alternatives. Browsers are complex, but most computers already have them anyway, so one big advantage is that I don't have to provide any client workstation installers or configuration guides.

I target individual platforms as little as possible, but "still" write software that can run on FreeBSD, Linux, MacOS, and Windows workstations equally well. Making users install nothing, or as little as possible, on individual client "workstations" was a primary motivation for the original endeavor. Typically our customers had Windows 98 or Windows XP computers sitting on their desktops. These, running PowerTerm or PuTTY, were rapidly replacing dumb terminals as our customers' primary workstations. But we were also trying to move away from character-based applications, and into the strange new world of graphical user interfaces. For a while we wrote applications targeting Microsoft Windows in Borland C++, but they lacked the un*x-like file-sharing and other multi-user features we needed. We also messed around with SCO's Tarantella and Sun's Java applets, but gave up on Java following the Microsoft vs. Sun Java wars of the late nineties and early aughts.

The approach I now use requires combining PHP, SQL, HTML, EcmaScript, and CSS to run applications in browsers. It's a trade-off between the complexity of platform-specific backends and the complexity of targeting browsers with multiple scripting languages. It might or might not be the best approach, but it does work well.
 
Because they present standardized OS service interfaces, browsers still provide the most cross-platform-compatible mechanism for deploying GUI applications that I'm aware of.
Probably true, but then people I work with who do this web design thingy-stuff are constantly swearing at the various nuances of each of the browsers and they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.

You might be 90% there but that last 10% can cause you all sorts of nightmares, so the default position is: specify a limited number of browsers (eg 1, aka Chrome). So now you're back to coding for a specific OS, basically.

It's a fool's game.
 
Probably true, but then people I work with who do this web design thingy-stuff are constantly swearing at the various nuances of each of the browsers and they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.

You might be 90% there but that last 10% can cause you all sorts of nightmares, so the default position is: specify a limited number of browsers (eg 1, aka Chrome). So now you're back to coding for a specific OS, basically.

It's a fool's game.
I used to support IE, Safari, Firefox, Chrome, and Opera. Edge had not yet been introduced at that time. It was indeed a nightmare; I'd first make a thing work on Firefox, and then often spend more time coercing it to work on the other browsers than it took to implement it in the first place. Now I only support Firefox, which is easy enough to install if you don't already have it, and readily available on every platform.

Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.
 
they have only to support 4, 1 of which used to receive the most abuse: IE or now Edge.
It was true a while back, but now it's at a minimum. Regarding IE/Edge: they are completely different browsers. You are right about IE, because MS tried to implement their own standards (that kind of dick move was pretty popular back in the day). Edge is based on Chromium. Now there are mainly 3 browser engines: Chromium-based, Firefox and Safari. So MS is out of the game for now. To be honest they are pretty compatible across OSes and devices.

You might be 90% there but that last 10% can cause you all sorts of nightmares,
Previously the equation was almost reversed, 10% to 90% (Edit: for software development), so we should be really thankful for how web apps work these days.
They're pretty much the same across devices: computers/phones/tablets. You had to be a million-$$ company (if not billion) to be able to support that kind of infrastructure.
In my opinion the XMLHttpRequest object was the breakthrough that let programmers (and applications) reach the masses (among other techs of the same period).
And not to mention the cut in maintenance cost and manpower.

It's a fool's game.
So, as I tried to explain briefly, it is not anymore.
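The XMLHttpRequest breakthrough mentioned above was that a page could fetch data in the background and swap in just the changed fragment instead of reloading everything. A minimal sketch of the classic pattern; the element id, URL, and data shape are hypothetical:

```javascript
// The pure part: turn fetched data into an HTML fragment.
// Data shape ({ name, qty }) is made up for illustration.
function renderRows(items) {
  return items
    .map(it => `<tr><td>${it.name}</td><td>${it.qty}</td></tr>`)
    .join("");
}

// The XHR wiring around it would look like this (browser only,
// "/api/stock" and "#rows" are hypothetical):
//   const xhr = new XMLHttpRequest();
//   xhr.open("GET", "/api/stock");
//   xhr.onload = () => {
//     if (xhr.status === 200) {
//       document.querySelector("#rows").innerHTML =
//         renderRows(JSON.parse(xhr.responseText));
//     }
//   };
//   xhr.send();

renderRows([{ name: "bolt", qty: 3 }]); // "<tr><td>bolt</td><td>3</td></tr>"
```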
 
Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.
Probably what you tested was "Microsoft Edge Legacy", which used MS's own engine.
So it was a completely different browser.
The new Edge browser is also available on Linux and is Chromium-based.
 
Since Edge 79, the rendering is the same as Chromium.

And no, front-end devs do not write JS for the browser today. They write "meta JS" and use transpilation tools like Babel.
Some libraries exist only so people don't have to learn how to use JS (lodash), and thanks to Microsoft there is TypeScript, the decision that all this nightmare needed to end.

But for me, vanilla js is not that complex.

The same thing goes for shell scripts. I rewrote all my bash scripts in sh.
 
Out of curiosity I did look at Edge when it was relatively new and it seemed to be the worst behaved yet, worse even than IE in some ways.
Well, my friend, that's because Microsoft programs it. If there's one sure way to screw something up, MS will do it. Look at Windows 10 for a good laugh. 😁
 
ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! let's use it for everything!"
You never tried out Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module (13 lines) pulled it, it literally broke a lot of stuff. And exactly this is what I mean by lazy programmers.

Here you've got a whole ecosystem mainly built around people who basically don't know much about what they are doing at all. And on top of it, now a GUI with Electron.

 
ah yes, python... "I learned programming with python... this is so easy! just import these 83 libraries and you can print 'hello world' on the screen with just 2 lines of code! let's use it for everything!"
That is too funny!
:)
 
You never tried out Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module (13 lines) pulled it, it literally broke a lot of stuff. And exactly this is what I mean by lazy programmers.

Here you've got a whole ecosystem mainly built around people who basically don't know much about what they are doing at all. And on top of it, now a GUI with Electron.

And the key to that is the quote in the article:
That code can be used to add characters to the beginning of a string of text, perhaps a zero to the beginning of a zip code. It’s a single-purpose function, simple enough for most programmers to write themselves.

But because they're lazy they don't. That's why I say JavaScript in particular is a pox on the world, and especially the internet.
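For what it's worth, that single-purpose function really is only a few lines. A hedged sketch in the same spirit (not the original left-pad source), next to the built-in that has covered this since ES2017:

```javascript
// Hand-rolled left pad: prepend a fill character until the string
// reaches the requested length.
function leftPad(value, length, fill = " ") {
  let str = String(value);
  while (str.length < length) {
    str = fill + str;
  }
  return str;
}

leftPad(7, 5, "0");         // "00007"

// The built-in equivalent since ES2017:
String(7).padStart(5, "0"); // "00007"
```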
 
You never tried out Node.js, did you? There, even left-padding is too much for most programmers, so when the author of the left-pad module (13 lines) pulled it, it literally broke a lot of stuff. And exactly this is what I mean by lazy programmers.
First of all: LOL.

And then, sure, there's some "lazy programmer" involved somewhere down the line. Looking at this code, it's ridiculous to build a package from it in the first place, and the code itself looks far from optimal (although I'm not sure you can do better in Javascript).

But the real problem is ideas like npm. A central package repository, builds pulling packages in dynamically, encouraging people to pull in whatever and creating ridiculously deep, ever-changing dependency graphs (try to port some software using this mess; it's horrible) – there's just no way to keep that stuff under control.
 
NPM is just dependency hell perfected, yes. What's even more concerning is that there are now some software projects around that have Node.js as part of their build infrastructure, e.g. Firefox or Chromium.
 