Software Bloat

I think it's the novelty effect. We used to have stable, well QA-ed tools that didn't get updates often and didn't look fancy, but did the job quite well. I'm talking about both hardware and software. Now quick gratification, delivering the whole package in less than 3 minutes, is the number one priority. At the same time, quality takes time, and very soon someone else publishes a similar piece of software and public interest shifts there, because it's new. Also, due in part to the effort justification effect, authors who produce things quickly can just as quickly and easily abandon those 3-minute projects and move on to something else. So in essence it's a trade-off between time spent and code quality.

On the corporate side, being quick to market brings in more money than being quality-oriented, as you can always buy your way out with endless excuses, happy-people graphics and smilies, and promises of a newer and greater version coming soon. The one that's usually even buggier than the original. Our expectations have surely gone down over the years, especially with regard to stability.
 
Also, due in part to the effort justification effect, authors who produce things quickly can just as quickly and easily abandon those 3-minute projects and move on to something else.
I'd never heard of that! It explains a lot of the pathologies I've seen in the software world. People think it's no good if it doesn't take any effort. That's why upgrade treadmills are so popular. The Python Steering Council feels like it's keeping the language "relevant," and hordes of pythonistas justify their salary with all the "maintenance" they have to do to keep up with the latest stupid breaking changes.

I suspect that's one of the reasons FreeBSD has so little traction in the enterprise and elsewhere. The few times I've managed to talk a boss into letting me set up some FreeBSD machines, they worked so well and were so stable that we plain forgot about them. There always was some whiny NT or Linux server that needed the latest security patches or some new whiz-bang offering from Microsoft that some idiot executive just had to have. Those platforms get all the attention, and therefore all the money. All you get when you point out that the FreeBSD mail exchanger you set up has 3 months of uptime and has needed 0 security patches is a "what's FreeBSD again?"
 
There always was some whiny NT or Linux server that needed the latest security patches or some new whiz-bang offering from Microsoft that some idiot executive just had to have. Those platforms get all the attention, and therefore all the money.
Reminds me of a TV car commercial that showed a guy talking about how much he loved the model car he bought. He bought a new one every three years. "If they weren't so great, why would I buy so many of them?"
 
For the web, another reality is that all the code will go in the trash after 1, 2, or 3 years. So investing in quality is not a priority.
 
For the web, another reality is that all the code will go in the trash after 1, 2, or 3 years. So investing in quality is not a priority.
I must be misreading this. I have no idea what you mean. If what you say is true, then all the people who use Wordpress are sure going to be surprised. The PHP, CSS, and ECMAScript code I wrote 15-20 years ago runs better and faster now than ever, and has required surprisingly little maintenance over the intervening years. It probably helps that none of it incorporates node.js, npm, or any other such 2nd-hand code libraries.

These software technologies have been, and seem sure to be, around for a long long while.
 
For the web, another reality is that all the code will go in the trash after 1, 2, or 3 years. So investing in quality is not a priority.
That's actually quite a general statement. Most software has a surprisingly short shelf life. If you look at software that is actively supported and being maintained (which in professional production environments is the rule), most lines of code get touched every few years. And I don't mean that whole products get obsoleted and thrown away: Within an existing system of programs, most lines of code get touched and worked on regularly.

Yes, there are exceptions, including "dusty decks" for which the source code has been lost, and where compiled executables are being run, often on emulated systems. But nearly all software had a thorough once-over for Y2K, so for most of it the maximum age is about 21 years. If we were to assume a uniform rate of change, that would put the average at 10.5 years. But given that the amount of software development has been growing exponentially, the average piece of software is relatively young.

On a previous project that I worked on a few years ago (about 5M to 10M lines of code), the oldest source code was from the mid 1980s, or about 30 years old at the time. We had some files with copyright dates from back then; in some cases, the original authors were no longer alive. But we only had hundreds of lines of code that were this old; most of the millions of lines were much more recent. Another project I worked on (which had grown to a measured 17M lines in the early 2010s) had no line of code that was older than 12 years. And I had left the company about 2 years after the project started (it was less than 100K LOC when I left); according to source control, none of the lines of code I wrote were left.

So having established that code changes relatively quickly, one might jump to the conclusion that quality doesn't matter. I think that this is in general utterly wrong, and extremely dangerous. What has to be of very high quality is the overall architecture, the big design choices, and the style. Otherwise enhancement and maintenance become prohibitively expensive or outright impossible. Each individual piece or module is replaceable; the overall artifact is a big investment.
 
I must be misreading this. I have no idea what you mean. If what you say is true, then all the people who use Wordpress are sure going to be surprised. The PHP, CSS, and ECMAScript code I wrote 15-20 years ago runs better and faster now than ever, and has required surprisingly little maintenance over the intervening years. It probably helps that none of it incorporates node.js, npm, or any other such 2nd-hand code libraries.

These software technologies have been, and seem sure to be, around for a long long while.
Sorry, I wrote "web" but that is only true for front-end development in JavaScript (which is what I had in mind).
CMSs (like Typo3) have very good code quality and have existed for years.
But the JavaScript that interacts with a "click to chat" button has a short life.
I observe nevertheless that this quick-and-dirty approach has contaminated node.js development (in my limited experience; that is not a general overview).
 
Sorry, I wrote "web" but that is only true for front-end development in JavaScript (which is what I had in mind).
CMSs (like Typo3) have very good code quality and have existed for years.
But the JavaScript that interacts with a "click to chat" button has a short life.
I observe nevertheless that this quick-and-dirty approach has contaminated node.js development (in my limited experience; that is not a general overview).
Sorry if I misunderstood your post. I write all my javascript from scratch, often following the example code snippets from w3schools, but every line I write is my own. Javascript manipulation of the DOM and XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential, and for providing smooth and dynamically changing user interfaces, with minimal network I/O. Without javascript I could offer nothing much better than static HTML forms.
 
...are essential components for exploiting the clients' browsers to their fullest potential, and for providing smooth and dynamically changing user interfaces...
What even is "the fullest potential"? Is it maybe the reason we need to upgrade to the latest XY-core Ryzen each year just to view animations that serve no purpose but to entertain at best and annoy at worst? A great website, in my view, is one that presents (1) the original information or research in (2) an accessible format. The rest is just fluff. You can easily do that with CSS. 5% design, 95% content.
 
XMLHttpRequest are essential components for exploiting the clients' browsers to their fullest potential
XHR is only for sending requests to the server and nothing beyond that. It's one function call and nothing more. Let's not give it any more credit than that.
 
And it's a misnomer from times when "everything" was XML. Nowadays, it's mostly used to consume REST APIs talking JSON… :rolleyes:
 
What even is "the fullest potential"? Is it maybe the reason we need to upgrade to the latest XY-core Ryzen each year just to view animations that serve no purpose but to entertain at best and annoy at worst? A great website, in my view, is one that presents (1) the original information or research in (2) an accessible format. The rest is just fluff. You can easily do that with CSS. 5% design, 95% content.
I refer to the browser's "fullest potential" for sharing the work between client and server. I don't display animations. Without XHR and DOM manipulation, the server has to do the lion's share of the work. Let me show an example. Here's a customer maintenance form with a scrollable customer search box, full of test data, which can be indexed by name, phone number, or customer number. If I change the search order by clicking on one of the search box's column headings, it visibly resequences the items in the search box.

Screenshot at 2021-10-23 10-45-27.png

To do this without using XHR and DOM manipulation, you'd have to submit the whole form from the client to the server. The server would then have to reformat the entire web page, with the search box items in the new sequence, and send it back to the client. The browser would then have to redisplay the whole page, and there would be a visible "stutter" on the display.

But in this example, XHR lets you send a simple, relatively much shorter request, from the client to the server. The server then sends back only the resequenced items in the search box. These are then displayed, not by redisplaying the whole form, but rather, by smoothly updating the document object model inside the browser, using ECMAScript. The display will not stutter, and there will have been the minimum necessary network I/O required to implement the entire transaction.

This doesn't require the latest Ryzen XY-core. I can run both the client and server ends of this on my 32-bit Dell Latitude laptop, and without any appreciable strain whatsoever on the processor.
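To make the shape of that exchange concrete, here is a minimal sketch of the pattern described above, in plain no-framework JavaScript. The endpoint name "/customer_search" and the element id "searchBox" are invented for illustration; they are not taken from the application shown in the screenshot.

```javascript
// Hypothetical sketch: re-sort the search box via XHR and a targeted DOM
// update, instead of submitting and redrawing the whole form.
function resequenceSearchBox(sortKey) {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/customer_search?sort=" + encodeURIComponent(sortKey), true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            // The server returns only the resequenced rows as an HTML fragment,
            // so only that part of the document object model is replaced.
            document.getElementById("searchBox").innerHTML = xhr.responseText;
        }
    };
    xhr.send();
}

// Wired to a column heading, e.g. <th onclick="resequenceSearchBox('name')">Name</th>,
// the rest of the page never redraws, so there is no visible stutter.
```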

XHR is only for sending requests to the server and nothing beyond that. It's one function call and nothing more. Let's not give it any more credit than that.
One other noteworthy feature of XHR is that it can be done invisibly, "behind the scenes" so to speak, without disturbing the browser's display, or causing it to "stutter."

And it's a misnomer from times when "everything" was XML. Nowadays, it's mostly used to consume REST APIs talking JSON… :rolleyes:
True enough I suppose. I believe the name XMLHttpRequest was originally coined by Microsoft, so there's that. But if it wasn't a useful idea, I doubt it would have ever gone any further than Microsoft.

The data transmitted via XHR doesn't have to be in XML format, although it can be, and sometimes is.

I'll take your word for the REST APIs and JSON since I'm unfamiliar with all that.
 
Let's not give it any more credit than that.

I think we should. Here is a simple scenario: you have a web-based piece of software. Let's call it "courier/shipping software". The courier company has 100 customers, and the customers ship packages, so the company has to invoice them periodically. Say each customer ships 100 packages per week. It's Monday and we are going to charge all those shipments, prepare invoices, and send the customers their invoice plus notifications.
Without XMLHttpRequest:

Load the list of customers to be charged today onto a web page with basic info (shipment count, total amount, surcharge amount, etc.).
Now, we cannot load all of this onto a single page (10,000 records: 100 customers x 100 shipments) and operate on it unless we play around with the script memory limit, script time limit, max body size, etc. (and even then it is pretty hard).

We can split it into 10 customers per page, so we'll have to load 10 pages to operate on 100 customers.
Now we are on the first page with 10 customers loaded; we select 6 of them and click the calculate button (we have to review the numbers before we actually charge them),
so we are dealing with 600 records at this point (100 packages x 6 customers). Say each shipment takes 0.5 seconds to calculate (dims, volumetric weight, find the zone, find the rates, apply them, etc.).
Once we click the calculate button we have to wait 300 seconds (100 packages x 6 customers x 0.5 sec). Of course we cannot expect the accounting person to wait 300 seconds just to calculate charges for 6 customers.
So we decide to use the DOM and an iframe. We'll split the load into chunks and give an interactive impression to the person using our app. Once he clicks calculate, we'll send the first customer's account number to the iframe (via the DOM) and call our script, and our server-side script will do the calculation and show the results. Once the server-side script has finished loading, we'll call the parent window to send the next number and repeat the process. (Without the DOM we are completely screwed.)

At this point we have lost all info (visually) related to previous results (there are workarounds, but that's not the focus).
Also, don't forget the user will still be looking at an empty iframe for 50 seconds (0.5 sec per shipment x 1 customer x 100 shipments).

I won't continue with the rest of the process, but to keep this post brief I also didn't go into traffic load (you have to load the same page over and over), processor load (you have to run the same script over and over), and other factors.

(Also, before anyone throws out the "you are a bad coder, a calculation cannot take 0.5 seconds" argument: change it to 0.1 or 0.01 if that makes you happy, but it won't change the point.)

With XMLHttpRequest:

Load the list of customers to be charged today onto a web page with basic info (shipment count, total amount, surcharge amount, etc.).
We load 20 of them; if needed, the user can click a "load more" or "next" button to get the rest of the list (we are still on the same page).
We select 20 of them and click the calculate button. As soon as we click calculate we start to see results:
Shipment 1: $10 transportation charge, $2 fuel charge, Zone 1, etc.
So while we are waiting, we are seeing almost real-time results. (No more timeouts, traffic load, etc.)
Also, during this time we can do other things with our software within another DIV on the same page while keeping an eye on the results.
And don't forget we can select all 100 customers and do the calculation at once; we don't have to go 6 or 10 customers per click.
(Which is practically impossible without XMLHttpRequest; a rough sketch of this pattern follows below.)
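
Here is that rough sketch, in plain JavaScript. The endpoint "/calculate_charges", the element id "results", and the account-number list are hypothetical, invented only to show the one-request-per-customer flow with results appended as they arrive.

```javascript
// Hypothetical sketch: charge the selected customers one XHR at a time,
// appending each customer's result to the page as soon as it comes back.
function calculateCustomers(accountNumbers) {
    var index = 0;

    function next() {
        if (index >= accountNumbers.length) {
            return; // all selected customers processed
        }
        var acct = accountNumbers[index++];
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/calculate_charges?acct=" + encodeURIComponent(acct), true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                // Show this customer's charges without touching the rest of
                // the page; the user keeps working in other DIVs meanwhile.
                var row = document.createElement("div");
                row.textContent = xhr.responseText;
                document.getElementById("results").appendChild(row);
                next(); // only then start the next customer
            }
        };
        xhr.send();
    }

    next();
}
```

Each request is small and short-lived, which is what avoids both the 300-second wait and the empty iframe of the no-XHR version.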

And more importantly:

Beyond the technical advantages (bandwidth, processing, etc.), XMLHttpRequest gives users the impression that they are using a desktop application.
After XMLHttpRequest, companies started to look at the web as a software platform and began delivering their products as web-based rather than desktop/compiled applications.
And if you don't know anything else, just know this: if you are able to use QuickBooks online or an office suite online, that's because of XMLHttpRequest.

So yes, XMLHttpRequest deserves a lot of credit.
 
The browser would then have to redisplay the whole page, and there would be a visible "stutter" on the display.
This AJAX approach does avoid visual stutter. However, it *is* more complex in how it works and in its browser requirements. You are trading additional complexity for "visual experience".

It is not a trade that I would particularly disagree with, given your requirements, but bloat appears when more and more of these compromises are made. From this point, there may now be two types of users:
  1. Someone like myself who actually doesn't care about visual stutter and often has such shaky internet that a page refresh is nicer than a potential AJAX request timeout.

  2. A "cool" user who might not want to have the default HTML buttons but instead have cool glowing buttons that fade in and out when hovered over.
Most projects will always try to please user #2 because... "progress".
 
and add "bells and whistles" if the browser supports it
I would say that my browser supports pretty much everything. Yet I still would rather the web pages were simple (X)HTML. I wish there was a browser setting akin to DoNotTrack that says "keep your tacky gimmicks to a minimum please".

Absolutely agree that once a framework is involved, the developer basically gets strung along by their nose.
 
Here's a customer maintenance form with a scrollable customer search box
Well, that is fair when there's literally no other way to do your presentation and folks really don't like refreshes. But I was thinking about the general web, not corporate backends. Of course there are other, way better solutions than a webpage with javascript in it, just much harder to program. Like a custom solution written in an OS-oriented language that queries an SQL database.
 
This AJAX approach does avoid visual stutter. However, it *is* more complex in how it works and in its browser requirements. You are trading additional complexity for "visual experience".

It is not a trade that I would particularly disagree with, given your requirements, but bloat appears when more and more of these compromises are made. From this point, there may now be two types of users:
  1. Someone like myself who actually doesn't care about visual stutter and often has such shaky internet that a page refresh is nicer than a potential AJAX request timeout. ...
Thank you for reminding me of the term AJAX, because I had forgotten it. Having just now looked it up again, I suppose that the term could be applied to the approach I gradually developed out of my own 2004 era research, but I've never described my approach that way, or applied that term to it.

The last time I recall encountering the term was probably around 2011. At that time, the references I read associated it with the use of JSON, which I rejected out of hand after about 4 hours of experimenting with it. I do not use JSON, jQuery, ASP.NET, Ruby on Rails, or any other so-called AJAX "framework" and rejected them all on first encounter. I write all my javascript from scratch, and most of what I know about XHR technology comes from what I read on the w3schools.com site.

I don't use XHR indiscriminately, or for everything, but I could not implement the search box shown in my example above properly without it. That test customer table contains 100,000 records, but I can scroll through it from top to bottom in a number of seconds, on any of those three indexes, or switch indexes with the click of a column heading. I can also pop a search box up dynamically on the screen with the click of a button, to wit:
Screenshot at 2021-10-23 15-40-42.png

Screenshot at 2021-10-23 15-41-04.png

I find the XHR approach to be much faster, rather than slower, than a total screen refresh, and it requires considerably less network traffic, because the total amount of network I/O required is much lower. I can't find any evidence that XHR GETs and PUTs are inherently slower on a networking level. If you can show me such evidence I'd love to review it. What I have read suggests that such delays are probably due to ( 1.) overuse, ( 2.) concurrent requests, and ( 3.) some of those AJAX frameworks that I don't use in my own implementations.

...
2. A "cool" user who might not want to have the default HTML buttons but instead have cool glowing buttons that fade in and out when hovered over.
Most projects will always try to please user #2 because... "progress".
Besides being "cool" (lol), different browsers and operating systems have their own particular button appearances and sizes. CSS buttons provide the same sizes and appearances in a cross-platform compatible way.

Other "cool" features which don't always involve XHR, but do require well-written, unclunky, and non-bloated javascript include drag-and-drop features, time-out avoidance, and progress bar/counter displays.

Time-out avoidance? Time-consuming processes like end-of-day updates, end-of-month updates, end-of-year updates, backup, and restore typically require so much server time to process that some kind of additional measures must be taken to avoid getting HTTP 408 "Request Timeout" errors in the browser. One good way to avoid such errors is to run a job control process in javascript, breaking the overall process into smaller chunks, which can be sent, one request at a time, to the server, using XHRs, while displaying either a progress bar or progress counter of some kind in the browser between requests.
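
As a minimal sketch of that job-control idea, in plain JavaScript, assuming an invented "/end_of_day" endpoint and a "progress" element (the real application's names would differ):

```javascript
// Hypothetical sketch: run a long end-of-day update as a series of short XHRs,
// each finishing well inside the HTTP timeout, with a counter updated between
// requests so the user sees progress instead of a frozen page.
function runEndOfDay(totalChunks) {
    var chunk = 0;

    function sendNextChunk() {
        if (chunk >= totalChunks) {
            document.getElementById("progress").textContent = "Done.";
            return;
        }
        var xhr = new XMLHttpRequest();
        xhr.open("GET", "/end_of_day?chunk=" + chunk, true);
        xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
                chunk++;
                document.getElementById("progress").textContent =
                    "Processed " + chunk + " of " + totalChunks + " chunks";
                sendNextChunk(); // each request is short, so no HTTP 408s
            }
        };
        xhr.send();
    }

    sendNextChunk();
}
```

Because each chunk is its own request, no single request ever runs long enough to trigger a 408 in the browser.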

You can avoid that "trade" by employing progressive enhancement (in a nutshell, deliver a nice backend-driven web application e.g. following ROCA principles and add "bells and whistles" if the browser supports it). But once you're in SPA land, using "great" frameworks like React, Angular, whatnotever, everything is lost.
These applications are intended for office personnel who have capable web-browsers. They can't be run on smart phones or PDA devices. Those are complications I've never had to deal with and don't intend to deal with.

Well, that is fair when there's literally no other way to do your presentation and folks really don't like refreshes. But I was thinking about the general web, not corporate backends. Of course there are other, way better solutions than a webpage with javascript in it, just much harder to program. Like a custom solution written in an OS-oriented language that queries an SQL database.
My applications do query SQL databases, but customized OS-specific solutions are precisely what I'm trying to avoid, and, on one level or another, have been trying to avoid for the past 20 plus years.
 
What even is "the fullest potential"? Is it maybe the reason we need to upgrade to the latest XY-core Ryzen each year just to view animations that serve no purpose but to entertain at best and annoy at worst? A great website, in my view, is one that presents (1) the original information or research in (2) an accessible format. The rest is just fluff. You can easily do that with CSS. 5% design, 95% content.
Well, that would be a dream come true! The reality is much, much worse!

When Electron was younger, around 2017, people reported CPU usage of up to 15% on an idle Electron desktop app. The app was Visual Studio Code from Microsoft.

Somebody looked at it and found that it took 13% of a CPU just to render the blinking cursor. No joke! The reason is that it depended on Chromium to do that, and Chromium had that inefficiency itself.

Of course that's not the end of it. A bug report from 2020: moving the mouse increases CPU usage on the renderer process to 7-10%. That's what I call a highly consistent framework in terms of delivering shit performance!

And the standard excuse from the Electron guys is always "Not our fault, this comes from Chromium." WRONG! It's their fault because they chose crappy software as the foundation for their own stuff.
 
As an aside, or maybe not: even using Electron doesn't guarantee the end users get all the benefits, like the portability to run on toasters. We've been waiting for years, literally, for Deezer to publish their awesome Electron-based app for Linux. Well, years later some folks came along and repackaged it, but there's still no official word from Deezer. As you can imagine, most of what this app does is display a Deezer website page inside a window.
 