The web is the cloaca where the worst coders end up

M-x rant-mode
I close a damn web browser window, not the application, only one of the 4 browser windows I usually keep open, and I get back 5 GB of RAM! WHAT THE F.! In those pages there were only Google Documents and some random silly page. And my Google documents just have some bold and highlighting in them, no pictures!

Conclusion. Even at Google, web programming must be done by the worst coders on Earth. I write web stuff myself; it is not so crappy if you do it decently. These are just bad programmers. Somehow, they all land on the web.

Conjecture. Why is it so? The first idea that comes to me is this: from the web you get everything for free, and you expect to receive a fair amount of crap in exchange. So if your editor just sucks, you don't complain much; you know that if you want things done well, you just use/buy a desktop application.

Side thought. If I could swap Google Documents for something similar, I would do it straight away. I hate that I don't have my documents' source code, and I hate that the editor eats gigabytes to do trivial things. On the other hand, I can edit from phone, iPad, Mac, FreeBSD and Linux, seamlessly. That is priceless. I can pay for the service, but I want something that works well.

M-x text-mode
 
How much memory does the JavaScript DOSBox on archive.org take for the games there? That sounds like some serious caching going on.
 
How much memory does the JavaScript DOSBox on archive.org take for the games there? That sounds like some serious caching going on.

My Ghosts 'n Goblins days are long gone, but that thing is really nice!

I tried to run Arkanoid in a single Chrome(ium) window. Killing the window, I get back 0.25 GB of RAM.

Then I tried opening a single window and loading a Google Doc of 50 pages, no pictures, only text, and scrolling the doc up and down a bit. Killing the window, I get back ~0.54 GB of RAM.

Measurements come indirectly from the header output lines of `top -d1`.
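
For what it's worth, a minimal sketch of how such a before/after reading can be taken on FreeBSD in batch mode (an illustration only; the exact header fields vary a little between versions of top(1)):

```sh
# Sample the memory header line once, before closing the window...
top -b -d1 | grep '^Mem:'
# ...close the browser window and give the system a moment to settle...
sleep 10
# ...then sample again; the change in the Free figure is the reclaimed RAM.
top -b -d1 | grep '^Mem:'
```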
 
It used to be the case, back in The Old Days, that applications were built from purpose-written code that glued together functional parts stored in libraries. Today, if you look at the FreeBSD ports system, most of the "library" code is not from libraries at all; it's cannibalised from other applications, tweaked and patched, and stitched in with the hope that nothing will break. The result, too often (Firefox is a horrible example), is Frankencode: grossly bloated, badly stitched together, leaky, and always on the verge of collapse.

What's needed is a working group dedicated to creating a set of libraries, possibly by snipping out the pieces that are being used now, but with a good cleanup and proper documentation. That way, instead of the ports maintainers doing the same work over and over again, release after release, because they're using someone else's code that changes on an ad-hoc basis, they could tap FreeBSD's stable porting libraries for the functionality.
 
The result, too often (Firefox is a horrible example), is Frankencode
As a talented Frankencode reviewer, you really should give us a few examples to look at. So don't be shy: post some from 'horrible Firefox' with your valuable comments that we all could learn from.
 
As a talented Frankencode reviewer, you really should give us a few examples to look at. So don't be shy: post some from 'horrible Firefox' with your valuable comments that we all could learn from.
If you believe that I'm wrong, what evidence can you offer?
 
If you can't see the problem for yourself, I'm sure you'll continue to sleep well at night. I'm not interested in trying to meet whatever needs prompted your initial response, so if you'd like to declare victory please feel free.
 
Side thought. If I could swap Google Documents for something similar, I would do it straight away. I hate that I don't have my documents' source code, and I hate that the editor eats gigabytes to do trivial things. On the other hand, I can edit from phone, iPad, Mac, FreeBSD and Linux, seamlessly. That is priceless. I can pay for the service, but I want something that works well.
If you want ease of editing, especially for a document from someone else or something you need revised, it won't come from something that doesn't eat hundreds of megabytes of RAM. Even a dedicated standalone application like AbiWord or Gnumeric uses a fair bit of RAM (and if ~500 MB of RAM is a problem, you've likely already considered upgrading your hardware when possible).

Personally, I have a single directory on my FreeBSD machine talking to Dropbox using net/rclone (net/rclone-browser if you want a GUI). I can pull my data from Dropbox over my Android phone using an app; my Chromebook has File System for Dropbox installed, so I can use the native Chrome OS file manager to access such files; a native Dropbox client is available on most Linux distros, and plugins exist for GNOME's Nautilus and KDE's Dolphin file managers.
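
In case it helps, the basic rclone round trip looks roughly like this (a sketch; "dropbox:" is whatever name you give the remote during configuration, and note that sync makes the destination match the source, so mind the direction):

```sh
# One-time interactive setup: choose Dropbox as the backend and name the remote.
rclone config
# Pull the remote copy down into the local directory...
rclone sync dropbox: ~/Dropbox
# ...and push local edits back up when you're done.
rclone sync ~/Dropbox dropbox:
```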

Oh, and I use AsciiDoc as my go-to lightweight markup language, so you'd see a lot of .adoc files in my $DROPBOX/docs/src directory, and they get output by textproc/asciidoc or textproc/rubygem-asciidoctor to $DROPBOX/docs/render. If I need additional formatting capabilities (rare for me, but you might need them more often than I do), I might use Markdown with some custom HTML as necessary and a CSS print stylesheet instead of AsciiDoc, then I can output to PDF using the "Save to PDF" feature of Firefox or Chrome/Chromium to get things looking exactly as intended.
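
The render step itself is a one-liner (a sketch, assuming textproc/rubygem-asciidoctor is installed and $DROPBOX points at the synced directory):

```sh
# Render every AsciiDoc source file; -D sets the destination directory
# for the generated HTML output.
asciidoctor -D "$DROPBOX/docs/render" "$DROPBOX/docs/src"/*.adoc
```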

It's very DIY, and there's probably a better option, but it works for me. I hear some writers actually use LaTeX rather than something like MS Word or Apple's TextEdit, so maybe that's an option as well. However, that may involve a learning curve even steeper than that of vi/vim, depending on your needs. Caveat emptor. My choice means nobody can revise my documents, and I can't revise theirs, so if that's a problem for you, you'd do well to stick with more traditional collaborative tools like Google Docs.
 
I will add a few considerations.

Hardware. Unfortunately, I need to run FreeBSD in a virtual machine, VMware, on a MacBook Pro 2014: 3 GHz i7, 16 GB RAM. Of these, FreeBSD gets 2 processors and 8 GB of RAM, the maximum VMware recommends.

Preamble. Web browsing in FreeBSD has always been kind of painful for me. In Linux too, of course, because both are bound to live in the VMware virtual machine. But there are a lot of advantages to this configuration that I am not willing to give up. So I can tolerate a suboptimal web experience; that is not my rant.

Getting more specific. I take notes in Google Documents: several documents, several topics. Let's focus on one document titled "FreeBSD Notes"; it is 60-ish pages in a small Roboto Mono font. No pictures. When I download the file, it is 118 KB as an *.odt file. Small, as it should be: it is text and tags.

I open the document in Google Documents, in Chromium. The editing, of course, is a bit painful; it lags, etc. But I have always survived that. What I can't accept is that if I close the Chromium tab with this little document, I go from 1.85 GB free to 2.48 GB free RAM (I redid the experiment just now). The data fluctuates a bit, but the idea is that this document is taking more than 0.5 GB in RAM. That is, the space the file takes when loaded in RAM is more than a thousand times the space it takes on disk (0.63 GB against 118 KB is a factor of roughly 5,000).

From this, my (instinctive) conclusion is that this software is badly written.

memreflect, I tried several options to circumvent Google Docs, but they all have severe shortcomings. I have a "pro" account on Dropbox; Dropbox offers a good service for an acceptable price, but it still falls short on the editing side. For a while I tried putting some of my notes files as .odt in Dropbox and editing them with LibreOffice. Even forgetting the fact that I can't edit them from mobiles (iPhone/iPad), it was a disaster. If my file is open for editing on computer A, I can't open it for editing on computer B. I am the only one accessing my files, but I access them from several devices.

About Overleaf, which is webified LaTeX: I considered it. The problem is that even the thought of editing a minor thing in LaTeX on the phone is ... a nightmare. I never even tried to switch to that solution.
 
I am a web dev.
For 10 years, every time I deliver a nice solution, some third-party JS ruins all my effort.
I am convinced by progressive enhancement. But it is very easy to copy/paste dumb JS into a web site.
If you know a little about JS, you will have seen a very strange mutation (JSX/TS/npm/yarn/Docker/Kubernetes/...).
Front-end developers use very sophisticated tools to avoid using HTML/CSS standards.
A simple lib has 10 dependencies that import a lot of unknown JS into the project and multiply the maintenance burden and the final JS size.
All this exists because they refuse to admit that a popup is lighter than a JS Promise.
The web has become a mess because the most complicated tool is always the must-have technology.
 
Getting more specific. I take notes in Google Documents: several documents, several topics. Let's focus on one document titled "FreeBSD Notes"; it is 60-ish pages in a small Roboto Mono font. No pictures. When I download the file, it is 118 KB as an *.odt file. Small, as it should be: it is text and tags.
If all you're using Google Docs for is note-taking, you'd definitely benefit from switching to a more lightweight markup language or even a lighter app such as Google Keep Notes. There exist Markdown webapps like Dillinger or even dedicated mobile Markdown apps with the capability to sync to Dropbox, etc. If your files are 60-ish pages as you state, I would still expect some slowdown, especially in a VM or on a mobile device. I feel it's a bit of an extreme case to compose such long documents in a webapp as you do, but perhaps that's normal these days?

memreflect, I tried several options to circumvent Google Docs, but they all have severe shortcomings. I have a "pro" account on Dropbox; Dropbox offers a good service for an acceptable price, but it still falls short on the editing side. For a while I tried putting some of my notes files as .odt in Dropbox and editing them with LibreOffice. Even forgetting the fact that I can't edit them from mobiles (iPhone/iPad), it was a disaster. If my file is open for editing on computer A, I can't open it for editing on computer B. I am the only one accessing my files, but I access them from several devices.
In other words, there's a mandatory lock on the file if it's open on any given machine... You could switch storage providers, or take an extra step to push/pull changes to/from a Git (or other VCS) repo hosted somewhere (possibly Dropbox, if it lets you). Support for even common Git operations like pushing to a remote varies on mobile devices, so you're better off using a terminal environment like Termux that allows you to install git itself and set up a couple of shell aliases (or functions) to pull and push changes. I'm not sure how well this would work with Dropbox, given how you described its file locking behavior, but I never had any problem accessing my KeePass DB using KeeWeb via Google Drive when I used it. Otherwise, there's always something else like GitHub. Like I said, it's an extra step, but you may find it to be more pleasant than dealing with Google Docs.
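
Something like the following would be enough for a one-person notes repo (a sketch; the ~/notes path and the commit message are placeholders):

```sh
# Two shell functions for a single-user repo: no branches, no merges.
notes_pull() {
    git -C ~/notes pull --ff-only
}
notes_push() {
    git -C ~/notes add -A &&
    git -C ~/notes commit -m "sync $(date +%Y-%m-%d)" &&
    git -C ~/notes push
}
```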

I open the document in Google Documents, in Chromium. The editing, of course, is a bit painful; it lags, etc. But I have always survived that. What I can't accept is that if I close the Chromium tab with this little document, I go from 1.85 GB free to 2.48 GB free RAM (I redid the experiment just now). The data fluctuates a bit, but the idea is that this document is taking more than 0.5 GB in RAM. That is, the space the file takes when loaded in RAM is more than a thousand times the space it takes on disk (0.63 GB against 118 KB is a factor of roughly 5,000).
No, that's the space for the document and the JS code that:
  • automatically synchronizes your changes,
  • allows collaboration with others who have the same document open,
  • provides familiar keyboard shortcuts that make it feel more like a normal word processor,
  • manages changes in text styles from one text fragment to the next, and
  • expresses layout info for document fragments (e.g. horizontal alignment of a line).
And don't forget the browser-side memory management needed to handle all of that functionality (and the sandbox that each browser tab runs in). It may be a poorly written webapp, but its functionality and availability arguably make its slow performance a negligible trade-off, considering how much it offers despite being a webapp. Again, 60-ish pages is kind of extreme for a webapp, and I'm not surprised you experience such tremendous lag or extreme memory usage. That said, I experience lag even on a document with a single line of text, so I can't say your performance issues are entirely down to your usage, but the memory usage itself is not surprising, considering what you use it for and the platform you run it on.

About Overleaf, which is webified LaTeX: I considered it. The problem is that even the thought of editing a minor thing in LaTeX on the phone is ... a nightmare. I never even tried to switch to that solution.
I wouldn't try it myself either if any other lighter-weight solution to my problem existed; I merely brought it up as a possibility, ridiculous though it may be.
 
Honestly, if possible, I think you should ditch that phone for computing tasks. At least until smartphones become more than fiddly little web browsers ;)

Oh, I rarely type on the phone/iPad; they are mostly for reading. But sometimes I need to correct a typo, fix a command that is inaccurate, etc. These things must be done as you see them, or you will forget. At least, I discovered that for me it works like that.

I am waiting impatiently for the moment when the phone will project a real keyboard onto the nearest table, so I can finally happily type from everywhere :)
 
memreflect, I will keep my reply slim:

Google Keep. No, that is a toy, like Apple Notes. I need something powerful. Many of my note files inevitably become little books over time.

GitHub. No, categorically. This tool is way too complex. I am kind of forced to use it at work and I regret it. It creates more issues than benefits for me. Remember, I work alone, in the sense that others do not edit my files. [I even keep a Google Docs note file for Git procedures, doh :(]

Replacing Dropbox. This could work. Do you have any suggestions? Dropbox keeps file history; I like that a lot, because mistakes happen. I pay about $130 per year for, I think, 100 GB. I would like SSH access and seamless integration with the macOS file system ... well, you know, everything Dropbox offers plus SSH, if possible.

bye
n.
 
It used to be the case, back in The Old Days, that applications were built from purpose-written code that glued together functional parts stored in libraries. Today, if you look at the FreeBSD ports system, most of the "library" code is not from libraries at all; it's cannibalised from other applications, tweaked and patched, and stitched in with the hope that nothing will break. The result, too often (Firefox is a horrible example), is Frankencode: grossly bloated, badly stitched together, leaky, and always on the verge of collapse.

What's needed is a working group dedicated to creating a set of libraries, possibly by snipping out the pieces that are being used now, but with a good cleanup and proper documentation. That way, instead of the ports maintainers doing the same work over and over again, release after release, because they're using someone else's code that changes on an ad-hoc basis, they could tap FreeBSD's stable porting libraries for the functionality.
I have to disagree with this: the part about library code not being libraries at all.

First off, I am not entirely sure I am interpreting what you're saying correctly. The libraries are often the real problem. The need to bring in such huge amounts of overhead just to use one routine is mind-numbingly stupid.

This means the port requires compiling some massive library (I recall back in the day compiling X apps, and pango/jango/dango whatever it was took forever).

It seems this mess is a direct consequence of open source, licensing and perhaps even poor design and programming.

Open source
By its nature, people use existing routines to perform a task rather than writing them themselves. While it's extremely simple to write a JSON parser, for example, most will not bother and instead pull in some JSON library with tens of useless functions, given that all they want is a parser. So the compile time and overall package size of their port become bloated, when a little effort on their part could have radically reduced size and maintenance.

Obviously, there will/may be times when your time/expertise makes that not an option, but I'm not totally sold on that argument. Get others to help if it's a good project.

Licensing
The licensing of most of the software, at a guess, seems to be GPL, especially some of the big libraries. By its nature, you cannot pull a piece of GPL software out and include it in your software if you don't have a compatible (aka able to be subsumed by GPL) license. More bloat.

Poor design/programming
I will just leave it as an exercise for others, but I've seen enough open source code while auditing it for use at work to know I wouldn't write such junk, and I would probably be sacked if I did. Most of the problems are security flaws: simple buffer overflows, stack abuse, poor memory management (alloc with no free, or freeing the wrong pointer), and so on.

So, taking into account poor design/programming, a lot of these library writers pull in other libraries to achieve certain things, not realising or blatantly ignoring the security issues of doing so. They assume it's OK, and that's a recipe for disaster.

Now, having said all this, most of these coding problems will probably just result in the thing segfaulting, and if you're running Linux, systemd will mindlessly restart it for you and your problem's solved (isn't it?)... ;)

API design
People write libraries, then suddenly change the API/calling routines, and *BAM*: if you use that library, you need to rewrite your code.
Now you must pin version 1.1.2.3-4.5_12839 because it's not the same as 1.1.2.3-4.5_12001. Another library, the same story; almost.

Interdependence
I was compiling an ARMv7 port using poudriere on an x64 box, and it had a dependency on GCC 9! Not 8, not 7, but 9. Not Clang, not lcc, not the compiler already installed, NOPE, GCC 9. So you have to wait 4 days for that piece of bloatware to compile so that one library in a port can be compiled with GCC 9. Oh Lord!

Summary
While I do agree with you that it would be ideal to have a body that sets standards for open source software, in reality that is an impossibility by the nature of open source. It isn't going to happen. The bloat will grow (as it has in the past), and we will eventually implode due to library usage in our software.
 
GitHub. No, categorically. This tool is way too complex. I am kind of forced to use it at work and I regret it. It creates more issues than benefits for me. Remember, I work alone, in the sense that others do not edit my files. [I even keep a Google Docs note file for Git procedures, doh :(]
That's unfortunate. When it comes to managing personal Git repos, all I usually need is git pull, git commit, and git push. There's no need for complicated operations like bisect, merge, etc. if I'm the only person who uses the repo. :p

Replacing Dropbox. This could work. Do you have any suggestions? Dropbox keeps file history; I like that a lot, because mistakes happen. I pay about $130 per year for, I think, 100 GB. I would like SSH access and seamless integration with the macOS file system ... well, you know, everything Dropbox offers plus SSH, if possible.
You might consider Amazon S3; there are countless search results for using S3 to store Time Machine backups. However, there can be hidden costs, such as expedited retrieval from Amazon Glacier, and paying to store multiple versions of a file in full if your bucket has versioning enabled (unfortunately, deltas of a file's changes aren't stored; how would you do a delta of a video file, for example?). That said, the cost per GB is so low that it still appears to be a significant bargain (e.g. several versions of a file whose sizes ultimately add up to 1 TB can run ~$24.57/month for storage alone, i.e. not counting requests). There are also similar small costs for requests on objects, such as uploading (PUT), downloading (GET), and listing (LIST) versions of files, and those costs depend on each object's particular storage class, with some storage classes having additional rules of their own.

A 12-month free trial is currently offered, with 5 GB of storage, 20,000 GET requests, and 2,000 PUT requests. If you decide to use the cost calculator to estimate your own costs, make sure to click the S3 button/tab on the left (it defaults to EC2). You might consider trying it out, even if the billing is a fair bit complex. If you have more questions about it, you can always have someone from AWS contact you.
 
I have to disagree with this: the part about library code not being libraries at all.

First off, I am not entirely sure I am interpreting what you're saying correctly. The libraries are often the real problem. The need to bring in such huge amounts of overhead just to use one routine is mind-numbingly stupid.

This means the port requires compiling some massive library (I recall back in the day compiling X apps, and pango/jango/dango whatever it was took forever).

Yeah, I can recall with sadness and loathing some kitchen-sink libs where whoever assembled them had been unclear on the concept. But nearly all the ones that got used regularly seem --in my memory, at least-- to have been better than that. And we had a separate linker that knew how to prune out routines that never got called. But I don't recall there being a lot of dynamic linking going on then; it was all static. Maybe that made the difference.

When I compile a FreeBSD port these days, I see stuff being fetched that certainly doesn't follow lib naming conventions. I'm too busy to research it in detail, but my impression has always been that they're chunks of purpose-written source that get dragged over and approximately stitched in.

It seems this mess is a direct consequence of open source, licensing and perhaps even poor design and programming.

Open source
By its nature, people use existing routines to perform a task rather than writing them themselves. While it's extremely simple to write a JSON parser, for example, most will not bother and instead pull in some JSON library with tens of useless functions, given that all they want is a parser. So the compile time and overall package size of their port become bloated, when a little effort on their part could have radically reduced size and maintenance.

Obviously, there will/may be times when your time/expertise makes that not an option, but I'm not totally sold on that argument. Get others to help if it's a good project.

I'm a contemporary of the dinosaurs, so I always default to writing my own stuff, and view innovation as just another opportunity to chase bugs.

Licensing
The licensing of most of the software, at a guess, seems to be GPL, especially some of the big libraries. By its nature, you cannot pull a piece of GPL software out and include it in your software if you don't have a compatible (aka able to be subsumed by GPL) license. More bloat.

I've only ever paid attention to licensing at a contract level, so I didn't know there are issues with linking in GPL'd routines.

Poor design/programming
I will just leave it as an exercise for others, but I've seen enough open source code while auditing it for use at work to know I wouldn't write such junk, and I would probably be sacked if I did. Most of the problems are security flaws: simple buffer overflows, stack abuse, poor memory management (alloc with no free, or freeing the wrong pointer), and so on.

So, taking into account poor design/programming, a lot of these library writers pull in other libraries to achieve certain things, not realising or blatantly ignoring the security issues of doing so. They assume it's OK, and that's a recipe for disaster.

Now, having said all this, most of these coding problems will probably just result in the thing segfaulting, and if you're running Linux, systemd will mindlessly restart it for you and your problem's solved (isn't it?)... ;)

API design
People write libraries, then suddenly change the API/calling routines, and *BAM*: if you use that library, you need to rewrite your code.
Now you must pin version 1.1.2.3-4.5_12839 because it's not the same as 1.1.2.3-4.5_12001. Another library, the same story; almost.

Interdependence
I was compiling an ARMv7 port using poudriere on an x64 box, and it had a dependency on GCC 9! Not 8, not 7, but 9. Not Clang, not lcc, not the compiler already installed, NOPE, GCC 9. So you have to wait 4 days for that piece of bloatware to compile so that one library in a port can be compiled with GCC 9. Oh Lord!

You're describing what I really mean by the "no libraries" rubric: no standards. No coherence. 10 different compilers, imported code pieces that want 3 different versions of Python and 2 of sqlite (tho why anyone wants sqlite at all is beyond me), loose pointers, sketchy documentation, and conflicts that must be reconciled over and over again til the sun goes nova. Which really boils down to No Common Libraries. Everybody rolls their own, locally. And then the ports guys have to figure out how to reconcile all the disconnects.

Summary
While I do agree with you that it would be ideal to have a body that sets standards for open source software, in reality that is an impossibility by the nature of open source. It isn't going to happen. The bloat will grow (as it has in the past), and we will eventually implode due to library usage in our software.

Having an agreed set of standards would for sure be best. But even having a set of porting libraries would go a long way toward de facto standards, as the FreeBSD porters/maintainers up at the sharp end experience the world.
 