Proposal: For browser survival in FreeBSD

Well, I started to build www/firefox last night but realized it requires pulseaudio. I'm not sure if it is a build or runtime requirement, but I thought I read it is a build requirement only. I don't have anything against it per se, but I was trying to avoid it because, frankly, sound works just fine without it and I didn't see the point of it. I even have
Code:
OPTIONS_UNSET = PULSEAUDIO
in my /etc/make.conf but this didn't matter - apparently a hard requirement?
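For reference, the options framework has both a global and a per-port spelling; www_firefox_UNSET is my guess at the per-port variable name, and if pulseaudio is an unconditional dependency of the port rather than an option, neither line will have any effect (which would explain what I'm seeing):
Code:
# in /etc/make.conf - disable the option for every port:
OPTIONS_UNSET+=	PULSEAUDIO
# ...or only for www/firefox (www_firefox is my guess at the options name):
www_firefox_UNSET+=	PULSEAUDIO
A cd /usr/ports/www/firefox ; make showconfig should show whether the option actually took; if libpulse still shows up as an unconditional dependency, no make.conf knob will remove it.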

I may just build firefox anyway. My beef with the author of pulseaudio is with his other (unnamed) project, not pulseaudio.
 
What about Seamonkey makes it so attractive to you? Familiarity, perhaps? Put a different way, what is lacking in other implementations you've tried?
Yes familiarity is what I find pleasing.
When I go to Firefox it seems they change the security settings location with every version.
I get the feeling that they don't want you to find the settings.

I am sorry to vent about my problems publicly, and I am glad nobody gave me any flak.
If I were capable, I would re-invigorate the port. It was de-orbited on July 3, 2019, and I miss it.
Obviously the browser's backend has changed since 1994, but visually it is close to the original implementation.

I also like SeaMonkey because it is a small group of volunteers who pull from the Firefox codebase and keep everything on the same preferences pages. No radical changes. XUL being removed from Mozilla has caused trouble.

There is no way you could compare SeaMonkey to Firefox. Mozilla has over 80 employees and SeaMonkey has 2 or 3 volunteers.
Yahoo paid Mozilla $375 million a year to make it the default search engine. Mozilla has over $400 million in yearly revenue.
SeaMonkey only gets user donations.
To compare the security vulns of Firefox versus SeaMonkey is absurd. Firefox has a faster release cycle and larger development team.
 
You could maintain a port of a static build of e.g. firefox. Mozilla still distributes static builds for Linux, so there should be some instructions on the WWW somewhere...

As I said, I am not a black belt when it comes to C++, Makefiles, Poudriere, etc. I am very weak on these technologies; I never have occasion to work at that level. I rarely write some C, but to be honest, it is mostly for Arduino, so that is another story entirely.

Anyhow it is good to know that there is still a binary package around.
 
Yes familiarity is what I find pleasing.
When I go to Firefox it seems they change the security settings location with every version.
I get the feeling that they don't want you to find the settings.
Firefox Quantum (i.e. Firefox 57+) was a huge redesign, and that extended to the browser's about:preferences page as well as under-the-hood performance improvements. For example, preferences are now grouped into five primary categories, which each have one or more sections:
  • General
  • Home
  • Search
  • Privacy & Security
  • Sync
As you can see, the security section is immediately available. I personally set the history preference to "Never remember history", which enables permanent Private Browsing mode. Bookmark management and saved passwords are still functional; it simply means no history, cookies, or other site data are saved.
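For what it's worth, that GUI setting maps (as far as I know) to a single preference, so it can also be pinned in user.js; the profile directory below is just a placeholder, since the real one has a random prefix:
Code:
# placeholder profile path - substitute your actual profile directory
PROFILE="$HOME/.mozilla/firefox/xxxxxxxx.default"
cat >> "$PROFILE/user.js" <<'EOF'
// "Never remember history" == permanent Private Browsing
user_pref("browser.privatebrowsing.autostart", true);
EOF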

Maybe you'll like it, maybe you'll hate it. Either way, I can't get the deleted SeaMonkey 2.49.4_27 port to even build on 12.1-RELEASE-p2; it doesn't even finish make fetch due to a long list of vulnerabilities, and it was revision 505753 that deleted the port, citing security reasons. Of course, I could ignore those vulnerabilities and build anyway, but it just doesn't seem right to do so, though this may just be a personal preference.
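For the record, "ignoring those vulnerabilities" is a standard ports knob rather than a hack; a sketch of what it would look like, assuming you have the deleted port checked out of the old ports tree, and emphatically not a recommendation:
Code:
pkg audit -F                    # refresh the local vulnerability database first
cd /usr/ports/www/seamonkey     # assumes the deleted port has been restored locally
make DISABLE_VULNERABILITIES=yes install clean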

There is no way you could compare SeaMonkey to Firefox. Mozilla has over 80 employees and SeaMonkey has 2 or 3 volunteers.
...
To compare the security vulns of Firefox versus SeaMonkey is absurd. Firefox has a faster release cycle and larger development team.
That may be true, but I'd liken SeaMonkey to a ship full of holes: it eventually starts to feel like a lost cause to continue plugging those existing holes when more holes appear out of nowhere. Mozilla's primary focus was originally on security. If I recall correctly, that was what originally sold Firefox to the Windows crowd who was tired of fighting viruses, worms, etc. entering the system through Internet Exploder's nearly legendary number of security exploits.

The volunteers working on SeaMonkey, however, are prioritizing porting the functionality of Firefox ESRs into new versions of their software, and if there are so few volunteers as you say, there's simply no room to think about security. An exploit arises, and they [may] patch it, but an official patched release isn't made available terribly quickly, leaving users of the software vulnerable to those exploits until they upgrade.

And with the FreeBSD port gone, you're effectively stuck at a partially patched version of 2.49.4 unless you download and compile the 2.49.5 source yourself from Mozilla's FTP site, leaving you vulnerable to any unmitigated security issues. If I knew anything about maintaining ports and Seamonkey's build process, I'd consider volunteering to make 2.49.5 available, but the idea of maintaining a sinking ship makes me feel like I'd be wasting my time that could be spent on other, more actively maintained ports. :-/
 
[...]
Maybe you'll like it, maybe you'll hate it. Either way, I can't get the deleted SeaMonkey 2.49.4_27 port to even build on 12.1-RELEASE-p2; it doesn't even finish make fetch due to a long list of vulnerabilities, and it was revision 505753 that deleted the port, citing security reasons. Of course, I could ignore those vulnerabilities and build anyway, but it just doesn't seem right to do so, though this may just be a personal preference.


That may be true, but I'd liken SeaMonkey to a ship full of holes: it eventually starts to feel like a lost cause to continue plugging those existing holes when more holes appear out of nowhere. Mozilla's primary focus was originally on security. If I recall correctly, that was what originally sold Firefox to the Windows crowd who was tired of fighting viruses, worms, etc. entering the system through Internet Exploder's nearly legendary number of security exploits.
[...]

I may have been looking in the wrong repository but I recall last year looking at seamonkey and the source hasn't been touched since 2017.

Again, I may be wrong.
 
3] static linking. I understand that may not be fashionable. But that is the best I can figure out to actually improve the status quo. The status quo being: if you run "pkg install foobarbaz", something else may be upgraded by necessity, with the consequence that your browser no longer works.

Ah, now I see the problem.

Well, FreeBSD is mostly a server OS, and there it is supposed to have a software deployment scheme. That means, have control over what is put into the repository from where pkg fetches the stuff.
I'm doing it that way - actually I do not even see a difference between a server and a desktop besides that the desktop has a graphics card inserted and a mouse attached and an X server installed (which is just a network application server, anyway). And I have no such problems (using firefox-esr) - and yes, I need a browser, because my database GUI frontend happens to be ruby-on-rails, which is actually a web application builder.
 
I recall last year looking at seamonkey and the source hasn't been touched since 2017.
According to their releases page, «SeaMonkey 2.53.1 Beta 1 Released January 18, 2020».
Of course, it's some way behind the "main line", but not by too much:
SeaMonkey 2.53.1 Beta 1 uses the same backend as Firefox and contains the relevant Firefox 60.3 security fixes.

SeaMonkey 2.53.1 Beta 1 shares most parts of the mail and news code with Thunderbird. Please read the Thunderbird 60.0 release notes for specific changes and security fixes in this release.
 
I don't really understand why a browser and an email client should be components of the same application (here, SeaMonkey). Having them separate looks much preferable to me.

I currently use Thunderbird as email client, not because it is a good application, but only because I couldn't find anything better - kind of the "least worse" option. ;) :/

When your only choice is between Thunderbird and Evolution, your decision is quick and easy... :/ Other email clients are either just buggy toys, or lack essential features such as CardDAV and CalDAV support, excluding them as viable alternatives.

A potential solution for this would be to set up a private webmail with CardDAV and CalDAV plugins, in which case I would no longer need an unsatisfying "heavy" email client.
 
When your only choice is between Thunderbird and Evolution, your decision is quick and easy... :/ Other email clients are either just buggy toys, or lack essential features such as CardDAV and CalDAV support, excluding them as viable alternatives.

I use mail/neomutt with CardDAV (I don't know about CalDAV because I don't use it), but you need to put the parts together. 👀
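One way to put the parts together, sketched with vdirsyncer for the CardDAV sync and khard for lookups; these are common choices but assumptions on my part, and the exact package names should be verified with pkg search:
Code:
pkg install neomutt khard py37-vdirsyncer   # package names are approximate
vdirsyncer discover && vdirsyncer sync      # pull the CardDAV address book locally
# then, in neomuttrc, wire the address book into the query command:
#   set query_command = "khard email --parsable %s"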
 
I don't really understand why a browser and an email client should be components of the same application (here, SeaMonkey). Having them separate looks much preferable to me.

I have a hypothesis on this. Remember how, about 25 years ago, to connect to the Internet you had to download... ehm... get WinSock (I was a boy; Windows was still the only OS you could reasonably have).

Well, at that time, updating your computer to connect to the Internet meant making it able to connect in general. So it made sense to make a box of common applications for "Connection". OK, maybe you, like me, wanted telnet, gopher, CU-SeeMe, finger, an app for groups, etc., but for the general public the all-comprehensive box was: Internet browser + mail program.

Still today, I guess for most people the Internet is just web and mail. It makes a kind of sense to make a suite, if you have the strength to make two good products.
 
Ah, now I see the problem.

Well, FreeBSD is mostly a server OS, and there it is supposed to have a software deployment scheme. That means, have control over what is put into the repository from where pkg fetches the stuff.
I'm doing it that way - actually I do not even see a difference between a server and a desktop besides that the desktop has a graphics card inserted and a mouse attached and an X server installed (which is just a network application server, anyway). And I have no such problems (using firefox-esr) - and yes, I need a browser, because my database GUI frontend happens to be ruby-on-rails, which is actually a web application builder.

This does not correspond to my understanding of Firefox ESR. Which is good in the following scenario. You develop your web app today, test on Firefox ESR, and then say to your customer: "If you want to use my app as it was designed, without flaws, for the next ~5 years, use Firefox ESR". This is a great thing.

But, it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a pkg install|upgrade foobar. If you use Qt applications, you have seen this happen several times.

My knowledge of the ports system is limited but, to the best of my understanding, if you do cd /usr/ports/www/firefox-esr ; make run-depends-list you can see that it depends on 40 packages. For Firefox it is the same number. For Chromium it is 54.
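For example (the exact numbers will of course vary with the ports tree revision and the options you set):
Code:
cd /usr/ports/www/firefox-esr
make run-depends-list | wc -l    # direct run-time dependencies
make all-depends-list | wc -l    # the full recursive list is far longer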

If tomorrow Dolphin or Okular don't work right, you may be pissed, but still, your daily productivity is not totally mangled. Not so for the browser. When it stops working it creates problems. So, the proposal is to have a browser which is as independent as possible from the other packages (being statically compiled); a standalone application which you can expect to work until you change your OS version.
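There is one stopgap that already exists today, though it is not what I am proposing: pkg lock keeps the browser package itself from being touched, but it does nothing for the shared libraries it depends on, so it only narrows the window rather than closing it:
Code:
pkg lock firefox      # refuse to upgrade, reinstall or remove this package
pkg upgrade           # everything else still upgrades; firefox stays as it is
pkg unlock firefox    # when you are ready to deal with the browser again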

Extra stuff.
1] I am using Firefox these days, and I see a big improvement compared to about 1-2 years ago. It is working well.

2] Somebody on Bugzilla has replicated the issue I have. And for him, it seems that if he right-clicks, the browser unlocks.

3] I used Ruby for a few system services recently (not Rails). It is an extremely beautiful language. I like JRuby especially. My approach to the web is JavaScript on both sides. Node on the server. Nginx in front of it. I am personally a big fan of single-page web applications. I don't use "complex frameworks". JavaScript+HTML+CSS+jQuery, this is my framework.
 
Ah, now I see the problem.

Well, FreeBSD is mostly a server OS, and there it is supposed to have a software deployment scheme. That means, have
I am reading "server oriented" all the time, but
FreeBSD is a UNIX-like operating system for the i386, amd64, IA-64, arm, MIPS, powerpc, ppc64, PC-98 and UltraSPARC platforms based on U.C. Berkeley's "4.4BSD-Lite" release, with some "4.4BSD-Lite2" enhancements. It is also based indirectly on William Jolitz's port of U.C. Berkeley's "Net/2" to the i386, known as "386BSD", though very little of the 386BSD code remains. FreeBSD is used by companies, Internet Service Providers, researchers, computer professionals, students and home users all over the world in their work, education and recreation. FreeBSD comes with over 20,000 packages (pre-compiled software that is bundled for easy installation), covering a wide range of areas: from server software, databases and web servers, to desktop software, games, web browsers and business software - all free and easy to install.
They should put "server oriented" on there, and, as drhowarddrfine has written many times, "just for professionals..." - but people don't see those on the FreeBSD site.
 
They should put "server oriented" on there

There is the ethos of FreeBSD and then there is the reality of FreeBSD development. The point of the project is to provide a quality general purpose OS, suitable as a basic building block for pretty much everything: server, desktop, embedded applications. In practice, individual developers have their own priorities. The sponsored work is, understandably, heavily skewed towards very specific corporate use cases as well.
 
Which is good in the following scenario. You develop your web app today, test on Firefox ESR, and then say to your customer: "If you want to use my app as it was designed, without flaws, for the next ~5 years, use Firefox ESR". This is a great thing.

But, it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a pkg install|upgrade foobar. If you use Qt applications, you have seen this happen several times.

Which means that you did not do a good job developing your web app today. If you want to go down your path of tightly integrating your app with a particular version of the browser, then you need to tell your user also: Please run my app on a computer that is running version X.Y.Z of the operating system, and version A.B.C of the Firefox ESR browser, and versions L.K.M of all the other middleware that influences the behavior of your app. This is actually commonly done in large corporate settings (for example data centers, where upgrades are very rare, and very well orchestrated and tested), and in the embedded world. One way to do it is to say: We will integrate and test the complete suite of all systems (hardware and software) exhaustively, and then install and run that version for a long period, perhaps 6 months or a year. While that version is in production, we will do either no changes at all, or only the absolute minimum necessary to fix gaping security holes. And we spend the 6 months or year testing the next version on a separate test system, to get ready for the next major upgrade. Underlying this is the philosophy of "never touch a running system", which I like to describe with this joke: To administer a computer you need a man and a dog; the man is there to feed the dog, and the dog is there to bite the man if he tries to mess with the computer.

But that way of deploying systems really means that you have not architected dependability into your system (hardware, software, middleware, limpidware, ...), but instead you tested it into the system. And we all know that you can not test quality into software. A house of cards remains a house of cards and will be fragile, even if it currently seems to be standing on its own.

So the second approach is to architect dependability and quality into the system. For example, for a compiled application: Look at the exact specification of the interfaces that the application needs. I'll focus on one particular thing here for concreteness, and for example say: I will write my application using exactly nothing other than specified and documented interfaces from POSIX.1-2008, using C++ exactly following the ISO C++14 standard. And when developing my application, I will audit all uses of language features and OS interfaces to make sure I only use things from standard C++ and POSIX. At this point, my application will work on any system that supports those interfaces, upgrades or not. And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.

This might sound implausible, but a lot of high-quality development is done that way. You start by writing down requirements for the development (like: it shall work on any system that supports POSIX version X, programming language version Y, Unicode version Z, ...), and then stick to the requirements document. In the case of web applications, it for example means writing down exactly which subset of Javascript will work compatibly on all supported browsers, and only using that subset, or explicitly write libraries that correct for incompatibilities. If you do that, then your application will become independent of running on browser A versus B. And we know this is possible. The example that impresses me most is the office applications that today run in browsers (like Microsoft Office in the online version, Google docs/sheets/... suite, and most amazingly MS Visio). These applications run perfectly, and they do so on Chrome, Safari, and Edge. And while they are extensively tested, their good behavior isn't tested and then band-aided into them, they are designed for browser independence.

In a nutshell: Your proposal of freezing a certain browser version is really an admission that some software is badly written, and is a house of cards.
 
This does not correspond to my understanding of Firefox ESR.

Maybe that's because I don't have an understanding. I just need the thing, so I build it and use it.

But, it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a pkg install|upgrade foobar. If you use Qt applications, you have seen this happen several times.

That won't happen. It might only "stop working" if I upgrade one of the packages in the run-depends list. And that will also not happen, because if I upgrade a package in the run-depends list, my build system will automatically recompile firefox as well.
And no, this is not about "professional use", this is learned by long experience of running into problems, debugging and fixing the problems, and then writing scripts that make certain the problem will never appear again.
Because that's what I do: if I run into a problem, I do not waste time writing proposals. I figure out what the problem is, and try to fix it. Then, if it is of general interest and if I can reach the developers (this is usually the greatest difficulty) of the component that actually has the problem, I tell them and maybe suggest a fix (the last time this happened here and here).
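What such an automatic recompile looks like in practice depends on the tooling; with poudriere, for example, it is roughly the following, since poudriere rebuilds any package whose dependencies changed (jail name and version are just examples, and this is a sketch, not necessarily the exact setup used here):
Code:
# one-time setup
poudriere jail -c -j 121amd64 -v 12.1-RELEASE
poudriere ports -c -p default
# routine rebuild: firefox-esr is rebuilt whenever one of its dependencies changed
poudriere bulk -j 121amd64 -p default www/firefox-esr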

My knowledge of the ports system is limited but, to the best of my understanding, if you do cd /usr/ports/www/firefox-esr ; make run-depends-list you can see that it depends on 40 packages. For Firefox it is the same number. For Chromium it is 54.

Yes, and I am surprised that all of this does work well. (There seem to be an awful lot of people who do not write proposals and just solve the problems.)

3] I used Ruby for a few system services recently (not Rails). It is an extremely beautiful language. I like JRuby especially. My approach to the web is JavaScript on both sides. Node on the server. Nginx in front of it. I am personally a big fan of single-page web applications. I don't use "complex frameworks". JavaScript+HTML+CSS+jQuery, this is my framework.
The web didn't ask me if I want it. I just needed a GUI for my postgres, and in 2008 RoR appeared to be the only thing offering programmable extensions. That's also what I do: if I come across something, I don't state that I have no skills for it, I just start to learn it.
 
The web didn't ask me if I want it. I just needed a GUI for my postgres, and in 2008 RoR appeared to be the only thing offering programmable extensions. That's also what I do: if I come across something, I don't state that I have no skills for it, I just start to learn it.


NO! All wrong! Your opinion shows you have never worked on a complex problem.

You need other people's abilities to solve complex tasks.

And the first thing is honesty with yourself. Otherwise we are all kernel driver writers, and we can also lay down a neural net, invest our bond portfolio with it, design a quantum computing algorithm and, why not, build the quantum computer itself. This is not the reality. I am sorry.

A few examples. Do you think McCarthy wrote Lisp? No! He designed it. Do you think Alan Kay wrote Smalltalk or the GUI? No! He had the seminal ideas, but there was a bunch of great hackers with him. Steve Jobs ... ?

There is no scarcity of examples. You may have an idea and not be the perfect person to build it. If you have the humility to admit it, your idea may be built by somebody else who recognizes its usefulness and can do it well and easily.

My "idea" in this case is a humble static package; it is nothing revolutionary, but it would solve an issue that I guess several FreeBSD users have had and will have in the future.
 
I fully support this idea - app bundles. MacOS does a great job of this. You never run into dependency hell there with common desktop stuff. I actually think it would be beneficial for all users if all complex desktop-oriented software packages for FreeBSD would be statically compiled or supplied in MacOS-style bundles. This would definitely save lots of headache and improve the overall stability of the system.
 
Which means that you did not do a good job developing your web app today. If you want to go down your path of tightly integrating your app with a particular version of the browser, then you need to tell your user also: Please run my app on a computer that is running version X.Y.Z of the operating system, and version A.B.C of the Firefox ESR browser, and versions L.K.M of all the other middleware that influences the behavior of your app. This is actually commonly done in large corporate settings (for example data centers, where upgrades are very rare, and very well orchestrated and tested), and in the embedded world. One way to do it is to say: We will integrate and test the complete suite of all systems (hardware and software) exhaustively, and then install and run that version for a long period, perhaps 6 months or a year. While that version is in production, we will do either no changes at all, or only the absolute minimum necessary to fix gaping security holes. And we spend the 6 months or year testing the next version on a separate test system, to get ready for the next major upgrade. Underlying this is the philosophy of "never touch a running system", which I like to describe with this joke: To administer a computer you need a man and a dog; the man is there to feed the dog, and the dog is there to bite the man if he tries to mess with the computer.

But that way of deploying systems really means that you have not architected dependability into your system (hardware, software, middleware, limpidware, ...), but instead you tested it into the system. And we all know that you can not test quality into software. A house of cards remains a house of cards and will be fragile, even if it currently seems to be standing on its own.

So the second approach is to architect dependability and quality into the system. For example, for a compiled application: Look at the exact specification of the interfaces that the application needs. I'll focus on one particular thing here for concreteness, and for example say: I will write my application using exactly nothing other than specified and documented interfaces from POSIX.1-2008, using C++ exactly following the ISO C++14 standard. And when developing my application, I will audit all uses of language features and OS interfaces to make sure I only use things from standard C++ and POSIX. At this point, my application will work on any system that supports those interfaces, upgrades or not. And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.

This might sound implausible, but a lot of high-quality development is done that way. You start by writing down requirements for the development (like: it shall work on any system that supports POSIX version X, programming language version Y, Unicode version Z, ...), and then stick to the requirements document. In the case of web applications, it for example means writing down exactly which subset of Javascript will work compatibly on all supported browsers, and only using that subset, or explicitly write libraries that correct for incompatibilities. If you do that, then your application will become independent of running on browser A versus B. And we know this is possible. The example that impresses me most is the office applications that today run in browsers (like Microsoft Office in the online version, Google docs/sheets/... suite, and most amazingly MS Visio). These applications run perfectly, and they do so on Chrome, Safari, and Edge. And while they are extensively tested, their good behavior isn't tested and then band-aided into them, they are designed for browser independence.

ralphbsz, this would be material for a great discussion over a beer in a bar :)

I add a few considerations.

Provocation: Software is a house of cards. You change a tiny bit and it all falls apart. We are very far from the "stability" you have in mechanical engineering structures. (Very rough, but I guess you understand what I mean.) Some time ago in Windows, a wrong pointer in whatever app and goodbye OS, reboot. Unix is better, but still: a mistake in a driver, you connect your device and hopla, system down.

Stability of Microsoft web apps. Umm, I think the stability of web apps backed by big corporations is not due to a great a-priori study of each browser. It is instead a continuous correction of bug reports. This is a good thing about web apps: you can correct all the time, and insert some new bugs; nobody will know until they hit them ;) ... but you need to be big to do continuous development.

To my knowledge there does not exist a specification for each browser. You try your code and see what breaks. CSS is interpreted a bit differently in each one. The only true constant I found in the web world is JavaScript. That is kind of a solid base, and cross-browser.

In terms of specifications, maybe Android is the worst I had to deal with: you write and test on a phone, OS version X, release Y.Z; then you load the app onto another brand of phone, same OS, same release, and it does not work. This happened to me several times. Vendors customize the OS and make an Olympic mess. (One name for all, just to be concrete: Samsung.)

And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.

But Android users do not care if vendor Q broke things. They see that your app does not work. It is your fault. You must ensure it works! Everywhere? ... yes, that is impossible.

IMO, Firefox ESR is good, as you say, when you are targeting a corporation or an institution that needs an internal service, because they want something that works and does not break, and they are not willing to pay for your continuous adjustments to the code.

bye
n.
 
Software is a house of cards. You change a tiny bit and it all falls apart. We are very far from the "stability" you have in mechanical engineering structures.
I disagree. While lots of consumer-facing software (in particular that written by amateurs for free) is very bad and very unstable, there is also a lot of extremely well written software. Supercomputer centers (with tens of thousands of nodes) that only need to be "rebooted" or shut down once a year for power distribution maintenance, and run around the clock the rest of the year. The stuff that runs data centers for the great cloud companies. The embedded systems that run cars and dishwashers and airplanes (with a few famous exceptions, like the 737 Max, which was not a software bug but a training and hardware issue). The most famous example is the software (written in the 1960s) that ran the life support system on the Apollo capsules: they carefully measured the bug rate, and it was zero. Meaning they never found a bug, in spite of very careful checking. It also had a productivity of about one line of code per engineer-month. But that was considered a good investment, since a bug would have killed 3 astronauts. Along the same lines: When was the last time you did a google search and got an error page back, saying "due to a bug we can not give you a result"?

Really good software exists. We, as a profession, even know how to do really good software engineering. But good engineering comes at a cost, and it's not always worth investing in. And quite a few people (or companies or cultures) don't even know how to do it.

Stability of Microsoft web apps. Umm, I think the stability of web apps backed by big corporations is not due to a great a-priori study of each browser. It is instead a continuous correction of bug reports.
You're not all wrong here. The big developers of web-based applications clearly have enormous teams, including really good quality control and bug fixing. But: fixing bugs takes time. If you are using a spreadsheet on the web, and it calculates a wrong number, or it crashes on you, the bug that caused that will not be fixed for many days or weeks. Just rerunning all the tests after a bug fix will take considerable time. So the bug-free-ness of these great big web applications is mostly engineered into them, not added by fixing bugs.

This is actually a version of a very general statement: YOU CAN NOT TEST QUALITY INTO SOFTWARE. I'm deliberately writing this in bold uppercase, because it is important. Good (reliable, efficient, ...) software is that way because it is architected that way, and at all parts of the software production process (from collecting requirements through design, implementation, test and packaging), the people who do it have the resources required to do a good job, and they have a culture of wishing to do a good job. You can't get to good software by hacking something together, and then testing it and fixing bugs until it is perfect. That's because in a bad design, and with ill-understood requirements, every bug fix will create more problems. I was actually once in an organization that carefully measured this (yes, good engineering is driven by metrics), and we had reached the point where our source base was so awful that, on average, every bug fix created 0.7 new bugs. Yes, testing and bug fixing is important, but it is mostly to fix minor nits and misunderstandings, and to measure and validate how well your software development process is really working.

To my knowledge there does not exist a specification for each browser. You try your code and see what breaks. CSS is interpreted a bit differently in each one. The only true constant I found in the web world is JavaScript.
And that's why good app developers use Javascript mostly today (or other languages that run in the browser), and have carefully mapped the rendering incompatibilities of browsers. You can even get textbooks that explain differences between browsers. When developing browser-based software, you don't do it by "try throwing it against the wall and seeing whether it sticks", but by consulting documentation of how each browser handles what. No, those are not specifications distributed by the browser vendors, but documentation that is either publicly available (for example w3schools has compatibility charts), or much more detailed documentation developed internally by software development companies.

The funny thing is, by the way, that in the early 2000s, Javascript support between the various browsers was an AWFUL mess. I tried to do some AJAX work in the early 2000s, and at that time you had to spend a lot of your effort on compatibility libraries between browsers. This has improved massively today.
 