OPTIONS_UNSET = PULSEAUDIO
I believe so. I've been using Firefox on FreeBSD for many years, always installed it with pkg, and never installed or used audio/pulseaudio. I thought I read it is a build requirement only.
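If PulseAudio really is only a build-time option, it can be switched off when building from ports. A sketch of the relevant /etc/make.conf lines — the per-port variable name www_firefox_UNSET follows the ports OPTIONS framework, and whether the port actually exposes a PULSEAUDIO option is worth verifying first with make -C /usr/ports/www/firefox showconfig:

```
# /etc/make.conf -- turn the PULSEAUDIO port option off globally...
OPTIONS_UNSET= PULSEAUDIO

# ...or only for www/firefox
www_firefox_UNSET= PULSEAUDIO
```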
What about SeaMonkey makes it so attractive to you? Familiarity, perhaps? Put a different way, what is lacking in other implementations you've tried?
You could maintain a port of a static build of e.g. firefox. Mozilla still distributes static builds for Linux, so there should be some instructions on the WWW somewhere...
Firefox Quantum (i.e. Firefox 57+) was a huge redesign, and that extended to the browser's about:preferences page as well as under-the-hood performance improvements. For example, preferences are now grouped into five primary categories, each of which has one or more sections.

Yes, familiarity is what I find pleasing.
When I go to Firefox it seems they change the security settings location with every version.
I get the feeling that they don't want you to find the settings.
There is no way you could compare SeaMonkey to Firefox. Mozilla has over 80 employees and SeaMonkey has 2 or 3 volunteers.
...
To compare the security vulns of Firefox versus SeaMonkey is absurd. Firefox has a faster release cycle and a larger development team.
[...]
Maybe you'll like it, maybe you'll hate it. Either way, I can't get the deleted SeaMonkey 2.49.4_27 port to even build on 12.1-RELEASE-p2; it doesn't even finish make fetch due to a long list of vulnerabilities, and it was revision 505753 that deleted the port, citing security reasons. Of course, I could ignore those vulnerabilities and build anyway, but it just doesn't seem right to do so, though this may just be a personal preference.
That may be true, but I'd liken SeaMonkey to a ship full of holes: it eventually starts to feel like a lost cause to continue plugging those existing holes when more holes appear out of nowhere. Mozilla's primary focus was originally on security. If I recall correctly, that was what originally sold Firefox to the Windows crowd who was tired of fighting viruses, worms, etc. entering the system through Internet Exploder's nearly legendary number of security exploits.
[...]
3] Static linking. I understand that may not be fashionable, but that is the best I can figure out to actually improve the status quo. The status quo being: if you run "pkg install foobarbaz", something else may be upgraded by necessity, with the consequence that your browser no longer works.
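Whether a binary is dynamically linked, and which shared libraries an upgrade could therefore break, can be inspected with ldd. A small sketch, using /bin/sh as a stand-in for any binary — substitute the browser's path (on FreeBSD, Firefox is assumed to install under /usr/local/lib/firefox/):

```shell
#!/bin/sh
# List the shared libraries a binary is linked against; a statically
# linked binary reports no such dependencies, so a library upgrade
# cannot pull the rug out from under it.
ldd /bin/sh

# Count the shared objects the binary depends on
ldd /bin/sh | grep -c '=>'
```

A statically linked build sidesteps this list entirely, at the cost of larger binaries and having to rebuild the whole thing for every library security fix.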
I recall last year looking at SeaMonkey and the source hadn't been touched since 2017.

According to their releases page, «SeaMonkey 2.53.1 Beta 1 Released January 18, 2020».
SeaMonkey 2.53.1 Beta 1 uses the same backend as Firefox and contains the relevant Firefox 60.3 security fixes.
SeaMonkey 2.53.1 Beta 1 shares most parts of the mail and news code with Thunderbird. Please read the Thunderbird 60.0 release notes for specific changes and security fixes in this release.
When your only choice is between Thunderbird and Evolution, your decision is quick and easy... :/ Other email clients are either just buggy toys, or lack essential features such as CardDAV and CalDAV support, excluding them as viable alternatives.
I don't really understand why a browser and an email client should be components of the same application (here, SeaMonkey). Having them separate looks much preferable to me.
Ah, now I see the problem.
Well, FreeBSD is mostly a server OS, and there you are supposed to have a software deployment scheme. That means having control over what is put into the repository from which pkg fetches the stuff.
I'm doing it that way - actually I do not even see a difference between a server and a desktop, besides that the desktop has a graphics card inserted, a mouse attached, and an X server installed (which is just a network application server, anyway). And I have no such problems (using firefox-esr) - and yes, I need a browser, because my database GUI frontend happens to be Ruby on Rails, which is actually a web application builder.
I am reading all the time "server oriented", but…
They should put on "server oriented" and, as drhowarddrfine many times wrote, "for professionals just...", but people don't see those on the FreeBSD site.

FreeBSD is a UNIX-like operating system for the i386, amd64, IA-64, arm, MIPS, powerpc, ppc64, PC-98 and UltraSPARC platforms based on U.C. Berkeley's "4.4BSD-Lite" release, with some "4.4BSD-Lite2" enhancements. It is also based indirectly on William Jolitz's port of U.C. Berkeley's "Net/2" to the i386, known as "386BSD", though very little of the 386BSD code remains. FreeBSD is used by companies, Internet Service Providers, researchers, computer professionals, students and home users all over the world in their work, education and recreation. FreeBSD comes with over 20,000 packages (pre-compiled software that is bundled for easy installation), covering a wide range of areas: from server software, databases and web servers, to desktop software, games, web browsers and business software - all free and easy to install.
Which is good in the following scenario. You develop your web app today, test it on Firefox ESR, and then say to your customer: "If you want to use my app as it was designed, without flaws, for the next ~5 years, use Firefox ESR." This is a great thing.
But it is not the problem at hand. Your Firefox, ESR or not, being a regular package, can stop working one day, just after a pkg install|upgrade foobar. If you use Qt applications you have seen this happen several times.
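One partial stopgap here is pkg's lock mechanism: a locked package is skipped by install, upgrade and delete operations. Its limits are worth stating: it also blocks security updates of the locked package itself, and a shared library the package depends on can still be upgraded underneath it unless those packages are locked too. Illustrative commands, assuming the package is named firefox:

```
# Keep pkg from upgrading, reinstalling or removing firefox
pkg lock firefox

# Show which packages are currently locked
pkg lock -l

# Allow changes again
pkg unlock firefox
```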
This does not correspond to my understanding of Firefox ESR.
My knowledge of the ports system is limited but, to the best of my understanding, you can see that if you do

cd /usr/ports/www/firefox-esr ; make run-depends-list

it depends on 40 packages. For Firefox it is the same number. For Chromium it is 54.
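For a quick count rather than reading the full list, the same target can be piped through wc. Illustrative commands, assuming a checked-out ports tree under /usr/ports:

```
# Count run-time dependencies of a few browser ports
make -C /usr/ports/www/firefox-esr run-depends-list | wc -l
make -C /usr/ports/www/firefox     run-depends-list | wc -l
make -C /usr/ports/www/chromium    run-depends-list | wc -l
```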
The web didn't ask me if I want it. I just needed a GUI for my postgres, and in 2008 RoR appeared to be the only thing offering programmable extensions. That's also what I do: if I come across something, I don't state that I have no skills for it, I just start to learn it.

3] I used Ruby for a few system services recently (not Rails). It is an extremely beautiful language. I like JRuby especially. My approach to the web is Javascript on both sides. Node on the server. Nginx in front of it. I am personally a big fan of one-page web applications. I don't use "complex frameworks". Javascript+HTML+CSS+jQuery, this is my framework.
Which means that you did not do a good job developing your web app today. If you want to go down your path of tightly integrating your app with a particular version of the browser, then you need to tell your user also: please run my app on a computer that is running version X.Y.Z of the operating system, version A.B.C of the Firefox ESR browser, and versions L.K.M of all the other middleware that influences the behavior of your app.

This is actually commonly done in large corporate settings (for example data centers, where upgrades are very rare, and very well orchestrated and tested), and in the embedded world. One way to do it is to say: we will integrate and test the complete suite of all systems (hardware and software) exhaustively, and then install and run that version for a long period, perhaps 6 months or a year. While that version is in production, we will do either no changes at all, or only the absolute minimum necessary to fix gaping security holes. And we spend the 6 months or year testing the next version on a separate test system, to get ready for the next major upgrade.

Underlying this is the philosophy of "never touch a running system", which I like to describe with this joke: to administer a computer you need a man and a dog; the man is there to feed the dog, and the dog is there to bite the man if he tries to mess with the computer.
But that way of deploying systems really means that you have not architected dependability into your system (hardware, software, middleware, limpidware, ...), but instead you tested it into the system. And we all know that you can not test quality into software. A house of cards remains a house of cards and will be fragile, even if it currently seems to be standing on its own.
So the second approach is to architect dependability and quality into the system. For example, for a compiled application: Look at the exact specification of the interfaces that the application needs. I'll focus on one particular thing here for concreteness, and for example say: I will write my application using exactly nothing other than specified and documented interfaces from POSIX.1-2008, using C++ exactly following the ISO C++14 standard. And when developing my application, I will audit all uses of language features and OS interfaces to make sure I only use things from standard C++ and POSIX. At this point, my application will work on any system that supports those interfaces, upgrades or not. And if my application breaks after an upgrade, I can be 99% sure to know the culprit: the underlying implementation of C++ or POSIX stopped being standards conforming.
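The "audit all uses of language features and OS interfaces" step can be partly mechanized by asking the compiler to reject anything outside the declared standards. A sketch of a Makefile fragment for that policy — the flags are standard gcc/clang options, and the macro value 200809L selects the POSIX.1-2008 revision:

```
# Strict-conformance build: C++14 plus POSIX.1-2008, nothing else.
#  -std=c++14 -pedantic-errors : error out on compiler extensions to the language
#  -D_POSIX_C_SOURCE=200809L   : have system headers expose only POSIX.1-2008 interfaces
#  -Wall -Wextra -Werror       : promote remaining diagnostics to errors
CXXFLAGS+= -std=c++14 -pedantic-errors -D_POSIX_C_SOURCE=200809L -Wall -Wextra -Werror
```

This does not replace the manual audit (a compiler cannot tell a documented interface from an undocumented one), but it catches accidental reliance on extensions early.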
This might sound implausible, but a lot of high-quality development is done that way. You start by writing down requirements for the development (like: it shall work on any system that supports POSIX version X, programming language version Y, Unicode version Z, ...), and then stick to the requirements document. In the case of web applications, it for example means writing down exactly which subset of Javascript will work compatibly on all supported browsers, and only using that subset, or explicitly writing libraries that correct for incompatibilities. If you do that, then your application will become independent of running on browser A versus B. And we know this is possible. The example that impresses me most is the office applications that today run in browsers (like Microsoft Office in the online version, the Google docs/sheets/... suite, and most amazingly MS Visio). These applications run perfectly, and they do so on Chrome, Safari, and Edge. And while they are extensively tested, their good behavior isn't tested and then band-aided into them; they are designed for browser independence.
I think PCBSD tried bundles with PBI and it did not quite work out.

I don't know specifically about PC-BSD, but I suppose a lot would 'fail' purely because of GPL.
Software is a house of cards. You change a tiny bit and it all falls apart. We are very far from the "stability" you have in mech. eng. structures.

I disagree. While lots of consumer-facing software (in particular that written by amateurs for free) is very bad and very unstable, there is also a lot of extremely well written software. Supercomputer centers (with tens of thousands of nodes) only need to be "rebooted" or shut down once a year for power distribution maintenance, and run around the clock the rest of the year. The stuff that runs data centers for the great cloud companies. The embedded systems that run cars and dishwashers and airplanes (with a few famous exceptions, like the 737 Max, which was not a software bug but a training and hardware issue). The most famous example is the software (written in the 1960s) that ran the life support system on the Apollo capsules: they carefully measured the bug rate, and it was zero. Meaning they never found a bug, in spite of very careful checking. It also had a productivity of about one line of code per engineer-month. But that was considered a good investment, since a bug would have killed 3 astronauts. Along the same lines: when was the last time you did a Google search and got an error page back saying "due to a bug we can not give you a result"?
Stability of Microsoft Web App
Umm, I think the stability of a web app backed by a big corporation is not due to a great a priori study of each browser. It is instead a continuous correction of bug reports.

You're not all wrong here. The big developers of web-based applications clearly have enormous teams, including really good quality control and bug fixing. But: fixing bugs takes time. If you are using a spreadsheet on the web, and it calculates a wrong number, or it crashes on you, the bug that caused that will not be fixed for many days or weeks. Just rerunning all the tests after a bug fix will take considerable time. So the bug-free-ness of these great big web applications is mostly engineered into them, not added by fixing bugs.
To my knowledge there does not exist a specification for each browser. You try your code and see what breaks. CSS is interpreted a bit differently in each one. The only true constant I found in the web world is Javascript.

And that's why good app developers use Javascript mostly today (or other languages that run in the browser), and have carefully mapped the rendering incompatibilities of browsers. You can even get textbooks that explain differences between browsers. When developing browser-based software, you don't do it by throwing it against the wall and seeing whether it sticks, but by consulting documentation of how each browser handles what. No, those are not specifications distributed by the browser vendors, but documentation that is either publicly available (for example, w3schools has compatibility charts) or much more detailed documentation developed internally by software development companies.