Why are there no open source browsers using WebKit as the core engine?

We'll see how Tachyum's story develops in the future. This extremely brilliant chip will be able to execute x86, ARM and RISC-V instructions. Something nice for FreeBSD users to know is that Tachyum has announced that in addition to Linux they have managed to boot and run FreeBSD on their ISA.
Looks like standard template #3 for vaporware announcements. I call BS on this until I see it on my desk.
VLIW is very hard to do. Just check any disassembly of your code on any architecture and count how many instructions there are between conditional branches that do not have a data race. I think the number drops off a cliff at a window size of three, which is why many cores have only two integer pipelines. You can have more instructions in that window if they are very simple ones - there is a design for a VLIW CPU that is stack based and has really simple instructions, so you need several of them to do anything meaningful. Mind you, the last instruction in that cluster has to be "instruction fetch from PC", and indirect/conditional branches were done by fetching from a new address.
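A rough sketch of that counting exercise, assuming an x86-64 binary disassembled with GNU objdump (treating every "j*" mnemonic plus call/ret as a branch is a crude simplification, not a real decoder):

```python
# Count how many instructions sit between branches in a disassembly.
# Assumes GNU objdump with AT&T-syntax x86-64 output.
import subprocess
import sys

def run_lengths(binary):
    out = subprocess.run(["objdump", "-d", binary],
                         capture_output=True, text=True).stdout
    runs, count = [], 0
    for line in out.splitlines():
        parts = line.split("\t")
        if len(parts) < 3 or not parts[2].strip():
            continue                      # skip labels, raw bytes, blank lines
        mnemonic = parts[2].split()[0]
        if mnemonic.startswith("j") or mnemonic in ("call", "callq", "ret", "retq"):
            runs.append(count)            # a branch ends the current run
            count = 0
        else:
            count += 1
    return runs

if __name__ == "__main__":
    runs = run_lengths(sys.argv[1])
    print("average instructions between branches:",
          sum(runs) / max(len(runs), 1))
```

Run it against /bin/ls or your own binary and the average tends to land in the low single digits, which is the point being made here.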
 
It's also very workload dependent. There is a place where VLIW (and similar ideas) has shined, and it's signal/stream processing. For example, if you are getting megabytes of data (think synthetic aperture radar, Reed-Solomon encoding for disk RAID or networking, software-defined radio, self-driving-car LIDAR images), and you end up doing the same integer and floating point ops for millions of cycles at a time, then keeping your ALU or FPU busy and the memory interfaces humming requires excellent optimization of instructions, and this is where VLIW shines. On the other hand, we often have other means of addressing these tasks, such as GPUs, ML accelerators, vector instructions, custom chips, programmable instruction sets, Altera or Xilinx modules on the data path, and so on. There is a reason Intel and AMD bought those two companies, and there is a reason my friend (the microprocessor architect) was the Chief Scientist of one of those companies for a while.
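For a feel of what such a kernel looks like, here is a minimal sketch of a FIR filter over a long sample stream (the sizes and tap values are arbitrary placeholders): the hot loop is the same multiply-accumulate repeated millions of times with no data-dependent branches, which is exactly what VLIW/DSP-style hardware can keep full.

```python
# Minimal sketch of a stream-processing kernel: a FIR filter.
# The inner loop is pure multiply-accumulate with no data-dependent
# branches; sizes and coefficients are arbitrary placeholders.
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(1_000_000)   # e.g. software-defined-radio samples
taps = np.array([0.25, 0.5, 0.25])        # placeholder filter coefficients

filtered = np.convolve(signal, taps, mode="valid")
print(filtered[:5])
```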

Now contrast those workloads with something like an interpreter that has to run Python or JS code, or that has to perform XML or JSON encoding/decoding at high speeds (this is a significant fraction of modern workloads). This is where Crivens' observation of "one branch every 3rd instruction" hits hard. This kind of workload kills simple-minded models of efficiency.
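To make that concrete, here is a toy bytecode dispatcher (the opcode set is made up for illustration): every interpreted instruction costs a loop-bound check plus a data-dependent dispatch branch, so there is almost no straight-line window for a wide issue engine to fill.

```python
# Toy bytecode interpreter: every instruction executes at most a handful
# of operations before hitting another hard-to-predict branch.
# The opcode set is invented for illustration.
def interpret(code, stack):
    pc = 0
    while pc < len(code):            # one conditional branch per instruction...
        op, arg = code[pc]
        if op == "PUSH":             # ...plus a data-dependent dispatch branch
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "JNZ":            # and sometimes a third, in the opcode itself
            if stack.pop() != 0:
                pc = arg
                continue
        pc += 1
    return stack

print(interpret([("PUSH", 2), ("PUSH", 3), ("ADD", None)], []))  # -> [5]
```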

In the real world (where real money is made), the usage of CPUs is a mix of things, with the two examples above being sort of extreme.

People tend to get very upset that the existing architectures (x86, amd64, ARM, SPARC, ...) are so inelegant. That might be true. The modern passenger car is also an inelegant design, attempting to be a compromise between lots of conflicting requirements, such as low cost, passenger comfort, fuel efficiency and a low carbon footprint, crash safety, high cargo capacity, and high reliability. If you drop all but one requirement, you can get a much better transportation device, such as a bicycle or a Formula 1 car. Neither is practical for picking up your kids from school and bringing them to the afternoon soccer game. There is a reason the much-maligned minivan and pickup truck are such hits: they're useful for the compromise workloads that normal families have.
 
3. The M1/M2 processors are very similar to the rest of the ARM processors, so this performance is relevant to the many FreeBSD users switching to ARM now or in the future:
Apple Silicon, though ARM based, uses tons of proprietary changes made by Apple alone and available to no one else. It is true that Apple Silicon was a revolution, because it really is the first ARM chip to become the standard CPU of a major computer manufacturer. Apple has the technological lead here.

As a consequence, the competition is still playing catch-up while Apple moves on. This means other ARM chips most likely don't perform as well as Apple Silicon.

And aside from that, it is not the point; the point was and is why many don't use WebKit. One possible reason might be long compile times. The standard CPU nowadays is still x64, so your coming along with an M1 indicates that it can be slower there, but this does not affect the majority of users, who are on x64, at all. You are always cherry-picking whatever suits your train of thought, just like with this Tachyum CPU, where it sounds like you have just repeated their marketing buzzwords without giving them a second thought.
 
Compile times don't matter for >99% of users. Having their default setup run everything they want is what matters. That is, clicking on a $TIME_VAMPIRE link on some SM site must just work for them. If that works to their satisfaction, there is no need to change anything.
 
Interesting. I did some googling and ended up here; I was reading on Reddit that WebKit is "assumed" to be slower than Blink. (Bunch of Apple haters.)

Your data says otherwise. Anyhow, FreeBSD devs have ported Chromium, which was originally built on WebKit, and I had to question its speed compared to Blink since I need to write a scraping program to obtain data that is loaded dynamically into the browser. I guess WebKit is good for the job...
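If it helps, here is a minimal sketch of that kind of scraper driving an actual WebKit engine via Playwright's bundled WebKit build (the URL and selector are placeholders; you would need `pip install playwright` and `playwright install webkit` first):

```python
# Scrape dynamically loaded content using Playwright's WebKit engine.
# The URL and CSS selector below are placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.webkit.launch()            # WebKit, not Chromium/Blink
    page = browser.new_page()
    page.goto("https://example.com/data")
    page.wait_for_selector("#results")     # wait for the JS-rendered content
    print(page.inner_text("#results"))
    browser.close()
```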
 
Because...

1. WebKit, while OSS, is corporate owned and driven by Apple, which is also the reason Google created their own fork, Blink, to get free of Apple's influence,
2. Building WebKit takes forever, even on a quite powerful machine,
3. WebKit is a real quick-moving target, so keeping up with development can be challenging/tough on your resources,
4. there's Chromium, which basically is just that.
Isn't WebKit open source primarily to benefit Apple's commercial products? I guess it's about as free and open as Blink...
Kinda confused as to why Apple would open-source something they use that has great importance (web browsers).
 
Are you nuts? Oh please... do you really think that the compile time on Apple Silicon matters to somebody here? I don't think so.
?

Actually, it does have some significance; after all, Apple Silicon does use the ARM architecture. That means it would matter to anyone using ARM chips and trying to compile WebKit.
 
What if you don't need a GPU or TPU for your application? Say, database or file servers. That sounds like dragging around silicon that is useless to many, just like the brand-new Xeons with their additional processing units.
Facts. It's like buying a car with certain features that you paid for but will never use.
 
What a great read, thanks! I'd forgotten about the Itanic mess. I thought it had finished sinking years ago. Hard to believe it's still unwinding in 2023.

Who remembers the Rambus fiasco?
Well, I spent the first two decades of my career on DEC hardware, starting with the PDP-11/70 and RSTS/E . . . As to the Itanic, I have a rather lengthy blog post on it here. That fiasco can be summed up in one familiar phrase:

Payback is a bitch!

Because I wrote this book, I still get emails and calls from agencies trying to staff metal and paper mills. Most of them are looking to pay billing rates from 1985, so I hang up.

If you want to know the most hilarious part of the twisted story, amuse yourself with this blog post. For those who don't want to read it, the bottom line is that around 98% of the shops still running various DEC platforms don't have the source code. They can't recompile. Most have lost all the documentation, so they can't even port. That's the case for almost every metal mill in America. Even Navistar is still on an Alpha for a core business system for this reason.

DEC had VARs (Value Added Resellers) that sold customized software. They would provide the source code to most of the system, but their "core routines" were only delivered in object form. When the DEC Sales Resistance Force played games, the VARs went out of business. These are core business systems handling many millions of dollars in business each day. Or . . . they are the systems controlling the manufacture of just about every type of metal.

Before anyone laughs at these companies, ask yourself this:

If Oracle goes out of business tomorrow, what happens to all the customers with custom software written in Oracle's universe?
 
Isn't WebKit open source primarily to benefit Apple's commercial products? I guess it's about as free and open as Blink...
Kinda confused as to why Apple would open-source something they use that has great importance (web browsers).
Oh, this is an easy one. Everybody hates Apple; that is, everybody who is not a cult member. And if you don't understand just how deserved that hatred is, you need to read up on the history of CUPS. Out of the blue, because Apple didn't have serial or parallel ports in anything they made, they dropped support for them. Not only that, but they dropped support for PostScript, a very stable, proven printing technology, and forced everyone to generate PDF. High-end printers broke everywhere when the new release rolled out. Not to mention that embedded systems developers like myself, who need serial and parallel ports to communicate with development targets, were left high and dry.

As to "why" they would OpenSource Webkit? Game of Thrones baby. I lived through the era when there was no communication between an IBM Mainframe and a DEC VAX. Everything was proprietary. Two things are going to happen.

1) Google will put even more tracking and privacy invasion into Blink than it already has. Google is notorious for invasion of privacy. A while back they pledged to end cookies, you might recall. Hey, Microsoft put all kinds of tracking "telemetry" into VS Code and gave it away for free just to track you on Linux. Apple, which periodically pays lip service to privacy, will tout WebKit as being OpenSource and non-invasive.

2) History will repeat. History always repeats. Google will add some proprietary feature to Blink that only works in Blink-based browsers. Apple will add some proprietary feature to WebKit and do it in such a way as to make it nearly impossible for Google to add it to Blink, or at least to add it in a timely manner. Think of a WASM replacement that generates code n times faster. Elimination of JavaScript in favor of something newer or cooler? Perhaps they add direct support for Rust, Go, or whatever is in vogue this week with those who never went to college and so use scripting languages for everything?

I lived through BW1, the first Browser War. Microsoft lost. If you didn't live through it, rent Valley of the Boom. They got that story right. Despite Microsoft embedding their browser, it was the least used browser on the Internet.

[image: ie-tombstone.jpg]

Edge isn't far behind. Edge uses the Google Chromium code. Firefox, the OpenSourced Netscape, won the Browser War and look at it now.

As to why there aren't OpenSource WebKit browsers? Google paid the other browsers to make Google the default search engine, which cost them dearly in the antitrust case. If you dig through the fine print of the payment arrangements, they most likely also had to use Blink as part of the payment terms.

Apple doesn't have a search engine anybody knows about. Maybe one exists, but only Apple users know about it.

We don't yet know what the final fallout from the antitrust case will be. That was an August 2024 decision. Google might well be broken up. Apple might well cut a deal with the DoJ promising to never create a search engine, and part of the settlement may well be that all of the browsers have to dump Blink for WebKit to reduce Google's monopoly. Google may have to end Chromium and its Chromebook products. They aren't going to just write a check for this one. The American DoJ is looking to bust them up like it did AT&T back around the time FreeBSD (well, BSD anyway) came into being.
 
I'm content with Firefox, but was curious about switching to Safari if it were available on Android and Windows/Linux. Edge is too MS-centric, and Chromium doesn't allow uBlock/extensions on Android.

I don't currently entertain other browsers due to security concerns (so only Firefox, Edge, Google Chrome or Chromium, and Safari on Apple stuff; I like the idea of more users, more eyes, more security notice :p), but I might jump on a WebKit one if it supported Win/Nix and Android and had uBlock Origin unrestricted (no Manifest V3).
 
Well, I spent the first two decades of my career on DEC hardware, starting with the PDP-11/70 and RSTS/E . . . As to the Itanic, I have a rather lengthy blog post on it here. That fiasco can be summed up in one familiar phrase...
One of my first real jobs was in a DEC shop. We still had VAX 8350s in the mid '90s because we couldn't port the software that ran on them to the shiny new Alphas we'd bought. Probably didn't even have all the source, as you say.

I learned a lot of what I know about networking from reading the DEC manuals for the LAN hardware we had from them. It was expensive, but a real joy to work with. The manuals were written so clearly and succinctly they were actually fun to read.
 
Apparently they are going to rewrite it in Swift... https://x.com/awesomekling/status/1822236888188498031
RIP FreeBSD port then, I guess.
 