What do you think about Google's WEI (Web Environment Integrity) initiative?

I'm not sure if this discussion is allowed here, but I think this subject is too important to ignore, given its implications.

There is an article on Wikipedia about this: https://en.wikipedia.org/wiki/Web_Environment_Integrity

From what I have read, it seems like a very bad idea; it's basically DRM for the Internet. But I'm not an expert, so I thought I'd ask more knowledgeable people about all of this. What do you think?
 
What do you think?
Assuming legitimate interests, it's a solution to a problem created needlessly by moving part of the application logic to the client (read: SPAs and such crap).

Well, let's all move to Darknet(s) now; JavaScript isn't even an option there anyway, for obvious privacy concerns 🤫
 
Underlying this discussion is a deep philosophical problem: How does each participant in a distributed protocol authenticate (validate the identity of) the other participant, given that identity is defined only as being part of a hard-to-define group?

Let me explain with an example. It is 1990, and a human user called "Tim" wants to look up some documentation that is stored on a computer. He uses the http protocol (which is neither authenticated nor encrypted) to open a TCP/IP connection to host doc.cern.ch, retrieves one html-encoded page, and reads it. This transaction is based on a whole lot of implicit assumptions, which are mostly about trust: Tim trusts the network to correctly resolve the hostname to a particular machine and port. He trusts that computer to serve him the correct document. Conversely, the server trusts Tim to really be himself: a human who is allowed to read that document, not a spy, not an evil hacker, not a computer process trying to scrape all the documentation and exfiltrate it (to SLAC or DESY; there is an in-joke in there).
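
To make those implicit assumptions concrete, here is a minimal sketch, in Python and purely illustrative, of that kind of exchange: one TCP connection, one unencrypted, unauthenticated GET. The hostname is the one from the example above and may not behave the same today; a present-day server will most likely just redirect to HTTPS, which is rather the point.

# A 1990-style fetch: plain HTTP over TCP, nothing verified, nothing encrypted.
import socket

HOST = "doc.cern.ch"   # we simply trust DNS to resolve this to the right machine
PORT = 80              # plain http: no certificates, no identities

request = (
    "GET / HTTP/1.0\r\n"
    f"Host: {HOST}\r\n"
    "\r\n"
)

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# Neither side has proven anything: the client trusted the network and DNS,
# and the server trusted that whoever connected is an authorized human reader.
print(response.split(b"\r\n")[0].decode("ascii", "replace"))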

To summarize: the trust model underlying the web is that an authorized human user reads an authentic page. But the actual implementation of http over TCP/IP doesn't enforce this at all. In 1990, we didn't need any enforcement, since networks were local (Tim was probably using 10BASE2, a.k.a. coax cable, to read the page), hacking was done in a benevolent way, and access control was done by checking badges when people entered buildings.

That's not the world we use the web in today, so we have adapted authentication somewhat. We use the https protocol (fundamentally http over SSL/TLS) so the document can neither be tampered with nor spied on in transit. The human client can validate that the server really belongs to the organization it claims to represent by checking its SSL certificate. All of this is imperfect (for example, certificate issuance is famously leaky, and DNS is mostly insecure, with DNSSEC only slowly rolling out). But we are doing a decent (not great) job of letting the human http client validate that the server it is communicating with really is authorized to serve authentic pages on behalf of that organization.
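
As a rough illustration of that server-side check, this is essentially what every HTTPS client does under the hood: verify that the certificate presented by the server chains to a trusted root and matches the hostname. A hedged Python sketch; example.org is just a placeholder host.

import socket
import ssl

HOST = "example.org"

# create_default_context() loads the system's trusted CA roots and enables
# both chain verification and hostname checking.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        # Reaching this point means the chain verified and the name matched;
        # a failure would have raised ssl.SSLCertVerificationError instead.
        cert = tls.getpeercert()
        print(tls.version(), cert.get("subject"), cert.get("notAfter"))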

Where we are doing a very bad job is authenticating the user. The intent of web protocols is to serve readable pages (including audio and video) to actual humans who really are who they say they are, and in some cases to receive input from those humans (for example: transfer $123 from account ABC to account DEF). For web pages with a high risk of abuse (such as the banking web page in the example above), we layer a whole lot of technologies on top of basic http, for example cookies, login user names and passwords, and 2-factor authentication (like security keys and SMS to your cell phone). That technology is pretty good, but fragmented, complex, and always under attack. But for basic viewing of web pages, there is no practical way of verifying that the user is indeed a human (not an automated process that may be used by someone with bad intent), and indeed the human we think it is.

Part of the problem here is that for the public web (pages that are readable by anyone), even defining what "human" means is tricky: Is an automated process that pre-fetches pages on the laptop really working on behalf of the human? Or is it the first indication of potential hackery? Is the human "Tim" really the Tim we think it is? Does it make any difference if it is not Tim, but his brother borrowing Tim's laptop for half an hour? How can we even identify humans uniquely, world-wide?
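
As one concrete example of that layered machinery: the 6-digit codes behind many 2-factor logins are just an HMAC over the current 30-second time window, computed from a secret shared between your phone and the site (RFC 6238). A minimal sketch in Python, standard library only; the secret below is a made-up placeholder, not a real credential.

import base64
import hmac
import struct
import time

def totp(shared_secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password derived from a base32 shared secret."""
    key = base64.b32decode(shared_secret_b32, casefold=True)
    counter = int(time.time()) // period               # which 30-second window we are in
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))                        # placeholder secret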

I see WEI as a partial, clumsy and probably ill-intended attempt to answer some of these questions. It's not clear to me at all that it will have any real-world effect.
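
For what it's worth, mechanically WEI is a remote-attestation scheme as I understand the public proposal: the browser fetches a token from a third-party "attester" that vouches for the integrity of the client environment, and the website's server only serves content if it trusts the verdict. The sketch below is not WEI's actual API, just my own generic illustration of that pattern from the server's point of view; the attester name and the signature helper are hypothetical, and a real scheme would verify an asymmetric signature against the attester's published key.

import json
import time

TRUSTED_ATTESTERS = {"attester.example"}          # hypothetical allow-list

def verify_attester_signature(token: dict) -> bool:
    # Hypothetical placeholder: a real implementation would check the
    # attester's signature over the token with its public key.
    return True

def accept_request(token_json: str, expected_nonce: str) -> bool:
    token = json.loads(token_json)
    return (
        token.get("attester") in TRUSTED_ATTESTERS
        and token.get("nonce") == expected_nonce      # binds the token to this request
        and token.get("expires", 0) > time.time()     # freshness
        and token.get("verdict") == "environment-ok"  # the contested part: a third
                                                      # party decides whether your setup counts
        and verify_attester_signature(token)
    )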
 
Does that, for example, imply that you will ignore DNS answers that come (perhaps indirectly) from 8.8.8.8? Good luck with that!
 
Google is on course to ruin the open Internet by intentionally breaking and altering well-established standards, preventing interoperability and limiting user freedom. The only real goal they pursue is improving their business and cash flow; their ultimate aim is ensuring they are well positioned to profit from every data flow and transaction that takes place. Any seemingly virtuous goals they pursue are only means to that end. I think everything that comes out of that company at this moment should be treated with the utmost suspicion, and that corporations like Google should not be allowed to participate in defining Internet standards in any way.

Within my organization we do not allow Google services, to the point of avoiding 8.8.8.8 everywhere. I also personally encourage everyone not to use Gmail, Google Workspace, Chrome or any other Google applications and services. The only exception is, of course, search, but I can't wait for the day when somebody comes up with a better alternative.
 
I've tried DDG, but their Yandex relationship has always put me off. They claim they have "paused" this partnership, but cooperating with Yandex even during prewar times was an extremely shady decision.
 
I've been on my yearly try at DDG.

Here is a glitch that made me stupid mad.

Search for 'FreeBSD run' for the wifi driver, and it just can't find it.

Sometimes the simplest search fails.
Google's AI prevails. I don't want it to; it just does. (The FreeBSD manpage for run is the first result.)
 
I've tried DDG, but their Yandex relationship has always put me off. They claim they have "paused" this partnership, but cooperating with Yandex even during prewar times was an extremely shady decision.
Too bad you have to use FreeBSD, which has a lot of code contributions from Yandex.
 
Just be cautious about $BIG_IT_COMPANY's intentions. $BIG_IT_COMPANY is an organization that not only supports $WEST_COUNTRY's war efforts, but has also clearly aligned itself with the $WEST_COUNTRY war machine.
 
Just be cautious about their intentions. Yandex is an organization that not only supports Russia's war efforts with its income tax, but it has also clearly aligned itself with the Russian war machine.

I'm from the United States. Our country starts more wars than every other nation in history combined.

I don't care about their nationality or political views. I only care about search results if I'm using a search engine.

And I don't support any wars or political parties. I do support free software efforts for the good of all users, though. If Yandex is opposed to free software, that's when I have a problem with them.
 
I hate it because it would raise the barrier to making a widely working browser even higher.

Which would suit Google just fine.

The US used to have and use powerful anti-trust laws to prevent grossly monopolistic behaviours.

More recently it seems to have fallen to the EU to provide a modicum of control over the market power of monopolies or duopolies in the tech field.

Sure, the above Yandex reveal is horrific, but does anyone doubt that Google or major competitors like MS have just as much user information on tap, or are using it any more ethically, or, for that matter, any less at the behest of government agencies?

And now Google wants the copyright laws changed to allow unrestricted scraping of the web by AI bots ...
 
Just be cautious about their intentions. Yandex is an organization that not only supports Russia's war efforts with its income tax, but it has also clearly aligned itself with the Russian war machine.
So I assume you also don't use Google or Microsoft products, because those organizations not only support US war efforts with their income tax, but they also happily offer their services to the US war machine.
 
As this thread discusses bots: I was perm-banned from a very popular forum unexpectedly; all I did was submit a lengthy, on-topic reply to a popular thread.

reason for ban: Bot, type: Permanent, Restoration: Never.

and I'm "what on earth?" Anyone know what could have prompted that ban?
 
As this thread discusses bots: I was perm-banned from a very popular forum unexpectedly; all I did was submit a lengthy, on-topic reply to a popular thread.

reason for ban: Bot, type: Permanent, Restoration: Never.

and I'm "what on earth?" Anyone know what could have prompted that ban?
As a mod myself, on another board, I would say hidden links, replying to an old topic as a first post, off-topic links, links in signatures, etc., can be red flags for bots. I ban by IP a lot.
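
For illustration only (my own sketch, not any forum's actual filter): the heuristics above boil down to scoring a post on a few red flags and holding it for review past some threshold.

def bot_red_flags(post: dict) -> int:
    """Score a post on the red flags mentioned above; higher means more suspicious."""
    score = 0
    if post.get("is_first_post") and post.get("link_count", 0) > 0:
        score += 2        # first post that already carries links
    if post.get("topic_age_days", 0) > 365:
        score += 1        # necro-reply to an old topic
    if post.get("links_in_signature", 0) > 0:
        score += 1
    if post.get("off_topic_links", 0) > 0:
        score += 2
    return score

example = {"is_first_post": True, "link_count": 3, "topic_age_days": 700,
           "links_in_signature": 1, "off_topic_links": 0}
print("hold for review" if bot_red_flags(example) >= 3 else "allow")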
 