Underlying this discussion is a deep philosophical problem: How does each participant in a distributed protocol authenticate (validate the identity of) the other participant, given that identity is defined only as being part of a hard-to-define group?
Let me explain with an example. It is 1990, and a human user called "Tim" wants to look up some documentation that is stored on a computer. He uses the http protocol (which is neither authenticated nor encrypted) to open a TCP/IP connection to host doc.cern.ch, retrieves one html-encoded page, and reads it. This transaction rests on a whole lot of implicit assumptions, mostly about trust: Tim trusts the network to correctly resolve the hostname to a particular machine and port. He trusts that computer to serve him the correct document. Conversely, the server trusts Tim to really be himself: a human who is allowed to read that document, not a spy, not an evil hacker, not a computer process trying to scrape all the documentation and exfiltrate it (to SLAC or DESY; there is an in-joke in there).
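The whole 1990 transaction fits in a few lines. Here is a Python sketch (the helper names are mine, and doc.cern.ch may well not serve plain http today, so treat the fetch as illustrative) of exactly the kind of unauthenticated, unencrypted request Tim's client sent; notice that nothing in it proves who either party is:

```python
import socket

def build_get_request(host: str, path: str = "/") -> bytes:
    # A plain HTTP/1.0 GET: no credentials, no signatures,
    # no way for either side to authenticate the other.
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {host}\r\n"
            f"\r\n").encode("ascii")

def fetch(host: str, path: str = "/", port: int = 80) -> bytes:
    # The bytes travel in the clear; anyone on the network path
    # can read or rewrite them in transit.
    with socket.create_connection((host, port)) as sock:
        sock.sendall(build_get_request(host, path))
        chunks = []
        while data := sock.recv(4096):
            chunks.append(data)
    return b"".join(chunks)

# e.g. page = fetch("doc.cern.ch")  # 1990-style: nothing verified
```

Everything the text calls "implicit assumptions" lives outside this code: the DNS resolution, the identity of the machine answering on port 80, and the humanity of whoever called fetch().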
To summarize: The trust model underlying the web is that an authorized human user reads an authentic page. But the actual implementation of http over TCP/IP doesn't enforce this at all. In 1990, we didn't need any enforcement: networks were local (Tim was probably using 10BASE2, a.k.a. coax cable, to read the page), hacking was done in a benevolent way, and access control was done by checking badges when people entered buildings.
That's not the world we use the web in today. So we have adapted authentication somewhat. We use the https protocol (fundamentally, http over SSL/TLS) so the document can neither be tampered with nor spied on in transit. The human client can validate that the server belongs to the organization it claims to represent by checking its SSL certificate. All of this is imperfect (for example, certificate issuance is famously leaky, and DNS is mostly insecure, with DNSSEC only slowly rolling out). But we are doing a decent (not great) job of letting the human http client validate that the server it is communicating with really is authorized to serve authentic pages on behalf of the organization.
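This server-side half of the story is largely automated by the TLS stack. As a sketch (using Python's stdlib ssl module; the actual connection is left commented out since it needs a live network), a default client context already enforces both checks described above, chain validation against trusted CAs and hostname matching against the certificate:

```python
import socket
import ssl

# The stdlib defaults for a client context implement the server
# authentication described in the text: the certificate chain must
# lead to a trusted CA, and the certificate must match the hostname.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain validation on
print(ctx.check_hostname)                    # hostname matching on

# Connecting would look like this (not run here):
# with socket.create_connection(("doc.cern.ch", 443)) as raw:
#     with ctx.wrap_socket(raw, server_hostname="doc.cern.ch") as tls:
#         tls.sendall(b"GET / HTTP/1.0\r\nHost: doc.cern.ch\r\n\r\n")
```

If either check fails, wrap_socket raises an exception and no page is served, which is precisely the enforcement the 1990 protocol lacked.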
Where we are doing a very bad job is authenticating the user. The intent of web protocols is to serve readable pages (including audio and video) to actual humans, who really are who they say they are. And, in some cases, to receive input from those humans (for example, transfer $123 from account ABC to account DEF). For web pages with a high risk of abuse (such as the banking web page in that example), we layer a whole lot of technologies on top of basic http: cookies, login user names and passwords, 2-factor authentication (like security keys and SMS to your cell phone). That technology is pretty good, but fragmented, complex, and always under attack. But for basic viewing of web pages, there is no practical way of verifying that the user is indeed a human (not an automated process that may be used by someone with bad intent), let alone the human we think they are. Part of the problem is that for the public web (pages that are readable by anyone), even defining what "human" means is tricky: Is an automated process that pre-fetches pages on the laptop really working on behalf of the human? Or is it the first indication of potential hackery? Is the human "Tim" really the Tim we think he is? Does it make any difference if it is not Tim, but his brother borrowing Tim's laptop for half an hour? How can we even identify humans uniquely, worldwide?
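To make "layered on top of basic http" concrete: one common second factor, the time-based one-time password (TOTP, RFC 6238) behind most authenticator apps, fits in a few lines of stdlib Python. This is a sketch for illustration, not a production implementation:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    # Both sides share the secret; the "factor" is proof of
    # possessing it, re-proven every `step` seconds.
    counter = unix_time // step
    msg = struct.pack(">Q", counter)          # counter as 8-byte big-endian
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: shared secret "12345678901234567890" at t=59
print(totp(b"12345678901234567890", 59))  # → 287082
```

Note what this does and does not prove: it authenticates possession of a shared secret at a point in time, not that a human (or the right human) is holding the device. It is exactly the kind of bolted-on layer the text describes, not something http itself provides.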
I see WEI as a partial, clumsy, and probably ill-intentioned attempt to answer some of these questions. It's not clear to me at all that it will have any real-world effect.