I ran into the following internet-destroying stupidity today checking a wiki page:
Checking with JavaScript disabled, I discovered it is part of a trend of false-virtue-driven enshittification of the internet, this time by anti-AI zealots destroying the internet to protect "their" content from "misuse" by "AI".
Obviously, no rational person could care less if an AI engine "trains" on the data they've gifted to someone else's hardware, such as freebsd.org's servers or whatnot: there's zero harm, no loss, only net gain if it proves useful on the AI platform and zero impact if not. Stopping AI company scrapers is likewise neutral; don't care, makes zero difference, tempest-in-a-teapot idiocy, until some overzealous moron blocks actual human access. Then it moves beyond performative virtue signaling to performative self-harm.
Look at this idiotic, utterly inane sputtering stupidity attempting to justify censoring access:
Why am I seeing this?
You are seeing this because the administrator of this website has set up Anubis to protect the server against the scourge of AI companies aggressively scraping websites. This can and does cause downtime for the websites, which makes their resources inaccessible for everyone.
Anubis is a compromise. Anubis uses a Proof-of-Work scheme in the vein of Hashcash, a proposed proof-of-work scheme for reducing email spam. The idea is that at individual scales the additional load is ignorable, but at mass scraper levels it adds up and makes scraping much more expensive.
Ultimately, this is a hack whose real purpose is to give a "good enough" placeholder solution so that more time can be spent on fingerprinting and identifying headless browsers (EG: via how they do font rendering) so that the challenge proof of work page doesn't need to be presented to users that are much more likely to be legitimate.
Please note that Anubis requires the use of modern JavaScript features that plugins like JShelter will disable. Please disable JShelter or other such plugins for this domain.
This website is running Anubis version .
Sadly, you must enable JavaScript to get past this challenge. This is required because AI companies have changed the social contract around how website hosting works. A no-JS solution is a work-in-progress.
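For what it's worth, the Hashcash-style scheme that page describes is a few lines of code: the client burns CPU finding a nonce whose hash clears a difficulty target, and the server checks the answer with a single hash. A minimal sketch of the idea, my own illustration rather than Anubis's actual implementation:

    import hashlib

    def solve_challenge(challenge: str, difficulty_bits: int) -> int:
        # Client side: brute-force a nonce until SHA-256(challenge + nonce)
        # falls below the target (~2^difficulty_bits attempts on average).
        target = 1 << (256 - difficulty_bits)
        nonce = 0
        while True:
            digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
            if int.from_bytes(digest, "big") < target:
                return nonce
            nonce += 1

    def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
        # Server side: one hash to check work that cost the client thousands.
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

    nonce = solve_challenge("example-challenge", 16)  # ~65,000 hashes on average
    assert verify("example-challenge", nonce, 16)

Note the asymmetry: the cost lands entirely on the client, which is the whole pitch, and exactly the cost a human browsing with JavaScript disabled is never even given the chance to pay.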
With this sort of challenge, like the ones from the dumbasses at Cloudflare, the idiotic web admins who enable these aggressive filtering options are taking sledgehammers to their own feet in self-righteous outrage that a "bot" might "scrape" their precious data. OMG! This breaks the website. It drives legitimate users away. But you know what just works? Claude and ChatGPT et al.
The irony is that these forums, the humans who populate them, and the historical data they have created have always been an excellent resource, and remain, for now and likely going forward, theoretically superior to any LLM trained on similar data. But the attempt to gatekeep those resources away from LLMs and other uses leaves them broken, or at least enshittified: less convenient and less accessible to actual humans. Those humans turn to the much lower-friction LLMs for more fluid access to critical information, and by doing so move the very "training data" the AI haters are desperate to cling to away from the once-thriving communities and onto the very platforms they had hoped to throw sand into.
Just stop. It is dumb. You're not "under attack by AI bots." AI companies have NOT changed the "social contract" around how website hosting works. At all. That's the dumbest thing I've read all day, and it is late in my day here. If an IP block starts dominating traffic enough to compromise access for other users, rate limit or block it; a sketch of how little that takes closes this post. Otherwise, what possible legitimate reason is there to gatekeep the data away from anyone or anything, AI or human? That's utterly, unbelievably, absolutely idiotic. The social contract I made by bothering to write this, by bothering to contribute to the site, is to return some help in exchange for the help provided to me by others, human or algorithmic. It was not, and is not, part of that contract to let someone else decide who or what is sufficiently virtuous to deserve access to it.
THAT is changing the social contract.
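
And for anyone who imagines rate limiting is some unsolved problem, here is a minimal sketch of a per-network token bucket, keyed by /24 prefix. The bucket size and refill rate are made-up illustrations, not tuned recommendations:

    import time
    from ipaddress import ip_network

    BUCKET_SIZE = 100    # burst allowance per network (illustrative)
    REFILL_RATE = 10.0   # tokens restored per second (illustrative)

    buckets: dict[str, tuple[float, float]] = {}  # prefix -> (tokens, last timestamp)

    def allow_request(client_ip: str) -> bool:
        # Key the bucket on the /24 so one noisy network can't starve everyone
        # else. IPv4 for brevity; real code would pick an IPv6 prefix too.
        prefix = str(ip_network(f"{client_ip}/24", strict=False))
        now = time.monotonic()
        tokens, last = buckets.get(prefix, (float(BUCKET_SIZE), now))
        tokens = min(float(BUCKET_SIZE), tokens + (now - last) * REFILL_RATE)
        if tokens < 1.0:
            buckets[prefix] = (tokens, now)
            return False  # over budget: throttle this network, nobody else
        buckets[prefix] = (tokens - 1.0, now)
        return True

That is the entire "defense" the situation calls for: no JavaScript, no challenge page, no humans locked out.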

