OpenAI Atlas defeats all of this by being the user's web browser. They've gotten between you and the user you're trying to serve content to, and they slurp up everything the user browses and send it back for training.
The firewall is now moot.
The bigger AI company, Google, has already been doing this for decades. They are the middleman between your readers and you, and that position is unassailable. Without them, you don't have readers.
At this point, the only people you're keeping out with LLM firewalls are the smaller players, which further entrenches the leaders.
OpenAI and Google want you to block everybody else.
Do you have any proof, or even circumstantial evidence, that points to this being the case?
If Chrome actually scraped every site you ever visited and sent it off to Google, it'd be trivially simple to find some indication of that in network traffic, or heck, even in the Chromium code.
> Is it confirmed that site loads go into the training database?
Would you trust OpenAI if they told you it doesn't?
If you would, would you also trust Meta to tell you if its multibillion dollar investment was trained on terabytes of pirated media the company downloaded over BitTorrent?
We don't have to trust them or not. If there's such a claim, surely someone can at least point to a pcap file with an unknown connection, or to some decompiled code. Otherwise it's just a conspiracy theory.
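For what it's worth, that kind of check doesn't require anything exotic. Here's a rough sketch (Python with scapy; the pcap filename and the allowlist are made up for illustration) of how you could list every host a browsing session resolved and flag anything you don't recognise:

```python
# Illustrative sketch: list DNS names resolved during a packet capture that
# aren't on a small allowlist, to spot unexpected "phone home" connections.
# The capture filename and the allowlist entries are hypothetical.
from collections import Counter

from scapy.all import DNSQR, rdpcap  # pip install scapy

ALLOWLIST = {"example.com", "update.googleapis.com"}  # hosts you expect to see

packets = rdpcap("browsing-session.pcap")  # capture taken while browsing normally

unexpected = Counter()
for pkt in packets:
    # DNS questions reveal which hosts the browser tried to reach.
    if pkt.haslayer(DNSQR):
        name = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        if not any(name == d or name.endswith("." + d) for d in ALLOWLIST):
            unexpected[name] += 1

for host, count in unexpected.most_common():
    print(f"{count:5d}  {host}")
```

If a browser were shipping page contents off for training, you'd expect a persistent, high-volume connection to show up in exactly this kind of inventory.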
I think the original claim was about something different. "Is it confirmed that site loads..." - I read it as the author talking about general browsing, not just explicit questions asked with the page as context.
As I understand it, the main point of Anubis is to reduce the costs caused by (AI company) bots, and agent-generated load is still a lot less than simply spidering the complete website; it might actually be quite close to what a user would manually browse.
Unless the user asked something that just requires visiting many pages, I suppose. For example, Google Gemini was pretty helpful in finding out the typical price ranges and dishes the coffee shops in a local shopping centre have, as the information was far from being on a single page.
It's definitely pointless if you completely miss the point of it.
> OpenAI Atlas defeats all of this by being a user's web browser. They got between you and the user you're trying to serve content, and they slurp up everything the user browses to return it back for training.
Cool. Anubis' fundamental purpose is not to prevent all bot access, though, as clearly spelled out in its overview:
> This program is designed to help protect the small internet from the endless storm of requests that flood in from AI companies.
OpenAI Atlas piggybacking on the user's normal browsing is not within the remit of Anubis, because it's not going to take a small site down or dramatically increase hosting costs.
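For context on why that distinction matters: Anubis-style gates make each new client pay a small proof-of-work cost, which is negligible for one human visitor but adds up fast for a crawler hammering thousands of pages. Here's a minimal sketch of that general hashcash-style idea (not Anubis's actual protocol; the difficulty and challenge encoding are made up):

```python
# Minimal sketch of a hashcash-style proof-of-work challenge: the client must
# find a nonce whose SHA-256 hash has a given number of leading zero bits.
# One page view is cheap; bulk scraping becomes expensive.
import hashlib
import secrets


def solve(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce so that sha256(challenge:nonce) has `difficulty_bits` leading zero bits."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1


def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Server-side check: one hash, regardless of how hard the solve was."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))


challenge = secrets.token_hex(16)   # server-issued random challenge
nonce = solve(challenge, 16)        # a real browser pays this cost once per session
assert verify(challenge, nonce, 16)
print("solved with nonce", nonce)
```

An agent riding along in the user's own browser pays that cost exactly once, like any other visitor, which is why it doesn't create the load problem Anubis is aimed at.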