Reddit has a warning for AI companies and other scrapers: play by our rules or get banned. The company said it plans to update its Robots Exclusion Protocol (robots.txt file) to block automated scraping of its platform.
The company also said it will continue to block and rate-limit crawlers and other bots that don’t have a prior agreement with it. The changes should not affect “good faith actors” such as the Internet Archive and researchers.
Reddit’s announcement comes shortly after multiple reports that Perplexity and other AI companies routinely ignore websites’ robots.txt protocol. The protocol is used by publishers to tell web crawlers that they don’t want their content accessed. Most recently, Perplexity’s CEO told Fast Company that the protocol was “not a legal framework.”
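For context, robots.txt is a plain-text file served at a site's root that names crawlers and the paths they may or may not visit. The snippet below is a hypothetical illustration of the format, not Reddit's actual file, and "ExampleBot" is an invented user agent:

```
# Hypothetical robots.txt — illustrative only, not Reddit's configuration
User-agent: ExampleBot
Disallow: /

User-agent: *
Allow: /
```

As the Perplexity CEO's comment suggests, compliance is voluntary: the file is a request, and nothing in the protocol itself technically prevents a crawler from ignoring it.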
In a statement, a Reddit spokesperson told Engadget that the change does not target any specific company. “This update isn’t meant to single any one entity out; it’s meant to protect Reddit while keeping the internet open,” the spokesperson said. “In the next few weeks, we’ll be updating our robots.txt instructions to be as clear as possible: if you are using an automated agent to access Reddit, regardless of what type of company you are, you need to abide by our terms and policies, and you need to talk to us. We believe in the open internet, but we do not believe in the misuse of public content.”
It’s not the first time the company has taken a tough stance on access to its data. The company cited AI companies scraping its platform when it began charging for API access last year. Since then, it has signed licensing agreements with several AI companies. The deals allow those firms to train their models on Reddit’s archives and have been a significant source of revenue for the newly public Reddit. The “talk to us” part of that statement is likely a not-so-subtle reminder that the company is no longer in the business of giving away its content for free.