For most of the past year, Wikipedia and the Wikimedia Foundation have been fighting a deluge of AI slop: drafts upon drafts of clearly AI-generated content, ranging from outright gibberish to plausible-sounding text riddled with hallucinations. While the Wikimedia Foundation is not against artificial intelligence per se, it is not hard to see how even a well-intentioned and targeted AI policy places an undue burden on Wikipedia's editor community, the volunteers who make the encyclopedia possible in the first place. In fact, one of the first points of controversy came when the editor community pushed back against an experiment that would show AI-generated summaries at the top of articles to users who opted into the feature.

Shortly after the experiment was paused, it became known that a full-fledged project, the WikiProject AI Cleanup, had been created to fight AI-generated slop. Initially, the goal was not to ban AI outright, but to develop sound policies governing the use of AI tools, such as enabling the speedy deletion of unreviewed AI-generated content and the issuing of warnings to users who add such content to Wikipedia. Still, the discussion ignited by the summaries experiment revealed that most editors opposed AI-generated content, and highlighted how difficult it was to draft policies that stopped short of a sweeping ban yet still satisfied the editor community.

It was in the context of these more permissive AI usage policies that another controversy surfaced: a non-profit known as the Open Knowledge Association (OKA) had reportedly been paying stipends of a little under $400 to editors from the Global South with the goal of diversifying Wikipedia's editor pool. OKA encouraged its editors to use general-purpose chatbots like Grok and ChatGPT to automate the translation of articles into English, and while it allegedly had a quality-control strategy in place, editors began finding a number of translations that had clearly been published without any review. The incident led to stricter quality standards for OKA editors but, more importantly, reignited the conversation about how best to handle AI-generated content.

The new policy bans the use of AI tools both to generate articles from scratch and to substantially rewrite existing ones. The only permitted use is asking LLMs to suggest basic copyediting changes, and even these can be incorporated only if they don't alter the meaning of the original text. Relatedly, the policy still permits the use of AI tools to automate translations of articles into English, but editors are expected to follow the specific guidance developed for those cases: among other things, editors must be proficient enough in both the source language and English to guarantee the translation is accurate, and they must review the whole text for inaccuracies, hallucinations, and faulty citations.