What started as an altruistic mission by a non-profit looking to bring as many diverse contributors to Wikipedia as possible may have opened the door to a deluge of errors and hallucinations, adding yet another worry to the workload of the volunteer network fighting to keep AI slop off the encyclopedia. Indeed, the threat of AI has become so pervasive that in 2023 Wikipedia created Project AI Cleanup, a team devoted exclusively to fighting the incoming flood of unsourced, poorly written, and sometimes completely fabricated AI-generated content.

The non-profit in question, the Open Knowledge Association (OKA), has reportedly been paying a stipend of a little under $400 to editors from the Global South with the goal of diversifying Wikipedia's editor pool. OKA editors are expected to work 20–40 hours per week to remain eligible for the stipend, and are instructed to fill gaps in the English-language Wikipedia by using general-purpose LLMs like Claude and GPT to draft translations of articles from other languages. An additional source of controversy emerged when it became public that OKA's preferred LLM was initially Grok, the very same generative AI model that once referred to itself as Mecha-Hitler and has recently been used to generate an endless stream of pornographic deepfakes and child sexual abuse material.

A (now archived) discussion thread on the Wikipedia Administrators' noticeboard, stretching from October 2022 to last month, reveals the sheer number of issues that the dubious work of OKA editors has created for Wikipedia volunteers: practical problems ranging from unnecessary duplication of articles to translations being reviewed by editors who are native speakers of the source language but are clearly not sufficiently proficient in English. According to volunteers, a significant number of articles translated by OKA editors contain hallucinations, including entire paragraphs that do not exist in the original article and are almost always supported by entirely fabricated citations, as well as swapped or missing sources for the material that does exist.

The process described on OKA's website suggests that ensuring the truthfulness of a translated article and its compliance with Wikipedia's standards is entirely up to the OKA editors, who publish the articles under their personal accounts and are then tasked with maintaining them if and when they receive any feedback. This lack of transparency and of any internal review process sparked criticism from Wikipedia volunteers, with some calling the entire arrangement 'exploitative', and others pointing out how irresponsible it is to simply instruct unsupervised and inexperienced editors to publish machine-translated articles without any additional oversight.

A few editors went beyond criticizing OKA's practices and called for an immediate ban of all OKA editors. Since Wikipedia does not explicitly and comprehensively forbid the use of AI in writing, the participants instead discussed stricter quality standards for OKA editors: any OKA editor who accumulates four warnings within a six-month period risks being blocked without further notice upon a fifth offense. Additionally, any flagged content would be deleted unless adopted by an editor in good standing.