AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles

404 Media
Wikipedia editors restricted contributors who used AI translation after it introduced errors, or 'hallucinations,' into articles.

Summary

Wikipedia editors implemented new policies and restricted contributors paid by the Open Knowledge Association (OKA) after discovering that AI-generated translations were introducing factual errors, or "hallucinations," into articles. The issue arose because OKA used Large Language Models (LLMs) such as Gemini and ChatGPT, and previously Grok, to automate translations, often relying on contractors in the Global South who were instructed to copy and paste content. Editors found instances where sources cited by the AI translations did not support the claims made, or where unsourced sentences were added. In response, Wikipedia editors established rules to block OKA translators who accumulate four verification warnings within six months. Jonathan Zimmermann, OKA's founder, stated that the organization emphasizes quality, pays translators hourly, and has since strengthened safeguards by introducing a second, independent LLM review step to complement manual review, acknowledging that using AI to check AI output can be flawed but serves as an additional check.

(Source: 404 Media)