An effort to add AI-generated summaries to Wikipedia pages has been paused following significant backlash from community editors. The Wikimedia Foundation, which manages Wikipedia, confirmed this to 404 Media, which spotted the discussion on a project page. The initiative, called “Simple Article Summaries” by Wikimedia’s Web Team, aimed to use AI-generated summaries to make articles more accessible to readers worldwide. A two-week trial was planned for 10% of mobile users to gauge interest and engagement, with the summaries subject to editor moderation. Editors, however, responded with outrage.
One editor implored, “I earnestly urge you not to trial this, on mobile or elsewhere. This would cause immediate and irreparable damage to our readers and to our standing as a somewhat reliable and serious source.” They added, “Wikipedia has in many respects come to symbolize sober dullness, which is commendable. Let’s not insult our readers’ intelligence and join the rush to deploy flashy AI summaries.”
Others responded simply with “yuck” and called it a “truly dreadful idea.”
Numerous editors characterized Wikipedia as “the final stronghold” against AI-generated summaries. Behind the scenes, the Foundation has released datasets to fend off AI bots that scrape its content and overwhelm its servers, and editors have been actively scrubbing AI-generated content from Wikipedia pages.
One editor commented, “Haven’t we been receiving favorable recognition for being a more dependable alternative to AI summaries in search engines? If they’re providing incorrect answers, let’s not replicate their work.”
Another pointed out that Wikipedia’s human-generated content is so trustworthy that search engines like Google rely on it: “I see minimal benefit in placing hallucinated AI summaries alongside our high-quality summaries, when we can simply let our high-quality summaries stand on their own.”
Wikipedia was created as a crowdsourced, neutral source of trustworthy information for anyone online. Editors in the discussion emphasized that generative AI has an inherent tendency to hallucinate, which could undermine that credibility and spread inaccurate information on complex subjects that demand nuance and context.
Some editors tested the AI model behind the feature, Cohere’s Aya, on subjects like dopamine and Zionism and found inaccuracies in its output.
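For readers curious what such a spot-check involves, here is a minimal sketch of how an editor might reproduce one locally, assuming the openly released Aya weights are loaded through Hugging Face transformers; the checkpoint name, file name, and prompt wording are illustrative assumptions, not details from the Foundation’s actual setup.

```python
# A hypothetical spot-check along the lines editors described: generate an
# Aya summary of an article and compare it against the source text by hand.
# The checkpoint name and prompt are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "CohereForAI/aya-23-8B"  # assumed openly released Aya checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Hypothetical local copy of the article an editor wants to test against.
article = open("dopamine.txt", encoding="utf-8").read()

messages = [{
    "role": "user",
    "content": f"Summarize the following article in simple language:\n\n{article}",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=200)
summary = tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(summary)  # compare against the article; flag unsupported claims by hand
```

The telling step is the last one: catching a hallucination still requires a human who knows the subject, which is precisely the editors’ point.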
One editor charged Wikimedia Foundation staff with “trying to enhance their resumes with AI-related initiatives.”
Ultimately, the product manager who proposed the summary feature announced they would “pause the rollout of the experiment so that we can prioritize this discussion first and decide on next steps collectively.”
For now, an AI-free Wikipedia prevails.