A new report from The Midas Project’s publication Model Republic reveals that the news site The Wire by Acutus relies heavily on AI-generated content. Since launching in late 2025, the site has published nearly 100 articles across tech, energy, media, science, business, and healthcare. Notably, its About page describes its work as “collaborative journalism” led by an “editorial team,” yet the site lists no masthead and names no editors or journalists.
The site’s How It Works section describes its process:
“Our editorial team pinpoints timely issues and invites contributors with relevant, firsthand experience to relay their viewpoints through organized discussions. Those viewpoints are amalgamated and refined into narratives that reflect where contributors resonate, where they differ, and what it all signifies — offering depth, balance, and clarity beyond the headline.”
However, journalist Tyler Johnston ran the site’s content through Pangram, an AI-detection tool with a claimed 99.98% accuracy rate. He found that 69% of the 94 articles were fully AI-generated and another 28% were partially AI-generated; only three articles were identified as human-written.
Johnston’s concerns deepened with the content itself, which largely champions AI development and dismisses its critics. One article warns of “Escalating Anti-AI Radicalism,” while another asks, “Will Republicans Allow Blue States to Dictate America’s AI Regulations?”
As Johnston dug deeper, he noticed that The Wire, a new site with a small social media footprint, drew half of its engagement on X from Patrick Hynes, president of the PR firm Novus Public Affairs. Novus Public Affairs works with Targeted Victory, a consulting firm central to OpenAI’s lobbying efforts in Washington.
Generative AI has already upended our shared sense of reality. With enough computing power, anyone can fabricate fake movie trailers, clone a politician’s voice for a deepfake, or stage implausible scenarios, such as a shark attacking a plane, that fool some users online.
If Johnston’s findings hold up, this may be a case of an AI company misrepresenting its operations as “independent journalism” to advocate for its own interests, which Johnston notes would violate the company’s own usage policies.
Disclosure: Ziff Davis, Mashable’s parent company, filed a lawsuit against OpenAI in April, alleging copyright infringement in the training and operation of its AI systems.