Within hours of news spreading online that Charlie Kirk had been shot and killed at a public event in Utah, conspiracy theories began to circulate. The far-right commentator, known for debating immigration, gun control, and abortion on college campuses, was killed during a university tour with his conservative media organization, Turning Point USA. The group has spent the past decade building conservative youth networks at prominent universities and is closely tied to the nationalist MAGA movement and President Trump. Early reports from credible news outlets and pop culture update accounts were vague about Kirk’s condition and whether the shooter had been apprehended.
Even so, users across the political spectrum quickly took to social media, attempting to identify people in the crowd and scrutinizing the graphic footage of the shooting. Some claimed that Kirk’s bodyguards had exchanged hand signals before the shots were fired, while others theorized the killing was a smokescreen to distract from Trump’s ties to Jeffrey Epstein.
AI chatbots made matters worse on social media, operating both as automated assistants and as AI spam accounts. For example, an X account named @AskPerplexity, apparently affiliated with the AI company, initially told a user that Kirk was alive and well after the user prompted it to discuss gun reform. The reply was deleted after NewsGuard published an article on it. Perplexity clarified that the bot should not be confused with its official account, stressing the company’s commitment to accuracy.
Elon Musk’s AI bot, Grok, falsely described the video as an edited “meme” and claimed Kirk would easily survive; security experts later confirmed the footage was authentic. In other cases documented by NewsGuard, users shared chatbot replies to bolster conspiracy theories alleging that foreign agents had orchestrated the assassination or that Democrats were involved. Grok also falsely told one user that major news organizations had named a Democrat as a suspect.
A Google spokesperson said that most searches on the topic surface accurate information, but that this particular AI Overview violated the company’s policies, prompting corrective action. Mashable also reached out to xAI, Grok’s parent company, for comment.
AI chatbots, while useful for simple tasks, struggle with breaking news — a risk that watchdogs and media executives have repeatedly flagged. Deborah Turness, CEO of BBC News and Current Affairs, has asked how long it will be before an AI-distorted headline causes significant real-world harm. Chatbots often repeat claims uncritically, lacking the ability to verify facts the way human journalists do. Instead, they assemble answers from whatever material is available online, which can amplify misinformation during fast-moving news events.
Chatbots also favor frequently repeated claims, lending them an air of authority that can drown out accurate reporting. The Brennan Center for Justice at NYU has raised concerns about AI’s effect on news literacy and about the “Liar’s Dividend,” in which people dismiss genuine information as fake because AI-generated content has eroded trust.
Despite the risks, more people are turning to generative AI for news as companies build the technology into social media platforms and search engines. A Pew Research survey found that users shown AI-generated search results were less likely to click through to additional sources. Meanwhile, major tech firms have scaled back human fact-checking teams in favor of community-moderated notes, raising concerns about misinformation and AI’s influence on news and politics. In July, X announced a pilot program allowing chatbots to generate their own community notes.