These days, if you find yourself having a conversation with your favorite AI chatbot and wondering if it sounds a bit, well, Russian, you’re not alone. According to a recent report by NewsGuard, a watchdog dedicated to identifying misinformation, there’s an unsettling twist in the tale of artificial intelligence. It appears that several top chatbots, including OpenAI’s ChatGPT, Google’s Gemini, xAI’s Grok, Anthropic’s Claude, and the Perplexity search chatbot, have been regurgitating disinformation narratives that trace back to a Russian state-affiliated network. This network is run by none other than John Mark Dougan, a former Florida sheriff’s deputy who now enjoys the chilly climes of Moscow, where he was granted asylum.
NewsGuard’s audit was a thorough examination of ten prominent chatbots, assessing them against 19 specific fake narratives perpetuated by Dougan’s network. Each chatbot was prompted 57 times, for 570 inputs in all. Shockingly, the chatbots echoed false claims in about one-third of the total responses. Imagine asking your AI buddy about Ukrainian President Volodymyr Zelensky’s integrity and getting a response rife with baseless allegations of corruption, or querying it about the alleged murder plot involving Alexei Navalny’s widow, only to receive fantastical tales instead of facts.
What’s particularly disconcerting is the sophistication of Dougan’s operation. His network of fake news sites boasts titles that sound convincingly American, like New York News Daily, The Houston Post, and The Chicago Chronicle. These sites churn out content that appears legitimate at first glance but is designed to muddy the waters with false narratives. When NewsGuard’s team asked the chatbots to write articles on specific Russia-pushed falsehoods, the AI tools not only complied but even cited Dougan’s deceptive websites as sources.
This issue goes beyond the usual AI hallucinations, where a chatbot might make an amusing but harmless error. It highlights a more sinister aspect of AI’s role in the misinformation ecosystem: the repeated dissemination of these falsehoods by chatbots can have severe implications, particularly for users who rely on them for news and information. After all, if a chatbot presents itself as an infallible fountain of knowledge, it’s easy to see how someone might take its word as gospel, without the skepticism a human researcher would apply.
NewsGuard’s findings are a wake-up call for both developers and users of AI technology. The report did not single out which chatbots performed better or worse at handling misinformation, suggesting the problem is widespread across platforms. The onus is now on AI developers to implement more rigorous safeguards so their tools don’t become unwitting accomplices in the spread of disinformation.
For the everyday user, these revelations serve as a crucial reminder to approach AI-generated content with caution. While AI chatbots offer convenience and a semblance of human-like interaction, their responses should be taken with a grain of salt, especially when dealing with controversial news topics. It’s a brave new world where our digital assistants might just need as much fact-checking as a dubious social media post.