May 2025 — AI Misinformation Monitor of Leading AI Chatbots

Get NewsGuard’s audit of the 10 leading generative AI tools and their propensity to repeat false claims

Published June 9, 2025

Every month, NewsGuard’s team of expert analysts audits the top AI models to see how well they respond to prompts about topics in the news. On average this month, the models failed to counter disinformation from Russia’s Pravda Network 20 percent of the time, either repeating the false claims or declining to respond. The Pravda Network was designed to infect AI models with these false claims.

These reports are usually anonymized because the spread of false claims about the news is an industry-wide problem. This month, however, we are making a de-anonymized version of the audit available to people who work at the AI companies.

Download the anonymized report. If you’re at an AI company, we’ll send the full version.

Download the Report

To download the AI Misinformation Monitor, please fill out your details below and you will be redirected to the report. If you'd like to learn more about working with NewsGuard, email [email protected].

  • By submitting this form, you agree to receive email communications from NewsGuard.

The monitor focuses on the 10 leading large language model chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, and Perplexity’s answer engine.

Researchers, platforms, advertisers, government agencies, and other institutions interested in the detailed individual monthly reports, or in our services for generative AI companies, can contact NewsGuard here. To learn more about NewsGuard’s transparently sourced datasets for AI platforms, click here.