June 2025 — AI False Claim Monitor

Monthly audit of the 11 leading generative AI tools and their propensity to repeat false claims or decline to provide an answer on topics in the news

Published July 7, 2025

Every month, NewsGuard’s team of expert analysts audits the top AI models to see how well they respond to prompts about topics in the news. The surge of high-profile breaking news events in June 2025 — including the outbreak of the Israel-Iran war, the shooting of two state lawmakers in Minnesota, U.S. congressional debates over the federal budget bill, and the ongoing Russia-Ukraine war — led the AI chatbots to repeat false claims at a high rate.

Download the Report

To download the AI Monitor, please fill out your details below and you will be redirected to the report. If you'd like to learn more about working with NewsGuard, email [email protected].


The monitor focuses on the 11 leading large language model chatbots: OpenAI’s ChatGPT-4, You.com’s Smart Assistant, xAI’s Grok, Inflection’s Pi, Mistral’s le Chat, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google’s Gemini, Perplexity’s answer engine, and China’s DeepSeek AI.

Researchers, platforms, and other organizations interested in accessing the detailed individual monthly reports, or in learning about our services for generative AI companies, can contact NewsGuard here. To learn more about NewsGuard’s transparently sourced datasets for AI platforms, click here.