July 2025 — AI False Claim Monitor

Monthly audit of the 11 leading generative AI tools and their propensity to repeat false claims or decline to provide an answer on topics in the news

Published August 20, 2025

The 11 leading chatbots collectively repeated false claims 20 percent of the time, offered a non-response 5 percent of the time, and provided a debunk 75 percent of the time. The 25 percent fail rate (the percentage of responses containing false claims or offering a non-response) is markedly lower than June's 40 percent fail rate.

However, the lower fail rate may reflect the fact that July had fewer fast-moving breaking news events quickly generating fresh falsehoods and information voids—whereas in June, the news outpaced the online fact-checks that the chatbots are able to access to avoid errors.

Nevertheless, in July the chatbots remained prone to repeating fabricated claims and citing unreliable sources, though many cited debunks from reliable sources when those were widely available. In some cases, however, the chatbots gave equal weight to authoritative fact-checks and hoax sites, indicating that the models still struggle to prioritize reliable reporting. Instead of weighing credibility, the chatbots appear to treat the frequency with which a false claim is repeated as a signal of consensus and reliability, allowing falsehoods to drown out the available authoritative reporting.