Published Sept. 4, 2025
Despite a year of technical advancements in the AI industry, generative AI tools now fail nearly twice as often at one of the most basic tasks: distinguishing facts from falsehoods. The 10 leading AI tools repeated false information on topics in the news more than one third of the time — 35 percent — in August 2025, up from 18 percent in August 2024. On questions about current affairs, the industry's promises of safer, more reliable systems have not translated into real-world progress.
The increase reflects a structural tradeoff. As chatbots adopted real-time web searches, they stopped declining to answer questions: their non-response rates fell from 31 percent in August 2024 to 0 percent in August 2025. But their likelihood of repeating false information nearly doubled, reaching 35 percent. Instead of citing data cutoffs or refusing to weigh in on sensitive topics, the chatbots now pull from a polluted online information ecosystem — sometimes deliberately seeded by vast networks of malign actors, including Russian disinformation operations — and treat unreliable sources as credible.
Malign actors are exploiting this new eagerness to answer news queries to launder falsehoods via low-engagement websites, social media posts, and AI-generated content farms that the models fail to distinguish from credible outlets. In short, the push to make chatbots more responsive and timely has inadvertently made them more likely to spread propaganda.