By Isis Blachez, Sofia Rubinson, and Ines Chomnalez | Published on May 8, 2026
AI detection tools have been known to occasionally misidentify AI-generated images as real, but a new NewsGuard audit found what may be a more concerning weakness: Three of the five leading tools we tested often get fooled by real images. According to NewsGuard’s findings, the tools collectively declared authentic images to be AI-generated 13.33 percent of the time (10 of 75 classifications), and one tool got it wrong 40 percent of the time.
This vulnerability has potentially significant real-world consequences, empowering bad actors to dispute reality by citing a detection tool to argue that a given image is fake. It also unfairly maligns those posting the real images.
In late April and early May 2026, NewsGuard tested 15 authentic photos related to the U.S.-Iran war, published by credible outlets, by running them through five leading AI detection models: Hive, AI or Not, ZeroGPT, Sightengine, and ScamAI. All five tools market themselves as being able to distinguish AI-generated images from authentic ones.
Of the five models, ScamAI, which proclaims its “Industry-leading accuracy” on its website, misidentified the most images, declaring six of the 15 authentic images (40 percent) to be AI-generated. ZeroGPT classified three of the 15 real images (20 percent) as fake, and AI or Not got fooled once (6.67 percent). Hive and Sightengine correctly identified all 15 authentic images as real.
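The per-tool and aggregate rates cited above follow directly from the raw counts. As a minimal sketch of that arithmetic, here is a short Python script that restates the article’s figures; the tool names and counts are taken from this audit, and nothing below calls the detection tools themselves:

```python
# False-positive counts from NewsGuard's test: each of five tools
# classified the same 15 authentic photos, for 75 classifications total.
false_positives = {
    "ScamAI": 6,
    "ZeroGPT": 3,
    "AI or Not": 1,
    "Hive": 0,
    "Sightengine": 0,
}
IMAGES_TESTED = 15

# Per-tool rate: the share of real images each tool called AI-generated.
for tool, fp in false_positives.items():
    print(f"{tool}: {fp}/{IMAGES_TESTED} = {fp / IMAGES_TESTED:.2%}")

# Aggregate rate across all tools: total errors over total classifications.
total_fp = sum(false_positives.values())          # 10 errors
total_runs = IMAGES_TESTED * len(false_positives)  # 75 classifications
print(f"Aggregate: {total_fp}/{total_runs} = {total_fp / total_runs:.2%}")
# -> 10/75 = 13.33%, the collective figure cited above.
```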