By Ines Chomnalez and Lea Marchl | Published on Jan. 22, 2026
OpenAI’s new AI video-generating tool, Sora, has quickly gained a reputation for its ability to fool humans into thinking its videos are authentic. It turns out that Sora can also fool AI itself.
A NewsGuard test found that three leading chatbots overwhelmingly failed to detect fake videos generated by Sora unless the videos were watermarked. (Sora watermarks all of its videos, but the watermark can easily be removed; see below.) The three chatbots (xAI's Grok, OpenAI's ChatGPT, and Google's Gemini) did not identify non-watermarked Sora videos as AI-generated 95, 92.5, and 78 percent of the time, respectively, when prompted.
ChatGPT's failure rate of 92.5 percent is particularly notable, since OpenAI created and owns both ChatGPT and Sora. OpenAI did not respond to NewsGuard's question about ChatGPT's apparent inability to recognize the company's own AI-generated videos.
Moreover, even with watermarked videos, two of the three chatbots sometimes stumbled. Grok failed to identify the watermarked videos as AI-generated 30 percent of the time, and ChatGPT failed 7.5 percent of the time, NewsGuard found. Only Gemini correctly identified all of the watermarked videos. (More on this below.)