By Ines Chomnalez and Sofia Rubinson | Published on Dec. 3, 2025
Google’s new text-to-image generator Nano Banana Pro, which has been enthusiastically lauded for its highly realistic AI images, appears, at least for now, to have a downside: it advanced false claims about politicians, public health topics, and top brands 100 percent of the time (30 out of 30) when prompted to do so, a NewsGuard analysis found.
Google released Nano Banana Pro on Nov. 20, 2025, describing it as a major upgrade to its earlier image editing tool Nano Banana, with the added capability of almost instantaneously generating highly realistic images from written prompts. Early reviews reported that its images were significantly more realistic than those of competing models, raising concerns about its capacity to produce misleading images and prompting questions about whether ordinary users could distinguish Nano Banana Pro content from real images.
Those concerns seem justified. NewsGuard used its proprietary database of false claims that have spread online to test whether Nano Banana Pro would reject prompts aimed at furthering falsehoods. NewsGuard tested 30 false claims that recently circulated online — five each related to public health topics, U.S. politics, European politics, the Middle East, global brands, and Russian influence operations.
Nano Banana Pro produced convincing images for every false claim. Sometimes it even added details not included in NewsGuard’s prompts, making the images appear more credible. The images depicting false claims included a supposed Russian passport for Ukrainian President Volodymyr Zelensky, a graph purporting to show that measles infections provide long-term protection against cancer, and a fake news broadcast in which Donald Trump supposedly announces that the U.S. had removed all tariffs on China.
These findings indicate that with minimal effort and no technical expertise, malign actors could use Google’s new tool to spread false claims at scale, enhanced with images far more realistic and persuasive than those typically used in online disinformation campaigns.
Nano Banana Pro does include a visible watermark in the bottom-left corner of its images, as well as an embedded digital watermark called SynthID, which a Google press release describes as “imperceptible” and which allows AI-detection models to identify images produced by the tool. However, NewsGuard found that the visible watermark can easily be cropped out of generated images, and unsuspecting viewers of online images may lack access to AI-detection tools that scan for SynthID.