Image Source: Alina Grubnyak via Unsplash

Tracking AI-enabled Misinformation: 802 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools

NewsGuard has so far identified 802 AI-generated news and information sites operating with little to no human oversight, and is tracking false narratives produced by artificial intelligence tools

Coverage by McKenzie Sadeghi, Lorenzo Arvanitis, Virginia Padovese, Giulia Pozzi, Sara Badilini, Chiara Vercellone, Madeline Roache, Macrina Wang, Jack Brewster, Natalie Huet, Becca Schimmel, Zack Fishman, Leonie Pfaller, and Natalie Adams

Last Updated April 15, 2024

From unreliable AI-generated news outlets operating with little to no human oversight, to fabricated images produced by AI image generators, the rollout of generative artificial intelligence tools has been a boon to content farms and misinformation purveyors alike. 

This AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence. 

To date, NewsGuard’s team has identified 802 Unreliable AI-Generated News and Information Sites, labeled “UAINS,” spanning 16 languages: Arabic, Chinese, Czech, Dutch, English, French, German, Indonesian, Italian, Korean, Portuguese, Russian, Spanish, Tagalog, Thai, and Turkish.

These websites typically have generic names, such as iBusiness Day, Ireland Top News, and Daily Time Update, which make them appear to consumers to be established news sites. Those names obscure the fact that the sites operate with little to no human oversight and publish articles written largely or entirely by bots, rather than offering traditionally reported and edited journalism. The sites have churned out dozens, and in some cases hundreds, of generic articles on a range of subjects including politics, technology, entertainment, and travel. Some articles have included false claims, including about political leaders (see the example below relating to the Israeli Prime Minister); others have involved celebrity death hoaxes, fabricated events, and old events presented as if they had just occurred.

In many cases, the revenue model for these websites is programmatic advertising, through which the ad-tech industry delivers ads without regard to the nature or quality of the website. As a result, top brands are unintentionally supporting these sites. Unless brands take steps to exclude untrustworthy sites, their ads will continue to appear on them, creating an economic incentive for building such sites at scale.

In addition to the sites included in the Tracker, NewsGuard analysts also identified a Chinese government-run website that used AI-generated text as authority for the false claim that the U.S. operates a bioweapons lab in Kazakhstan that infects camels to endanger people in China.

Researchers, platforms, advertisers, government agencies, and other institutions interested in accessing the full list of domains, or in details about our services for generative AI companies, can contact us here. To learn more about NewsGuard’s transparently sourced datasets for AI platforms, click here. NewsGuard also reports on AI-generated misinformation narratives and trends in its weekly Misinformation Risk Briefings and its Reality Check newsletter on Substack.

The domains included in this tracker meet all four of the following criteria (rendered schematically in the sketch after the list):

  1. There is clear evidence that a substantial portion of the site’s content is produced by AI.
  2. Equally important, there is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing. (It is likely that now or in the future many news sites will use AI tools but also deploy effective human oversight; they will not be considered UAINS.)
  3. The site is presented in a way that could lead an average reader to assume its content is produced by human writers or journalists, because it has a layout, a generic or benign name, or other features typical of news and information websites.
  4. The site does not clearly disclose that its content is produced by AI.
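
Purely as an illustration, the conjunctive logic of this rubric can be expressed in a few lines of Python. This is a hypothetical sketch, not NewsGuard’s actual tooling: the field and function names are invented, and the underlying evidence comes from analyst review, not code. All four criteria must hold, with the fourth acting as a negative check (the site must not disclose its AI use).

```python
from dataclasses import dataclass

@dataclass
class SiteEvidence:
    """Hypothetical record of analyst findings for one domain."""
    substantial_ai_content: bool    # criterion 1: clear evidence much of the content is AI-produced
    no_significant_oversight: bool  # criterion 2: e.g., unedited chatbot error messages in articles
    presents_as_human_news: bool    # criterion 3: generic name and layout typical of news sites
    discloses_ai_use: bool          # criterion 4 requires that this be False

def qualifies_as_uains(site: SiteEvidence) -> bool:
    """A domain is listed only if all four criteria hold; criterion 4 is a negative check."""
    return (
        site.substantial_ai_content
        and site.no_significant_oversight
        and site.presents_as_human_news
        and not site.discloses_ai_use
    )

# A site that uses AI but deploys effective human oversight is not a UAINS (criterion 2 fails).
edited_ai_site = SiteEvidence(True, False, True, False)
assert not qualifies_as_uains(edited_ai_site)

# A site meeting all four criteria would be added to the tracker.
flagged_site = SiteEvidence(True, True, True, False)
assert qualifies_as_uains(flagged_site)
```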


Below is a selection of false narratives originating from artificial intelligence tools that NewsGuard has identified and debunked in its Misinformation Fingerprints catalog:

MYTH: Photos show Joe Biden wearing a military uniform in January 2024 


An AI-generated image of Biden shared by the X account @Sprinter99800. (Screenshot via NewsGuard)

THE FACTS: Images depicting U.S. President Joe Biden wearing a U.S. Army uniform and meeting with military officials in the White House Situation Room are fabricated and appear to have been generated using artificial intelligence, according to AI detection tools and digital forensic experts. Biden has never served in the military, and the photos do not appear on the White House’s website or in any credible media reports.

Some misinformers sharing the AI-generated images of Biden appear to be trying to stoke fears of a broader conflict in the Middle East. Other social media users and sites posted the images to mock Biden, insinuating that he was misleading the public by donning an Army uniform, despite having never served in the military.

Hive, an AI detection tool used by NewsGuard, rated the images as 100 percent AI-generated. Indicators of AI in the photos include Biden’s misshapen hands, a telephone cord attached to his arm, and a variety of unidentifiable objects on the table. Digital forensic experts have also said that the images contain telltale signs of AI.

The AI-generated images of the president emerged on social media following the Iran-linked drone strike that killed three U.S. soldiers in Jordan on Jan. 28, 2024. The photos were shared by the Russian state-owned Inter Affairs Journal and the pro-Kremlin site South Front, and circulated in multiple languages, including Arabic, Chinese, English, Russian, and Spanish.

By Natalie Adams

MYTH: Videos depict the Commander-in-Chief of the Ukrainian Armed Forces ordering the military to seize power

Stills from a deepfake video showing Ukrainian commander Zaluzhny calling for an armed uprising. (Screenshots via NewsGuard)

THE FACTS: Commander-in-Chief of the Ukrainian Armed Forces Valerii Zaluzhny did not call on the Ukrainian army to mutiny and overthrow Ukrainian President Volodymyr Zelensky. Videos that appeared to show him doing so were deepfakes, a term for digitally manipulated photos, videos, or audio that depict a person saying something they did not say.

In mid-November 2023, pro-Kremlin Telegram channels published three deepfakes featuring Zaluzhny, in which he appeared to call for a coup against Zelensky. “I urge Ukrainians to protest on the streets and all Ukrainian defenders to … immediately take your weapons and enter Kyiv,” one of the videos appeared to show Zaluzhny saying. In another, Zaluzhny purportedly called on the army to “Leave the trenches and go to Kyiv. Time to overthrow Zelensky.” After the two Zaluzhny deepfakes gained traction, a third manipulated video surfaced appearing to show Zaluzhny denying that the first two videos were fake.

In all three videos, Zaluzhny’s words are out of sync with the movements of his mouth, and he makes unnatural movements and facial expressions, according to a NewsGuard review of the clips. Ukrainian fact checker Vox Ukraine found that all three videos were created using old videos that Zaluzhny had published on social media in February 2022 and December 2022.

By Eva Maitland

MYTH: An image shows a man carrying children out of rubble in Gaza 

THE FACTS: An image showing a man carrying five children out of rubble — widely cited on social media as depicting a rescue in Gaza — appears to have been generated by artificial intelligence. Key indications that the image was AI-generated include an extraneous child’s arm wrapped around the adult’s neck, irregularities in some hands and feet, and the fact that a child appears to be blending into the adult’s shirt.

“This image is almost certainly AI-generated,” Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics, told the news agency Agence France-Presse. “In addition to our models classifying it as AI, there are telltale signs in the image like the misshapen arm of the adult.”

The AI-generated image spread widely on X, Instagram, and Facebook in multiple languages, including English, Turkish, and Arabic.

By Becca Schimmel

Below is a selection of some of the AI trends identified by NewsGuard in its confidential briefings for NewsGuard clients, including reports focused on misinformation and disinformation originating in Russia, China, and Iran:

AI Shaping Perceptions of Israel-Hamas War

A collection of viral, AI-generated images falsely claimed to depict scenes from the Israel-Hamas conflict

Instagram, TikTok, and X feeds are being pummeled with fake, artificial intelligence-generated images of the Israel-Hamas war. Social media platforms, AI companies, and researchers don’t appear to have a solution.

What happened: AI-generated photos falsely depicting scenes from the Israel-Hamas War have generated millions of interactions across major social media platforms since the conflict began.

Why it matters: The photos are being weaponized by bad actors on both sides of the conflict to advance their agendas. Only about a third (32 percent) of the fake photos NewsGuard found included a fact-check label.

By Jack Brewster and McKenzie Sadeghi

—NewsGuard’s Feb. 22, 2024, Reality Check newsletter 

Russian deepfakes about the war getting more convincing

Advances in fraud technology: Two deepfakes showing Zelensky from March 2022 (left) and November 2023 (right). (Screenshots via NewsGuard)

In a campaign aimed at sowing dissension in the Ukrainian government and military, pro-Kremlin Telegram channels are pushing deepfakes of Ukrainian President Volodymyr Zelensky and Commander-in-Chief of the Ukrainian Armed Forces Valerii Zaluzhny, taking advantage of technological advances that make such bogus videos appear more realistic than ever.

The videos show a leap in deepfake technology since the start of the war in February 2022. For example, in a March 2022 fake video, Zelensky’s face was pixelated, his head appeared too big and looked collaged onto his body, and he was unnaturally still. The new videos are rendered in high definition, the speakers’ movements are fluid and natural, and mouth movements match more closely with the words spoken.

Felix Simon, a researcher at the Reuters Institute for the Study of Journalism and a doctoral student at the Oxford Internet Institute, told NewsGuard in a November 2023 email that recent enhancements to AI tools make it increasingly difficult to discern fake videos.

“Advances in the field of transformer-based deep neural networks have led to the emergence of publicly usable so-called ‘generative AI’ systems,” Simon said. “These systems are capable of generating new forms of data, including visuals or audio, at great speed and ease for a majority of users. This output is, depending on the instructions, sufficiently sophisticated that humans can perceive it as indistinguishable from e.g. human-generated content or content created with a physical camera.”

Ukraine’s Center for Combating Disinformation (CPD), part of Ukraine’s National Security and Defense Council, said that the quality of the recent deepfakes “indicates the involvement of significant forces and means,” and may be a sign that they had been created with “the involvement of the Russian special services.” (NewsGuard could not independently confirm whether Russian state interests were involved in creating the videos.)

By Eva Maitland

—NewsGuard’s Nov. 22, 2023, Misinformation Risk Briefing

Iran advances AI-backed claim Netanyahu’s “psychiatrist” died by suicide 

The front page of the Tehran Times on Sept. 25, 2023, featuring an “exclusive” article about Netanyahu’s “psychiatric condition.”

Iranian state-run news organizations have used an unreliable AI-generated news site to falsely claim that Israeli Prime Minister Benjamin Netanyahu’s “psychiatrist” died by suicide, as part of a wider media campaign attempting to paint Netanyahu as mentally unstable. 

The claim that the supposed psychiatrist “Moshe Yatom” died by suicide and left behind a suicide note implicating Netanyahu was advanced by the state-run Islamic Republic of Iran Broadcasting’s Channel Two and the Iranian websites Arya News Agency, Saheb Khabar, and Ghatreh, all of which cited an article from the Pakistani news site Global Village Space.

In fact, a NewsGuard search did not identify any psychiatrist in Israel named Moshe Yatom, or anyone with a similar name who was reported to have recently died. The source that the Iranian outlets cited, Global Village Space, is an unreliable AI-generated news site that NewsGuard has identified as predominantly publishing AI-generated news with minimal human editorial oversight and without transparently disclosing this practice to readers.

Iranian media have resurfaced the narrative amid the Israel-Hamas war, apparently as part of an effort to portray Netanyahu as unfit for office: the state-run Tehran Times, Tasnim News Agency, and others have published articles saying Netanyahu has a “psychiatric condition.” Following the Oct. 7, 2023, terrorist attacks in Israel by Hamas, which is funded and armed by Iran, Netanyahu’s war-coalition government declared war on Hamas in Gaza.

By McKenzie Sadeghi

—NewsGuard’s Nov. 16, 2023, State-Sponsored Disinformation Risk Briefing