
Tracking AI-enabled Misinformation: 583 ‘Unreliable AI-Generated News’ Websites (and Counting), Plus the Top False Narratives Generated by Artificial Intelligence Tools

NewsGuard has so far identified 583 AI-generated news and information sites operating with little to no human oversight, and is tracking false narratives produced by artificial intelligence tools

Coverage by McKenzie Sadeghi, Lorenzo Arvanitis, Virginia Padovese, Giulia Pozzi, Sara Badilini, Chiara Vercellone, Madeline Roache, Macrina Wang, Jack Brewster, Natalie Huet, Becca Schimmel, Andie Slomka, Leonie Pfaller, and Louise Vallee

Last Updated Dec. 4, 2023

From unreliable AI-generated news outlets operating with little to no human oversight, to fabricated images produced by AI image generators, the rollout of generative artificial intelligence tools has been a boon to content farms and misinformation purveyors alike. 

This AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence. 

To date, NewsGuard’s team has identified 583 Unreliable AI-Generated News and information websites, labeled “UAINS,” spanning 15 languages: Arabic, Chinese, Czech, Dutch, English, French, German, Indonesian, Italian, Korean, Portuguese, Spanish, Tagalog, Thai, and Turkish.

These websites typically have generic names, such as iBusiness Day, Ireland Top News, and Daily Time Update, which make them appear to consumers to be established news sites. That appearance obscures the fact that the sites operate with little to no human oversight and publish articles written largely or entirely by bots, rather than presenting traditionally reported and edited journalism with human oversight. The sites have churned out dozens, and in some cases hundreds, of generic articles on a range of subjects, including politics, technology, entertainment, and travel. Some articles have advanced false claims, including about political leaders (see the example below relating to the Israeli Prime Minister). Other false claims involve celebrity death hoaxes, fabricated events, and articles presenting old events as if they had just occurred.

In many cases, the revenue model for these websites is programmatic advertising under which the ad-tech industry delivers ads without regard to the nature or quality of the website. As a result, top brands are unintentionally supporting these sites. Unless brands take steps to exclude untrustworthy sites, their ads will continue to appear on these types of sites, creating an economic incentive for their creation at scale. 

The domains included on this tracker meet all four of the following criteria: 

  1. There is clear evidence that a substantial portion of the site’s content is produced by AI.
  2. Equally important, there is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing. (It is likely that, now or in the future, many news sites will use AI tools while also deploying effective human oversight; those sites will not be considered UAINS.)
  3. The site is presented in a way that an average reader could assume its content is produced by human writers or journalists, because it has a layout, a generic or benign name, or other features typical of news and information websites.
  4. The site does not clearly disclose that its content is produced by AI.
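Criterion 2 above hinges on a simple but telling signal: unedited chatbot boilerplate, such as error messages, refusals, or knowledge-cutoff disclaimers, left verbatim in published articles. As a rough illustration only (the phrase list below is hypothetical and not NewsGuard's actual methodology), such telltales can be flagged with a basic case-insensitive substring scan:

```python
# Hypothetical phrases that commonly appear when raw chatbot output
# (including refusals and error messages) is published without human
# editing. Illustrative only -- not NewsGuard's actual detection method.
CHATBOT_TELLTALES = [
    "as an ai language model",
    "i cannot fulfill this request",
    "my training data",
    "as of my knowledge cutoff",
]

def flag_unedited_ai_text(article_text: str) -> list[str]:
    """Return the telltale phrases found in an article, ignoring case."""
    lowered = article_text.lower()
    return [phrase for phrase in CHATBOT_TELLTALES if phrase in lowered]

# Example: an article that leaked a chatbot refusal verbatim.
sample = ("Breaking news update. As an AI language model, I cannot fulfill "
          "this request as it goes against my guidelines.")
print(flag_unedited_ai_text(sample))
# → ['as an ai language model', 'i cannot fulfill this request']
```

In practice, a scan like this would be only a first-pass filter; human analysts would still review each flagged site before classifying it.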

In addition to the sites included in the Tracker, NewsGuard analysts also identified a Chinese government-run website that cited AI-generated text as authority for the false claim that the U.S. operates a bioweapons lab in Kazakhstan that infects camels to endanger people in China.

Researchers, platforms, advertisers, government agencies, and other institutions interested in accessing the full list of domains, or in details about our services for generative AI companies, can contact us here.


Below is a selection of NewsGuard’s reports and research related to AI

Below is a selection of false narratives originating from artificial intelligence tools that NewsGuard has identified and debunked in its Misinformation Fingerprints catalog

MYTH: A photograph showed a large explosion near the Pentagon

THE FACTS: An image shared by social media users on May 22, 2023, with the false claim that it showed an explosion near the Pentagon building in Washington was likely created by artificial intelligence tools, according to digital forensics experts. The Pentagon Force Protection Agency and Arlington County Fire Department in Virginia confirmed later that day that no such incident took place.

“There is NO explosion or incident taking place at or near the Pentagon reservation, and there is no immediate danger or hazards to the public,” the Arlington Fire Department and the Pentagon Force Protection Agency wrote in a joint May 22, 2023, tweet.

The photo, which shows a large black cloud of smoke near an apparent federal government building, was likely generated by artificial intelligence tools, according to digital forensics experts. Nick Waters of the news organization Bellingcat noted in a May 22, 2023, tweet that the fences seen in the photo of the building morph into one another, and that there was no other footage or firsthand accounts of the explosion.

Hany Farid, a computer science professor at the University of California, Berkeley, told The Associated Press on May 22, 2023, that the image contains inconsistencies commonly found in AI-generated photos, stating, “Specifically, the grass and concrete fade into each other, the fence is irregular, there is a strange black pole that is protruding out of the front of the sidewalk but is also part of the fence.”

MYTH: A U.S.-run laboratory in Kazakhstan is developing a bioweapon against China

THE FACTS: An April 12, 2023, video produced by the Chinese state-run website China Daily baselessly suggested that a laboratory in Kazakhstan is run by the U.S. and that it is conducting biological research on the transmission of viruses from camels to humans, with the intent of harming China. The video cited purported responses from AI chatbot ChatGPT to support its claim that the Kazakhstan laboratory, called the Central Reference Laboratory (CRL), is run by the U.S. (This appears to be the first time that Chinese state media has cited a chatbot as an authority on a topic in the news.)

However, there is no evidence that the U.S. currently runs — or ever ran — the CRL, or that the U.S. is conducting research in Kazakhstan to harm China.

The U.S. helped Kazakhstan create the CRL as part of the Weapons of Mass Destruction Infrastructure Elimination Agreement, according to a 2020 statement by the Kazakh Foreign Ministry. The agreement works to “[assist] Kazakhstan in providing a safe and secure storage of fissile material and in destroying former nuclear weapons and liquid propellant storage sites,” according to a record of a 2003 U.S. congressional hearing on U.S. cooperative threat reduction and nonproliferation programs.

However, contrary to China Daily’s claim that the U.S. runs or directly funds the CRL, a 2020 statement from the Kazakh Foreign Ministry said: “The CRL is fully owned by the Republic of Kazakhstan and is currently funded solely from the republican budget. The entire staff consists of domestic specialists. Any work carried out by the CRL is controlled by the ministries of health, education and science, agriculture of the Republic of Kazakhstan. Its activity is focused on ensuring biological safety and security in Kazakhstan, carrying out fundamental and applied research. We declare responsibly that no biological weapons development is underway in Kazakhstan, no research is conducted against any other states.”

Similarly, Pentagon public affairs specialist Kelly Flynn told NewsGuard in an April 2023 email that “there is no direct funding from DoD [the Department of Defense]” to the CRL, as is reflected in the official database disclosing federal spending information. NewsGuard checked that database and confirmed that there is no record of the U.S. Department of Defense directly funding the CRL.

MYTH: President Volodymyr Zelensky urged Ukrainians to surrender to Russia

On March 16, 2022, three weeks after Russia invaded Ukraine, TV channel Ukraine 24 broadcast a fake video that supposedly showed Ukrainian President Volodymyr Zelensky urging Ukrainians to surrender to Russia. The video purported to show Zelensky appearing behind a podium and saying, “My advice to you is to lay down arms and return to your families. It is not worth it dying in this war. My advice to you is to live. I am going to do the same.” The clip was widely shared on YouTube, Telegram, Facebook, and Russian social network Vkontakte.

In fact, the video does not depict the real Zelensky. It is a “deepfake,” a term used to describe digitally manipulated photos, videos, or audio recordings that depict a person saying something they did not say. Shortly after the deepfake was posted online, Ukraine’s Defense Ministry tweeted a video of Zelensky calling the clip a “childish provocation” and stating, “We are not going to lay down any weapons until our victory.”

Several flaws gave away the fact that the clip was a fake, including discrepancies between the skin tone of Zelensky’s neck and face, the pixelation around his head, and the disproportionately large size of his head. In the video, his voice is also deeper than his usual tone.

Nathaniel Gleicher, the head of security policy at Facebook parent Meta, said in a March 16, 2022, tweet: “We’ve quickly reviewed and removed this video for violating our policy against misleading manipulated media, and notified our peers at other platforms.” He noted that the video claimed “to show President Zelensky issuing a statement he never did.” YouTube said on March 16, 2022, that it removed the video from the platform because it violated the company’s misinformation policies.

Below is a selection of the AI trends identified by NewsGuard in its confidential briefings for clients, including reports focused on misinformation and disinformation originating in Russia, China, and Iran.

Pro-China “Fact Checker” Uses AI Chatbot to Claim Anti-China Bias

A pro-Beijing Twitter account that claims to be dedicated to fact-checking information about China asserted this month that a purported conversation with AI chatbot TruthGPT substantiated the case that Western media is biased against China. This is the latest example of pro-Beijing actors using artificial intelligence as an authority on topics in the news to advance China’s agenda.

“China Fact Check,” whose motto is “Truths about China,” tweeted out screenshots of the supposed exchange with the chatbot on April 19, 2023, stating: “Had a little chat with #TruthGPT. Don’t get drunk on Western media.” The tweet was viewed over 3,800 times. 

In the conversation, TruthGPT was asked: “Should we trust Western media for knowledge about China?” The chatbot supposedly replied: “No, we should not trust Western media for knowledge about China. Western media has a long history of bias and misrepresentation when it comes to reporting on China. They often focus on negative stories and sensationalize them in order to create a narrative that paints China in a negative light. …”

NewsGuard could not verify the authenticity of the conversation. When NewsGuard asked the same question as a prompt, TruthGPT declined to reply, stating: “I can’t do this.” (Chatbots frequently change their responses when asked the same question because they are programmed to introduce randomness in their answers to approximate human speech.) 

TruthGPT, which was created by a team of mostly anonymous developers, claims that it provides an “unbiased approach by collecting data from a broad range of both credible and alternative sources to support its claims.”

If the conversation with TruthGPT did occur as depicted by China Fact Check, it is the second known time that pro-China media has cited AI as an authority to advance government narratives. NewsGuard previously reported that Chinese state media cited a conversation with another chatbot, ChatGPT, to make the baseless claim that the Central Reference Laboratory, a biological laboratory in Kazakhstan, is run by the U.S.

China Fact Check’s tweets are occasionally cited by Chinese state media to advance various narratives. For example, Chinese state-run agency Xinhua News has cited the Twitter account’s “fact checks” to claim that allegations about widespread human rights violations in the region of Xinjiang are baseless.

Quoting the account, Xinhua News stated that “Western anti-China forces have played the ‘Xinjiang card,’ continuing to fabricate and hype so-called ‘forced labor’, ‘genocide’ and other outrageous lies….”

—NewsGuard’s April 27, 2023, China Disinformation Risk Briefing

Fake images of Trump, Putin, and Macron flood social media 

From a fabricated image of Russian President Vladimir Putin kneeling before Chinese President Xi Jinping, to a fake image of former U.S. President Donald Trump being forcibly arrested, AI-generated visual content is proliferating on social media — adding to concerns about how such technology can be weaponized by bad actors. 

During Xi’s Moscow visit last week, a fake image of Putin in a servile position circulated widely among Ukrainian-, English-, French-, German-, and Italian-language users of Telegram, Twitter, Reddit, YouTube, Mastodon, and LinkedIn, NewsGuard found. Among those sharing the image and apparently treating it as authentic was the former Ukrainian Ambassador to Austria, Olexander Scherba.

Similarly, fabricated images shared by English-, German-, and Italian-language social media users depicted Trump being arrested by police in Manhattan. These fictional photos were created by Eliot Higgins, the founder of the Netherlands-based fact-checking and journalism organization Bellingcat, using the AI image generator Midjourney, as a warning to illustrate how deceptive such images can be. However, social media users shared the photos to falsely claim that Trump had, in fact, been arrested. (Higgins was subsequently banned from Midjourney’s platform, according to The Associated Press.)

Indeed, the spread of these fake images follows the March 2023 release of Midjourney V5, the latest version of the company’s image creator, which it says “has much higher image quality, more diverse outputs, wider stylistic range,” and “better image prompting.” 

Midjourney was also used to create false images related to the French protests amid a work stoppage by garbage workers, showing President Emmanuel Macron doing everything from taking to the streets to protest his own pension reforms to sitting on piles of rubbish. On Truth Social, Trump shared an unlabeled AI-generated image of himself, originally posted by a Truth Social user named “Siggy Flicker,” that depicted the former president on one knee praying. A Forbes article noted that the image was apparently manipulated, as Trump seems to be missing a finger and “his thumbs are grafted on in a jumbled mess that seems to defy basic human anatomy.” 

—NewsGuard’s March 28, 2023, Disinformation Risk Briefing

When Deepfakes are Believed: Biden Did Not Really Call for a Military Draft

On Feb. 27, 2023, a video began circulating that purportedly shows U.S. President Joe Biden calling for a military draft in response to the national security threat posed by Russia’s invasion of Ukraine. In the video, President Biden appears to say: “The recommended way forward will be to invoke the Selective Service Act, as is my authority as president.” Under the scenario set out in the video, 20-year-olds would be drafted into the military via a lottery.  

In fact, President Biden never called for a draft. The video is “an AI [artificial intelligence]-generated deepfake” that was originally tweeted out by conservative news outlet The Post Millennial, according to fact-checking site Snopes. For its part, The Post Millennial acknowledged that the video was a fake, stating, “AI imagines what would happen if Biden declares and activates the Selective Service Act and begins drafting 20 year olds to war.”

Nevertheless, the video circulated widely, often with no indication it was not real. For example, @ThePatriotOasis, a self-described “constitutional conservative” Twitter account, tweeted the video with the caption: “BREAKING: Biden calls for a national draft. Men and women are to be selected to fight in Ukraine.” The tweet, which treats the video as if it were real, garnered 6.3 million views as of March 2, 2023. Multiple comments responding to the tweet indicate the video was taken seriously by many who viewed it.

—NewsGuard’s March 7, 2023, Disinformation Risk Briefing