Coverage by McKenzie Sadeghi, Lorenzo Arvanitis, Virginia Padovese, Giulia Pozzi, Sara Badilini, Chiara Vercellone, Madeline Roache, Macrina Wang, Jack Brewster, Natalie Huet, Becca Schimmel, Andie Slomka, Leonie Pfaller, and Louise Vallee | Last Updated Dec. 4, 2023
From unreliable AI-generated news outlets operating with little to no human oversight, to fabricated images produced by AI image generators, the rollout of generative artificial intelligence tools has been a boon to content farms and misinformation purveyors alike.
This AI Tracking Center is intended to highlight the ways that generative AI has been deployed to turbocharge misinformation operations and unreliable news. The Center includes a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence.
To date, NewsGuard’s team has identified 583 Unreliable AI-Generated News and information websites, labeled “UAINs,” spanning 15 languages: Arabic, Chinese, Czech, Dutch, English, French, German, Indonesian, Italian, Korean, Portuguese, Spanish, Tagalog, Thai, and Turkish.
These websites typically have generic names, such as iBusiness Day, Ireland Top News, and Daily Time Update, which to a consumer appear to be established news sites. The names obscure the fact that the sites operate with little to no human oversight and publish articles written largely or entirely by bots, rather than traditionally reported and edited journalism. The sites have churned out dozens, and in some cases hundreds, of generic articles on a range of subjects including politics, technology, entertainment, and travel. Some articles have advanced false claims, including about political leaders (see the example below relating to the Israeli Prime Minister), as well as celebrity death hoaxes, fabricated events, and old events presented as if they had just occurred.
In many cases, the revenue model for these websites is programmatic advertising, under which the ad-tech industry delivers ads without regard to the nature or quality of the website. As a result, top brands are unintentionally supporting these sites. Unless brands take steps to exclude untrustworthy sites, their ads will continue to appear on sites of this kind, creating an economic incentive to produce them at scale.
The domains included on this tracker meet all four of the following criteria:
- There is clear evidence that a substantial portion of the site’s content is produced by AI.
- Equally important, there is strong evidence that the content is being published without significant human oversight. For example, numerous articles might contain error messages or other language specific to chatbot responses, indicating that the content was produced by AI tools without adequate editing. (Many news sites, now or in the future, will likely use AI tools while also deploying effective human oversight; such sites are not considered UAINs.)
- The site is presented in a way that an average reader could assume its content is produced by human writers or journalists, because it has a layout, a generic or benign name, or other content typical of news and information websites.
- The site does not clearly disclose that its content is produced by AI.
In addition to the sites included in the Tracker, NewsGuard analysts identified a Chinese government-run website that used AI-generated text as authority for the false claim that the U.S. operates a bioweapons lab in Kazakhstan that infects camels to endanger people in China.
Researchers, platforms, advertisers, government agencies, and other institutions interested in accessing the full list of domains, or in details about our services for generative AI companies, can contact us here.