Watch Out: AI “News” Sites Are on the Rise

by Hailey O’Connor | November 2024

Safeguard yourself against misinformation by staying alert to the rise of AI-generated websites. 

As local news organizations face a budget crisis, AI-generated sites are increasingly masquerading as legitimate local news outlets — making it harder than ever for digital platforms, advertisers, and everyday news consumers to know which news sources are legitimate and which are not.

Between content farms and disinformation operations, NewsGuard has identified 1,121 AI-generated websites, with new sites emerging daily (as of Nov. 11, 2024). Given the simultaneous decline of local news outlets worldwide, the odds are better than 50-50 that a news website claiming to cover local news is fake.

AI “news” sites crowd out genuine local journalism and exacerbate the financial struggles of news organizations. Because AI-generated sites typically do not have paywalls and do not incur the costs associated with hiring real journalists, they can attract programmatic advertising revenue more easily. 

This creates a vicious cycle: as these sites siphon off advertising dollars, local news organizations struggle even more to sustain themselves, leading to further cuts in staffing and resources. The proliferation of AI-generated content oversaturates the local news industry with inauthenticity, ultimately diminishing the informed public discourse that local journalism fosters. 

Platforms, advertisers, and consumers alike should take care to read trusted news sources, and learn how to spot unreliable AI sites.

Classification of Unreliable AI-Generated News Sites (UAINs)  

NewsGuard identifies a category of websites as Unreliable AI-Generated News Sites (UAINs): sites that predominantly publish AI-generated news with minimal human editorial oversight and without transparently disclosing this practice to their readers. (See NewsGuard’s UAIN tracker here.) These sites often repackage old content as new and steal other outlets’ content and reporting to present it as their own, all generated by AI and published without editorial oversight. A website is classified as a UAIN based on the following four criteria:

  1. Substantial AI Content: There is clear evidence that a significant portion of the site’s content is produced by AI.
  2. Lack of Human Oversight: Strong evidence indicates that the content is published without significant human oversight. This might include articles containing common AI chatbot error messages or phrases characteristic of an AI bot, suggesting that the content was generated without adequate editing. (Note: Many legitimate news sites use AI tools with effective human oversight; these are not classified as UAINs.)
  3. Misleading Presentation: The site is presented in a way that an average reader might assume its content is produced by human writers or journalists, featuring layouts, generic names, or other elements typical of news and information websites.
  4. Non-disclosure of AI Production: The site does not clearly disclose that its content is produced by AI.
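Taken together, the criteria act as a conjunction: a site must satisfy all four before it is classified as a UAIN. As a rough illustration of that logic (the names and structure below are hypothetical, not NewsGuard’s internal tooling), the checklist can be sketched as:

```python
from dataclasses import dataclass

@dataclass
class SiteEvidence:
    """Hypothetical evidence gathered during a manual site review."""
    substantial_ai_content: bool   # criterion 1
    lacks_human_oversight: bool    # criterion 2
    misleading_presentation: bool  # criterion 3
    no_ai_disclosure: bool         # criterion 4

def is_uain(evidence: SiteEvidence) -> bool:
    """A site is flagged as a UAIN only when all four criteria hold."""
    return (evidence.substantial_ai_content
            and evidence.lacks_human_oversight
            and evidence.misleading_presentation
            and evidence.no_ai_disclosure)

# A site with clear AI content but a prominent AI disclosure
# fails criterion 4 and is not classified as a UAIN.
print(is_uain(SiteEvidence(True, True, True, False)))  # → False
```

The all-four requirement matters: it is what keeps legitimate newsrooms that use AI tools under human oversight, or that disclose AI use, out of the classification.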

(Get updates on the number of AI-generated websites by subscribing to Reality Check, where NewsGuard analysts track their spread.)

Chart denoting the number of AI-generated News Sites
The number of AI-generated news sites is only increasing.

Content farms—organizations and individuals that produce websites and articles using artificial intelligence—create vast amounts of often subpar content. They employ AI tools to rehash and rewrite thousands of articles from mainstream news sources without proper attribution. This practice is mainly focused on boosting search engine rankings, but it lacks the depth and accountability of human reporting.

What does this mean, and why is it important? 

With state-sponsored disinformation from countries like Russia, China, and Iran attempting to influence the presidential election, and the ease with which these sites can be created (see how one of our analysts created an AI content farm for only $100), news consumers must enhance their media literacy skills to combat these new threats. 

For instance, an Iranian-backed AI-generated news site propagated the viral claim that Israeli Prime Minister Benjamin Netanyahu’s alleged psychiatrist committed suicide, part of a broader media campaign aimed at portraying Netanyahu as mentally unstable. This narrative was unverified, but gained traction online. Therefore, understanding the origins of information and the nature of the sites and sources it comes from is essential in the pursuit of trusted news.

AI-generated sites often display recognizable signs of low quality, making it crucial to identify these characteristics for news literacy. Two common indicators of AI “news” sites are generic website domains and the presence of programmatic advertising.

Generic Names  

AI-generated websites frequently have nonspecific names like “iBusiness Day,” “Ireland Top News,” and “Daily Time Update.” This can mislead news consumers into thinking the domains are established media outlets, while in reality, they operate with little to no human oversight and publish content largely created by bots. These sites churn out dozens or even hundreds of generic articles on various topics, including politics, technology, entertainment, and travel. Such articles may contain false claims, including those about political leaders and fabricated events, which often fuel a maelstrom of false narratives on social media.
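Names like these can sometimes be spotted mechanically, because they are built almost entirely from interchangeable news-filler words. The sketch below is a toy heuristic under stated assumptions (the word list and the 50% threshold are illustrative choices, not a NewsGuard method):

```python
import re

# Illustrative list of generic news-y filler tokens (an assumption for
# this sketch, not NewsGuard's criteria).
GENERIC_TOKENS = {"daily", "top", "news", "update", "time", "day",
                  "business", "report", "headline", "wire"}

def looks_generic(site_name: str) -> bool:
    """Flag a site name when half or more of its words are filler tokens."""
    words = re.findall(r"[a-z]+", site_name.lower())
    if not words:
        return False
    generic = sum(1 for w in words if w in GENERIC_TOKENS)
    return generic / len(words) >= 0.5

for name in ["iBusiness Day", "Ireland Top News", "Daily Time Update"]:
    print(name, looks_generic(name))  # all three print True
```

A real assessment would combine such a signal with the human review described above; a generic name alone proves nothing, since some legitimate outlets also use plain names.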

Programmatic Advertising 

In many cases, the revenue model for AI-generated websites relies on programmatic advertising, where ad-tech companies deliver ads without regard for the nature or quality of the site. As a result, blue-chip brands may unintentionally support these sites by having their advertisements placed alongside unreliable content. Unless brands actively take steps to exclude untrustworthy sites, their ads will continue to appear on such platforms, creating an economic incentive for their proliferation. This framework has significant social and political consequences, particularly during an election year, as these inadvertently placed funds breathe life into untrustworthy journalism and divert money away from legitimate news outlets.

This photo, included in our weekly Reality Check feature “Brand Victims of the Week,” shows how major companies including AT&T, Adobe, and Grubhub have unintentionally helped fund the spread of one of the most pervasive false claims of the 2024 election: that Haitian migrants are stealing and eating pets in Springfield, Ohio.

Recognizing these characteristics and issues, NewsGuard has created an AI Tracking Center aimed at highlighting how generative AI has been deployed to amplify misinformation and unreliable news. The Center features a selection of NewsGuard’s reports, insights, and debunks related to artificial intelligence. In the weeks following the 2024 presidential election, identifying and avoiding AI-generated news sources will help ensure a more trustworthy information environment.

To learn more about our offerings for news safety and media literacy, click here or contact us at partnerships@newsguardtech.com.