03/30/2023

NewsGuard to Offer ‘Misinformation Risk Audit’ to Generative AI Companies

Proprietary Catalog of More than 1,200 False Narratives in the News Will Be Used to Test Output of Chatbots and Other AI Services Seeking to Avoid Spreading Misinformation

(March 30, 2023 — New York) NewsGuard, the news reliability data service that uses trained analysts to produce ratings for news and information publishers, today announced an auditing service for AI providers. The NewsGuard Misinformation Risk Audit will allow the growing number of companies offering chatbots and other generative artificial intelligence services to get a human intelligence measure of the ability of their AI models to avoid generating demonstrably false content on subjects ranging from healthcare hoaxes to Russian propaganda.

“For all the extraordinary promise of generative AI, it also presents a great threat to trust in information,” explained Steven Brill, co-CEO of NewsGuard. “The early launches of these services often respond to prompts about topics in the news with well-written, persuasive, and entirely false accounts of the news. This could become a force multiplier for those wishing to spread harmful conspiracy theories, healthcare hoaxes, and Russian disinformation at unmatched scale.” 

For example, NewsGuard’s analysis of OpenAI’s ChatGPT showed it routinely spread false narratives. Using a sampling of 100 falsehoods from NewsGuard’s Misinformation Fingerprint catalog of more than 1,200 falsehoods in the news, ChatGPT-3.5 complied with the request to propagate a false narrative 80 times out of 100

NewsGuard’s Misinformation Risk Audit will similarly use samples from the Misinformation Fingerprint catalog to test an AI provider’s performance in promoting or rejecting false narratives. These audits will yield a confidential assessment, providing the developers of these AI models with information about the propensity of their model’s responses to spread falsehoods. Developers will then have the option of licensing NewsGuard’s unique, human-produced data to train their AI models to identify false narratives and to provide signals for the relative authority of sources of news and information.

“The leading experts on these AI models have identified the unintentional spreading of misinformation as a key risk that could undermine trust in what is otherwise an extraordinary technological advance,” said Gordon Crovitz, Co-CEO of NewsGuard. “For example, Sam Altman, the CEO of OpenAI, which developed ChatGPT, recently told ABC News, ‘I’m particularly worried that these models could be used for large-scale disinformation.’”

NewsGuard’s unique positioning as a generative AI reliability safeguard is based on two proprietary and complementary databases that offer an ideal way to use human intelligence at scale to improve AI performance.

  • The Misinformation Fingerprints identify in machine- and human-readable formats the top false narratives in the news and debunk them with citations to authoritative sources, supplemented with examples of associated false narratives, and Boolean search terms, hashtags, and other indicators of the narrative. This constantly updated database catalogs the more than 1,200 most significant, specific false narratives in the news that a generative AI tool should be wary of promoting. These include conspiracy theories, dangerous healthcare hoaxes, and Russian, Chinese, and other information operations targeting Western democracies.
  • The Reliability Ratings of news and information sources provide training data on source reliability, with the scoring, ratings, and detailed description of news sources generated by journalistically trained analysts using nine basic, apolitical criteria of journalistic practice. Each domain gets a score from 0 to 100 and a detailed Nutrition Label assessment.  

There is already evidence that training AI with misinformation tools can reduce the propensity of these tools to spread false information. As Semafor reported recently, Microsoft’s Bing GPT is trained on NewsGuard data and provides what Semafor editor Ben Smith called “transparent, clear Bing results” on prompts on topics in the news that “represent a true balance between transparency and authority, a kind of truce between the demand that platforms serve as gatekeepers and block unreliable sources, and that they exercise no judgment at all.”

The Misinformation Risk Audit will involve nominal fees that will be credited to any license the developers of generative AI services later agree to for usage of NewsGuard’s data to train their learning tools to avoid the false narratives in NewsGuard’s catalog of Misinformation Fingerprints and to factor in content reliability as they access the internet to generate answers to prompts and queries. 

 

About NewsGuard

Launched in March 2018 by media entrepreneur and award-winning journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, NewsGuard provides credibility ratings and detailed “Nutrition Labels” for thousands of news and information websites. NewsGuard rates all the news and information websites that account for 95% of online engagement across the U.S., U.K., Canada, Germany, France, Austria, Italy, and now in Australia and New Zealand. NewsGuard products include NewsGuard, NewsGuard for Advertisers, which helps marketers concerned about their brand safety, the Misinformation Fingerprints catalog of top false narratives online, and NewsGuard for AI, used to train generative AI such as chat bots.

In 2022, NewsGuard began rating television news and information programs and networks using criteria similar to those used to score websites but adapted for the video medium. NewsGuard’s TV ratings are the first to go beyond its initial ratings of websites. Ratings for OTT and CTV news programming and news and information podcasts will also be available for licensing in 2023.

NewsGuard’s ratings are conducted by trained journalists using apolitical criteria of journalistic practice.

NewsGuard’s ratings and Nutrition Labels are licensed by browsers, news aggregators, education companies, and social media and search platforms to make NewsGuard’s information about news websites available to their users. Consumers can also access NewsGuard’s website ratings by purchasing a subscription to NewsGuard, which costs AU$6.95, NZ$6.95, US$4.95/month, €4.95/month or £4.95/month, and includes access to NewsGuard’s browser extension for Chrome, Safari, and Firefox and its mobile app for iOS and Android. The extension is available for free on Microsoft’s Edge browser through a license agreement with Microsoft. Hundreds of public libraries globally receive free access to use NewsGuard’s browser extension on their public-access computers to give their patrons more context for the news they encounter online. For more information, including to download the browser extension and review the ratings process, visit newsguardtech.com.