How AI Companies Can Use NewsGuard Trust Data for Text, Image, Video and Audio Generators

An overview of how AI companies use NewsGuard’s data to strengthen the safety and transparency of their user outputs, and why companies have chosen NewsGuard for their trust and safety tech stack.

By Veena McCoole | Published on December 5, 2023

As society scrambles to address the potential harms caused by the new breed of generative AI technologies, experts are ringing the alarm bells about one risk in particular: misinformation. With an election year coming for two billion voters across the globe, there is an urgent risk of misinformation enhanced through AI that will undermine the integrity of elections.

At NewsGuard, we’ve already found instances of AI-generated misinformation spreading falsehoods about political leaders and capitalizing on interest in current affairs such as the Israel-Hamas war. Executives at AI companies, regulators and researchers agree there is enormous potential for harm by the large language models when they are weaponized by malign actors.

The encouraging news is that there are now proven techniques for AI companies to ensure that their models are used responsibly. The trust and safety and product teams at AI companies now have access to data to ensure their users are provided with crucial context and authoritative information that is transparently sourced, apolitical and fully disclosed–not crowdsourced from the public reflecting bias or from outsourced labor applying secret criteria.

Here are some of the ways NewsGuard works with AI developers:

Fine Tuning and Guardrail Data for Large Language Models

NewsGuard provides source credibility inputs, which can be used to fine-tune generative AI models to deliver responses from trustworthy sources, and comprehensive debunks of provably false narratives, which can be used to implement post-processing guardrails to prevent models from generating misinformation about news topics. Microsoft’s Copilot (formerly called Bing Chat) was the first to gain access to this trust data and has been lauded for its more accurate and nuanced responses on topics in the news when AI models not yet using trust data continue to be highly prone to advancing misinformation.

  • NewsGuard Reliability Ratings: NewsGuard’s trust ratings of all the top online sources of news and information help fine-tune AI models to cite trustworthy news sources, treat untrustworthy sources differently and display trust scores next to citations for news and information sources. NewsGuard provides transparent and accountable ratings used as signals for AI models— not all training-data tokens are equal and some introduce misinformation into the AI models.
  • NewsGuard Misinformation Fingerprints™: NewsGuard’s constantly updated catalog of false narratives spreading online, available in human- and machine-readable formats, supply AI models with the data needed to implement post-processing guardrails that enable AI to recognize and mitigate—not regurgitate and invent—false narratives.

The NewsGuard Reliability Ratings and Misinformation Fingerprints are available to be licensed by providers of AI models. NewsGuard also equips providers of AI models with best practices and other guidance to optimize the use of trust data.

 

Red Teaming as a Service 

After the initial White House voluntary commitments by the large language models in July 2023, NewsGuard started to provide red-teaming services via its “Misinformation Risk Audits” for partners in the AI sector, delivering confidential insights on the risk of a company’s AI models to advance mis- and disinformation narratives.

These Misinformation Risk Audits apply samples from NewsGuard’s Misinformation Fingerprints catalog of the top false claims in the news to test the AI model’s performance in promoting or rejecting false narratives. These audits also assess the propensity of the AI model to generate new mis- and disinformation narratives. These red-teaming assessments give providers of AI models an understanding of the likelihood their models will spread misinformation so that they can quantify improvements ahead of potentially stringent regulation.

The structure and timeline of these audits are customized to fit a company’s needs and the nuances of its models. Our team is happy to help you determine the best and most responsible path forward.

 

Testing Text Plus Image, Video and Audio Generators

NewsGuard’s data and services help developers evaluate the efficacy of their existing guardrails and build new guardrails against mis- and disinformation across all media—text, plus image, video and audio generation. For example, NewsGuard’s team formulates prompts based on our data that might be weaponized to create harmful—in some cases, illegal—visual or audio content.

To learn more about our offerings for AI companies, click here.

NewsGuard provides source credibility inputs for fine-tuning generative AI models to deliver responses from trustworthy sources and post-processing guardrails to prevent models from spreading false narratives in the news.

AI companies partner with NewsGuard for three main reasons:

  • Access to the largest, machine-readable, and human-vetted catalog of provably false narratives circulating online, plus reliability ratings for news sources included in the AI model’s training data or cited by AI models that search the Internet. Today’s AI teams understand the importance of having “humans in the loop” to address complex issues like misinformation and the relative trustworthiness of news sources. At NewsGuard, humans are the loop, powering every point in the misinformation tracking process, from identifying a false claim to debunking it with fully transparent and accountable verification and further reporting, as well as providing fully transparent trust ratings for news sources included in AI training data or cited when an AI model searches the Internet. NewsGuard’s data delivers a human-sourced “check and balance” required for many automated functions around AI operations, bringing editorial expertise on topics in the news and specialist experience, with full transparency and accountability that no outsourced business process outsourcing solution can.
  • Dedicated disinformation insights and research from a specialist team. NewsGuard’s misinformation experts function as an extension of your internal research team, laser-focused on false narratives, journalistic integrity, and growing disinformation threats. We work with your trust and safety specialists to give them up-to-date information about emerging and established false claims.
  • The ability to surface authoritative source context for news and information topics to end users. NewsGuard’s third-party data enables AI developers to empower model users with journalist-vetted information about a news website’s editorial standards and credibility practices, providing crucial transparency around its output.

To learn more about our offerings for AI companies, click here.