NewsGuard helps developers of AI models avoid the spread of misinformation

News topics present a high-profile challenge to generative AI models, which so far have spread falsehoods at unprecedented scale. Companies offering generative AI products can license NewsGuard’s human-curated data to fine-tune their models, to create post-processing guardrails that recognize and debunk demonstrably false narratives, and to treat content from trustworthy news sites differently from content from misinformation sites.


Responsible AI requires human judgment and accountability – which NewsGuard’s Reliability Ratings and Misinformation Fingerprints are uniquely equipped to provide.

The first generation of generative AI models lost public trust because they too often delivered responses on topics in the news that were well written, persuasive, and false. Responsibly curated data can prevent AI models from spreading conspiracy theories and other falsehoods. NewsGuard’s data, produced entirely by our journalistically trained analysts, gives generative AI developers anti-misinformation tools and, for the first time, peace of mind.

NewsGuard provides source credibility data for fine-tuning generative AI models to deliver responses from trustworthy sources, and post-processing guardrails that prevent models from spreading false narratives in the news.

  • NewsGuard Reliability Ratings: Access NewsGuard’s trust ratings of all the top online sources of news and information to fine-tune AI models to cite trustworthy news sources, treat untrustworthy sources differently, and display trust scores next to citations for news and information sources.
  • NewsGuard Misinformation Fingerprints™: Access NewsGuard’s constantly updated catalog of all the top examples of false narratives spreading online, available in human- and machine-readable formats, to supply AI models with post-processing guardrails enabling the AI to recognize and mitigate—not create and spread—false narratives (a simplified illustration of this guardrail pattern follows this list).
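
To make the post-processing guardrail idea concrete, here is a minimal sketch in Python. Everything in it is a hypothetical stand-in: the JSON structures, field names (`narrative`, `keywords`), domain scores, and simple substring matching are placeholders for illustration only, not NewsGuard’s actual data schema, API, or matching method, which are defined by the licensed products.

```python
# Illustrative post-processing guardrail. All data formats, field names, and
# matching logic here are hypothetical placeholders, not NewsGuard's actual
# data schema or API.
import re
from urllib.parse import urlparse

TRUST_THRESHOLD = 60  # hypothetical cutoff on an assumed 0-100 reliability scale


def annotate_citations(response: str, ratings: dict) -> list:
    """Attach a trust score to every URL cited in a model's draft response."""
    annotations = []
    for url in re.findall(r"https?://\S+", response):
        domain = urlparse(url).netloc.removeprefix("www.")
        score = ratings.get(domain)
        annotations.append({
            "url": url,
            "score": score,
            "trusted": score is not None and score >= TRUST_THRESHOLD,
        })
    return annotations


def match_false_narratives(response: str, fingerprints: list) -> list:
    """Flag fingerprint records whose keywords all appear in the response.
    A production system would use semantic matching; substring checks keep
    this sketch self-contained."""
    text = response.lower()
    return [
        fp["narrative"]
        for fp in fingerprints
        if all(kw.lower() in text for kw in fp["keywords"])
    ]


if __name__ == "__main__":
    # Inline stand-in data in place of licensed rating and fingerprint files.
    ratings = {"example-news.com": 87, "unreliable-site.net": 20}
    fingerprints = [{"narrative": "Example false claim about event X",
                     "keywords": ["event x", "staged"]}]
    draft = ("Reports say event X was staged. "
             "See https://unreliable-site.net/story and "
             "https://example-news.com/report")
    print(annotate_citations(draft, ratings))
    print(match_false_narratives(draft, fingerprints))
```

In this pattern the guardrail runs after generation: flagged narratives can trigger a rewrite or a debunking note, and low-scoring citations can be down-ranked or labeled before the response is shown to the user.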
Read NewsGuard's AI handout


“Microsoft Bing Chat is the first generative AI model with a license to access NewsGuard data: ‘[T]he transparent, clear Bing results [...] represent a true balance between transparency and authority, a kind of truce between the demand that platforms serve as gatekeepers and block unreliable sources, and that they exercise no judgment at all.’”

Ben Smith

Semafor

“NewsGuard assembles data on the most authoritative sources of information and the most significant false narratives spreading online. Generative AI providers can then use the data to better train their algorithms to elevate quality news sources and avoid false narratives.”

Ashley Gold & Sara Fischer

Axios

“When the chat-driven version of Microsoft’s Bing search engine was put to the same test, it not only presented a more balanced assessment, but gave citations, often including its sources’ reliability assessments according to NewsGuard, a website dedicated to rating the reliability of the vast majority of the English-language world’s news sites.”

Richard A. Lovett

Cosmos

NewsGuard's AI Work in the News