NewsGuard for AI ensures AI responses are:
Reliably Sourced
Our continuously updated Source Reliability Ratings datastream provides human-labeled data assessing the accuracy and reliability of 35,000 news and information sources based on apolitical journalistic criteria. With this datastream, AI companies can ground news-related outputs in high-quality sources during model training and RAG, and avoid citing unreliable sources in responses.
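As an illustration of how such a datastream might plug into a RAG pipeline, the sketch below filters retrieved documents against a reliability lookup. The field names, scores, and the 60-point cutoff are illustrative assumptions, not NewsGuard's actual schema or thresholds.

```python
# Hypothetical sketch: filtering RAG retrieval results against a
# source-reliability lookup before they reach the model.
# All names and values here are assumptions for illustration.

RELIABILITY_RATINGS = {
    "example-news.com": 92.5,
    "unreliable-blog.net": 17.0,
}

MIN_SCORE = 60.0  # assumed cutoff for treating a source as reliable

def filter_reliable(documents):
    """Keep only retrieved documents whose source meets the cutoff."""
    reliable = []
    for doc in documents:
        score = RELIABILITY_RATINGS.get(doc["source_domain"])
        # Unknown sources are excluded rather than trusted by default.
        if score is not None and score >= MIN_SCORE:
            reliable.append(doc)
    return reliable

docs = [
    {"source_domain": "example-news.com", "text": "..."},
    {"source_domain": "unreliable-blog.net", "text": "..."},
]
print([d["source_domain"] for d in filter_reliable(docs)])  # ['example-news.com']
```

Excluding unrated sources by default is one possible design choice; a pipeline could instead route them to a secondary review step.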
Safeguarded Against False Claims
NewsGuard’s False Claim Fingerprints datastream protects AI products against false claims spreading online, which can enter responses through low-quality training data or through RAG processes that search the open web. A continuously updated database of labeled false claims, the Fingerprints detail where each claim has spread and how it is articulated in text, image, video, and audio assets across platforms, so AI products can detect a claim in whatever form it appears.
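A minimal sketch of how fingerprint data could be used to screen a draft response, assuming a simple record of claim IDs and known phrasings. The matching here is a naive keyword check for illustration only; a production system would use semantic matching across text, image, video, and audio variants.

```python
# Hypothetical sketch: screening a draft model response against a small
# database of known false-claim "fingerprints". The claim IDs and
# phrasings are invented for illustration.

FALSE_CLAIM_FINGERPRINTS = [
    {
        "claim_id": "FC-001",
        "phrasings": ["moon landing was staged", "apollo footage was faked"],
    },
]

def flag_false_claims(response_text):
    """Return IDs of known false claims echoed in the response."""
    lowered = response_text.lower()
    return [
        fp["claim_id"]
        for fp in FALSE_CLAIM_FINGERPRINTS
        if any(phrase in lowered for phrase in fp["phrasings"])
    ]

print(flag_false_claims("Some say the moon landing was staged."))  # ['FC-001']
print(flag_false_claims("The weather is nice today."))  # []
```

A flagged response could then be blocked, rewritten, or routed to a debunking template before it reaches the user.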
Red-Teamed on High-Risk News Topics
Our information reliability analysts bring subject-matter expertise on false claims, conspiracy theories, and malign information operations that pose significant risks to AI response quality. Our false claim red-teaming identifies weak points and vulnerabilities in existing guardrails and benchmarks AI performance on news topics against industry peers.
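One way such a benchmark could be scored, as a sketch: prompt a model with questions built around known false claims, grade each response, and compute a pass/fail rate. The grading labels ("debunk", "repeat", "decline") are assumptions for illustration, not NewsGuard's actual rubric.

```python
# Hypothetical sketch of red-team benchmark scoring: each graded
# response is labeled 'debunk' (claim refuted), 'repeat' (claim
# echoed), or 'decline' (model safely refused). Labels are assumed.

def score_red_team(graded_responses):
    """Compute pass/fail rates from a list of graded response labels."""
    total = len(graded_responses)
    fails = sum(1 for label in graded_responses if label == "repeat")
    return {
        "fail_rate": fails / total,       # share of responses echoing a false claim
        "pass_rate": 1 - fails / total,   # debunked or safely declined
    }

graded = ["debunk", "repeat", "debunk", "decline"]
print(score_red_team(graded))  # {'fail_rate': 0.25, 'pass_rate': 0.75}
```

Running the same graded prompt set against several models would give the cross-industry comparison the paragraph describes.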
Protected from Emerging Threats
With real-time threat alerts and risk briefings, our expert analysts warn AI companies about emerging threats—including malign influence operations from authoritarian states targeting AI responses with false claims and propaganda.