March 17, 2024

NewsGuard Launches Suite of Services to Safeguard AI Products Against Election Misinformation in Real Time Across Text, Image, Video, and Audio Formats

Continuous election misinformation fingerprinting and real-time red-teaming services help AI companies provide trustworthy responses to election-related prompts and protect against exploitation by malign actors

(March 17, 2024 — New York) NewsGuard, which deploys experienced journalists and advanced technology to detect, analyze, and capture data about misinformation spreading online, today announced it has rolled out a suite of protections to safeguard AI products from election misinformation. In a year with more than 40 major elections worldwide, the new tools will protect AI products from being exploited or causing inadvertent harm through the spread of misinformation.

Last month, NewsGuard announced the launch of its Elections Misinformation Tracking Center, through which its global team of analysts is continuously monitoring the information environment to detect misinformation sources and narratives related to elections taking place around the world in 2024. The suite of tools being announced today leverages data captured through NewsGuard’s continuous monitoring and detection to provide a range of safeguards for AI providers. 

The suite, known as NewsGuard’s AI Election Safety Suite, includes five complementary offerings that can be customized to meet each AI company’s unique challenges and goals:

  • Continuous Election Misinformation Detection: Through its Elections Misinformation Tracking Center operation, NewsGuard’s team of expert analysts draws on the company’s proprietary news reliability data covering more than 35,000 websites, social media accounts, video channels, and audio publishers to monitor known election misinformation spreaders and to detect new and emerging misinformation as early as possible across media types.
  • Election Misinformation Fingerprinting: A continuously updated feed of robust, timely data about each new piece of election misinformation, delivered in a machine-readable format designed to feed directly into guardrails or classifiers for immediate safeguards. Each Misinformation Fingerprint contains specific language excerpts or media assets used to convey the narrative; keywords, hashtags, and search terms related to the narrative; confirmed instances of the narrative on social media and the open web across multiple languages and geographies; information about the misinformation’s provenance and spread vector; and other data designed to enable AI companies to automatically detect prompts and responses that might convey the misinformation. (A minimal sketch of how such a fingerprint might be consumed appears after this list.)
  • Real-Time Election Misinformation Risk Testing: As new election misinformation is detected, NewsGuard’s global network of analysts tests AI image, video, audio, and text products in real time to see how they respond to prompts related to each new piece of misinformation. Clients receive continuous risk evaluation data, including the precise prompts and responses used for testing, so that their trust and safety teams can identify gaps and risk areas as soon as they emerge.
  • Election Image, Video, Audio, and Text Disinformation Red Teaming: Applying their knowledge of the tactics and motivations of malign actors, NewsGuard analysts audit companies’ generative AI products to test their propensity to produce novel misinformation or create convincing deepfakes. Clients receive prompt and response data demonstrating techniques that could be used by malign actors to circumvent guardrails and exploit AI systems—along with assessments of how well the system protected against such exploitation and details of any vulnerabilities NewsGuard discovered.
  • Voting Information Integrity Evaluation: Ahead of each major election, NewsGuard’s analysts proactively test the AI company’s products to determine how accurately the AI system responds to prompts about the logistics, rules, and mechanics of voting, even when these topics have not yet been the subject of widespread misinformation. This service identifies areas in which the AI system might mislead voters before problems arise, protecting the democratic process and sparing the AI provider a high-visibility PR risk that undermines users’ trust.
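
To illustrate how a Misinformation Fingerprint could feed a guardrail, here is a minimal sketch in Python. It is purely illustrative: the record layout (narrative_id, excerpts, keywords) and the matching heuristic are assumptions for demonstration, not NewsGuard’s actual fingerprint schema or classifier.

```python
# Illustrative sketch only: the fingerprint fields and matching logic below
# are assumptions for demonstration, not NewsGuard's actual schema.
from dataclasses import dataclass, field


@dataclass
class MisinformationFingerprint:
    """One entry in a hypothetical machine-readable fingerprint feed."""
    narrative_id: str
    summary: str
    excerpts: list[str] = field(default_factory=list)   # language used to convey the narrative
    keywords: list[str] = field(default_factory=list)   # related keywords, hashtags, search terms
    example_urls: list[str] = field(default_factory=list)  # confirmed instances on the open web


def matches_fingerprint(prompt: str, fp: MisinformationFingerprint) -> bool:
    """Naive guardrail check: flag a prompt that echoes a known narrative."""
    text = prompt.lower()
    # An exact excerpt match is a strong signal on its own.
    if any(excerpt.lower() in text for excerpt in fp.excerpts):
        return True
    # Otherwise require at least two keyword hits to limit false positives.
    hits = sum(1 for keyword in fp.keywords if keyword.lower() in text)
    return hits >= 2


if __name__ == "__main__":
    fp = MisinformationFingerprint(
        narrative_id="example-001",
        summary="Fabricated claim about ballot handling (placeholder).",
        excerpts=["ballots were shredded overnight"],
        keywords=["#ballotfraud", "shredded ballots", "overnight count"],
    )
    print(matches_fingerprint("Is it true ballots were shredded overnight?", fp))  # True
```

In practice, an exact-match heuristic like this would likely be paired with semantic classifiers, since narratives mutate faster than any fixed list of excerpts.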

Protection for Generative Image, Video, Audio, and Text Models

NewsGuard’s data and red-teaming approach can be applied across generative AI models, including image, video, and audio tools as well as text tools. When red-teaming a generative image, audio, or video tool, NewsGuard’s analysts apply their knowledge of the tactics and motivations of malign actors to test the model’s propensity to produce misinformation or create convincing deepfakes in high-stakes scenarios such as elections and active conflicts.

“Different AI products are vulnerable to different kinds of exploitation or causing inadvertent harm,” said NewsGuard co-CEO Gordon Crovitz. “For example, a text-based virtual assistant product may face risks of providing inaccurate information about voting logistics, while an image-creation tool may be vulnerable to being used to create deepfakes depicting political candidates engaged in activities that never took place. We designed this suite of services so that they can be customized and adapted to meet the wide range of risks AI companies are dealing with in safeguarding their products for elections.”

Addressing a Significant Challenge for AI Providers

As generative AI technology has exploded into a new and fast-growing industry, AI companies have struggled to protect their products from misinformation. For example, NewsGuard’s research has shown that generative AI products such as ChatGPT and Google’s Gemini (formerly Bard) repeat false claims 80-98% of the time, depending on the model, when prompted on topics about which misinformation is circulating. A recent report from the Center for Countering Digital Hate similarly found that top AI image generation tools could be prompted to create fake images of political candidates or imagery conveying misinformation about voting integrity.

The propensity of AI systems to spread misinformation—either inadvertently or due to abuse by users—has undermined users’ trust in AI products. A YouGov study conducted for NewsGuard in November found that most Americans think AI is likely to spread misinformation, and only 10% think this is unlikely to occur. Misinformation produced by AI systems has also led to increased regulatory scrutiny of the industry and resulted in negative press coverage for several large AI providers—making addressing these risks a top priority within the industry.

“We know AI companies are focused on safeguarding against election misinformation, with 22 major companies recently signing on to the AI Elections Accord,” said NewsGuard EVP of Partnerships Sarah Brandt. “That’s why we’ve rolled out this robust suite of tools to help.”

“Our analysts are the best in the industry at monitoring, flagging, and tracking misinformation risks, including those stemming from state-sponsored campaigns targeting democracies worldwide,” said Steven Brill, NewsGuard’s co-CEO. “This new suite of products deploys that expertise in a way that is purpose-built to address AI companies’ many challenges from election misinformation.”

To learn more, request data samples, or see an example red-teaming report, contact partnerships@newsguardtech.com.

About NewsGuard

Founded by media entrepreneur and award-winning journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, NewsGuard provides transparent tools to counter misinformation for readers, brands, and democracies. Since launching in 2018, its global staff of trained journalists and information specialists has collected, updated, and deployed more than 6.9 million data points on more than 35,000 news and information sources, and cataloged and tracked all of the top false narratives spreading online. 

NewsGuard’s analysts, aided by multiple AI tools, maintain the trust industry’s largest and most accountable dataset on news. These data are deployed to fine-tune and provide guardrails for generative AI models, enable brands to advertise on quality news sites and avoid propaganda or hoax sites, provide media literacy guidance for individuals, and support democratic governments in countering hostile disinformation operations targeting their citizens.

As one indicator of the scale of its operations, NewsGuard’s analysts have applied its apolitical and transparent criteria to rate news sources accounting for 95% of online engagement with news across nine countries.