The Social Media Platforms Say They Can’t Keep Up With the Volume of Hoaxes. Now, They Can
NewsGuard, with its new Misinformation Fingerprints™, and Blackbird.AI join forces to demonstrate that merging human intelligence with machine learning can quickly detect and mitigate key misinformation narratives across a massive amount of content
(October 30, 2020—New York City) As hoaxes and misinformation about COVID-19 and other topics have spread quickly online, major platforms have struggled to enforce their own policies against misinformation. Platforms find themselves unable to keep up with new and emerging narratives, resulting in enforcement decisions that are after-the-fact, inconsistent, and often too late to stanch the problems they are intended to solve.
That problem may now be on its way to being solved, according to a report released today by NewsGuard and Blackbird.AI describing a new test the two companies conducted. Applied at scale, the techniques NewsGuard and Blackbird.AI used in the test could enable platforms to identify misinformation narratives in near real time across an enormous volume of content.
The experiment tested the potential of combining human intelligence with AI to track misinformation narratives at scale. It used entries from NewsGuard’s Misinformation Fingerprints™ product, a constantly updated, detailed dataset of unique identifiers for the top hoaxes and misinformation narratives currently spreading online, derived from NewsGuard’s work rating the credibility of thousands of news and information sites worldwide. Entries from this machine-readable catalog of hoaxes served as seeds for Blackbird’s machine learning and natural language processing algorithms. In the test, the companies sought to prove that, starting from NewsGuard’s Fingerprints, Blackbird’s AI tools could identify content from various platforms and the open web that matched the specific false claims catalogued by NewsGuard.
The test focused on two false claims about COVID-19: one that the flu vaccine increases one’s risk of COVID-19, and another that face masks cause hypercapnia, a medical condition involving excessive carbon dioxide in the blood. Each fingerprint includes a description of the hoax, known variations, examples of content containing it, related search terms, keywords, and hashtags, and a transparent debunking with citations to authoritative sources. Seeded with this data, Blackbird’s AI system identified a wide range of posts, videos, and articles matching the two claims, across platforms and the open web, that had been seen by large numbers of people.
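The seeding-and-matching step can be pictured with a deliberately simplified sketch. The fingerprint fields, patterns, and sample posts below are invented for illustration; Blackbird’s production system relies on machine learning and NLP models, not the simple keyword-plus-regex screen shown here.

```python
import re

# Hypothetical, simplified fingerprint record. Field names are illustrative
# and do not reflect NewsGuard's actual Misinformation Fingerprints schema.
FINGERPRINT = {
    "claim": "The flu vaccine increases one's risk of COVID-19",
    "keywords": ["flu vaccine", "flu shot"],
    "variant_patterns": [
        r"flu (vaccine|shot).{0,60}(increase|raise|higher).{0,40}(covid|coronavirus)",
    ],
}

def matches_fingerprint(text: str, fingerprint: dict) -> bool:
    """Return True if the text appears to contain a seeded claim variant."""
    lowered = text.lower()
    # Cheap keyword pre-filter before the more expensive pattern matching.
    if not any(k in lowered for k in fingerprint["keywords"]):
        return False
    return any(re.search(p, lowered) for p in fingerprint["variant_patterns"])

# Invented sample posts: one repeats the hoax, one is legitimate health advice.
posts = [
    "Study shows the flu shot can increase your risk of COVID infection!",
    "Get your flu shot this fall, says the CDC.",
]
flagged = [p for p in posts if matches_fingerprint(p, FINGERPRINT)]
```

In this toy version, only the first post is flagged: it mentions a seed keyword and matches a claim-variant pattern, while the second mentions the keyword but not the false claim, which is why a keyword filter alone would over-flag and the variant patterns (or, in the real system, NLP models) are needed.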
Using such a system, platforms could also automatically debunk a specific hoax by including mitigating information, such as NewsGuard’s language rebutting the hoax, at exactly the moment a user first sees it, such as in a social media feed or a search result.
“Today, we see the platforms either taking action against content containing myths that went viral weeks or even months ago and are no longer relevant—or, arguably worse, enforcing their policies inconsistently and eroding users’ trust. In the process they typically use vague warning language or even censor claims, instead of delivering authoritative, specific information about these hoaxes,” said NewsGuard Co-CEO Gordon Crovitz. “By combining our constantly updated fingerprints of all of the top trending myths with Blackbird’s powerful AI capabilities, they could address these challenges much more effectively.”
The test results demonstrated the limits of the platforms’ existing enforcement strategies, surfacing content on various platforms that seemed to violate policies against COVID-19 misinformation and yet remained available, with substantial engagement, without any fact-checks or warnings displayed by the platforms. For example, the test surfaced a post from a Twitter account with more than 97,000 followers claiming that flu vaccines increase the risk of COVID-19. The post had garnered more than 800 retweets and quote tweets, and 821 likes, and had not been taken down or fact-checked by Twitter, despite its policies against COVID-19 misinformation.
On YouTube, the system flagged a video from the account for Children’s Health Defense, a prominent anti-vaccine group with more than 24,000 YouTube subscribers, using the same hoax to question a statement from Dr. Anthony Fauci urging Americans to get a flu shot. YouTube had taken no action against the video or against the account, which has published other, similar falsehoods in the past.
“The massive volume of information created every day, combined with the exponential growth of disinformation, underscores the need for platforms to deploy scalable, more accurate solutions tuned to detect harmful hoaxes and conspiracies before they can mutate and spread widely,” said Blackbird.AI CEO Wasim Khaled. “This test demonstrated the power of our technology—especially when combined with NewsGuard’s fingerprints as seed data, resulting in a powerful and scalable man-machine interface suited to the speed, scale, and complexity of information disorder.”
The report describing the experiment, methodology and findings is available here.
Launched in March 2018 by media entrepreneur and award-winning journalist Steven Brill and former Wall Street Journal publisher Gordon Crovitz, NewsGuard provides credibility ratings and detailed “Nutrition Labels” for thousands of news and information websites. NewsGuard rates all the news and information websites that account for 95% of online engagement across the U.S., U.K., Germany, France, and Italy. NewsGuard products include NewsGuard, HealthGuard, and BrandGuard, which helps marketers concerned about their brand safety, and the Misinformation Fingerprints catalog of top hoaxes.
NewsGuard rates each site based on nine apolitical criteria of journalistic practice, including whether a site repeatedly publishes false content, whether it regularly corrects or clarifies errors, and whether it avoids deceptive headlines. It awards weighted points for each criterion and sums them up; a score of less than 60 earns a “Red” rating, while 60 and above earns a “Green” rating, which indicates it is generally reliable.
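The weighted-points mechanism described above can be sketched as follows. Only the mechanism is taken from the text: the criterion names and point values in this example are invented placeholders, not NewsGuard’s actual rubric; the 60-point Red/Green threshold is as stated.

```python
# Hypothetical per-criterion weights summing to 100. NewsGuard's actual
# criteria names and point values differ; only the weighted-sum-plus-
# threshold mechanism follows the description above.
CRITERIA_WEIGHTS = {
    "no_repeated_false_content": 25,
    "gathers_info_responsibly": 15,
    "corrects_errors": 12,
    "separates_news_and_opinion": 12,
    "avoids_deceptive_headlines": 10,
    "discloses_ownership": 8,
    "labels_advertising": 7,
    "reveals_conflicts": 6,
    "provides_author_info": 5,
}

def rate_site(passed: dict) -> tuple:
    """Sum the weights of the criteria a site passes and map the total
    to a rating: 60 and above is Green, below 60 is Red."""
    score = sum(w for name, w in CRITERIA_WEIGHTS.items() if passed.get(name))
    rating = "Green" if score >= 60 else "Red"
    return score, rating
```

For instance, a site failing only the two hypothetical heaviest criteria would score 60 and still rate Green, while a site passing only the top three would score 52 and rate Red, showing how the weighting lets a few serious failures outweigh many minor passes.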
NewsGuard’s ratings and Nutrition Labels can be licensed by internet service providers, browsers, news aggregators, education companies, and social media and search platforms in order to make NewsGuard’s information about news websites available to their users. Consumers can access these ratings by purchasing a subscription to NewsGuard, which costs $2.95/month and includes access to NewsGuard’s browser extension for Chrome, Safari, and Firefox and its mobile app for iOS and Android. The extension is available for free on Microsoft’s Edge browser through a license agreement with Microsoft, and NewsGuard’s ratings can also be accessed free through the Edge mobile browser. Hundreds of public libraries globally receive free access to use NewsGuard’s browser extension on their public-access computers to give their patrons more context for the news they encounter online. For more information, including to download the browser extension and review the ratings process, visit newsguardtech.com.
Blackbird is a mission-driven organization founded by a multidisciplinary team of entrepreneurs, engineers, and national-security experts who share a commitment to combating disinformation and misinformation. Blackbird’s technology empowers governments, platforms, and Fortune 500 brands with the ability to unmask the drivers and impacts of harmful emergent narratives and influence campaigns.
Blackbird’s Constellation Engine™ has the unique capability to detect the signatures of narratives that are being deliberately manipulated and promoted in a “propaganda-like” fashion as part of directed disinformation agendas. The Blackbird Risk Index (BRI) fuses together threat risks based on manipulation, network topology, user behavior, context analysis, toxic language and bot-like activity to generate Automated Narrative Intelligence through the extraction of anomalous signals in social, news, and web data.
By enabling clients to understand entirely new dimensions of risk that would otherwise drive financial, reputational and even societal harm, Blackbird aims to bring order to today’s polluted digital ecosystem.
- Steven Brill, Co-CEO, firstname.lastname@example.org, +1 212-332-6301
- Gordon Crovitz, Co-CEO, email@example.com, +1 212-332-6407
- Matt Skibinski, General Manager, firstname.lastname@example.org, +1 973-818-4698