Academic Researchers Make the Case that Search Engines Should Include Trust Indicators for News and Information. Here’s Why:

A study by NYU researchers using NewsGuard data found that using online search to evaluate the truthfulness of false news articles often actually increases the probability of believing them.

By Elan Kane and Veena McCoole | Published on January 9, 2024

Your uncle shares on Facebook that he refuses to get the COVID-19 vaccine because it causes Vaccine-Acquired Immunodeficiency Syndrome, or VAIDS. (The claim is provably false. NewsGuard licensees can access the full Misinformation Fingerprint here.) 

Having never come across this claim and recalling the methods of cross-referencing information taught in media literacy programs at school, you head to your favorite search engine and search: “Does the COVID-19 vaccine cause VAIDS?” 

While a search engine may give you the correct information and explain that VAIDS does not exist, a December 2023 study by NYU researchers published in Nature identified important limitations to debunking false content online. The report found that using online search to evaluate the truthfulness of false news articles often actually increases the probability of believing them. 

In other words, asking a search engine whether the COVID-19 vaccine causes VAIDS makes it likelier the searcher will come to believe that it does, because search results often surface low-quality sources that appear to corroborate the false claim.

“Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources,” the researchers explained. “We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources.”

The researchers also highlighted the need for “search engines to invest in solutions to the challenges” of increased belief in false claims when they are evaluated using online search.

 

Low-Quality Search Results and Misinformation Rabbit Holes

Low-quality results from search engines promote user belief in false claims and can also lead users down dangerous new rabbit holes of misinformation. In May 2023, researchers from Princeton University and Microsoft used NewsGuard’s data to identify misinformation rabbit holes created through referrals on news websites. The study found that “the incentives for unreliable sites to retain and monetize users create misinformation rabbit holes”: users browsing an unreliable news site are likely to visit another page on the same site, encounter another conspiracy theory, healthcare hoax, or other false claim, and be led further down a “rabbit hole” of unreliable news.

These sites, the study found, are incentivized by advertising revenue to keep users on their sites for as long as possible and are better at retaining users than are reliable news sites. The study notes that “reliable sites rarely drive traffic to unreliable sites, suggesting that fact-checking drives little traffic to misinformation.”

“After landing on an unreliable site, the next referral is very likely to lead to an additional page on the site, meaning users are likely to view additional misinformation. While this financial incentive exists for all sites, we find that unreliable sites are especially good at retaining users.”

How Can Platforms Improve Online Search for Evaluating News?

For search engines to maintain trust among their user base and help detoxify the information environment, they should empower their users with reliable information. Researchers find that this can be done effectively by integrating trust indicators into both the front end and back end of their operations, increasing transparency and prioritizing trustworthy sources of information in search results.

The Bing search engine on Microsoft’s Edge browser, for example, automatically provides users with NewsGuard trust scores of news websites on its “discover pane,” and incorporates NewsGuard’s trust data as an input for prioritizing credible journalistic outlets in its search results.

Browsing on the Microsoft Edge browser gives users access to NewsGuard's Reliability Ratings in the discover pane. (Image via NewsGuard)

By providing trust scores to users, Microsoft’s Edge browser empowers them to decide for themselves which news outlets to read, trust, and share.

Here are ways search engines can incorporate trust indicators to prevent the spread of misinformation and ensure users are being shown factually accurate content that dispels falsehoods: 

Displaying Source Credibility Information in Search Results

Search platforms such as Sooth.fyi display NewsGuard’s Reliability Ratings and Nutrition Labels directly in the search results they serve to users, alongside indications of a news source’s political leaning or bias. This gives users greater context for search results and enables them to evaluate the reliability of a given result, based on the detailed work of NewsGuard’s misinformation experts.

Sooth.fyi is a curated internet search platform that uses a proprietary index composed only of diverse, reliable, expert sources, including news and information sites vetted by NewsGuard.

NewsGuard Reliability Ratings are integrated into the Sooth.fyi user interface, enabling users to evaluate source credibility while browsing search results. (Image via NewsGuard and Sooth.fyi)
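At the implementation level, source-level trust indicators amount to looking up a result’s domain in a ratings dataset and attaching a score and label before rendering. The sketch below is purely illustrative: the `RELIABILITY_RATINGS` dict, the 0–100 scale, and the 60-point threshold are assumptions for demonstration, not NewsGuard’s actual data format or API.

```python
# Hypothetical sketch: annotating search results with source-level trust
# scores. The ratings table, scale, and threshold below are illustrative
# assumptions, not NewsGuard's real data format.
from urllib.parse import urlparse

# Illustrative source-level reliability scores (0-100), keyed by domain.
RELIABILITY_RATINGS = {
    "example-news.com": 92.5,
    "unreliable-blog.net": 17.5,
}

def annotate_result(url: str) -> dict:
    """Attach a trust indicator to a single search result."""
    domain = urlparse(url).netloc
    score = RELIABILITY_RATINGS.get(domain)
    if score is None:
        label = "unrated"          # no rating available for this source
    elif score >= 60:              # illustrative "generally reliable" cutoff
        label = "reliable"
    else:
        label = "proceed with caution"
    return {"url": url, "score": score, "label": label}
```

A search front end could render the returned label next to each result, while a ranking back end could use the same score as one input among many, which is roughly the dual front-end/back-end integration described above.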

Incorporating Machine-Readable Misinformation Debunks into Search Results

Beyond source-level credibility indicators, search engines can also incorporate claim-level debunks of widely circulated false narratives, both for front-end users and back-end operations.

NewsGuard’s Misinformation Fingerprints catalog is the most comprehensive dataset of provably false narratives online, built for misinformation detection at scale. The Fingerprints can be used to aid curation of search results, label potentially misleading content, bolster content moderation efforts with a crucial layer of human-vetted data and empower users by informing them of potential false claims they may encounter — all without censoring any content.
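To make the claim-level idea concrete, the sketch below matches text against a small catalog of known false narratives using a simple keyword-overlap heuristic. The catalog entries, field names, and matching logic are all hypothetical stand-ins: real claim-level datasets such as the Misinformation Fingerprints are human-vetted and far richer than a keyword set.

```python
# Hypothetical sketch: flagging text that matches a catalog of known false
# narratives. Catalog structure and keyword-overlap matching are illustrative
# assumptions, not how any real fingerprints dataset works.

FALSE_NARRATIVE_CATALOG = [
    {
        "id": "covid-vaids",
        "summary": "COVID-19 vaccines cause 'VAIDS'",
        "keywords": {"vaids", "vaccine", "immunodeficiency"},
    },
]

def match_narratives(text: str, min_overlap: int = 2) -> list[str]:
    """Return IDs of catalog narratives whose keywords overlap the text."""
    words = set(text.lower().split())
    return [
        entry["id"]
        for entry in FALSE_NARRATIVE_CATALOG
        if len(entry["keywords"] & words) >= min_overlap
    ]
```

Matched IDs could then drive a label (“this page repeats a debunked claim”) or inform result curation, consistent with the non-censoring uses described above: the content stays available, but the user gains context.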

For more peer-reviewed studies from leading academics about how trust data helps improve media literacy outcomes, increases transparency on technology platforms, and provides readers with crucial source context for online information browsing, read our Misinformation White Paper.