One year later, Facebook and Twitter are still ignoring warnings about half of the accounts they knew spread COVID-19 misinformation, putting their users’ health at risk
By Alex Cadier
Additional reporting by Melissa Goldin, Chine Labbé, Kendrick McDonald, Virginia Padovese, and Marie Richter
Welcome back to NewsGuard’s Misinformation Monitor, our newsletter tracking misinformation with exclusive data from five countries. Sign up to get the Misinformation Monitor in your inbox or download NewsGuard for your browser.
Facebook and Twitter have failed to protect their users from 43 accounts that have continued to spread dangerous COVID-19 misinformation despite being flagged to the platforms last year by NewsGuard. Out of 85 accounts flagged by NewsGuard for spreading misinformation about the virus at the start of the pandemic, more than half have seen no visible action taken against them by the platforms. Nearly a quarter have grown, gaining more than 1 million new followers or likes between them.
This long-term failure by the platforms to address known harms caused by their products helps explain why legislators and regulators are increasingly looking for solutions beyond hoping these companies will voluntarily protect their users.
In April 2020, NewsGuard published four reports about COVID-19 misinformation “superspreaders” on Facebook and Twitter in the United States and Europe. These reports highlighted 85 accounts and pages across both platforms that had shared unequivocally false information related to the COVID-19 pandemic.
Facebook and Twitter were made aware of these reports after publication, and the reports received press coverage in which both platforms commented on NewsGuard’s findings. Yet, as of November 2021, 43 of these accounts were still active, and 19 had seen increases in their numbers of likes or followers, a NewsGuard review found. All 43 have since spread more false information related to COVID-19.
Facebook and Twitter could take a variety of effective steps to mitigate the harm caused by these publishers beyond the opaque, unaccountable, and demonstrably inconsistent step of simply censoring them, whether by removing them outright or by suppressing how many users see them. These steps could include fact-checking individual posts, although fact-checking is hard to scale and takes effect only after false claims have spread widely. Platforms could also take the more scalable and effective step of proactively providing ratings and labels explaining to users that a source in their news feeds or in Facebook Groups is a known spreader of false information and should be approached with caution. They could also open their products to independent third parties that provide indications of the trustworthiness of sources, giving users the choice to access safety protections and other middleware solutions.
False claims shared by accounts flagged by NewsGuard in April 2020, which are still active on either Facebook or Twitter, included:
- COVID-19 does not exist
- Bill Gates announced that COVID-19 vaccines would result in 700,000 deaths
- 5G technology is linked to the spread of COVID-19
- COVID-19 was predicted in a simulation
- Vitamin C can prevent COVID-19
- COVID-19 is a “biological weapon which was created in an American military laboratory”
- Healthy people “suffer no harm” from COVID-19
- COVID-19 vaccines have microchip tracking technology funded by Bill Gates
Of the 26 accounts NewsGuard flagged to Twitter, 13 were still active, and 10 gained 358,927 new followers between them, an increase of 23.7 percent on average. Of the 59 accounts NewsGuard flagged to Facebook, 31 were still active, and nine gained 718,000 new “likes” between them, an increase of 3.9 percent on average.
Combined, the still-active Facebook and Twitter accounts that grew have gained 1,076,927 new “likes” or followers since April 2020.
Five Facebook accounts and one Twitter account included in NewsGuard’s April 2020 reports were not counted as active for this analysis because they have since stopped publishing COVID-19 misinformation.
Although Twitter removed a higher proportion (50 percent) of the flagged accounts than Facebook (47.5 percent), those that remained on Twitter saw a significantly higher increase in their following (23.7 percent on average) than the pages and accounts that remained active on Facebook (3.9 percent on average).
Cross-border inconsistencies
Twitter and Facebook’s actions against these pages have been inconsistent, depending on their country of origin. Twitter removed more than half of the flagged accounts in the U.S., while all but one of the francophone accounts were still active. A similar pattern emerged for Facebook. The U.S. saw the largest proportion of removals, with 38 percent of flagged accounts removed, while France saw only 17 percent removed.
NewsGuard’s April 2020 report into COVID-19 misinformation “superspreaders” on Facebook found at least nine pages that shared the false claim that Vitamin C can cure COVID-19. Of these nine, four have been removed and five remain active. Facebook gives its users no indication of why certain pages are allowed to keep spreading misinformation while others are removed.
These findings echo those of two other recent NewsGuard reports. An October 2021 report to the World Health Organization and other NGOs, health authorities, regulators and government officials in Europe and the U.S. showed that 20 accounts, pages, or groups on Facebook that were known to spread COVID-19 or anti-vaccine misinformation gained 370,000 followers between them in a year. The other, published in September 2021, showed that video-sharing app TikTok fed vaccine misinformation to children as young as 9 without their following any accounts or using the search bar.
The platforms’ continued tolerance of this kind of toxic misinformation may stem from their deliberate opacity about how decisions are made regarding misinformation, fact-checking, and content removal. In October 2021, documents leaked by Facebook whistleblower Frances Haugen showed that Facebook had conducted several large studies into how COVID-19 and vaccine misinformation spread on its apps. Facebook’s own research showed that echo chambers of misinformation were fueling vaccine hesitancy and that posts from reliable sources of information, such as the World Health Organization, were being swarmed with comments from anti-vaccine activists on Facebook’s apps. Despite requests from experts, lawmakers, and the Biden administration, Facebook has chosen to keep this data secret.
It remains unclear how platforms determine which pages get removed and which do not. Even when accounts share the exact same hoaxes, some are removed while others are not.
NewsGuard’s April 2020 report into COVID-19 misinformation “superspreaders” on Twitter found four accounts that shared false claims connecting 5G technology to the spread of COVID-19. Again, users have no way of knowing why certain accounts that spread misinformation are removed while others remain on the platform.
Removal is by no means a silver bullet, or even the most desirable outcome. Social media companies could help protect their users by labeling these accounts as having shared misinformation. Arming users with more information, rather than secretly suppressing or removing content, empowers them to judge which pages can be trusted to adhere to basic standards and which cannot. Instead, opaque removal policies are applied inconsistently, and users are kept in the dark about how decisions over the content they see are made.
NewsGuard flagged 85 accounts in April 2020 for spreading COVID-19 misinformation. Of those, 43 have had no action taken against them and 19 have increased their follower numbers by 13.8 percent on average. All the accounts that saw no visible action taken against them have continued to spread misinformation.
Of the 26 accounts that NewsGuard flagged to Twitter for spreading COVID-19 misinformation, 13 have had no action taken against them, and 10 of them have increased their follower numbers by 358,927 between them, a 23.7 percent increase on average.
Of the 59 accounts that NewsGuard flagged to Facebook for spreading COVID-19 misinformation, 31 have had no action taken against them and 9 of them have increased their number of “likes” by 718,000 between them, a 3.9 percent increase on average.
Read NewsGuard’s four April 2020 reports referenced above:
- Tracking Facebook’s COVID-19 Misinformation ‘Super-spreaders’
- Tracking Facebook’s COVID-19 Misinformation ‘Super-spreaders’ in Europe
- Tracking Twitter’s COVID-19 Misinformation ‘Super-spreaders’
- Tracking Twitter’s European COVID-19 misinformation ‘Super-spreaders’
Correction: An earlier version of this report erroneously stated in the “Findings Summary” section that all 85 accounts flagged by NewsGuard for spreading COVID-19 misinformation were continuing to do so. In fact, as the report itself stated, 43 of the 85 flagged accounts have continued to publish misinformation. NewsGuard apologizes for the error.