What promises have tech giants made on removing Covid misinformation?

Facebook, Twitter, YouTube and Instagram have all promised to combat Covid misinformation, but each platform applies different criteria for removing false content.

As part of our mission to counter digital hate and misinformation, we have collated each platform’s stated policies on Covid misinformation below, so that the platforms can be held to account for them.

Facebook has promised to “remove COVID-19 related misinformation that could contribute to imminent physical harm”. Nick Clegg, Facebook’s Vice President of Global Affairs, has stated that this includes “removing claims that physical distancing doesn’t help prevent the spread of the coronavirus.” Mark Zuckerberg recently told BBC News that his platform will “take down” posts that say “something is a proven cure for the virus when in fact it isn’t” as well as “5G misinformation which has led to some physical damage of 5G infrastructure”.

Where Covid misinformation does not threaten “imminent physical harm”, Facebook has committed only to reducing the distribution of posts that have been rated false by fact-checkers. The platform has also promised to ban “ads and commerce listings that imply a product guarantees a cure or prevents people from contracting COVID-19”, but our own research suggests that such products have still been listed for sale on Facebook’s marketplace.

Instagram is owned by Facebook and has adopted a similar approach, focusing on “known harmful misinformation” identified by health authorities and fact-checkers. Other content identified as false by fact-checkers is downranked in users’ feeds and hidden from the platform’s “Explore” function but is otherwise left intact. Instagram stated in January that it would “block or restrict hashtags used to spread misinformation” on the platform, but our own research showed that many such hashtags were still being used to spread harmful Covid misinformation in March.

Twitter has published a list of false claims it will remove, including denial of official advice on halting the spread of the virus, incitement to attack 5G infrastructure and promotion of false cures, even those that are not themselves harmful. However, Twitter says it has removed just 2,230 tweets under this policy, a small fraction of the misinformation circulating among the 628 million tweets posted about the coronavirus in total. Twitter is more active in labelling content containing misinformation, notably applying this policy to some of Donald Trump’s tweets, but it has not yet provided any evidence that these labels direct users away from misinformation.

YouTube had largely relied on its existing policy banning “content which claims that harmful substances or treatments can have health benefits” until it added a COVID-19 misinformation policy to its Community Guidelines on 20 May. YouTube now says it will not allow Covid misinformation “that poses a serious risk of egregious harm” or that “spreads medical misinformation that contradicts the World Health Organization's (WHO) or local health authorities' medical information about COVID-19.” The policy sets out a range of false claims that users may not post, from denial that the disease exists to the claim that it is caused by 5G. While YouTube’s move to a more explicit policy is welcome, our past research suggests that the platform was not enforcing its existing policies on false cures.

Click here to access our full quote bank of tech giants' promises on Covid misinformation.
