The parallel pandemics: who will pay the societal costs of coronavirus misinformation?

Coronavirus has unleashed two parallel pandemics. One is the biological pandemic that is putting our National Health Service under such strain. The second is a social pandemic of digital misinformation – fake news and conspiracy theories that don’t just militate against our success in containing coronavirus but threaten to fundamentally weaken the liberal democratic values that underpin our society.

Both pandemics were, in fact, predictable. They are acute eruptions of chronic problems: the first of coronaviruses, variants of which caused SARS and MERS; the second of globalised, digitally transmitted misinformation, which has exacerbated social maladies in recent years. We can count the long-term cost of digitally enabled divisiveness in political instability, illiberal democratic politics, vaccine hesitancy, climate denial and rising identity-based hate worldwide.

The digital infrastructure that pumps out misinformation has been long in the making, but one thing we know about the bots and the trolls is that they pivot fast to new issues. The Center for Countering Digital Hate (CCDH) exists to research and disrupt digital hate and misinformation; two months ago, when we saw that the actors and spaces we track had opportunistically pivoted to coronavirus, we switched our focus entirely to misinformation about the disease.

Our findings so far are deeply alarming. Research by Dr Daniel Allington of King’s College London suggests that people who fall for coronavirus misinformation, even seemingly innocuous conspiracies about 5G technology, are less likely to follow clinical guidance such as washing their hands frequently, physically distancing and remaining at home wherever possible. The team at CCDH have found Facebook groups amassing hundreds of thousands of members, and YouTube channels with millions of subscribers, awash with fake “cures” and conspiracy theories about everything from 5G to claims that Jews, Muslims or the Chinese planned the pandemic.

Just like coronavirus itself, the misinformation pandemic respects no geographical boundaries. In the past month we’ve seen cell towers – including those serving hospitals – set alight in Britain, groups breaking official guidance to hold anti-lockdown protests in North America, and harassment, violence and boycotts targeting Muslims in India.

Over the past two months, we have seen hate actors seeking to racialise coronavirus, blaming marginalised ‘out groups’ and minorities for spreading the virus by not complying with the government’s guidance. In a development only possible in the digital age, the UK far right has adopted the tactics and even the hashtags of Indian Prime Minister Modi’s BJP, using outdated images of Muslims gathering in public places to portray them as standing apart from society’s collective effort to overcome the virus. And in an effort to exploit the increased solidarity felt across Britain’s population, perhaps best expressed by the weekly applause for NHS and care workers, we noted seemingly disconnected actors, including bots, tweeting simultaneous demands that money raised by Comic Relief for overseas aid be diverted to the NHS instead.

Undermining science

We also find economically motivated actors employing populist tropes to cast doubt on official medical guidance in order to sell false “cures” and ineffective methods of supposedly immunising against the disease. Online forums that give people false confidence that they can protect themselves from the virus as long as they take vitamin C or other “alternative” supplements count their memberships in the millions.

Additionally, fringe political actors and conspiracy theorists spread dangerous claims about the virus’s origin, claiming that some motive lies behind the pandemic or questioning whether it exists at all. As Josie Cluer charts in her essay, the effects of these claims are felt well beyond the fringes of the internet: they undermine trust in the advice of both the government and the scientific establishment, as well as in those institutions themselves.

All of this has been undertaken with the tacit consent of the large social media platforms, which have been slow to prevent their services being exploited for these ends and have in some cases, such as anti-vaccine propaganda, explicitly refused to remove dangerous or harmful content. Silicon Valley’s stubbornness is nothing new, but even seasoned tech-for-good advocates have been shocked to see it persist in the face of a global pandemic.

Governments have tended not to demand that social media companies remove conspiracy theories and medical misinformation from their publishing platforms. The lack of state action does not obviate but rather intensifies the moral duty on social media companies to do their bit to beat this pandemic, a duty most of us shoulder every day. Indeed, YouTube have, through their limited actions and comments, recognised both the harm that conspiracism and hate cause and their own ability to intervene, but tend not to take decisive action to stop violators using their platform. This is a familiar pattern: acting in a limited way to avert negative PR, but rarely showing the will to act meaningfully and decisively to remove sources of misinformation and hate.

Facebook, among others, has sought to promote and fund “fact-checking” services, claiming that the way to defeat “bad information” is with “good information”. But psychological research on the effectiveness of fact-checking against health misinformation is mixed at best. While fact-checking can inoculate people against misinformation and conspiracy theories they have not yet encountered, it is far less effective at disabusing people of misinformation they already believe; there is evidence to suggest it can even harden opinions. What is uncontested, however, is that people cannot believe a conspiracy theory they haven’t heard. The most effective way to reduce the transmission rate of misinformation is to take the megaphone away.

The lockdown and social distancing guidelines mean that people are isolated, often alone, spending more time online and hungry for information about the coronavirus. Further, we know from academic research in psychology that public health crises trigger our ‘sanctity’ moral foundation, which is linked to authoritarianism. There is a risk that these conditions, combined with the narratives offered across social media by the opportunistic actors outlined above, could see large numbers of people radicalised into extremist views. Since the outbreak, CCDH has found evidence of social media users looking to find and share information about coronavirus being funnelled, in very short spaces of time, into darker digital spaces with ever more extremist content. The conditions also appear to be particularly conducive to an upsurge in aversion to out-groups and in anti-migrant sentiment.

It is also possible that we will be left with far larger audiences for digitally savvy misinformation peddlers and hate actors; those whose message is particularly flexible already appear to have succeeded at this. David Icke, adept at applying his “superconspiracy” to world events from 9/11 to the 2008 financial crash, saw his social media following grow by 300,000 between the outbreak of coronavirus and his recent ‘deplatforming’ by Facebook and YouTube, which followed action by anti-hate groups including CCDH and prominent medical figures.

The undermining of trust in medical guidance and the scientific establishment by populist politicians, economically motivated actors pushing their own “remedies” and conspiracy theorists could result in an increase in vaccine hesitancy, both threatening the success of any future coronavirus vaccine and worsening some countries’ already falling immunisation rates. Anti-vaccine Facebook groups, Instagram accounts and YouTube channels count their audiences in the millions, and social media platforms have made it clear that they will not remove this form of medical misinformation, which Mark Zuckerberg has argued is legitimate “free expression”.

Freedom of speech ≠ freedom of reach

The impact of the pandemic on the future of social media is less clear. Legislation to introduce clearer regulation is on the horizon in Britain and France, in the EU, in the US and elsewhere. Social media companies’ inaction against hate, misinformation and dangerous conspiracy theories, in the face of a global pandemic that has already killed hundreds of thousands, has shocked even those well acquainted with their unwillingness to act. The immorality of refusing to do all they can to halt the social pandemic of misinformation is laying the ground for harsher regulation than might otherwise have been forthcoming. If the platforms do not shift their policies and begin demonstrating greater willingness to act against producers of harmful content, it is likely to cost them in the aftermath of coronavirus.

This is why CCDH advocated the UK Online Forums Bill 2019, which would have made administrators of social media groups vicariously liable for illegal content if they refused to remove it once notified and a reasonable person would conclude they had failed to act appropriately. This would change the culture of groups, encouraging active moderation and creating real consequences for those who knowingly set up groups in which illegal activity takes place, such as harassment, incitement of hate that crosses the criminal threshold or the planning of criminal enterprises including identity-based violence. It would not infringe on freedom of speech or on the protections that social media platforms currently enjoy under US law, which designate them as platforms rather than publishers. Creative legislation of this kind can fill the gap until a proper regulator is in place to ensure that online spaces are treated the same as other publishers and held accountable for their content.

More fundamentally, there will need to be a rethink of the way social media giants are treated in the tax system. At the moment, these companies have enormous influence on our society, profit greatly from our economy, and yet do not pay their fair share. This must change.

Future taxation of social media companies should rest on two pillars: parity with the offline economy and compensation for harm.

The first priority is to ensure that they pay their taxes. At the moment, social media companies are able to use a web of complex offshore arrangements to avoid paying taxes in the UK. This should change. The government has already indicated it is willing to create a digital services tax to target these companies, after reports that firms like Facebook paid just £28m in UK taxes despite making £1.6bn in UK sales in 2019. That digital services tax, which the government has mooted might be set at 2% of revenues, should close the gap much further: 2% of £1.6bn is only around £32m, barely more than Facebook reportedly paid already, so a meaningful levy must go beyond that rate if these companies are to contribute back a proportion equivalent to that paid by most other UK businesses. This would remove the advantage they gain by moving around the world, seeking out the lowest-tax jurisdictions from which to claim to operate.

The second is that a hypothecated levy should be created to ensure they pay for the social problems they create. This would encourage them to avert the clear negative consequences of their activities: for example, the damage they do to community cohesion, the cost of policing the spaces they allow to be colonised by hate actors and criminals, and the cost of dealing with the public health risks they introduce by promoting health misinformation in their newsfeeds.

Taken together, these measures have a chance of finally socialising social media, bringing these companies into the family of responsible economic actors that take part in our economy. These firms are here today, gone tomorrow; such are the dynamics of online business cycles. While active and on the up, they grab for every dollar with a desperation born of their own awareness that they cannot last forever: these disruptors will one day be disrupted themselves, and they know it. But there is a future in which they could become established and responsible parts of our society, harnessing technology to social purpose and building companies of enduring value. That future has always been there waiting to be built; perhaps it is the double pandemic of biological contagion and viral fake news that will finally give us the spur we need.

Imran Ahmed is the CEO of the Center for Countering Digital Hate which studies and actively counters the use of hate and misinformation to polarise societies and undermine democracy. He is a trustee of Victim Support. @Imi_Ahmed