As doctors, nurses and health experts from around the world, we are here to sound the alarm. Our job is to keep people safe. But right now we face not only a COVID-19 pandemic but a global infodemic, with viral misinformation on social media threatening lives around the world.
Stories claiming cocaine is a cure, or that COVID-19 was developed as a biological weapon by China or the US, have spread faster than the virus itself. Tech companies have tried to act, taking down certain content when it is flagged, and allowing the World Health Organization to run free ads.
But their efforts are far from enough.
The tsunami of false and misleading content about the coronavirus is not an isolated outbreak of disinformation, it’s part of a global plague. On Facebook, we have seen claims that chlorine dioxide helps people suffering from autism and cancer; that millions of Americans were given a “cancer virus” via the polio shot; that ADHD was “invented by big pharma”. The list goes on.
These lies matter: by promoting bogus cures, or scaring people off vaccines and effective treatments, they cost lives. And they travel far. One Facebook post claiming ginger is 10,000 times more effective than chemotherapy at beating cancer has been liked, shared and commented on almost 30,000 times.
That’s why today we are calling on the tech giants to take immediate systemic action to stem the flow of health misinformation, and the public health crisis it has triggered.
Working in hospitals, clinics and public health departments across the globe, we are all too familiar with the real-world impact of this infodemic. We are the ones who treat toddlers hospitalised for measles, a completely avoidable disease that was once eliminated in countries like the US but is now on the rise, largely thanks to anti-vaxxer propaganda.
We health professionals not only deal with these repercussions; we are often blamed for them. Misinformation thus dents the morale of an already strained profession, while the financial cost of battling it eats into badly over-stretched budgets.
The diagnosis is grim, so what can be done?
Social media platforms must start with two obvious and urgent steps.
First, they must correct the record on health misinformation. That means notifying every person who has seen or interacted with health misinformation on their platforms, and sharing a well-designed, independently fact-checked correction, an approach shown to help prevent users from believing harmful lies. Platforms like Facebook have already moved to label fact-checked misinformation, but this does not go far enough: millions of people may see a post before it is fact-checked and labelled. That is why we are urgently calling on Facebook to go a step further than labelling and alert ALL users who have fallen victim to such content, by providing them with retroactive corrections.
Second, the platforms must detox the algorithms that decide what people see, so that harmful lies, and the pages and groups that share them, are downgraded in users' feeds instead of amplified. Harmful misinformation, and the pages and channels of repeat offenders who spread it, should also be excluded from content recommendations. These algorithms currently prioritise keeping users online over safeguarding their health, and humanity's well-being suffers as a result.
Technology companies, which have both facilitated the spread of these ideas and profited from it, have a unique power and responsibility to counter the deadly spread of misinformation and stop social media from making our communities sicker. To save lives and restore trust in science-based healthcare, tech giants must stop giving oxygen to the lies, smears and fantasies that threaten us all.