In recent weeks, the outbreak of COVID-19 has demonstrated how false rumours, unverified information, and harmful lies can spread with unprecedented speed and efficiency.
Amongst many pressing concerns associated with the response to COVID-19, the proliferation of disinformation online signifies an urgent need to advance measured policy responses to this digitally spawned phenomenon. In particular, a number of rights-based threats arise on foot of this problem, and responses that are oriented towards preventing the spread of false information must also be cognisant of fundamental rights protections.
A major shift that must take place in discussions surrounding this problem is the descriptive framing. The phrase “fake news” is often used to describe the harmful dissemination of false and misleading information. However, this term has been misused and abused in many instances by anti-democratic actors, and has often been deployed to discredit legitimate media sources and expert advice. In addition, “fake news” can refer to satire and parody, which must be protected in line with human rights protections for free expression.
Therefore, this term should be avoided when discussing COVID-19 and long-term solutions to harmful and false information. Instead, the terms misinformation and disinformation should be used.
Disinformation refers to the dissemination of intentionally false information. On the other hand, misinformation refers to false information spread unknowingly.
For example, if an individual on social media knowingly attempts to spread a lie online about a “potential cure” for COVID-19, this is disinformation. If an individual sees false medical advice online, mistakes it for accurate and verified advice, and then shares it online, this is misinformation.
Words matter, and it’s important not to empower anti-democratic actors and rights abusers in co-opting and weaponising this delicate and at times confusing terminology.
It is becoming increasingly clear that the spread of disinformation is a human rights issue. This can be seen in a number of ways.
Firstly, disinformation can undermine official expert advice and can pollute public discourse with knowingly false and harmful lies. This is often highly relevant in the electoral context. Numerous international human rights instruments protect the right to free elections, including Article 3 of Protocol 1 of the European Convention on Human Rights (ECHR).
If and when anti-democratic actors target vulnerable citizens with disinformation in the run up to electoral events, this is a harmful form of electoral interference. The methods through which anti-democratic actors target citizens are often based upon valuable data extracted from citizens’ online profiles, used to identify which people are most vulnerable to certain forms of false stories. Consequently, invasive techniques applied to citizens’ data can be highly useful in pre-selecting vulnerable individuals to target with disinformation. This means that, as well as presenting an electoral issue, disinformation also affects the right to privacy.
The right to privacy is protected under Article 8 of the European Convention on Human Rights (ECHR), and the right to “personal data” is protected by Article 8 of the Charter of Fundamental Rights of the European Union (CFREU).
In light of recent events, it has become clear that during a public health crisis, the threat of disinformation becomes even more urgent. This is also a rights-based issue.
Throughout the Articles of the European Convention on Human Rights (ECHR), the need to protect public health and safety is explicitly and repeatedly recognised. This is particularly relevant when assessing the critical role of freedom of expression in attempts to prevent the spread of disinformation.
The protection of public health is one aspect where an interference with freedom of expression can be justified, and the current outbreak demonstrates an event where the protection of public health is of paramount importance. While the COVID-19 pandemic represents a unique state of emergency, concerns about freedom of expression will need to be substantively addressed when this crisis finally subsides.
In particular, regulation or other policy-based responses to disinformation must be constructed with freedom of expression protections in mind. Freedom of expression is guaranteed under Article 10 of the European Convention on Human Rights (ECHR), but it is not absolute. This is something that legislators, in collaboration with online platforms and other stakeholders, must carefully mediate in future responses.
It is important to note that other human rights are also relevant at this particular time. False rumours with racist and xenophobic themes are likely to cause harm to minority communities, and this must be avoided.
A number of technological platforms have faced criticism for their role in facilitating the spread of false information in recent weeks. This has led numerous social media platforms such as Twitter to initiate tighter restrictions on content. As it stands, the European Commission’s voluntary codes of practice on disinformation provide a key policy development in this area. These present voluntary self-regulation guidelines for technological platforms when addressing such harmful content.
However, they are still voluntary, and the Commission has advised that there is scope for more robust regulatory measures in response to this issue, should the voluntary codes prove unsatisfactory. This is likely to be revisited by the Commission in light of the current pandemic. This is also linked to ongoing concerns about the responsibility of social media platforms in enabling misleading, false, and harmful content to spread unchecked.
A notable example in recent weeks of how this has played out in the COVID-19 pandemic can be seen in the circulation of a false and unverified rumour on WhatsApp concerning a potential shutdown in Ireland. This is extremely dangerous. It can lead to panic buying, social unrest, and can undermine confidence in official and verified information coming from experts.
Going forward, it is becoming increasingly clear that law makers and technological platforms must reassess how responses to disinformation can be systematic, proactive, and cognisant of these rights-based considerations. A reactive, poorly thought out, and rushed response to this issue could have the potential of exacerbating the already severe harm that the proliferation of disinformation can cause.
Ethan Shattock is a PhD researcher at Maynooth University Department of Law, focusing on electoral disinformation online and human rights. He completed an internship with ICCL in 2017. Follow Ethan @shattockethan .