The Johannesburg Principles on National Security, Freedom of Expression and Access to Information

1998 ◽ Vol 20 (1) ◽ pp. 1-11
Author(s): Article 19 (Organization)
2020 ◽ Vol 20 (4) ◽ pp. 607-640
Author(s): Thiago Dias Oliva

Abstract: With the increase in online content circulation, new challenges have arisen: the dissemination of defamatory content, non-consensual intimate images, hate speech and fake news, and a rise in copyright violations, among others. Because of the huge amount of work required to moderate content, internet platforms are developing artificial intelligence to automate content removal decisions. This article discusses the reported performance of current content moderation technologies from a legal perspective, addressing the following question: what risks do these technologies pose to freedom of expression, access to information and diversity in the digital environment? The legal analysis developed in the article focuses on international human rights law standards. Despite recent improvements, content moderation technologies still fail to understand context, thereby posing risks to users' free speech, access to information and equality. Consequently, the article concludes, these technologies should not be the sole basis for decisions that directly affect user expression.
