What is Left of User Rights? – Algorithmic Copyright Enforcement and Free Speech in the Light of the Article 17 Regime

2020
Author(s):
Sebastian Felix Schwemer
Jens Schovsbo

Article 17 of the Directive on copyright and related rights in the Digital Single Market (the DSM Directive) has strengthened the protection of copyright holders. Moving forward, online content-sharing providers will be responsible for copyright infringement unless the use of works on their platforms is authorized or they have made ‘best efforts’ to obtain an authorization and prevent the availability of unlicensed works. At the same time, the Directive has made it clear that users of protected works shall be able to rely on the existing limitations and exceptions for quotation, criticism and review, and caricature, parody or pastiche. The Directive even casts these limitations and exceptions as user rights. This paper points out that copyright’s limitations and exceptions have traditionally constituted a cornerstone in the internal balancing of the interests of users against those of rights holders, with a clear view to safeguarding the freedom of expression and information protected by the Charter. Given the overall purpose of the DSM Directive of strengthening the position of rights holders, there is a dire risk that the benefits of the limitations and exceptions evaporate in the attempts of platform operators to escape liability through algorithmic enforcement. The article uses the recent decisions of the CJEU in Pelham, Funke Medien and Spiegel Online to draw attention to the central importance of the limitations and exceptions as the primary channel for fundamental rights analyses in copyright. It is finally pointed out how the DSM Directive, despite its on-paper recognition of user rights, is most likely to lead to a devaluation of those same rights.

2021
Author(s):
Christophe Geiger
Bernd Justin Jütte

The Directive on Copyright in the Digital Single Market (CDSM Directive) introduced a paradigm shift with regard to the liability of certain platforms in the European Union. Under the safe harbour rules of the Directive on electronic commerce (E-Commerce Directive), intermediaries in the EU were shielded from liability for acts of their users committed through their services, provided they had no knowledge of those acts. Although platform operators could be required to help enforce copyright online by taking down infringing content, the E-Commerce Directive also drew a very clear line that intermediaries could not be obliged to monitor all communications of their users and install general filtering mechanisms for this purpose. The Court of Justice of the European Union confirmed this in a series of cases, amongst other reasons because filtering would restrict the fundamental rights of platform operators and of the users of intermediary services. Twenty years later, the regime for online intermediaries in the EU has fundamentally shifted with the adoption of Art. 17 CDSM Directive, the most controversial and hotly debated provision of this piece of legislation. For a specific class of online intermediaries known as ‘online content-sharing service providers’ (OCSSPs), uploads of infringing works by their users now result in direct liability, and OCSSPs are required to undertake ‘best efforts’ to obtain authorization for such uploads. With this new responsibility come further obligations requiring OCSSPs to make best efforts to ensure that works for which they have not obtained authorization are not available on their services. How exactly OCSSPs can comply with this obligation is still unclear. However, it seems unavoidable that compliance will require them to install measures such as automated filtering (so-called ‘upload filters’) using algorithms to prevent users from uploading unlawful content. Given the scale of the obligation, there is a real danger that measures taken by OCSSPs in fulfilment of it will amount to expressly prohibited general monitoring. What seems certain, however, is that automated filtering, whether general or specific in nature, cannot distinguish appropriately between illegitimate and legitimate uses of content (e.g. uses covered by a copyright limitation). Hence, there is a serious risk of overblocking uses that benefit from strong fundamental rights justifications such as the freedom of expression and information or the freedom of artistic creativity. This article first outlines the relevant fundamental rights guaranteed under the EU Charter of Fundamental Rights and the European Convention on Human Rights that are affected by an obligation to monitor and filter for copyright-infringing content. Second, it examines the impact on fundamental rights of the obligations OCSSPs incur under Art. 17, which are also analysed and tested with regard to their compatibility with general principles of EU law such as proportionality and legal certainty. These are, on the one hand, obligations to prevent the upload of works for which they have not obtained authorization and, on the other, an obligation to remove infringing content upon notification and to prevent its renewed upload in relation to those works and protected subject matter (so-called ‘stay-down’ obligations). Third, the article assesses the mechanisms to safeguard the rights of users of online content-sharing services under Art. 17.
The analysis demonstrates that the balance between the different fundamental rights in the normative framework of Art. 17 CDSM Directive is a very difficult one to strike, and that overly strict and broad enforcement mechanisms will most likely constitute an unjustified and disproportionate infringement of the fundamental rights of platform operators as well as of the users of such platforms. Moreover, Art. 17 is the result of hard-fought compromises during the elaboration of the Directive, which led to the adoption of a long provision with complicated wording, full of internal contradictions. As a consequence, it neither determines with sufficient precision the balance between the multiple fundamental rights affected, nor provides for effective harmonization. These conclusions are of crucial importance for the development of the regulatory framework for platform liability in the EU, since the CJEU will have to rule on the compatibility of Art. 17 with fundamental rights in the near future as a result of an action for annulment filed by the Polish government. In fact, if certain features of the provision are considered incompatible with the constitutional framework of the EU, this should lead to the removal of certain paragraphs, and possibly even of the entire provision, from the text of the CDSM Directive.


2019
Vol 13 (2)
pp. 361-388
Author(s):  
Andrea Katalin Tóth

Although digitalization and the emergence of the Internet have caused a long-term crisis for copyright law, technology itself also appears to offer an ideal solution to the challenges of the digital age: copyright has been a major use case for algorithmic enforcement, from the early days of digital rights management technologies to more advanced content recognition algorithms. These technologies identify and filter possibly infringing content automatically, effectively and often in a preventive fashion. They have been criticized for their shortcomings, such as lack of transparency, bias and the possible impairment of fundamental rights. Self-learning machines and semi-autonomous AI have the potential to offer even more sophisticated and expeditious enforcement by code; however, they could also aggravate the aforementioned issues. As the EU legislator intends to make the use of such technologies essentially obligatory for certain online content-sharing service providers (via the infamous Article 17 of the Directive on Copyright in the Digital Single Market), assessing the situation in light of future technological development has become a pressing topic. This paper aims to identify the main issues and potential long-term consequences of creating legislation that practically requires the employment of such filtering algorithms, as well as possible solutions to them. It focuses on the potential role a broad copyright exception for text and data mining could play in counterbalancing the issues associated with algorithmic enforcement.


Author(s):  
Niva Elkin-Koren
Maayan Perel

In recent years, there has been a growing use of algorithmic law enforcement by online intermediaries. Algorithmic enforcement by private intermediaries is located at the interface between public law and private ordering. It often reflects the risk management and commercial interests of online intermediaries, effectively converging law enforcement and adjudication powers in the hands of a small number of mega-platforms. At the same time, algorithmic governance also plays a critical role in shaping access to online content and facilitating public discourse. Yet online intermediaries are hardly held accountable for algorithmic enforcement, even though they may reach erroneous decisions. Developing proper accountability mechanisms is hence vital to create a check on algorithmic enforcement. Accordingly, relying on lessons drawn from algorithmic copyright enforcement by online intermediaries, this chapter demonstrates the accountability deficiencies in algorithmic copyright enforcement, maps the barriers to algorithmic accountability, and discusses various strategies for enhancing accountability in algorithmic governance.


2021
pp. 1-16
Author(s):  
Eran Fish

Memory laws are often accused of enforcing an inaccurate, manipulative or populist view of history. Some are also said to violate fundamental rights, in particular the right to free speech. These accusations are not entirely unjustified. Yet a discussion of memory legislation that concentrates on these faults might be missing its mark. The main problem with memory legislation does not necessarily lie in the merits of any particular law. Rather, the determination of historical facts is not the kind of matter that should be entrusted to the legislator in the first place. The role of legislation is to make social cooperation possible despite substantial disagreement, but only when such social cooperation is indeed required. Disputes about historical facts, I argue, are not a coordination problem that requires a legislative solution. Still less can they justify legal coercion.


Author(s):  
Corey Brettschneider

This concluding chapter examines some further implications of democratic persuasion that might be a source for future study. The first implication is that the book's view might serve as a model for other states that seek an alternative to the two dominant approaches to free speech. This third approach, democratic persuasion, allows free speech advocates to retain the protections against coercion found in rights of free expression. However, democratic persuasion also gives voice to the fundamental value of free and equal citizenship that underlies free speech. The second implication of the book's view is that it can also serve as a model for understanding how to promote ideals of equality in international law without violating the rights of individuals or the rights of states. Indeed, democratic persuasion already has a prominent role in international law.


Author(s):  
Poorna Mysoor

This chapter addresses policy-based implied bare licences. Unlike in the previous chapter, there is no contract in existence and no voluntariness on the part of the copyright owner, and indeed, in some cases, no prior relationship between the parties. Historically, English common law has recognised an open-ended power of the courts to restrict or prevent copyright enforcement in the public interest, which has been acknowledged under section 171(3) of the UK Copyright, Designs and Patents Act 1988. The chapter considers how a successful invocation of this provision implies a bare licence to achieve policy goals. Although there is no statutory equivalent of this provision in the other common law jurisdictions considered here, the chapter explores whether the power has nevertheless been exercised by the courts based on their inherent powers. Since policy-based implied bare licences produce the same effect on copyright owners as statutory limitations or exceptions, the framework for implying this type of licence draws inspiration from the three-step test and the fundamental rights regime.


2020
pp. 191-213
Author(s):  
Alison Scott-Baumann
Mathew Guest
Shuruq Naguib
Sariya Cheruvallil-Contractor
Aisha Phoenix

Media and government accuse students either of being libertarian (encouraging reckless free speech) or of excessive no-platforming (banning external speakers). Both accusations are exaggerated but influential, and they make it difficult for students to develop face-to-face conversations about difficult and controversial topics. Government policies on securitization (Prevent) encourage risk-averse behaviour, particularly but not exclusively among Muslims. Staff also feel constrained by these pressures, and so both staff and students self-censor. Analysis of the free speech models available in a liberal democracy shows two main types, each of which can become an extreme version of itself. The liberal model advocates legal free expression; however, if exaggerated, the liberal model becomes libertarian and can be offensive. The second approach is the guarded liberal model, which seeks to protect minorities but, if exaggerated, can turn into no-platforming. Students and staff can learn to use combinations of all four approaches and increase face-to-face discussions.


2020
pp. 163-192
Author(s):  
Amy Aronson

In June 1917, Congress passed the Espionage Act, suspending basic civil liberties in the name of wartime national security. Suddenly, peace work seemed dangerously untenable, even to some in movement leadership. Nevertheless, the American Union Against Militarism (AUAM) voted to test the new wartime laws, campaigning to prevent a draft and devising a new category of military exemption based on conscience. But continuing tensions threatened to rupture the AUAM from the inside. Lillian Wald and Paul Kellogg wanted to resign. Eastman proposed an eleventh-hour solution: create a single, separate legal bureau for the maintenance of fundamental rights in wartime—free press, free speech, freedom of assembly, and liberty of conscience. The new bureau became the American Civil Liberties Union (ACLU). However, Eastman’s hopes to shape and oversee that work, keeping it focused on internationalism and global democracy, were not to be. The birth of her child sidelined her while Roger Baldwin, arriving at a critical time for the country and the organization, took charge and made the bureau his own.


Author(s):  
Nicholas Hatzis

The experience of suffering offence relates to a constellation of unpleasant feelings stirred up when one’s expectations of being treated in a certain way are frustrated. This chapter explores how the nature of offence matters for the way the law responds to offensive conduct. Prohibiting speech which offends poses a special problem because it clashes with the free speech principle, i.e. the idea that there is something particularly important in being allowed to speak our minds, which sets free expression apart from a general liberty claim to choose a way of life. It is suggested that when deciding what should count as properly offensive for the purpose of exercising state coercion, only a very narrow definition of offensive speech is compatible with the values underlying freedom of expression. Then, offensive speech is distinguished from hate speech. As the two are morally different, it is inappropriate to borrow arguments from the hate speech debate to justify restrictions on offensive speech.

