Free Speech in the Digital Age

Published By Oxford University Press

ISBN: 9780190883591, 9780190883638

Author(s):  
James Weinstein

For most people the internet has been a dream come true, allowing instantaneous access to a vast array of information, opinion, and entertainment and facilitating communication with friends and family throughout the world. For others, however, the internet has wrought a nightmare, allowing often anonymous enemies a platform for vicious attacks on the character of their victims and a means for revealing to the world embarrassing private information about them. To combat these attacks, victims and law enforcement officials in the United States have employed both analogue remedies, such as harassment and stalking laws, and cyber-specific provisions. Since the attacks involve speech, however, all these remedies must comport with the First Amendment. The typical response of courts and commentators to the First Amendment issues raised in these cases is to ask whether the perpetrator’s speech falls within one of the limited and narrow traditional exceptions to First Amendment coverage, such as true threats, defamation, obscenity, or fighting words. This approach is understandable in light of unfortunate dicta in several United States Supreme Court decisions suggesting that all content-based restrictions of speech, other than speech falling within one of these exceptions, are subject to “strict scrutiny,” a rigorous test that few speech restrictions can pass. This chapter argues that this approach to dealing with cyber harassment is misguided. This methodology often results in shoehorning the speech at issue into exceptions into which it does not fit, or, worse yet, in a finding that the speech is protected by the First Amendment simply because it does not fall within a recognized exception.


Author(s):  
Soraya Chemaly

The toxicity of online interactions presents unprecedented challenges to traditional free speech norms. The scope and amplification properties of the internet give new dimension and power to hate speech, rape and death threats, and denigrating and reputation-destroying commentary. Social media companies and internet platforms, all of which regulate speech through moderation processes every day, walk the fine line between censorship and free speech with every decision they make, and they make millions of such decisions a day. This chapter will explore how a lack of diversity in the tech industry affects the design and regulation of products and, in so doing, disproportionately harms the free speech of traditionally marginalized people. During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. At the same time, the pervasiveness of online hate, harassment, and abuse has become evident. These problems come together on social media platforms that have institutionalized and automated the perspectives of privileged male experiences of speech and violence. The tech sector’s male dominance and the sex segregation and hierarchies of its workforce result in serious and harmful effects globally on women’s safety and free expression.


Author(s):  
Mary Anne Franks

John Perry Barlow, one of the founders of the Electronic Frontier Foundation (EFF), famously claimed in 1996 that the internet “is a world that is both everywhere and nowhere, but it is not where bodies live.” The conception of cyberspace as a realm of pure expression has encouraged an aggressively anti-regulatory approach to the internet. This approach was essentially codified in U.S. federal law in Section 230 of the Communications Decency Act, which invokes free speech principles to provide broad immunity for online intermediaries against liability for the actions of those who use their services. The free speech frame has encouraged an abstract approach to online conduct that downplays its material conditions and impact. Online intermediaries use Section 230 as both a shield and a sword—simultaneously avoiding liability for the speech of others while benefiting from that speech. In the name of free expression, Section 230 allows powerful internet corporations to profit from harmful online conduct while absorbing none of its costs.


Author(s):  
Danielle Keats Citron

A decade ago, online abuse was routinely dismissed as “no big deal.” Activities ordinarily viewed as violations of the law if perpetrated in physical space acquired special protection because they occurred in “cyberspace.” Why? The “internet” deserved special protection, commentators contended, because it was a unique zone of public discourse. No matter that individuals (more often women and minorities) were being terrorized and silenced with rape threats, defamation, and invasions of sexual privacy. The abuse had to be tolerated, lest we endanger speech online. Much has changed in the past ten years. Social attitudes have evolved to recognize the expressive interests of victims as well as those of the perpetrators. Cyber harassment is now widely understood as profoundly damaging to the free speech and privacy rights of people targeted. Law and corporate practices have been developed or enforced to protect those expressive interests. As this chapter explores, this development is for the good of free expression in the digital age.


Author(s):  
Frederick Schauer

This chapter investigates whether speech acts of urging, advising, recommending, instructing, and informing ought all to be treated in the same way for purposes of implementing a principle of freedom of speech, and asks: If not, how do we justify treating them differently? This problem is arguably more pressing than it has been in the past, as the internet and various forms of social media have seemingly caused instructions for committing antisocial acts to proliferate on a mass scale. After discussion of examples of publications that allow the reader to acquire knowledge of how to engage in dangerous activities, the chapter concludes that the normative and philosophical questions about the relationship between freedom of speech and the provision of instructions, plans, recipes, and detailed facts are in the final analysis less philosophical than they are empirical and social scientific.


Author(s):  
Dinah PoKempner

This chapter argues that we are at a difficult juncture in protecting online speech and privacy when states resist applying principles they have endorsed internationally to their domestic legislation and practice. Although governments have welcomed the internet’s globalizing effect on economic development, they now fear its ability to amplify messages promoting terrorism, revolution, pornography, or propaganda. But sacrificing basic freedoms to control the internet’s powers is neither effective nor wise. How well we protect privacy and speech in the digital age will determine whether the internet liberates or enchains us.


Author(s):  
Diana L. Ascher ◽  
Safiya Umoja Noble

Notions of free speech and expectations of speaker anonymity are instrumental aspects of online information practice in the United States, which manifest in greater protections for speakers of hate, while making targets of trolling and hate speech more vulnerable. In this chapter, we argue that corporate digital media platforms moderate and manage “free speech” in ways that disproportionately harm vulnerable populations. After being targets of racist and misogynist trolling ourselves, we investigated whether new modes of analysis could identify and strengthen the ties between the online personas of anonymous speakers of hate and their identities in real life, which may present opportunities for intervention to arrest online hate speech, or at least make speakers known to those who are targets or recipients of their speech.


Author(s):  
Katharine Gelber ◽  
Susan J. Brison

This chapter critiques the view, expressed in the 1996 Barlow Declaration and elsewhere, that the digital realm, “cyberspace,” is a disembodied space for pure thought. This chapter shows that the view that speech online is disconnected from the material realm echoes the same idea in traditional free speech theory, which has long considered speech to be something nonmaterial. Given the agent-driven nature of online communications, the materiality of internet technology, and the very real, often physical, effects of online speech on users and audiences, the chapter argues that the view that the digital realm has its own ontological status, distinct from that of the material world, is unsupportable. The chapter concludes that it is incorrect to hold that online communications are, in their causal capacity, more akin to thought than to non-speech conduct, just as it is incorrect to hold this of offline communications.


Author(s):  
Heather M. Whitney ◽  
Robert Mark Simpson

This chapter investigates whether search engines and other new modes of online communication should be covered by free speech principles. It criticizes the analogical reasoning that contemporary American courts and scholars have used to liken search engines to newspapers, and to extend free speech coverage to them based on that likeness. There are dissimilarities between search engines and newspapers that undermine the key analogy, and also rival analogies that can be drawn which do not recommend free speech protection for search engines. Partly on these bases, we argue that an analogical approach to questions of free speech coverage is of limited use in this context. Credible verdicts about how free speech principles should apply to new modes of online communication require us to re-excavate the normative foundations of free speech. This method for deciding free speech coverage suggests that only a subset of search engine outputs and similar online communication should receive special protection against government regulation.


Author(s):  
Robert C. Post

Norms of privacy are grounded in social practices. When social practices are unsettled and rapidly evolving, as they are in digital space, these norms are subject to confusion and uncertainty. A good example is the recent decision of the Court of Justice of the European Union (CJEU) in Google Spain SL v. Agencia Española de Protección de Datos (AEPD) (“Google Spain”), which created the “right to be forgotten.” The CJEU derived the right to be forgotten from Directive 95/46/EC (the “Directive”), which is arguably the most influential privacy document in the world. The Directive imagines digital data as stored in a space of instrumental reason, as it is when data is compiled and processed by large bureaucratic organizations. The Directive protects data privacy in order to maximize the control of data by data subjects. But the CJEU applied the right to be forgotten to public discourse in the public sphere. The instrumental logic of data privacy is inappropriate to the communicative action of the public sphere, as is the value of “control.” Instead, the CJEU should have conceptualized the right to be forgotten in terms of the dignitary privacy that courts have applied to public discourse for more than a century. Dignitary privacy ensures civility within public debate. It focuses on communicative acts rather than data. And it requires an assessment of harm to public discourse. All of these concepts are foreign to the analytic framework of data privacy. The CJEU’s confusion of data privacy with dignitary privacy leads to inconsistencies and logical deficiencies in its opinion, which would have been unlikely had the court focused on the ordinary print media of the public sphere.

