The Role of Social Media Companies in the Regulation of Online Hate Speech

Author(s): Chara Bakalis, Julia Hornle

Significance: Facebook has indefinitely suspended Trump from its main platform and Instagram, while Twitter has done so permanently, for his role in instigating the violence at the US Capitol on January 6. These developments spotlight the role of social media firms in spreading and tackling hate speech and disinformation, and their power to shut down public speech unilaterally.
Impacts: Democratic control of the White House and Congress offers social media companies a two-year window to ensure softer regulation. The EU will push its new digital markets legislation with vigour following the events at the US Capitol. Hard-right social media will find new firms willing to host their servers, partly because their user numbers run to millions, not billions.


AJIL Unbound, 2019, Vol. 113, pp. 256-261
Author(s): Emma Irving

In its August 2018 report on violence against Rohingya and other minorities in Myanmar, the Fact Finding Mission of the Office of the High Commissioner for Human Rights noted that “the role of social media [was] significant” in fueling the atrocities. Over the course of more than four hundred pages, the report documented how Facebook was used to spread misinformation, hate speech, and incitement to violence in the lead-up to and during the violence in Myanmar. Concluding that there were reasonable grounds to believe that genocide was perpetrated against the Rohingya, the report indicated that “the Mission has no doubt that the prevalence of hate speech,” both offline and online, “contributed to increased tension and a climate in which individuals and groups may become more receptive to incitement.” The experience in Myanmar demonstrates the increasing role that social media plays in the commission of atrocities, prompting suggestions that social media companies should operate according to a human rights framework.


2021, pp. 016344372110158
Author(s): Opeyemi Akanbi

Moving beyond the current focus on the individual as the unit of analysis in the privacy paradox, this article examines the misalignment between privacy attitudes and online behaviors at the level of society as a collective. I draw on Facebook’s market performance to show how, despite concerns about privacy, market structures drive user, advertiser, and investor behaviors to continue to reward corporate owners of social media platforms. In this market-oriented analysis, I introduce the metaphor of elasticity to capture the responsiveness of demand for social media to the data (price) charged by social media companies. Overall, this article positions social media as inelastic relative to privacy costs; highlights the role of the social collective in the privacy crises; and ultimately underscores the need for structural interventions in addressing privacy risks.
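For readers unfamiliar with the borrowed economic concept, the standard textbook definition of price elasticity of demand (a general formula, not one taken from the article itself) captures what "inelastic" means in this metaphor:

\[
\varepsilon_d \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P} \;=\; \frac{\Delta Q / Q}{\Delta P / P}, \qquad |\varepsilon_d| < 1 \;\Rightarrow\; \text{demand is inelastic.}
\]

Read through the article's metaphor, Q stands for demand for (use of) social media and P for the privacy cost of the personal data users give up; calling social media inelastic means that even large increases in perceived privacy cost produce proportionally smaller reductions in use.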


2019, pp. 203
Author(s): Kent Roach

It is argued that neither the approach taken to terrorist speech in Bill C-51 nor that taken in Bill C-59 is satisfactory. A case study of the Othman Hamdan case, including his calls on the Internet for “lone wolves” “swiftly to activate,” is featured, along with the use of immigration law after his acquittal for counselling murder and other crimes. Hamdan’s acquittal suggests that the new Bill C-59 terrorist speech offence and take-down powers, which are based on counselling terrorism offences without specifying a particular terrorism offence, may not reach his Internet postings. One coherent response would be to repeal terrorist speech offences while making greater use of court-ordered take-downs of speech on the Internet and of programs to counter violent extremism. Another coherent response would be to criminalize the promotion and advocacy of terrorist activities (as opposed to terrorist offences in general, as in Bill C-51, or terrorism offences without identifying a specific terrorist offence, as in Bill C-59) and to provide for defences designed to protect fundamental freedoms, such as those under section 319(3) of the Criminal Code that apply to hate speech. Unfortunately, neither Bill C-51 nor Bill C-59 pursues either of these options. The result is that speech such as Hamdan’s will continue to be subject to the vagaries of take-downs by social media companies and immigration law.


Author(s): Daniela Stockmann

In public discussions of social media governance, corporations such as Google, Facebook, and Twitter are often first and foremost seen as providers of information and as media. However, social media companies’ business models aim to generate income by attracting a large, growing, and active user base and by collecting and monetising personal data. This has generated concerns with respect to hate speech, disinformation, and privacy. Over time, there has been a trend away from industry self-regulation towards a strengthening of national-level and European Union-level regulations, that is, from soft to hard law. Hence, moving beyond general corporate governance codes, governments are imposing more targeted regulations that recognise these firms’ profound societal importance and wide-reaching influence. The chapter reviews these developments, highlighting the tension between companies’ commercial and public rationales, critiques the current industry-specific regulatory framework, and raises potential policy alternatives.


Teknokultura, 2019, Vol. 16 (2), pp. 265-276
Author(s): Chris H. Gray

Using Shoshana Zuboff’s 2019 book, The Age of Surveillance Capitalism, the essay explores this latest form of capitalism and Zuboff’s claims about its organization. Her arguments are compared and contrasted with Dave Eggers’s novel The Circle and the movie adapted from it, as well as with other perspectives on capitalism (Marx, Barry Unsworth’s Sacred Hunger) and on the current dominance of social media companies (especially Alphabet/Google, Facebook, and Amazon) from Evgeny Morozov, Natasha Dow Schüll, Zeynep Tufekci, Steve Mann, and Tim Wu. Zuboff’s description and critique of Surveillance Capitalism is a convincing and important addition to our understanding of the political economy of the early 21st century and of the role of giant monopolistic social media companies in shaping it.


Author(s): Jeffrey W. Howard

Social media are now central sites of democratic discourse among citizens. But are some contributions to social media too extreme to be permitted? This entry considers the permissibility of suppressing extreme speech on social media, such as terrorist propaganda and racist hate speech. It begins by considering the argument that such restrictions on speech would wrong democratic citizens, violating their freedom of expression. It proceeds to investigate the moral responsibilities of social media companies to suppress extreme speech, and whether these ought to be enforced through the law. Finally, it explores an alternative mechanism for combatting extreme speech on social media—counter-speech—and evaluates its prospects.


Author(s): Molly K. Land

The internet would seem to be an ideal platform for fostering norm diversity. The very structure of the internet resists centralized governance, while the opportunities it provides for the “long tail” of expression mean that even voices with extremely small audiences can find a home. In reality, however, the governance of online speech looks much more monolithic. This is largely a result of private “lawmaking” activity by internet intermediaries. Increasingly, social media companies like Facebook and Twitter are developing what David Kaye, UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression, has called “platform law.” Through a combination of community standards, contract, technological design, and case-specific practice, social media companies are developing “Facebook law” and “Twitter law,” displacing the laws of national jurisdictions. Using the example of content moderation, this chapter makes several contributions to the literature. First, it expands upon the idea of “platform law” to consider the broad array of mechanisms that companies use to control user behavior and mediate conflicts. Second, using human rights law as a foundation, the chapter makes the case for meaningful technological design choices that enable user autonomy. Users should be able to make explicit choices about who and what they want to hear online. The chapter also frames user choice in terms of the right to hear, not the right to speak, as a way of navigating the tension between hate speech and human rights without resorting to platform law that sanitizes speech for everyone.


2019, Vol. 72 (1), pp. 1-16
Author(s): Alton Y.K. Chua, Snehasish Banerjee

Purpose: The purpose of this paper is to explore the use of community question answering sites (CQAs) on the topic of terrorism. Three research questions are investigated: What are the dominant themes reflected in terrorism-related questions? How do answer characteristics vary with question themes? How does users’ anonymity relate to question themes and answer characteristics?
Design/methodology/approach: The data comprise 300 questions that attracted 2,194 answers on the community question answering site Yahoo! Answers. Content analysis was employed.
Findings: The questions reflected the community’s information needs, ranging from the lives of extremists to counter-terrorism policies. Answers were laden with negative emotions reflecting hate speech and Islamophobia, and made claims that were rarely verifiable. Users who posted sensitive content generally remained anonymous.
Practical implications: This paper raises awareness of how CQAs are used to exchange information about sensitive topics such as terrorism. It calls for governments and law enforcement agencies to collaborate with major social media companies to develop a process for cross-platform blacklisting of users and content, as well as for identifying those who are vulnerable.
Originality/value: Theoretically, the paper contributes to the academic discourse on terrorism in CQAs by exploring the types of questions asked and the sorts of answers they attract. Methodologically, it enriches the literature on terrorism and social media, which has hitherto mostly drawn data from Facebook and Twitter.

