Regulatory Goldilocks

2021 ◽  
Vol 8 (3) ◽  
pp. 451-494
Author(s):  
Nina Brown

Social media is a valuable tool that has allowed its users to connect and share ideas in unprecedented ways. But this ease of communication has also opened the door for rampant abuse. Indeed, social networks have become breeding grounds for hate speech, misinformation, terrorist activities, and other harmful content. The COVID-19 pandemic, growing civil unrest, and the polarization of American politics have exacerbated this toxicity in recent months and years. Although social platforms engage in content moderation, the criteria for determining what constitutes harmful content are unclear to both their users and the employees tasked with removing it. This lack of transparency has afforded social platforms the flexibility to remove content as it suits them: in the way that best maximizes their profits. But it has also inspired little confidence in social platforms’ ability to solve the problem independently and has left legislators, legal scholars, and the general public calling for a more aggressive—and often a government-led—approach to content moderation. The thorn in any effort to regulate content on social platforms is, of course, the First Amendment. With this in mind, a variety of options have been suggested to mitigate harmful content without running afoul of the Constitution. Many legislators have suggested amending or altogether repealing Section 230 of the Communications Decency Act. Section 230 is a valuable legal shield that immunizes internet service providers—like social platforms—from liability for the content that users post. This approach would likely reduce the volume of online abuses, but it would also have the practical effect of stifling harmless—and even socially beneficial—dialogue on social media. While there is a clear need for some level of content regulation for social platforms, the risks of government regulation are too great.
Yet the current self-regulatory scheme has failed in that it continues to enable an abundance of harmful speech to persist online. This Article explores these models of regulation and suggests a third model: industry self-regulation. Although there is some legal scholarship on social media content moderation, none explore such a model. As this Article will demonstrate, an industry-wide governance model is the optimal solution to reduce harmful speech without hindering the free exchange of ideas on social media.

2020 ◽  
Vol 2 (1) ◽  
pp. 104-115
Author(s):  
Christine W Njuguna ◽  
Joyce Gikandi ◽  
Lucy Kathuri-Ogola ◽  
Joan Kabaria-Muriithi

There is a rise in unprecedented political infractions, disturbances and electoral violence in Africa, with the youth playing a significant role. The study therefore investigated social media use and electoral violence among the youth in Kenya, guided by two objectives: to assess the use of social media platforms among the youth, and to investigate the relationship between social media use and electoral violence among the youth. Framed by the Dependency Theory and the Social Responsibility Theory, the study was carried out in Mathare Constituency, Nairobi County, Kenya. Data collection involved questionnaires, key informant interviews and focus group discussions. Quantitative data were analyzed using descriptive statistics and regression, while qualitative data were transcribed and analyzed. The findings showed that the use of social media platforms for communication has been growing, with WhatsApp becoming the most ‘preferred’ platform in Kenya. The results revealed a strong positive association between social media use and electoral violence among the Kenyan youth in Mathare (R = .812). Moreover, social media use (Facebook, WhatsApp, Twitter, YouTube and Instagram) had strong explanatory power for electoral violence among the Kenyan youth in Mathare (R2 = .659); that is, the regression model explains 65.9 percent of the variance in electoral violence among the Kenyan youth in Mathare Constituency, Nairobi County. The study therefore concluded that there is a relationship between social media use and electoral violence among the Kenyan youth in Mathare. The study recommends that the government embrace and enforce self-regulation mechanisms by Internet service providers to deter incitement. In addition, there should be increased efforts to educate and inform Internet users on the importance of assessing the credibility of information. Promoting productive engagement is also key as an effective instrument for dealing with online hatred.


Significance: The new rules follow a stand-off between Twitter and the central government last month over some posts and accounts. The government has used this stand-off as an opportunity to tighten not only the rules governing social media, including Twitter, WhatsApp, Facebook and LinkedIn, but also those governing other digital service providers, including news publishers and entertainment streaming companies.

Impacts:
- Government moves against dominant social media platforms will boost the appeal of smaller platforms with light or no content moderation.
- Hate speech and harmful disinformation are especially hard to control and curb on smaller platforms.
- The new rules will have a chilling effect on online public discourse, increasing self-censorship (at the very least).
- Government action against online news media would undercut fundamental democratic freedoms and the right to dissent.
- Since US-based companies dominate key segments of the Indian digital market, India’s restrictive rules could mar India-US ties.


2021 ◽  
Author(s):  
Lucas Rodrigues ◽  
Antonio Jacob Junior ◽  
Fábio Lobato

Posts with defamatory content or hate speech are constantly found on social media. The consequences for readers are numerous, not restricted to the psychological impact but extending to the growth of this social phenomenon. With the General Law on the Protection of Personal Data and the Marco Civil da Internet, service providers became responsible for the content on their platforms. Considering the importance of this issue, this paper aims to analyze the content published (news and comments) on the G1 News Portal with techniques based on data visualization and Natural Language Processing, such as sentiment analysis and topic modeling. The results show that even though most of the comments were neutral or negative, whether or not classified as hate speech, the majority of them were accepted by the users.
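The paper's actual NLP pipeline is not reproduced here; as a hypothetical illustration of the kind of lexicon-based sentiment scoring such an analysis of Portuguese-language comments might start from (the word lists, labels, and thresholds are assumptions, not the authors' resources):

```python
# Hypothetical minimal lexicon-based sentiment classifier for comments.
# Real systems would use trained models and far larger lexicons.
POSITIVE = {"bom", "ótimo", "excelente", "good", "great"}
NEGATIVE = {"ruim", "péssimo", "ódio", "bad", "hate"}

def sentiment(comment: str) -> str:
    """Label a comment positive/negative/neutral by lexicon word counts."""
    words = comment.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("excelente reportagem"))        # positive
print(sentiment("comentário ruim e ódio"))      # negative
print(sentiment("sem opinião formada"))         # neutral
```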


Author(s):  
Daniela Stockmann

In public discussions of social media governance, corporations such as Google, Facebook, and Twitter are often first and foremost seen as providers of information and as media. However, social media companies’ business models aim to generate income by attracting a large, growing, and active user base and by collecting and monetising personal data. This has generated concerns with respect to hate speech, disinformation, and privacy. Over time, there has been a trend away from industry self-regulation towards a strengthening of national-level and European Union-level regulations, that is, from soft to hard law. Hence, moving beyond general corporate governance codes, governments are imposing more targeted regulations that recognise these firms’ profound societal importance and wide-reaching influence. The chapter reviews these developments, highlighting the tension between companies’ commercial and public rationales, critiques the current industry-specific regulatory framework, and raises potential policy alternatives.


2019 ◽  
pp. 160-204
Author(s):  
Andrew Murray

This chapter examines defamation cases arising from traditional media sites and user-generated media entries. It first provides an overview of the tort of defamation, and the issue of who is responsible and potentially liable for an online defamatory statement. It then looks at the Defamation Act 2013, considering when defences may be raised to a claim in defamation, and how online publication and republication may result in defamation. Four cases are analysed: Dow Jones v Gutnick, Loutchansky v Times Newspapers, King v Lewis, and Jameel v Dow Jones. The chapter explores intermediary liability, particularly the liability of UK internet service providers, by citing recent decisions on intermediary liability such as Tamiz v Google, Delfi v Estonia, and MTE v Hungary, as well as specific intermediary defences found in the Defamation Act 2013. The chapter concludes by discussing key social media cases such as McAlpine v Bercow and Monroe v Hopkins.


Author(s):  
Eliamani Sedoyeka

In this article, Quality of Experience (QoE) is discussed as experienced by Tanzanian internet users in the second half of 2016. It presents findings of research that aimed, among other things, to establish the QoE of internet services offered by telecommunication companies and other internet service providers in the country. A qualitative approach was used to identify the practical quality-of-experience issues considered important by Tanzanians. Online questionnaires distributed over social media, mainly WhatsApp and Facebook, asked users about their experiences of the services they had been receiving; over 2,000 responses were collected from all districts of Tanzania. Usability, quality of service, price and after-sales support were established as the main issues influencing quality of experience for many. The findings in this article are useful for academics, QoS and QoE researchers, policy makers and ICT professionals.


2019 ◽  
Vol 22 (1) ◽  
pp. 69-80 ◽  
Author(s):  
Stefanie Ullmann ◽  
Marcus Tomalin

In this paper we explore quarantining as a more ethical method for delimiting the spread of Hate Speech via online social media platforms. Currently, companies like Facebook, Twitter, and Google generally respond reactively to such material: offensive messages that have already been posted are reviewed by human moderators if complaints from users are received. The offensive posts are only subsequently removed if the complaints are upheld; therefore, they still cause the recipients psychological harm. In addition, this approach has frequently been criticised for delimiting freedom of expression, since it requires the service providers to elaborate and implement censorship regimes. In the last few years, an emerging generation of automatic Hate Speech detection systems has started to offer new strategies for dealing with this particular kind of offensive online material. Anticipating the future efficacy of such systems, the present article advocates an approach to online Hate Speech detection that is analogous to the quarantining of malicious computer software. If a given post is automatically classified as being harmful in a reliable manner, then it can be temporarily quarantined, and the direct recipients can receive an alert, which protects them from the harmful content in the first instance. The quarantining framework is an example of more ethical online safety technology that can be extended to the handling of Hate Speech. Crucially, it provides flexible options for obtaining a more justifiable balance between freedom of expression and appropriate censorship.
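The quarantining workflow described above — classify a post, temporarily hold it back if the detector flags it as harmful, and alert the recipient in the first instance — can be sketched as follows. The class names, threshold, and alert text are illustrative assumptions, not the authors' implementation:

```python
# Illustrative sketch of a quarantine-style delivery pipeline: a post whose
# (assumed) detector score exceeds a threshold is held and replaced by an
# alert, so the recipient is protected in the first instance.
from dataclasses import dataclass

HARM_THRESHOLD = 0.8  # assumed confidence cutoff for quarantining

@dataclass
class Post:
    text: str
    harm_score: float          # output of an automatic detector (assumed)
    quarantined: bool = False

def deliver(post: Post) -> str:
    """Return what the recipient sees when the post is delivered."""
    if post.harm_score >= HARM_THRESHOLD:
        post.quarantined = True
        # The recipient is warned and can later choose to view or discard.
        return "ALERT: a message addressed to you was quarantined as potentially harmful."
    return post.text

print(deliver(Post("hello there", harm_score=0.1)))
print(deliver(Post("<offensive content>", harm_score=0.95)))
```

The design choice mirrors malware quarantining: the content is withheld rather than deleted outright, which leaves room for the recipient's own judgement and for appeal, balancing protection against censorship.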


2020 ◽  
Vol 7 (1) ◽  
pp. 205395172093513
Author(s):  
Kamel Ajji

This article aims to show the similarities between the financial and tech sectors in their use of and reliance on information and algorithms, and how such dependency affects their attitude towards regulation. Drawing on Pasquale’s recommendations for reform, it sets out a proposal for constant and independent scrutiny of internet service providers.


Obiter ◽  
2021 ◽  
Vol 32 (2) ◽  
Author(s):  
Frans Marx

The article investigates the phenomenon of hate speech on social network sites and gives an overview of the national and international legal instruments available to combat hate speech. After an overview of the nature of hate speech and the early international attempts to curb it, hate speech in South Africa is investigated. The question is posed whether statements of hatred made on the Internet, especially if published from sites such as Facebook that are hosted outside South Africa, can lead to liability for perpetrators in South Africa. International responses to hate speech in cyberspace are then investigated with specific reference to the possible liability of Internet service providers for hate speech posted by third parties on their websites. It is shown that, although service providers in the United States enjoy more protection than those in the European Union, Canada and South Africa, hate speech on social network sites can be legally curbed. It is concluded that the myth of the Internet as a godless, lawless zone can and must be dismissed.


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Obiajulu Joel Nwolu

The study explored the roles of social media in enhancing the acquisition of entrepreneurship skills among Nigerian youths, both during the COVID-19 lockdown of 2020 and in its aftermath. The study was guided by the uses and gratifications theory as well as the social category theory. Using the triangulation method, 10 social media influencers were interviewed and 2,000 copies of a questionnaire were administered to respondents within Anambra State. Findings revealed that 71.4% of respondents acquired skills on social media during the lockdown, with YouTube, Facebook and Instagram as the main media for these acquisitions. While further research is recommended in this area, there is a need for the government to provide appropriate infrastructure for better internet connectivity and to subsidize data costs by reducing the taxation of internet service providers. Furthermore, since youths are the most active users of social media in Nigeria, in addition to providing grants and low-interest loans, the government should use these same social media for sensitization and orientation geared towards effective youth productivity.

