An Economic Analysis of Banning TikTok: How to Weigh the Competing Interests of National Security and Free Speech in Social Media Platforms

2020 ◽  
Author(s):  
Cara Groseth


2021 ◽  
Vol 1 ◽  
pp. 31
Author(s):  
Charilaos Papaevangelou

This study presents a comprehensive yet non-exhaustive overview of the literature on the concepts of regulation and governance, and connects them to scholarly work on social media platforms’ content regulation. The paper provides fundamental definitions of regulation and governance, along with a critique of polycentricity, in order to contextualise the discussion around platform governance and online content regulation. Regulation is framed here as a governance mechanism within a polycentric governance model in which stakeholders have competing interests, even if these sometimes coincide. Moreover, where traditional governance literature conceptualised stakeholders as a triangle, this article proposes imagining them as overlapping circles of governance clusters with competing interests, going beyond the triad of public, private and non-governmental actors. Finally, the paper contends that there is a timely need to reimagine the way in which we understand and study phenomena appertaining to public discourse by adopting the platform governance perspective, which is framed as an advancement of internet governance. In short, the article argues for studying the governance of online content and social media platforms not as a sub-section of internet governance but as a conceptual evolution with existential stakes.


Author(s):  
S. Uma ◽  
SenthilKumar T

Using social media has become an integral part of life for people of different demographics for information exchange, searching, maintaining contact networks, marketing, locating job opportunities and more. Social networking is used for education, research, business, advertising and entertainment. Social media platforms are prone to cybercrime, which, according to the National Security Council Report, is a threat not only to the individual user but to national and international security. With the advent of big data storage and analytics capabilities, decision making is a challenging problem and requires smarter machines to organize data faster, make better sense of it, discover insights, and learn, adapt and improve over time without direct programming. Cognitive computing makes it easier to unveil the patterns hidden in unstructured data and to make more informed decisions on consequential matters. This chapter discusses the challenges and opportunities in social mining services and the applications of cognitive technology to national security.


Author(s):  
Soraya Chemaly

The toxicity of online interactions presents unprecedented challenges to traditional free speech norms. The scope and amplification properties of the internet give new dimension and power to hate speech, rape and death threats, and denigrating and reputation-destroying commentary. Social media companies and internet platforms, all of which regulate speech through moderation processes every day, walk the fine line between censorship and free speech with every decision they make, and they make millions of such decisions a day. This chapter will explore how a lack of diversity in the tech industry affects the design and regulation of products and, in so doing, disproportionately harms the free speech of traditionally marginalized people. During the past year there has been an explosion of research about, and public interest in, the tech industry’s persistent diversity problems. At the same time, the pervasiveness of online hate, harassment, and abuse has become evident. These problems come together on social media platforms that have institutionalized and automated the perspectives of privileged male experiences of speech and violence. The tech sector’s male dominance and the sex segregation and hierarchies of its workforce result in serious and harmful effects globally on women’s safety and free expression.


2019 ◽  
Vol 53 (4) ◽  
pp. 501-527
Author(s):  
Collins Udanor ◽  
Chinatu C. Anyanwu

Purpose: Hate speech has in recent times become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms such as Twitter and, where it is present, to determine to what degree it occurs. It also intends to find out what monitoring mechanisms social media platforms such as Facebook and Twitter have put in place to combat hate speech. Lexalytics is a term coined by the authors from the words "lexical analytics" for the purpose of opinion mining unstructured texts like tweets.

Design/methodology/approach: This research developed Python software called the polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual’s behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applied the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filtered the tweets against the custom dictionary, using unsupervised classification of the texts as either positive or negative sentiments. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out in R-Studio, the results were compared, and a t-test was applied to determine whether there was a significant difference between them. The research methodology can be classified as both qualitative and quantitative: qualitative in terms of data classification, and quantitative in terms of being able to identify the results as either negative or positive from the computation of text to vector.

Findings: The findings from two sets of experiments on POSA and R are as follows. In the first experiment, the POSA software found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. Performing a t-test on both positive and negative scores for POSA and R-Studio reveals p-values of 0.389 and 0.289, respectively, at an α value of 0.05, implying that there is no significant difference between the results from POSA and R. From the second experiment, performed on 11 local handles with 1,207 tweets, the authors deduce the following: the percentage of hate content classified by POSA is 40 percent, while the percentage classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent, while that of free speech is 86 percent; and the accuracy of hate speech classification predicted by R is 65 percent, while that of free speech is 74 percent. This study reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and no benchmark is set to decide the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent.
Research limitations/implications: This study establishes that hate speech is on the increase on social media. It also shows that hate mongers can actually be pinned down through the contents of their messages. The POSA system can be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only. N-grams are effective features for word-sense disambiguation, but when using N-grams the feature vector can take on enormous proportions, which in turn increases the sparsity of the feature vectors.

Practical implications: The findings of this study show that if urgent measures are not taken to combat hate speech there could be dire consequences, especially in highly polarized societies that are constantly inflamed along religious and ethnic lines. On a daily basis, tempers flare on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech on a micro-blog like Twitter. This can also be extended to other social media platforms.

Social implications: This study will help to promote a more positive society, ensuring that social media is used to the benefit of mankind.

Originality/value: The findings can be used by social media companies to monitor user behaviors and to pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.
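As a rough illustration of the pipeline described in the abstract above (matching tweets against a custom dictionary of hate terms, labelling them positive or negative without supervision, and comparing the Python and R results with a t-test), the following Python sketch uses placeholder terms, tweets and scores; it is not the authors' POSA code and its values are not data from the study:

```python
from collections import Counter
from scipy import stats

# Placeholder dictionary of context-specific hate terms (hypothetical,
# not the authors' N-gram dictionary).
HATE_TERMS = {"hate term", "slur_a", "slur_b"}

def classify(tweet):
    """Label a tweet 'negative' if any unigram or bigram matches the dictionary."""
    tokens = tweet.lower().split()
    bigrams = {" ".join(pair) for pair in zip(tokens, tokens[1:])}
    return "negative" if (set(tokens) | bigrams) & HATE_TERMS else "positive"

tweets = [
    "this is a hate term aimed at a group",    # hypothetical examples
    "a perfectly civil remark about politics",
]
labels = Counter(classify(t) for t in tweets)
hate_share = labels["negative"] / len(tweets)
print(f"hate content: {hate_share:.0%}")

# Compare per-handle hate scores from two tools (e.g. a Python and an R
# implementation) with an independent-samples t-test; a p-value above 0.05
# indicates no significant difference, as the abstract reports.
python_scores = [0.33, 0.41, 0.55, 0.40]       # hypothetical values
r_scores = [0.38, 0.47, 0.62, 0.51]
t_stat, p_value = stats.ttest_ind(python_scores, r_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```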


2020 ◽  
Vol 59 (1) ◽  
pp. 428-443 ◽  
Author(s):  
Piia Varis

Since its inception, research on ‘digital populism’ has focused mainly on the savviness of populist movements and politicians in their use of social media. The focus in this paper is different: we know quite a bit already about what populists do on and through social media, but very little has been written about what populists say about social media, that is, how they frame them as environments for political communication, and with what kinds of implications. Social media platforms such as Facebook and Twitter have not only become central players in present-day debates regarding free speech, political correctness and truth/fake, but have also become part of (populist) political discourse in terms of their content moderation policies and interventions, or lack thereof. I will explore these issues through an examination of Donald Trump’s discourse on social media as an environment for political communication, and on their moderation policies.


2021 ◽  
pp. 265-292
Author(s):  
Evelyn Douek

The current system for monitoring and removal of foreign election interference on social media is a free speech blind spot. Social media platforms’ standards for what constitutes impermissible interference are vague, enforcement is seemingly ad hoc and inconsistent, and the role governments play in deciding what speech should be taken down is unclear. This extraordinary opacity—at odds with the ordinary requirements of respect for free speech—has been justified by a militarized discourse that paints such interference as highly effective, and “foreign” speech as uniquely pernicious. But, in fact, evidence of such campaigns’ effectiveness is limited, and the singling out and denigration of “foreign” speech is at odds with the traditional justifications for free expression. Hiding in the blind spot created by this foreign-threat, securitized framing are more pervasive and fundamental questions about online public discourse, such as how to define appropriate norms of online behavior more generally, who should decide them, and how they should be enforced. Without examining and answering these underlying questions, the goal that removing foreign election interference on social media is meant to achieve—re-establishing trust in the online public sphere—will remain unrealized.


2021 ◽  
pp. 026839622110133
Author(s):  
Kai Riemer ◽  
Sandra Peter

Social media platforms, such as Facebook, are today’s agoras, the spaces where public discourse takes place. Freedom of speech on social media has thus become a matter of concern, with calls for better regulation. Public debate revolves around content moderation, seen by some as necessary to remove harmful content, yet as censorship by others. In this paper we argue that the current debate is exclusively focused on the speaking side of speech but overlooks an important way in which platforms have come to interfere with free speech on the audience side. Rather than simply reaching one’s follower network, speech on social media is now organised by algorithms with the aim of increasing user engagement and marketability for targeted advertising. The result is that audiences for speech are now decided algorithmically, a phenomenon we term ‘algorithmic audiencing’. We put forward algorithmic audiencing as a discovery, a novel phenomenon that has so far been overlooked. We show that it interferes with free speech in unprecedented ways not possible in pre-digital times, by amplifying or suppressing speech for economic gain, which in turn distorts the free and fair exchange of ideas in public discourse. When black-boxed algorithms determine who we speak to, the problem for free speech shifts from ‘what can be said’ to ‘what will be heard’ and ‘by whom’. We must urgently problematize the audience side of speech if we want to truly understand, and regulate, free speech on social media. For IS research, algorithmic audiencing opens up entirely new research avenues.
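The mechanism the authors call ‘algorithmic audiencing’ can be sketched in a few lines of Python: instead of a post simply reaching the speaker’s follower network, a ranking function scores each follower by predicted engagement and advertising value, and only the top slice becomes the audience. The weights, fields and data below are hypothetical illustrations, not any platform’s actual ranking algorithm:

```python
from dataclasses import dataclass

@dataclass
class Follower:
    name: str
    predicted_engagement: float   # e.g. modelled probability of a click or reply
    ad_value: float               # marketability for targeted advertising

def select_audience(followers, k=2):
    """Return the k followers the (hypothetical) ranking chooses to show the post to."""
    ranked = sorted(followers,
                    key=lambda f: f.predicted_engagement * f.ad_value,
                    reverse=True)
    return ranked[:k]

followers = [Follower("a", 0.9, 0.2), Follower("b", 0.4, 0.9), Follower("c", 0.1, 0.1)]
audience = select_audience(followers)
# The post is amplified to these users and effectively suppressed for the rest:
print([f.name for f in audience])   # ['b', 'a']
```

In such a setup, who is heard is decided by the scoring function rather than by the speaker, which is the shift from ‘what can be said’ to ‘what will be heard’ and ‘by whom’ that the paper describes.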


2020 ◽  
Vol 35 (3) ◽  
pp. 213-229 ◽  
Author(s):  
Richard Rogers

Extreme, anti-establishment actors are being characterized increasingly as ‘dangerous individuals’ by the social media platforms that once aided in making them into ‘Internet celebrities’. These individuals (and sometimes groups) are being ‘deplatformed’ by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for such offences as ‘organised hate’. Deplatforming has prompted debate about ‘liberal big tech’ silencing free speech and taking on the role of editors, but also about the questions of whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming ‘works’ and how the deplatformed use Telegram. It discusses the effects of deplatforming for extreme Internet celebrities, alternative and mainstream social media platforms and the Internet at large. It also touches upon how social media companies’ deplatforming is affecting critical social media research, both into the substance of extreme speech and into its audiences on mainstream and alternative platforms.


Significance: It is illegal for an employer to allow sexual harassment in the workplace, and federal and state legislatures face rising pressure to curb it in workplaces, colleges and their own chambers. Social media platforms, the main location of sexual harassment, are failing to stop it.

Impacts: The 'metaverse' of virtual and augmented reality will have the potential to escalate online sexual harassment to new levels. Addressing online sexual harassment under the same criminal laws as physical sexual assault, or as a hate crime, is a remote prospect. Courts will expand the definition of where intent to threaten someone encroaches on online free speech.


2021 ◽  
pp. 146144482110388
Author(s):  
Ryan Kor-Sins

In recent years, social media platforms such as Twitter have removed users that espouse alt-right narratives of White nationalism and xenophobia from their platforms. This mass removal has caused alt-right users to migrate in droves to alternative social media sites, such as Gab. This migration reflects the “platform branding” of these social media platforms, which dictates users’ choices of where to migrate based on the affordances and culture of a given site. Using heterogeneous engineering, this article analyzes the contextual history, language, and technological affordances of Twitter, Reddit, and Gab. The article finds that Twitter’s focus on politics and civil conversation is inhospitable to alt-right content. Reddit’s somewhat neutral positioning and decentralized moderation system make alt-right content possible but unpopular. Finally, Gab provides a haven for alt-right beliefs, constructing its platform around “free speech” and alt-right extremism. Platforms embody holistic brand images through contextual, linguistic, and technological features.

