Limits of Trial Publicity and Right to Free Speech: A Diagnostic Appraisal of Influence of the (Social) Media on Judicial Proceedings

2021 ◽  
Author(s):  
SYLVESTER UDEMEZUE


2020 ◽
Vol 37 (2) ◽  
pp. 209-236
Author(s):  
Richard Sorabji

Abstract. I have argued elsewhere that, throughout history, freedom of speech, whether granted to few or many, was granted as bestowing some important benefit. John Stuart Mill, for example, in On Liberty, saw it as enabling us to learn from each other through discussion. By the test of benefit, I here argue that social media funded through trade in our personal data with advertisers, including propagandists, cannot claim to be supporting free speech. We lose our freedoms if the personal data we entrust to online social media are used to target us with information, or disinformation, tailored to persuade different personalities, in order to maximize revenue from advertisers or propagandists. Among the serious consequences described, particularly dangerous because of its effect on democracy, is the use of such targeted advertisements to swing voting campaigns. Control is needed both of the social media and of any political parties that pay social media for differential targeting of voters based on personality. Using UK government documents, I recommend legislation for reform and enforcement.


2019 ◽  
Vol 53 (4) ◽  
pp. 501-527
Author(s):  
Collins Udanor ◽  
Chinatu C. Anyanwu

Purpose Hate speech has in recent times become a troubling development. It has different meanings to different people in different cultures. The anonymity and ubiquity of social media provide a breeding ground for hate speech and make combating it seem like a lost battle. However, what may constitute hate speech in a culturally or religiously neutral society may not be perceived as such in a polarized multi-cultural and multi-religious society like Nigeria. Defining hate speech, therefore, may be contextual. Hate speech in Nigeria may be perceived along ethnic, religious and political boundaries. The purpose of this paper is to check for the presence of hate speech on social media platforms such as Twitter and, where present, to gauge the degree to which it occurs. It also intends to find out what monitoring mechanisms social media platforms like Facebook and Twitter have put in place to combat hate speech. Lexalytics is a term the authors coin from the words "lexical analytics", for the purpose of opinion mining unstructured texts like tweets. Design/methodology/approach This research developed a Python application called polarized opinions sentiment analyzer (POSA), adopting an ego social network analytics technique in which an individual's behavior is mined and described. POSA uses a customized Python N-gram dictionary of local, context-based terms that may be considered hate terms. It then applied the Twitter API to stream tweets from popular and trending Nigerian Twitter handles in politics, ethnicity, religion, social activism, racism, etc., and filtered the tweets against the custom dictionary, using unsupervised classification to label the texts as either positive or negative sentiment. The outcome is visualized using tables, pie charts and word clouds. A similar implementation was also carried out in R-Studio; the two sets of results were compared, and a t-test was applied to determine whether there was a significant difference between them.
The research methodology is both qualitative and quantitative: qualitative in its data classification, and quantitative in computing text-to-vector scores that identify results as either negative or positive. Findings The findings from the two sets of experiments on POSA and R are as follows. In the first experiment, POSA found that the Twitter handles analyzed contained between 33 and 55 percent hate content, while the R results show hate content ranging from 38 to 62 percent. A t-test on the positive and negative scores for POSA and R-Studio yields p-values of 0.389 and 0.289, respectively, at an α of 0.05, implying that there is no significant difference between the POSA and R results. From the second experiment, performed on 11 local handles with 1,207 tweets, the authors deduce the following: the percentage of hate content classified by POSA is 40 percent, while that classified by R is 51 percent; the accuracy of hate speech classification predicted by POSA is 87 percent (86 percent for free speech), while the accuracy predicted by R is 65 percent (74 percent for free speech). This study also reveals that neither Twitter nor Facebook has an automated monitoring system for hate speech, and no benchmark is set for the level of hate content allowed in a text. The monitoring is instead done by humans, whose assessment is usually subjective and sometimes inconsistent. Research limitations/implications This study establishes that hate speech is on the increase on social media. It also shows that hate mongers can be pinned down by the contents of their messages. The POSA system can be used as a plug-in by Twitter to detect and stop hate speech on its platform. The study was limited to public Twitter handles only.
N-grams are effective features for word-sense disambiguation, but the feature vectors they produce can take on enormous proportions, in turn increasing their sparsity. Practical implications The findings of this study show that if urgent measures are not taken to combat hate speech, there could be dire consequences, especially in highly polarized societies that are continually inflamed along religious and ethnic lines. Daily, tempers flare on social media over comments made by participants. This study has also demonstrated that it is possible to implement a technology that can track and terminate hate speech on a micro-blog like Twitter; this can also be extended to other social media platforms. Social implications This study will help promote a more positive society by ensuring that social media are used to the benefit of mankind. Originality/value The findings can be used by social media companies to monitor user behavior and to pin hate crimes to specific persons. Governments and law enforcement bodies can also use the POSA application to track down hate peddlers.
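The dictionary-filtering step described under Design/methodology/approach can be sketched roughly as follows. POSA itself and its N-gram dictionary are not published with the abstract, so the term list, tweets, and function names below are illustrative assumptions, not the authors' code:

```python
# Minimal sketch of unsupervised, dictionary-based classification of tweets:
# a tweet is labeled "negative" if it contains any term from a custom
# context-based dictionary, "positive" otherwise. All entries are placeholders.

HATE_TERMS = {"kill them", "vermin", "wipe them out"}  # hypothetical dictionary

def classify(tweet: str) -> str:
    """Label a tweet by matching it against the custom term dictionary."""
    text = tweet.lower()
    return "negative" if any(term in text for term in HATE_TERMS) else "positive"

def hate_content_pct(tweets) -> float:
    """Percentage of tweets in a stream classified as hate content."""
    labels = [classify(t) for t in tweets]
    return 100.0 * labels.count("negative") / len(labels)

sample = [
    "We should kill them all",      # matches a dictionary term
    "Great turnout at the rally",   # no match
    "They are vermin, every one",   # matches
    "Peaceful protest downtown",    # no match
]
print(hate_content_pct(sample))  # 50.0
```

Because the matching is purely lexical, no labeled training data is needed, which is what makes the classification unsupervised; the trade-off, as the limitations note, is that results depend entirely on the quality of the dictionary.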


2019 ◽  
Vol 5 (2) ◽  
pp. 140
Author(s):  
Suleiman Usman Santuraki

The Information and Communication Technology (ICT) revolution heralding the emergence and dominance of social media has long been viewed as a turning point for free speech and communication. Indeed, social media ordinarily represent the freedom of all people to speech and information. But there is also a side of social media that is often ignored: it serves as a platform for all and sundry to express themselves with little, if any, regulation or legal consequence. This has led to a global explosion of hate speech and fake news. Hate speech normally leads to tension and holds the potential for national or even international crises of untold proportions. It is also likely to scare people away from expressing themselves, for fear of hate-filled responses, and to become a source of fake news. Using doctrinal as well as comparative methodologies, this paper appraises the trend among states of passing or proposing laws to regulate hate speech and fake news; it also appraises the contents of such laws from different countries, with the aim of identifying how they may be used to suppress free speech under the guise of regulating hate speech and fake news. It argues that the alarming trend of hate speech and fake news has presented an opportunity for leaders across the globe to curb free speech. The paper concludes that while the advancement of ICT has done a great deal to advance free speech, it may, because of the spread of hate speech and fake news, also lead to a reversal of that success story.


2020 ◽  
pp. 009102602095450
Author(s):  
Adam M. Brewer

Public organizations are experiencing a burgeoning of workplace challenges involving employee use of social media. Comments, images, or videos, ranging from racist remarks and calls to violence to simple criticism of one's organization and full-on whistleblowing, significantly challenge public organizations' policies for addressing speech that creates discord in the workplace. With the blurring of lines between personal and professional lives, these challenges create uncertainty for public organizations regarding how to maintain the efficient operation of the workplace, deal with the social and political fallout of such incidents, and manage organizational liability. This article performs content analysis on 33 federal lower-court opinions involving speech/social media workplace issues, analyzing the manner in which the lower courts apply free speech precedent to contemporary workplace speech cases. The findings suggest that patterns emerge from the opinions, providing key insights for public managers regarding how better to manage these complex issues.


2020 ◽  
Vol 35 (3) ◽  
pp. 213-229 ◽  
Author(s):  
Richard Rogers

Extreme, anti-establishment actors are increasingly characterized as 'dangerous individuals' by the social media platforms that once aided in making them into 'Internet celebrities'. These individuals (and sometimes groups) are being 'deplatformed' by the leading social media companies such as Facebook, Instagram, Twitter and YouTube for offences such as 'organised hate'. Deplatforming has prompted debate about 'liberal big tech' silencing free speech and taking on the role of editors, but also about whether it is effective and for whom. The research reported here follows certain of these Internet celebrities to Telegram as well as to a larger alternative social media ecology. It enquires empirically into some of the arguments made concerning whether deplatforming 'works' and how the deplatformed use Telegram. It discusses the effects of deplatforming for extreme Internet celebrities, for alternative and mainstream social media platforms, and for the Internet at large. It also touches upon how social media companies' deplatforming is affecting critical social media research, into both the substance of extreme speech and its audiences on mainstream and alternative platforms.


2016 ◽  
Vol 14 (4) ◽  
pp. 350-363 ◽  
Author(s):  
Iftikhar Alam ◽  
Roshan Lal Raina ◽  
Faizia Siddiqui

Purpose The Hon'ble Supreme Court of India, in a landmark judgment, scrapped a draconian law [Section 66 (A)] that gave the police absolute power to put behind bars anybody found posting offensive or annoying comments online. This paper aims to examine people's views on the "Free Speech via Social Media" issue and their attitude towards the way sensitive messages/information are posted, shared and forwarded on social media, especially Facebook. Design/methodology/approach The research was carried out on a sample of 200 social media users, all picked randomly from five Indian states/Union Territories. Data were collected through a questionnaire, and users were contacted through e-mail. The data collected were analyzed with the Kolmogorov–Smirnov (K–S) Z test. Findings The findings indicate that hate posts/messages are on the rise, and more and more users are joining in. Besides, prosecution happens only when the aggrieved party is influential or powerful. Practical implications The findings give a strong insight into the social media behaviour of users in relation to hate content/posts. The study establishes that Indian people are in favour of free speech, but with a sense of restraint and responsibility. The work could form the basis for future research on various aspects of hate speech on social media. Researchers could study the trials and prosecutions that have happened over the past few years and whether punishment has acted as a deterrent. Originality/value The research is likely to be important for those working on freedom of speech or hate speech through social media. Social networking sites such as Facebook would also gain insights into users' perceptions of free and hate speech mechanisms on social media.
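The Kolmogorov–Smirnov (K–S) Z test mentioned under Design/methodology/approach measures the maximum distance between an empirical distribution and a hypothesized one. A minimal sketch of the one-sample K–S statistic follows; the responses are made-up Likert-scale values, since the paper's dataset is not reproduced in the abstract:

```python
# One-sample Kolmogorov-Smirnov statistic: the largest vertical distance
# between the empirical CDF of a sample and a theoretical CDF.

def ks_statistic(sample, cdf):
    """Max |empirical CDF - theoretical CDF| over the sorted sample."""
    xs = sorted(sample)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # The empirical CDF jumps from i/n to (i+1)/n at x; check both sides.
        d = max(d, abs((i + 1) / n - cdf(x)), abs(cdf(x) - i / n))
    return d

# Example: hypothetical 5-point Likert responses tested against a uniform null.
responses = [1, 2, 2, 3, 3, 3, 4, 4, 5, 5]
uniform_cdf = lambda x: min(max(x / 5.0, 0.0), 1.0)
print(round(ks_statistic(responses, uniform_cdf), 3))  # 0.3
```

The statistic is then compared against a critical value (or converted to the Z form reported by statistical packages) to decide whether the observed responses depart significantly from the hypothesized distribution.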


2017 ◽  
Vol 16 (1) ◽  
pp. 12-24 ◽  
Author(s):  
Nicole Behringer ◽  
Kai Sassenberg ◽  
Annika Scholl

Abstract. Knowledge exchange via social media is crucial for organizational success. Yet many employees only read others' contributions without actively contributing their own knowledge. We therefore examined predictors of the willingness to contribute knowledge. Applying social identity theory and expectancy theory to knowledge exchange, we investigated the interplay of users' identification with their organization and the perceived usefulness of a social media tool. In two studies, identification facilitated users' willingness to contribute knowledge, provided that the social media tool seemed useful (vs. not useful). Interestingly, identification also raised the importance of acquiring knowledge collectively, which could in turn compensate for low usefulness of the tool. Hence, considering both social and media factors is crucial to enhancing employees' willingness to share knowledge via social media.


