An Analysis of Fake Narratives on Social Media during 2019 Indonesian Presidential Election

2020 ◽  
Vol 36 (4) ◽  
pp. 351-368
Author(s):  
Vience Mutiara Rumata ◽  
Fajar Kuala Nugraha ◽  

Social media have become a public sphere for political discussion around the world, and Indonesia is no exception. Social media have broadened public engagement, but at the same time they create an inevitable polarization effect, particularly during heightened political situations such as a presidential election. Studies have found a correlation between fake news and political polarization. In this paper, we identify the patterns of fake narratives in Indonesia across three time frames: (1) the presidential campaign (23 September 2018 - 13 April 2019); (2) the vote (14-17 April 2019); and (3) the announcement (21-22 May 2019). We extracted and analyzed a data set consisting of 806,742 Twitter messages, 143 Facebook posts, and 16,082 Instagram posts. We classified 43 fake narratives, and Twitter was the platform most used to distribute fake narratives massively. The accusation that a radical Muslim group stood behind Prabowo and the accusation of Communism directed at the incumbent President Joko Widodo were the two top fake narratives during the campaign on Twitter and Facebook. The distribution of fake narratives targeting Prabowo was larger than that targeting Joko Widodo on the three platforms in this period. On the contrary, the distribution of fake narratives targeting Joko Widodo was significantly larger than that targeting Prabowo during the election and announcement periods. The death threat against Joko Widodo was the top fake narrative on all three platforms. Keywords: fake narratives, Indonesian presidential election, social media, political polarization, post.

2018 ◽  
Vol 73 ◽  
pp. 14006
Author(s):  
Hedi Pudjo Santosa ◽  
Nurul Hasfi ◽  
Triyono Lukmantoro

In the internet era, hoaxes are a real threat to democracy, as they spread misleading and fake information that creates uncertain political communication. During the 2014 Indonesian presidential election, hoaxes spread rapidly through social media. Moreover, in the Indonesian political context, hoaxes are constructed strategically around primordialism issues. This study uses critical discourse analysis to identify the pattern of hoaxes during the 2014 Indonesian presidential election, particularly to show how primordialism constructs an unequal society. The data were taken from political discussions among 8 influential Twitter accounts in the two months before the election. The study found that: 1) hoaxes were produced using many techniques; 2) mainstream online media were involved in the production of hoaxes, particularly by constructing sensational headlines, while fake news was commonly produced and distributed by pseudonymous Twitter accounts; 3) both hoaxes and fake news generally ran on a mechanism of primordialism issues.


2019 ◽  
Vol 8 (1) ◽  
pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.


Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt order in society for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell fake news apart from true news. We use the LIAR and ISOT data sets. We extract highly correlated news items from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
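As an illustration of the similarity-based filtering step described above, the sketch below computes a plain bag-of-words cosine similarity in pure Python and keeps document pairs above an assumed threshold of 0.3. The paper's actual pipeline uses learned word embeddings; the documents and threshold here are hypothetical.

```python
from collections import Counter
import math

def cosine(a, b):
    """Bag-of-words cosine similarity between two documents."""
    ca, cb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(ca[t] * cb[t] for t in set(ca) & set(cb))
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "election commission announces final results",
    "final election results announced by commission",
    "new smartphone model features a better camera",
]
# keep only pairs whose similarity exceeds an assumed threshold of 0.3
related = [(i, j) for i in range(len(docs))
           for j in range(i + 1, len(docs))
           if cosine(docs[i], docs[j]) > 0.3]
```

With word embeddings instead of raw counts, the same thresholding idea groups news items into topical domains before the auto-encoder stage.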


2017 ◽  
Vol 37 (1) ◽  
pp. 57-65 ◽  
Author(s):  
Chamil Rathnayake ◽  
Wayne Buente

The role of automated or semiautomated social media accounts, commonly known as “bots,” in social and political processes has gained significant scholarly attention. The current body of research discusses how bots can be designed to achieve specific purposes as well as instances of unexpected negative outcomes of such use. We suggest that the interplay between social media affordances and user practices can result in incidental effects from automated agents. We examined a Twitter network data set with 1,782 nodes and 5,640 edges to demonstrate the engagement and outreach of a retweeting bot called Siripalabot that was popular among Sri Lankan Twitter users. The bot served the simple function of retweeting tweets with hashtags #SriLanka and #lk to its follower network. However, the co-use of #SriLanka and/or #lk with #PresPollSL, a hashtag used to discuss politics related to Sri Lanka’s presidential election in 2015, resulted in the bot incidentally amplifying the political voice of less engaged actors. The analysis demonstrated that the bot dominated the network in terms of engagement (out-degree) and the ability to connect distant clusters of actors (betweenness centrality), while more traditional actors, such as the main election candidates and news accounts, indicated more prestige (in-degree) and power (eigenvector centrality). We suggest that the study of automated agents should include designer intentions, the design and behavior of automated agents, user expectations, as well as unintended and incidental effects of interaction.
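The degree-based centrality notions used in the analysis can be illustrated on a toy retweet network: out-degree counts how many accounts a node retweets (the bot's engagement), while in-degree counts how often a node is retweeted (the candidates' prestige). The edge list below is invented for illustration; betweenness and eigenvector centrality, which the study also reports, are typically computed with a graph library such as networkx and are omitted here.

```python
from collections import Counter

# hypothetical directed retweet edges: (source, target) means source retweets target
edges = [
    ("bot", "user1"), ("bot", "user2"), ("bot", "user3"),
    ("user1", "candidate"), ("user2", "candidate"), ("user3", "news"),
]

out_deg = Counter(s for s, _ in edges)  # engagement: who does the retweeting
in_deg = Counter(t for _, t in edges)   # prestige: who gets retweeted
```

In this toy graph the bot dominates out-degree while the candidate account dominates in-degree, mirroring the pattern the study reports at scale.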


2021 ◽  
Vol 7 (3) ◽  
pp. 205630512110478
Author(s):  
Dam Hee Kim ◽  
Brian E. Weeks ◽  
Daniel S. Lane ◽  
Lauren B. Hahn ◽  
Nojin Kwak

Social media, as sources of political news and sites of political discussion, may be novel environments for political learning. Many early reports, however, failed to find that social media use promotes gains in political knowledge. Prior research has not yet fully explored the possibility based on the communication mediation model that exposure to political information on social media facilitates political expression, which may subsequently encourage political learning. We find support for this mediation model in the context of Facebook by analyzing a two-wave survey prior to the 2016 U.S. presidential election. In particular, sharing and commenting, not liking or opinion posting, may facilitate political knowledge gains.
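The mediation logic above can be sketched with simple OLS slopes on synthetic data: exposure to political information predicts expression (path a), expression predicts knowledge (path b), and the indirect effect is the product a×b. This is only a bare illustration of the communication mediation model; the survey's actual analysis uses panel data and covariate controls that are not reproduced here.

```python
def slope(x, y):
    """Ordinary least-squares slope of y on a single predictor x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# synthetic scores illustrating exposure -> expression -> knowledge
exposure = [1, 2, 3, 4, 5]
expression = [2 * e for e in exposure]    # path a
knowledge = [3 * x for x in expression]   # path b

a = slope(exposure, expression)    # effect of exposure on expression
b = slope(expression, knowledge)   # effect of expression on knowledge
indirect = a * b                   # mediated (indirect) effect of exposure
```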


2018 ◽  
Vol 24 (2) ◽  
pp. 135-145
Author(s):  
Geraldine Panapasa ◽  
Shailendra Singh

Rapidly changing technology and a transforming political situation across the Pacific have seen a noticeable shift towards harsher media legislation, as governments facing unprecedented scrutiny try to contain the fallout from social media, citizen journalism and fake news. These developments were at the heart of the discussions at the Pacific Islands Media Association’s PINA 2018 Summit in Nuku’alofa, Tonga, in May. The biennial event is the largest gathering of Pacific Islands journalists to contemplate issues of mutual concern, formulate collective responses and chart the way forward. This article reviews this year’s meeting, where discussions centred on the opportunities and challenges of the expanding social media sphere, as well as taking a fresh look at some perennial problems, such as corruption, political pressure and gender violence.


2019 ◽  
Author(s):  
Ziv Epstein ◽  
Gordon Pennycook ◽  
David Gertler Rand

How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be unable to identify misinformation sites due to motivated reasoning or a lack of expertise? And will they “game” this crowdsourcing mechanism to promote content that aligns with their partisan agendas? We conducted a survey experiment in which N = 984 Americans indicated their trust in numerous news sites. Half of the participants were told that their survey responses would inform social media ranking algorithms, creating a potential incentive to misrepresent their beliefs. Participants trusted mainstream sources much more than hyper-partisan or fake news sources, and their ratings were highly correlated with professional fact-checker judgments. Critically, informing participants that their responses would influence ranking algorithms did not diminish this high level of discernment, despite slightly increasing the political polarization of trust ratings.


2022 ◽  
Vol 6 (1) ◽  
pp. 3
Author(s):  
Riccardo Cantini ◽  
Fabrizio Marozzo ◽  
Domenico Talia ◽  
Paolo Trunfio

Social media platforms are part of everyday life, allowing the interconnection of people around the world in large discussion groups relating to every topic, including important social or political issues. Therefore, social media have become a valuable source of information-rich data, commonly referred to as Social Big Data, effectively exploitable to study the behavior of people, their opinions, moods, interests and activities. However, these powerful communication platforms can also be used to manipulate conversation, polluting online content and altering the popularity of users through spamming activities and the spreading of misinformation. Recent studies have shown the use on social media of automated accounts, known as social bots, which pose as legitimate users by imitating human behavior in order to influence discussions of any kind, including political issues. In this paper we present a new methodology, TIMBRE (Time-aware opInion Mining via Bot REmoval), aimed at discovering the polarity of social media users during election campaigns characterized by the rivalry of political factions. The methodology is temporally aware and relies on a keyword-based classification of posts and users. Moreover, it recognizes and filters out data produced by social media bots, which aim to alter public opinion about political candidates, thus avoiding heavily biased information. The proposed methodology was applied to a case study analyzing the polarization of a large number of Twitter users during the 2016 US presidential election. The results show the benefits of both removing bots and taking temporal aspects into account in the forecasting process, demonstrating the accuracy and effectiveness of the proposed approach. Finally, we investigated how the presence of social bots may affect political discussion in the 2016 US presidential election, analyzing the main differences between human and artificial political support and estimating the influence of social bots on legitimate users.
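A minimal sketch of the keyword-based polarity step with bot filtering, using invented faction keyword lists and a hypothetical set of bot account IDs; TIMBRE's real lexicons, temporal weighting, and bot detector are not reproduced here.

```python
# hypothetical keyword lists for two rival factions and a hypothetical
# bot set (assumed to come from a separate bot-detection stage)
FACTION_A = {"trump", "maga"}
FACTION_B = {"hillary", "imwithher"}
BOTS = {"acct42"}

def user_polarity(user_id, posts):
    """Classify a user's polarity from keyword hits, discarding bots."""
    if user_id in BOTS:
        return None  # bot-generated content is filtered out entirely
    a = sum(any(k in p.lower() for k in FACTION_A) for p in posts)
    b = sum(any(k in p.lower() for k in FACTION_B) for p in posts)
    if a == b:
        return "neutral"
    return "A" if a > b else "B"
```

Removing bot accounts before aggregating polarity counts is what keeps the faction-level estimates from being inflated by artificial support.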


Author(s):  
Srishti Sharma ◽  
Vaishali Kalra

Owing to the rapid explosion of social media platforms in the past decade, we spread and consume information via the internet at an expeditious rate. This has caused an alarming proliferation of fake news on social networks, and the global nature of those networks has facilitated its international spread. Fake news has been shown to increase political polarization and partisan conflict, and it is found to be more rampant on social media than in mainstream media. The problem of fake news is garnering a lot of attention and research effort. In this work, we tackle the spread of fake news via tweets, performing fake news classification using both user characteristics and tweet text, and thus providing a holistic solution for fake news detection. To classify user characteristics, we use the XGBoost algorithm, an ensemble of decision trees built with the boosting method. To classify the tweet text, we apply various natural language processing techniques to preprocess the tweets and then use a sequential neural network and the state-of-the-art BERT transformer. The models are then evaluated and compared with various baseline models to show that our approach effectively tackles this problem.
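A minimal sketch of two pieces of such a pipeline: a tweet-preprocessing helper (lowercasing, stripping URLs and @-mentions) and a hypothetical late-fusion step that combines the user-feature and text classifiers' fake-probabilities. The paper does not specify how the two models' outputs are combined, so the equal weighting below is an assumption, and the probabilities are invented.

```python
import re

def preprocess(tweet):
    """Lowercase, strip URLs and @-mentions, collapse whitespace."""
    t = tweet.lower()
    t = re.sub(r"https?://\S+", "", t)
    t = re.sub(r"@\w+", "", t)
    return re.sub(r"\s+", " ", t).strip()

def fuse(user_prob, text_prob, w=0.5):
    """Weighted late fusion of the two classifiers' fake-probabilities (assumed)."""
    return w * user_prob + (1 - w) * text_prob

cleaned = preprocess("BREAKING: @user claims X resigned http://t.co/abc")
label = "fake" if fuse(0.8, 0.6) > 0.5 else "real"
```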

