Will the crowd game the algorithm? Using layperson judgments to combat misinformation on social media by downranking distrusted sources

Author(s):  
Ziv Epstein ◽  
Gordon Pennycook ◽  
David Gertler Rand

How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be unable to identify misinformation sites due to motivated reasoning or lack of expertise? And will they “game” this crowdsourcing mechanism to promote content that aligns with their partisan agendas? We conducted a survey experiment in which N = 984 Americans indicated their trust in numerous news sites. Half of the participants were told that their survey responses would inform social media ranking algorithms, creating a potential incentive to misrepresent their beliefs. Participants trusted mainstream sources much more than hyper-partisan or fake news sources, and their ratings were highly correlated with professional fact-checker judgments. Critically, informing participants that their responses would influence ranking algorithms did not diminish this high level of discernment, despite slightly increasing the political polarization of trust ratings.
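The downranking mechanism this study probes can be sketched in a few lines. The linear weighting, source names, and trust values below are illustrative assumptions, not the actual ranking algorithm of any platform or the paper's model:

```python
# Hypothetical sketch of trust-based downranking: feed items are reordered
# by engagement score scaled by the crowd's average trust in the source.
# The linear weighting and all names/values here are illustrative assumptions.

def rank_feed(items, trust, default_trust=0.5):
    """Sort feed items by score weighted by crowdsourced source trust (0..1)."""
    return sorted(
        items,
        key=lambda item: item["score"] * trust.get(item["source"], default_trust),
        reverse=True,
    )

# Toy crowd ratings: a mainstream outlet vs. a distrusted fake-news site.
trust = {"mainstream.example": 0.9, "fakenews.example": 0.1}
feed = [
    {"source": "fakenews.example", "score": 100},   # viral but distrusted
    {"source": "mainstream.example", "score": 40},  # less viral, trusted
]
ranked = rank_feed(feed, trust)
```

Under this toy weighting, the distrusted item's effective score falls from 100 to 10 while the trusted outlet's becomes 36, so the trusted outlet is shown first despite lower raw engagement.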

2019 ◽  
Vol 8 (1) ◽  
pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.


Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt order in society for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell apart fake news from true news. We use the LIAR and ISOT data sets. We extract highly correlated news data from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
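The cosine-similarity step named above can be illustrated with a minimal sketch; the three-dimensional vectors below are toy stand-ins for the word-embedding features the paper uses, not real embeddings:

```python
import math

# Minimal sketch of the cosine-similarity step: highly correlated article
# vectors are grouped so domains can be separated by central topic. The toy
# 3-d vectors stand in for real word-embedding features.

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

doc_a = [0.9, 0.1, 0.0]  # toy embedding of one article
doc_b = [0.8, 0.2, 0.0]  # near-duplicate coverage of the same topic
doc_c = [0.0, 0.1, 0.9]  # article on an unrelated topic

sim_ab = cosine_similarity(doc_a, doc_b)  # close to 1: same topic cluster
sim_ac = cosine_similarity(doc_a, doc_c)  # near 0: different cluster
```

Articles whose pairwise similarity exceeds some threshold would land in the same topical cluster; the threshold itself is a tuning choice the abstract does not specify.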


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn the attention of researchers seeking to provide a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was used to assess the efficiency of the proposed model. The Bag of Words (BoW) model is applied with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. Reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) achieves the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model’s performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% compared with previous works on the same dataset.
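The TF-IDF term weighting named above is a standard scheme; a bare-bones version looks like the sketch below. This is one of several possible weighting variants, not the authors' exact pipeline, and the wrapper-based HHO feature-selection step is omitted:

```python
import math
from collections import Counter

# Bare-bones TF-IDF weighting over tokenized documents: one of the BoW
# term-weighting schemes mentioned above, shown for illustration only.

def tf_idf(docs):
    """Return one {term: tf-idf weight} dict per tokenized document."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))  # document frequency
    weighted = []
    for doc in docs:
        tf = Counter(doc)
        weighted.append(
            {term: (count / len(doc)) * math.log(n / df[term])
             for term, count in tf.items()}
        )
    return weighted

docs = [["fake", "news", "alert"], ["real", "news", "update"]]
weights = tf_idf(docs)
# "news" appears in every document, so its idf (and hence weight) is zero,
# while discriminative terms like "fake" keep a positive weight.
```

A feature selector such as binary HHO would then search over subsets of these weighted terms for the combination that maximizes classifier performance.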


2021 ◽  
pp. 194016122110091
Author(s):  
Magdalena Wojcieszak ◽  
Ericka Menchen-Trevino ◽  
Joao F. F. Goncalves ◽  
Brian Weeks

The online environment dramatically expands the number of ways people can encounter news but there remain questions of whether these abundant opportunities facilitate news exposure diversity. This project examines key questions regarding how internet users arrive at news and what kinds of news they encounter. We account for a multiplicity of avenues to news online, some of which have never been analyzed: (1) direct access to news websites, (2) social networks, (3) news aggregators, (4) search engines, (5) webmail, and (6) hyperlinks in news. We examine the extent to which each avenue promotes news exposure and also exposes users to news sources that are left leaning, right leaning, and centrist. When combined with information on individual political leanings, we show the extent of dissimilar, centrist, or congenial exposure resulting from each avenue. We rely on web browsing history records from 636 social media users in the US paired with survey self-reports, a unique data set that allows us to examine both aggregate and individual-level exposure. Visits to news websites account for about 2 percent of the total number of visits to URLs and are unevenly distributed among users. The most widespread ways of accessing news are search engines and social media platforms (and hyperlinks within news sites once people arrive at news). The two former avenues also increase dissimilar news exposure, compared to accessing news directly, yet direct news access drives the highest proportion of centrist exposure.


2021 ◽  
pp. 1-41
Author(s):  
Donato VESE

Governments around the world are strictly regulating information on social media in the interests of addressing fake news. There is, however, a risk that the uncontrolled spread of information could increase the adverse effects of the COVID-19 health emergency through the influence of false and misleading news. Yet governments may well use health emergency regulation as a pretext for implementing draconian restrictions on the right to freedom of expression, as well as increasing social media censorship (i.e. chilling effects). This article seeks to challenge the stringent legislative and administrative measures governments have recently put in place in order to analyse their negative implications for the right to freedom of expression and to suggest different regulatory approaches in the context of public law. These controversial government policies are discussed in order to clarify why freedom of expression cannot be allowed to be jeopardised in the process of trying to manage fake news. Firstly, an analysis of the legal definition of fake news in academia is presented in order to establish the essential characteristics of the phenomenon (Section II). Secondly, the legislative and administrative measures implemented by governments at both international (Section III) and European Union (EU) levels (Section IV) are assessed, showing how they may undermine a core human right by curtailing freedom of expression. Then, starting from the premise of social media as a “watchdog” of democracy and moving on to the contention that fake news is a phenomenon of “mature” democracy, the article argues that public law already protects freedom of expression and ensures its effectiveness at the international and EU levels through some fundamental rules (Section V).
There follows a discussion of the key regulatory approaches, and, as alternatives to government intervention, self-regulation and especially empowering users are proposed as strategies to effectively manage fake news by mitigating the risks of undue interference by regulators in the right to freedom of expression (Section VI). The article concludes by offering some remarks on the proposed solution and in particular by recommending the implementation of reliability ratings on social media platforms (Section VII).


2020 ◽  
Vol 36 (4) ◽  
pp. 351-368
Author(s):  
Vience Mutiara Rumata ◽  
◽  
Fajar Kuala Nugraha ◽  

Social media have become a public sphere for political discussion around the world, and Indonesia is no exception. Social media have broadened public engagement but, at the same time, create an inevitable polarization effect, particularly during heightened political situations such as a presidential election. Studies have found a correlation between fake news and political polarization. In this paper, we identify the patterns of fake narratives in Indonesia in three different time frames: (1) the presidential campaign (23 September 2018 - 13 April 2019); (2) the vote (14-17 April 2019); (3) the announcement (21-22 May 2019). We extracted and analyzed a data set consisting of 806,742 Twitter messages, 143 Facebook posts, and 16,082 Instagram posts. We classified 43 fake narratives, with Twitter being the platform most used to distribute fake narratives massively. The accusation that a radical Muslim group was behind Prabowo and the accusation of Communism directed at the incumbent President Joko Widodo were the two top fake narratives during the campaign on Twitter and Facebook. The distribution of fake narratives targeting Prabowo was larger than that targeting Joko Widodo on the three platforms in this period. On the contrary, the distribution of fake narratives targeting Joko Widodo was significantly larger than that targeting Prabowo during the election and announcement periods. The death threat against Joko Widodo was the top fake narrative on these three platforms. Keywords: Fake narratives, Indonesian presidential election, social media, political polarization, post.


Author(s):  
Fakhra Akhtar ◽  
Faizan Ahmed Khan

In the digital age, fake news has become a well-known phenomenon. The spread of false information is often used to confuse mainstream media and political opponents, and can lead to social media wars, hateful arguments, and debates. Fake news blurs the distinction between real and false information and is often spread on social media, resulting in negative views and opinions. Earlier research describes how false propaganda is used to create false stories in mainstream media in order to cause revolt and tension among the masses. The Digital Rights Foundation (DRF) report, which builds on the experiences of 152 journalists and activists in Pakistan, finds that more than 88% of the participants consider social media platforms the worst source of information, with Facebook being the absolute worst. The dataset used in this paper relates to real and fake news detection. The objective of this paper is to determine the accuracy and precision over the entire dataset. The results are visualized as graphs and the analysis was done using Python. The model achieves an accuracy of 95.26%, a precision of 95.79%, a recall of 94.56%, and an F-measure of 95.17%. The predictions comprise 296 true positives, 308 true negatives, 17 false positives, and 13 false negatives. This research recommends that the authenticity of news be analyzed before forming an opinion; sharing fake news or false information is unethical, and journalists and news consumers alike should act responsibly when sharing any news.
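The reported figures follow from standard confusion-matrix arithmetic. The sketch below recomputes them, reading the reported 296 and 308 counts as true positives and true negatives respectively (an assumption about how the abstract labels them):

```python
# Standard confusion-matrix arithmetic behind the figures quoted above,
# reading the reported counts as 296 true positives, 308 true negatives,
# 17 false positives, and 13 false negatives (a labeling assumption).

def classification_metrics(tp, tn, fp, fn):
    """Accuracy, precision, recall and F-measure from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = classification_metrics(tp=296, tn=308, fp=17, fn=13)
# Accuracy is 604/634 (about 95.27%) and F-measure 592/622 (about 95.18%);
# precision and recall come out near 94.6% and 95.8%, and the two swap if
# the other class is treated as the positive one.
```

The slight differences from the quoted percentages are rounding effects, and which of the 94.6%/95.8% pair is reported as precision depends on which class the authors designated as positive.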


Author(s):  
Kristy A. Hesketh

This chapter explores the Spiritualist movement and its rapid growth due to the formation of mass media and compares these events with the current rise of fake news in the mass media. The technology of cheaper publications created a media platform that featured stories about Spiritualist mediums and communications with the spirit world. These articles were published in newspapers next to regular news creating a blurred line between real and hoax news stories. Laws were later created to address instances of fraud that occurred in the medium industry. Today, social media platforms provide a similar vessel for the spread of fake news. Online fake news is published alongside legitimate news reports leaving readers unable to differentiate between real and fake articles. Around the world countries are actioning initiatives to address the proliferation of false news to prevent the spread of misinformation. This chapter compares the parallels between these events, how hoaxes and fake news begin and spread, and examines the measures governments are taking to curb the growth of misinformation.


2020 ◽  
Vol 17 (167) ◽  
pp. 20200020
Author(s):  
Michele Coscia ◽  
Luca Rossi

Many people view news on social media, yet the production of news items online has come under fire because of the common spreading of misinformation. Social media platforms police their content in various ways. Primarily they rely on crowdsourced ‘flags’: users signal to the platform that a specific news item might be misleading and, if they raise enough of them, the item will be fact-checked. However, real-world data show that the most flagged news sources are also the most popular and—supposedly—reliable ones. In this paper, we show that this phenomenon can be explained by the unreasonable assumptions that current content policing strategies make about how the online social media environment is shaped. The most realistic assumption is that confirmation bias will prevent a user from flagging a news item if they share the same political bias as the news source producing it. We show, via agent-based simulations, that a model reproducing our current understanding of the social media environment will necessarily result in the most neutral and accurate sources receiving most flags.
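The confirmation-bias mechanism described above can be caricatured in a toy agent-based simulation. The user population, bias values, and flagging rule below are invented for illustration and are far simpler than the paper's actual model:

```python
import random

# Toy agent-based caricature of the flagging dynamic: a user flags an item
# only when the source's political bias differs from their own (confirmation
# bias suppresses flags for congenial sources). All numbers are invented.

def simulate_flags(user_biases, sources, items_per_source=100, seed=0):
    """Count flags each source receives from randomly exposed users."""
    rng = random.Random(seed)
    flags = {name: 0 for name, _ in sources}
    for name, source_bias in sources:
        for _ in range(items_per_source):
            user_bias = rng.choice(user_biases)  # a random user sees the item
            if user_bias != source_bias:         # cross-bias exposure -> flag
                flags[name] += 1
    return flags

# Evenly split partisan audience: half left-leaning (-1), half right (+1).
users = [-1] * 50 + [1] * 50
sources = [("left_source", -1), ("neutral_source", 0), ("right_source", 1)]
flags = simulate_flags(users, sources)
# Every exposure to the neutral source is cross-bias, so it is flagged on
# every view, while each partisan source is flagged only by the opposing side.
```

Even this caricature reproduces the paper's headline effect: the neutral source accumulates at least as many flags as either partisan source, because no part of the audience shares its bias.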

