Post-Truths and Fake News in Disinformation Contexts

Author(s):  
Maria del Mar Ramirez-Alvarado

This chapter analyses the concept of post-truth, understood as circumstances in which objective facts carry less weight in shaping public opinion than appeals to emotion and personal belief, and the projection of this phenomenon onto social media, where various studies have shown that some fake news stories generate more user engagement than vetted reporting from reliable news sources. It opens with a general introduction and an associated theoretical reflection, and then turns to the case of Venezuela and its recent historical circumstances in order to analyse how fake news circulates in that country, stimulated by a context of widespread disinformation.

2020, Vol 00 (00), pp. 1-22
Author(s):  
Yanfang Wu

Using an online experiment, this study examines the effects of source and repetition on the illusory truth effect and the dissemination of fake news on social media. It finds that in a personalized source system, where trustworthy traditional news sources and personal contacts converge on social media, repetition strongly influences perceptions of source trustworthiness and story balance. Although most people intend to share real, balanced news stories, the illusory truth effect leads to misjudgement, making fake news more likely to go viral than real news. A multi-group SEM analysis of the two groups (without source and with source) showed that readers in the no-source group rated the effect of repetition on news evaluation as more significant than did readers in the with-source group. The findings suggest that the effect of source on evaluations of news quality has diminished, while sharers on social media are becoming more influential.


2019, Vol 8 (1), pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.


2018, Vol 41 (5), pp. 689-707
Author(s):  
Tanya Notley ◽  
Michael Dezuanni

Social media use has redefined the production, experience and consumption of news media. These changes have made verifying and trusting news content more complicated, leading to a number of recent flashpoints for claims and counter-claims of ‘fake news’ at critical moments during elections, natural disasters and acts of terrorism. Concerns regarding the actual and potential social impact of fake news led us to carry out the first nationally representative survey of young Australians’ news practices and experiences. Our analysis finds that while social media is one of young people’s preferred sources of news, they are not confident about spotting fake news online and many rarely or never check the source of news stories. Our findings raise important questions regarding the need for news media literacy education, both in schools and in the home. We therefore consider the historical development of news media literacy education and critique the relevance of the dominant frameworks and pedagogies currently in use. We find that news media has been neglected in media literacy education in Australia over the past three decades, and we propose that the media literacy frameworks and pedagogies currently in use need to be rethought for the digital age.


Author(s):  
Kristy A. Hesketh

This chapter explores the Spiritualist movement and its rapid growth alongside the rise of mass media, and compares those events with the current rise of fake news. Cheaper publishing technology created a media platform that featured stories about Spiritualist mediums and communications with the spirit world. These articles appeared in newspapers next to regular news, blurring the line between real and hoax stories. Laws were later created to address instances of fraud in the medium industry. Today, social media platforms provide a similar vessel for the spread of fake news: online fake news is published alongside legitimate news reports, leaving readers unable to differentiate between real and fake articles. Around the world, countries are launching initiatives to address the proliferation of false news and prevent the spread of misinformation. This chapter examines the parallels between these events, how hoaxes and fake news begin and spread, and the measures governments are taking to curb the growth of misinformation.


2019, Vol 6 (2), pp. 205316801984855
Author(s):  
Hunt Allcott ◽  
Matthew Gentzkow ◽  
Chuan Yu

In recent years, there has been widespread concern that misinformation on social media is damaging societies and democratic institutions. In response, social media platforms have announced actions to limit the spread of false content. We measure trends in the diffusion of content from 569 fake news websites and 9540 fake news stories on Facebook and Twitter between January 2015 and July 2018. User interactions with false content rose steadily on both Facebook and Twitter through the end of 2016. Since then, however, interactions with false content have fallen sharply on Facebook while continuing to rise on Twitter, with the ratio of Facebook engagements to Twitter shares decreasing by 60%. In comparison, interactions with other news, business, or culture sites have followed similar trends on both platforms. Our results suggest that the relative magnitude of the misinformation problem on Facebook has declined since its peak.
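
As a rough illustration of the kind of trend measurement described above, the following Python sketch computes a monthly ratio of Facebook engagements to Twitter shares from story-level interaction counts. The input file and column names are assumptions for illustration, not the authors' data or code.

```python
# Hypothetical sketch: monthly ratio of Facebook engagements to Twitter shares
# for fake-news stories. The input file and column names are assumed, not the
# authors' dataset.
import pandas as pd

# Expected columns: date, story_id, fb_engagements, tw_shares
df = pd.read_csv("fake_news_interactions.csv", parse_dates=["date"])

monthly = (
    df.set_index("date")
      .resample("M")[["fb_engagements", "tw_shares"]]
      .sum()
)
monthly["fb_to_tw_ratio"] = monthly["fb_engagements"] / monthly["tw_shares"]

# A ratio that declines after late 2016 would mirror the paper's headline finding.
print(monthly["fb_to_tw_ratio"].tail(12))
```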


2020
Author(s):  
Amir Bidgoly ◽  
Hossein Amirkhani ◽  
Fariba Sadeghi

Fake news detection is a challenging problem in online social media, with considerable social and political impacts. Several methods have already been proposed for the automatic detection of fake news, often based on statistical features of the content or context of news. In this paper, we propose a novel fake news detection method based on a Natural Language Inference (NLI) approach. Instead of relying only on statistical features of the content or context, the proposed method takes a human-like approach, inferring veracity from a set of reliable news: related and similar news items published in reputable sources are used as auxiliary knowledge to infer the veracity of a given news item. We also collect and publish the first inference-based fake news detection dataset, called FNID, in two formats: a two-class version (FNID-FakeNewsNet) and a six-class version (FNID-LIAR). We use the NLI approach to boost several classical and deep machine learning models, including Decision Tree, Naïve Bayes, Random Forest, Logistic Regression, k-Nearest Neighbors, Support Vector Machine, BiGRU, and BiLSTM, along with different word embedding methods including Word2vec, GloVe, fastText, and BERT. The experiments show that the proposed method achieves accuracies of 85.58% and 41.31% on the FNID-FakeNewsNet and FNID-LIAR datasets, respectively, which represent absolute improvements of 10.44% and 13.19%.
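
The core idea, checking a claim against related reporting from reputable outlets, can be sketched with an off-the-shelf pretrained NLI model. This is only a minimal illustration under assumptions (a public MNLI model and a simple averaged entailment score); it is not the FNID pipeline or the classifiers evaluated in the paper.

```python
# Minimal sketch of NLI-based veracity inference: treat reputable articles as
# premises and the claim as the hypothesis, then average entailment scores.
# Model choice, threshold, and aggregation are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that the premise entails the hypothesis."""
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze()
    # Label order for roberta-large-mnli: contradiction, neutral, entailment.
    return probs[2].item()

def infer_veracity(claim: str, reference_articles: list[str], threshold: float = 0.5):
    """Average entailment support from reputable articles; above threshold -> 'real'."""
    scores = [entailment_score(article, claim) for article in reference_articles]
    support = sum(scores) / len(scores)
    return ("real" if support > threshold else "fake"), support
```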


Author(s):  
Tewodros Tazeze ◽  
Raghavendra R

The rapid growth and expansion of social media platforms has filled the gap of information exchange in day-to-day life. At the same time, social media is the main arena for disseminating manipulated information at a wide and exponentially growing rate. The fabrication of twisted information is not limited to one language, society or domain, as has been particularly evident during the ongoing COVID-19 pandemic. The creation and propagation of fabricated news creates an urgent demand for automatically classifying and detecting such distorted news articles. Manually detecting fake news is a laborious and tiresome task, and the dearth of annotated fake news datasets for automating fake news detection remains a major challenge for the low-resourced Amharic language (after Arabic, the second most widely spoken Semitic language). In this study, an Amharic fake news dataset is crafted from verified news sources and various social media pages, and six different machine learning classifiers are built: Naïve Bayes, SVM, Logistic Regression, SGD, Random Forest, and Passive Aggressive Classifier. The experimental results show that Naïve Bayes and the Passive Aggressive Classifier surpass the remaining models with accuracy above 96% and an F1-score of 99%. The study makes a significant contribution to reducing the rate of disinformation in a vernacular language.
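
A minimal sketch of the kind of pipeline the study describes, using one of the listed classifiers (Passive Aggressive) over TF-IDF features, is shown below. The dataset file, column names, and feature settings are assumptions for illustration, not the authors' setup.

```python
# Hypothetical sketch: TF-IDF features feeding a Passive Aggressive classifier
# for fake-news classification. File name, columns, and labels are assumed.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import PassiveAggressiveClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("amharic_news.csv")  # assumed columns: 'text', 'label' ("fake"/"real")

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

vectorizer = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
Xtr = vectorizer.fit_transform(X_train)
Xte = vectorizer.transform(X_test)

clf = PassiveAggressiveClassifier(max_iter=1000, random_state=42)
clf.fit(Xtr, y_train)
pred = clf.predict(Xte)

print("accuracy:", accuracy_score(y_test, pred))
print("F1 (fake):", f1_score(y_test, pred, pos_label="fake"))
```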


2019
Author(s):  
Ziv Epstein ◽  
Gordon Pennycook ◽  
David Gertler Rand

How can social media platforms fight the spread of misinformation? One possibility is to use newsfeed algorithms to downrank content from sources that users rate as untrustworthy. But will laypeople be unable to identify misinformation sites due to motivated reasoning or lack of expertise? And will they “game” this crowdsourcing mechanism to promote content that aligns with their partisan agendas? We conducted a survey experiment in which N = 984 Americans indicated their trust in numerous news sites. Half of the participants were told that their survey responses would inform social media ranking algorithms, creating a potential incentive to misrepresent their beliefs. Participants trusted mainstream sources much more than hyper-partisan or fake news sources, and their ratings were highly correlated with professional fact-checker judgments. Critically, informing participants that their responses would influence ranking algorithms did not diminish this high level of discernment, despite slightly increasing the political polarization of trust ratings.
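
The downranking mechanism the study evaluates can be illustrated with a toy sketch: scale each post's feed score by the average crowd trust rating of its source. The function names, the 1-5 rating scale, the scoring formula, and the example data are illustrative assumptions, not any platform's algorithm.

```python
# Toy sketch of crowd-trust-based downranking. All names, scales, and data here
# are hypothetical; this is not a platform's actual ranking algorithm.
from statistics import mean

def source_trust(ratings_by_source: dict, source: str) -> float:
    """Average crowd trust rating for a source on an assumed 1-5 scale."""
    ratings = ratings_by_source.get(source, [])
    return mean(ratings) if ratings else 3.0  # neutral prior when unrated

def rank_feed(posts: list, ratings_by_source: dict) -> list:
    """Sort posts by base engagement scaled by normalized source trust."""
    def score(post):
        trust = source_trust(ratings_by_source, post["source"]) / 5.0
        return post["engagement"] * trust
    return sorted(posts, key=score, reverse=True)

# Example: a low-trust source is pushed down despite higher raw engagement.
ratings = {"mainstream.example": [5, 4, 5], "hyperpartisan.example": [2, 1, 2]}
posts = [
    {"source": "hyperpartisan.example", "engagement": 900},
    {"source": "mainstream.example", "engagement": 600},
]
print(rank_feed(posts, ratings))
```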


Author(s):  
Alberto Ardèvol-Abreu ◽  
Patricia Delponti ◽  
Carmen Rodríguez-Wangüemert

The main social media platforms have been implementing strategies to minimize the dissemination of fake news. These include identifying, labeling, and penalizing (via news feed ranking algorithms) fake publications. Part of the rationale behind this approach is that the negative effects of fake content arise only when social media users are deceived; once debunked, fake posts and news stories should become harmless. Unfortunately, the literature shows that the effects of misinformation are more complex and tend to persist and even backfire after correction. Furthermore, we still do not know much about how social media users evaluate content that has been fact-checked and flagged as false. More worryingly, previous findings suggest that some people may intentionally share made-up news on social media, although their motivations are not fully explained. To better understand users’ interaction with social media content identified or recognized as false, we analyze qualitative and quantitative data from five focus groups and a sub-national online survey (N = 350). Findings suggest that the label of ‘false news’ plays a role, although not necessarily a central one, in social media users’ evaluation of the content and their decision (not) to share it. Some participants showed distrust in fact-checkers and a lack of knowledge about the fact-checking process. We also found that fake news sharing is a two-dimensional phenomenon that includes both intentional and unintentional behaviors. We discuss some of the reasons why some social media users may choose to distribute fake news content intentionally.


2020, Vol 84 (S1), pp. 195-215
Author(s):  
Patrick W Kraft ◽  
Yanna Krupnikov ◽  
Kerri Milita ◽  
John Barry Ryan ◽  
Stuart Soroka

There is reason to believe that an increasing proportion of the news consumers receive is not from news producers directly but is recirculated through social network sites and email by ordinary citizens. This may produce some fundamental changes in the information environment, but the data to examine this possibility have thus far been relatively limited. In the current paper, we examine the changing information environment by leveraging a body of data on the frequency of (a) views of New York Times stories and their recirculation through (b) Twitter, (c) Facebook, and (d) email. We expect that the distribution of sentiment (positive-negative) in news stories will shift in a positive direction as we move from (a) to (d), based in large part on the literatures on self-presentation and imagined audiences. Our findings support this expectation and have important implications for the information contexts increasingly shaping public opinion.

