Credibility Perceptions and Detection Accuracy of Fake News Headlines on Social Media: Effects of Truth-Bias and Endorsement Cues

2020 ◽  
pp. 009365022092132
Author(s):  
Mufan Luo ◽  
Jeffrey T. Hancock ◽  
David M. Markowitz

This article focuses on message credibility and detection accuracy of fake and real news as represented on social media. We developed a deception detection paradigm for news headlines and conducted two online experiments to examine the extent to which people (1) perceive news headlines as credible, and (2) accurately distinguish fake and real news across three general topics (i.e., politics, science, and health). Both studies revealed that people often judged news headlines as fake, suggesting a deception-bias for news in social media. Across studies, we observed an average detection accuracy of approximately 51%, a level consistent with most research using this deception detection paradigm with equal lie-truth base-rates. Study 2 evaluated the effects of endorsement cues in social media (e.g., Facebook likes) on message credibility and detection accuracy. Results showed that headlines associated with a high number of Facebook likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. These studies introduce truth-default theory to the context of news credibility and advance our understanding of how biased processing of news information can impact detection accuracy with social media endorsement cues.
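As a concrete illustration of how the two quantities reported here are scored, the following minimal sketch computes the proportion of headlines judged real (values below 50% indicate the deception-bias the studies describe) and overall detection accuracy under equal real/fake base rates. The trial data are hypothetical, not the authors' materials.

```python
# Minimal scoring sketch for the headline deception-detection paradigm.
# Each trial pairs a headline's ground truth with a participant's judgment.
# The data below are hypothetical; base rates are equal (4 real, 4 fake).
trials = [
    ("real", "real"), ("real", "fake"), ("real", "fake"), ("real", "real"),
    ("fake", "fake"), ("fake", "fake"), ("fake", "real"), ("fake", "fake"),
]

judged_real = sum(j == "real" for _, j in trials) / len(trials)
accuracy = sum(t == j for t, j in trials) / len(trials)

print(f"proportion judged real: {judged_real:.0%}")  # < 50% = deception-bias
print(f"detection accuracy:     {accuracy:.0%}")     # chance = 50%
```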

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Khudejah Ali ◽  
Cong Li ◽  
Khawaja Zain-ul-abdin ◽  
Muhammad Adeel Zaffar

Purpose: As the epidemic of online fake news is causing major concerns in contexts such as politics and public health, the current study aimed to elucidate the effect of certain "heuristic cues," or key contextual features, which may increase belief in the credibility and the subsequent sharing of online fake news.
Design/methodology/approach: This study employed a 2 (news veracity: real vs fake) × 2 (social endorsements: low Facebook "likes" vs high Facebook "likes") between-subjects experimental design (N = 239).
Findings: The analysis revealed that a high number of Facebook "likes" accompanying fake news increased the perceived credibility of the material compared to a low number of "likes." In addition, the mediation results indicated that increased perceptions of news credibility may create a situation in which readers feel that it is necessary to cognitively elaborate on the information present in the news, and this active processing finally leads to sharing.
Practical implications: The results from this study help explicate what drives increased belief and sharing of fake news and can aid in refining interventions aimed at combating fake news for both communities and organizations.
Originality/value: The current study expands upon existing literature, linking the use of social endorsements to perceived credibility of fake news and information, and sheds light on the causal mechanisms through which people make the decision to share news articles on social media.
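For readers unfamiliar with how such a mediation result is typically quantified, here is a generic product-of-coefficients sketch on simulated data. It is not the authors' analysis code; the variable names, effect sizes, and OLS helper are all invented for illustration.

```python
# Product-of-coefficients mediation sketch (simulated data, invented effects):
# credibility -> cognitive elaboration -> sharing intention.
import numpy as np

rng = np.random.default_rng(0)
n = 239  # same sample size as the study; the responses here are simulated

credibility = rng.normal(size=n)
elaboration = 0.6 * credibility + rng.normal(size=n)
sharing = 0.4 * elaboration + rng.normal(size=n)

def ols_slopes(y, *predictors):
    """OLS coefficients of y on the predictors (intercept dropped)."""
    X = np.column_stack([np.ones(n), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(elaboration, credibility)[0]           # path a
b = ols_slopes(sharing, elaboration, credibility)[0]  # path b, controlling for X
print(f"indirect effect a*b = {a * b:.2f}")
```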


2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low and high quality news content, rather than facilitating political bias.
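The "discernment" outcome used in this line of work is simply accuracy for true headlines minus accuracy for false (or hyperpartisan) ones. Below is a hedged sketch of how that measure relates to an analytic-thinking score such as the Cognitive Reflection Test; the data and effect sizes are simulated, not the study's dataset.

```python
# Discernment sketch on simulated data (not the study's dataset):
# discernment = accuracy for true headlines minus accuracy for false ones.
import numpy as np

rng = np.random.default_rng(1)
n = 1977  # matches the combined N; individual responses are simulated

crt = rng.integers(0, 8, n).astype(float)  # hypothetical analytic-thinking score
acc_true = np.clip(0.55 + 0.02 * crt + rng.normal(0, 0.10, n), 0, 1)
acc_false = np.clip(0.50 - 0.02 * crt + rng.normal(0, 0.10, n), 0, 1)

discernment = acc_true - acc_false
r = np.corrcoef(crt, discernment)[0, 1]
print(f"r(analytic thinking, discernment) = {r:.2f}")  # positive by construction
```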


Ethnologies ◽  
2019 ◽  
Vol 40 (2) ◽  
pp. 93-110
Author(s):  
Kari Sawden

Working within alternative belief communities, I frequently encounter a tension between what is felt to be authentic and the facts provided by external sources. Even a cursory glance at the news headlines and social media postings that saturate daily life with terms such as “fake news” and “alternative facts” reveals that this is not an isolated struggle. Focusing on the ways in which contemporary Canadian divination practitioners establish their own truth, this paper examines how these processes reflect and support folklore’s engagement with and ongoing relationship to the emergence of multiple authenticities defined by the experiential.


2017 ◽  
Vol 37 (4) ◽  
pp. 407-430 ◽  
Author(s):  
David E. Clementson

This study tests the effects of political partisanship on voters' perception and detection of deception. Based on social identity theory, in-group members should consider their politician's message truthful while the opposing out-group would consider the message deceptive. Truth-default theory predicts that a salient in-group would be susceptible to deception from their in-group politician. In an experiment, partisan voters in the United States (N = 618) watched a news interview in which a politician was labeled Democratic or Republican. The politician either answered all the questions or deceptively evaded a question. Results indicated that the truth bias largely prevailed. Voters were more likely to be accurate in their detection when the politician answered and did not dodge. Truth-default theory appears robust in a political setting, as truth bias holds (as opposed to deception bias). Accuracy in detection also depends on group affiliation. In-groups are accurate when their politician answers, and inaccurate when he dodges. Out-groups are more accurate than in-groups when a politician dodges, but still exhibit truth bias.


Author(s):  
Miriam E. Armstrong ◽  
Keith S. Jones ◽  
Akbar Siami Namin

Objective: To understand how aspects of vishing calls (phishing phone calls) influence perceived visher honesty.
Background: Little is understood about how targeted individuals behave during vishing attacks. According to truth-default theory, people assume others are being honest until something triggers their suspicion. We investigated whether that was true during vishing attacks.
Methods: Twenty-four participants read written descriptions of eight real-world vishing calls. Half included highly sensitive requests; the remainder included seemingly innocuous requests. Participants rated visher honesty at multiple points during conversations.
Results: Participants initially perceived vishers to be honest. Honesty ratings decreased before requests occurred. Honesty ratings decreased further in response to highly sensitive requests, but not seemingly innocuous requests. Honesty ratings recovered somewhat, but only after highly sensitive requests.
Conclusions: The present results revealed five important insights: (1) people begin vishing conversations in the truth-default state, (2) certain aspects of vishing conversations serve as triggers, (3) other aspects of vishing conversations do not serve as triggers, (4) in certain situations, people's perceptions of visher honesty improve, and, more generally, (5) truth-default theory may be a useful tool for understanding how targeted individuals behave during vishing attacks.
Application: Those developing systems that help users deal with suspected vishing attacks or penetration testing plans should consider (1) targeted individuals' truth-bias, (2) the influence of visher demeanor on the likelihood of deception detection, (3) the influence of fabricated situations surrounding vishing requests on the likelihood of deception detection, and (4) targeted individuals' lack of concern about seemingly innocuous requests.


2021 ◽  
Vol 58 (2) ◽  
pp. 629-636
Author(s):  
Priyanka Mishra

INTRODUCTION: Misinformation. Hoaxes. Rumours. Fake news. So many terms for the same phenomenon. It is nothing new and has been going on for as long as any of us can remember. Recently, however, it has seen a sudden boom with the advent of the digital world, and suddenly everyone seems to have an opinion on everything going on in the world, however ill-formed it may be.
SUMMARY: In such a situation, how could the single biggest event of 2020, the coronavirus (COVID-19) pandemic, be an exception to this trend? All of us have come across some piece of "information" regarding this microscopic being, which, while invisible to the naked eye, has proved to be mankind's worst nemesis to date and has brought the world to its knees. Such misinformation could take any form: pictures, videos, text messages, audio messages, news headlines, or a simply misconstrued interpretation of something said by a public figure. Various factors are responsible for this surge of fake news, primarily the multitude of information available today at one's fingertips coupled with a lack of scientific attitude and awareness. The proliferation of social media has democratized access to all types of information and, at the same time, blurred the line between truth and falsehood. Although there is evidence that social media was used as a channel to disseminate useful information, such as the common symptoms of COVID-19 infection and the need for social distancing, the consequences of false information masquerading as verifiable truth were apparent during the peak of the pandemic, with false parallels being drawn between scientific evidence and uninformed opinion.
CONCLUSION: Fake news needs to be scrutinised harder than ever, with the world facing its biggest health crisis in centuries.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Gordon Pennycook ◽  
Jabin Binnendyk ◽  
Christie Newton ◽  
David G. Rand

Coincident with the global rise in concern about the spread of misinformation on social media, there has been an influx of behavioral research on so-called "fake news" (fabricated or false news headlines that are presented as if legitimate) and other forms of misinformation. These studies often present participants with news content that varies on relevant dimensions (e.g., true v. false, politically consistent v. inconsistent, etc.) and ask participants to make judgments (e.g., accuracy) or choices (e.g., whether they would share it on social media). This guide is intended to help researchers navigate the unique challenges that come with this type of research. Principal among these issues is that the nature of news content being spread on social media (whether it is false, misleading, or true) is a moving target that reflects current affairs in the context of interest. Steps are required if one wishes to present stimuli that allow generalization from the study to the real-world phenomenon of online misinformation. Furthermore, the selection of content to include can be highly consequential for a study's outcome, and researcher biases can easily result in biases in a stimulus set. As such, we advocate for pretesting materials and, to this end, report our own pretest of 224 recent true and false news headlines relating to U.S. political issues and the COVID-19 pandemic. These headlines may be of use in the short term but, more importantly, the pretest is intended to serve as an example of best practices in a quickly evolving area of research.
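To make the pretesting recommendation concrete, here is a hypothetical sketch of pretest-driven stimulus selection: choosing equal numbers of true and false headlines per partisan-lean cell, ranked by a pretested plausibility rating rather than by researcher intuition. All field names and values below are invented.

```python
# Hypothetical stimulus-selection sketch: balance a headline set across
# veracity x partisan-lean cells using pretest plausibility ratings.
headlines = [
    {"text": "...", "is_true": True,  "lean": "pro-Dem", "plausibility": 0.62},
    {"text": "...", "is_true": False, "lean": "pro-Dem", "plausibility": 0.41},
    {"text": "...", "is_true": True,  "lean": "pro-Rep", "plausibility": 0.60},
    {"text": "...", "is_true": False, "lean": "pro-Rep", "plausibility": 0.44},
    # ...the pretested pool would continue here
]

def balanced_set(pool, per_cell):
    """Pick the same number of items from every (veracity, lean) cell."""
    cells = {}
    for h in pool:
        cells.setdefault((h["is_true"], h["lean"]), []).append(h)
    picked = []
    for _, items in sorted(cells.items()):
        # rank within each cell so chosen true/false items are comparable
        items.sort(key=lambda h: h["plausibility"], reverse=True)
        picked.extend(items[:per_cell])
    return picked

stimuli = balanced_set(headlines, per_cell=1)
```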


2021 ◽  
Vol 27 (9) ◽  
pp. 979-998
Author(s):  
Riri Fitri Sari ◽  
Asri Ilmananda ◽  
Daniela Romano

In the current digital era, information can be exchanged easily through the Internet and social media. However, the truth of news on social media platforms is hard to verify, and these platforms are susceptible to the spreading of hoaxes. As a remedy, Blockchain technology can be used to ensure the reliability of shared information and to create a trusted communications environment. In this study, we propose a social media news spreading model by adapting an epidemic methodology and a scale-free network. A Blockchain-based news verification system is implemented to identify the credibility of the news and its sources. The effectiveness of the model is investigated through agent-based modelling in NetLogo. In the simulations, fake news with a truth level of 20% is assigned a low News Credibility Indicator value (NCI ≈ -0.637) across all of the network dimensions tested. Moreover, the Producer Reputation Credit decreases (PRC ≈ 0.213), reducing the source's trust factor. Our epidemic approach to news verification has also been implemented as an Ethereum smart contract using tools such as React, Solidity, IPFS, Web3.js, and MetaMask. By showing the credibility indicator and reputation credit to the user during the news dissemination process, the proposed smart contract can effectively discourage users from spreading fake news and improve content quality on social media.
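The epidemic framing lends itself to a compact simulation. Below is an illustrative Python sketch (not the paper's NetLogo model or smart-contract code) of SIR-style news spreading on a scale-free network, where displaying a low News Credibility Indicator damps each user's sharing probability; the damping rule and all parameter values are assumptions made for illustration.

```python
# SIR-style news diffusion on a scale-free network, with sharing damped
# by a low credibility indicator (NCI in [-1, 1]). Parameters are invented.
import random
import networkx as nx

random.seed(42)
G = nx.barabasi_albert_graph(n=500, m=2)  # scale-free contact network

def spread(graph, p_share, steps=20, seeds=5):
    """Each infected (believing) node tries once per step to convince each
    susceptible neighbour; returns how many nodes were ever reached."""
    infected = set(random.sample(list(graph.nodes), seeds))
    recovered = set()
    for _ in range(steps):
        newly = set()
        for node in infected:
            for nb in graph.neighbors(node):
                if nb not in infected and nb not in recovered:
                    if random.random() < p_share:
                        newly.add(nb)
        recovered |= infected
        infected = newly
        if not infected:
            break
    return len(recovered | infected)

base_share = 0.30                    # sharing probability, no verification shown
nci = -0.637                         # fake-news NCI value from the abstract
damped = base_share * (1 + nci) / 2  # assumed mapping of NCI onto sharing

print(f"reach without verification: {spread(G, base_share)}/500")
print(f"reach with NCI displayed:   {spread(G, damped)}/500")
```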

