FakeNewsNet: A Data Repository with News Content, Social Context, and Spatiotemporal Information for Studying Fake News on Social Media

Big Data ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 171-188 ◽  
Author(s):  
Kai Shu ◽  
Deepak Mahudeswaran ◽  
Suhang Wang ◽  
Dongwon Lee ◽  
Huan Liu


2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low and high quality news content, rather than facilitating political bias.


2021 ◽  
pp. 128-141
Author(s):  
Catherine Sotirakou ◽  
Anastasia Karampela ◽  
Constantinos Mourlas

2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Gordon Pennycook ◽  
Jabin Binnendyk ◽  
Christie Newton ◽  
David G. Rand

Coincident with the global rise in concern about the spread of misinformation on social media, there has been an influx of behavioral research on so-called “fake news” (fabricated or false news headlines that are presented as if legitimate) and other forms of misinformation. These studies often present participants with news content that varies on relevant dimensions (e.g., true v. false, politically consistent v. inconsistent, etc.) and ask participants to make judgments (e.g., accuracy) or choices (e.g., whether they would share it on social media). This guide is intended to help researchers navigate the unique challenges that come with this type of research. Principal among these issues is that the nature of news content that is being spread on social media (whether it is false, misleading, or true) is a moving target that reflects current affairs in the context of interest. Steps are required if one wishes to present stimuli that allow generalization from the study to the real-world phenomenon of online misinformation. Furthermore, the selection of content to include can be highly consequential for the study’s outcome, and researcher biases can easily result in biases in a stimulus set. As such, we advocate for pretesting materials and, to this end, report our own pretest of 224 recent true and false news headlines, both relating to U.S. political issues and the COVID-19 pandemic. These headlines may be of use in the short term, but, more importantly, the pretest is intended to serve as an example of best practices in a quickly evolving area of research.


2020 ◽  
Author(s):  
Gordon Pennycook ◽  
Jabin Binnendyk ◽  
Christie Newton ◽  
David Gertler Rand

Coincident with the global rise in concern about the spread of misinformation on social media, there has been influx of behavioural research on so-called “fake news” (fabricated or false news headlines that are presented as if legitimate) and other forms of misinformation. These studies often present participants with news content that varies on relevant dimensions (e.g., true v. false, politically consistent v. inconsistent, etc.) and ask participants to make judgments (e.g., accuracy) or choices (e.g., whether they would share it on social media). This guide is intended to help researchers navigate the unique challenges that come with this type of research. Principle among these issues is that the nature of news content that is being spread on social media (whether it is false, misleading, or true) is a moving target that reflects current affairs in the context of interest. Steps are required if one wishes to present stimuli that allow generalization from the study to the real-world phenomenon. Furthermore, the selection of content to include can be highly consequential for the study’s outcome, and researcher biases can easily result in biases in a stimulus set. As such, we advocate for pretesting materials and, to this end, report our own pretest of 225 recent true and false news headlines, both relating to U.S. political issues and the COVID-19 pandemic. These headlines may be of use in the short term, but, more importantly, the pretest is intended to serve as an example of best practices in a quickly evolving area of research.


Episteme ◽  
2018 ◽  
Vol 17 (2) ◽  
pp. 141-161 ◽  
Author(s):  
C. Thi Nguyen

Discussion of the phenomena of post-truth and fake news often implicates the closed epistemic networks of social media. The recent conversation has, however, blurred two distinct social epistemic phenomena. An epistemic bubble is a social epistemic structure in which other relevant voices have been left out, perhaps accidentally. An echo chamber is a social epistemic structure from which other relevant voices have been actively excluded and discredited. Members of epistemic bubbles lack exposure to relevant information and arguments. Members of echo chambers, on the other hand, have been brought to systematically distrust all outside sources. In epistemic bubbles, other voices are not heard; in echo chambers, other voices are actively undermined. It is crucial to keep these phenomena distinct. First, echo chambers can explain the post-truth phenomena in a way that epistemic bubbles cannot. Second, each type of structure requires a distinct intervention. Mere exposure to evidence can shatter an epistemic bubble, but may actually reinforce an echo chamber. Finally, echo chambers are much harder to escape. Once in their grip, an agent may act with epistemic virtue, but social context will pervert those actions. Escape from an echo chamber may require a radical rebooting of one's belief system.


Author(s):  
Divya Tiwari ◽  
Surbhi Thorat

Fake news dissemination is a critical issue in today’s fast-changing network environment, and online fake news has attained increasing prominence in shaping news stories online. This paper addresses categorical cyber-terrorism threats on social media and a preventive approach to minimizing them. Misleading or unreliable information in the form of videos, posts, articles, and URLs is extensively disseminated through popular social media platforms such as Facebook and Twitter. As a result, editors and journalists need new tools that can help them speed up the verification process for content originating from social media. Existing classification models for fake news detection have not completely stopped the spread because of their inability to accurately classify news, leading to a high false alarm rate. This study proposed a model that can accurately identify and classify deceptive news articles infused into social media by malicious users. The news content, social-context features, and the classification of each reported news item were extracted from the PHEME dataset using entropy-based feature selection, and the selected features were normalized using Min-Max normalization. The model was simulated and its performance evaluated by benchmarking against an existing model using detection accuracy, sensitivity, and precision as metrics. The evaluation showed 17.25% higher detection accuracy and 15.78% higher sensitivity, but 0.2% lower precision, than the existing model. The proposed model therefore detects more fake news instances accurately from both news-content and social-context perspectives, indicating a better detection rate and a reduced false alarm rate.
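The two preprocessing steps named in the abstract, entropy-based feature scoring and Min-Max normalization, can be sketched as below. This is a minimal illustration of the standard techniques, not the authors' implementation; the feature values and label column are hypothetical placeholders, not drawn from the PHEME dataset.

```python
import math

def entropy(labels):
    """Shannon entropy (in bits) of a discrete label column, the quantity
    underlying entropy-based feature scoring. `labels` is any iterable."""
    labels = list(labels)
    n = len(labels)
    counts = {}
    for x in labels:
        counts[x] = counts.get(x, 0) + 1
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def min_max_normalize(values):
    """Scale numeric feature values into [0, 1] via Min-Max normalization."""
    lo, hi = min(values), max(values)
    if hi == lo:                       # constant feature: map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical "retweet count" feature for five news items
retweets = [3, 10, 45, 0, 22]
print(min_max_normalize(retweets))     # min maps to 0.0, max to 1.0

# Entropy of a hypothetical fake/real label column
print(entropy(["fake", "real", "fake", "fake", "real"]))
```

In practice the same operations are available off the shelf (e.g., scikit-learn's `MinMaxScaler`); the point here is only to make the two named steps concrete.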


MIS Quarterly ◽  
2019 ◽  
Vol 43 (3) ◽  
pp. 1025-1039 ◽  
Author(s):  
Antino Kim ◽  
Alan R. Dennis


2018 ◽  
Author(s):  
Andrea Pereira ◽  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris

Political misinformation, often called “fake news”, represents a threat to our democracies because it impedes citizens from being appropriately informed. Evidence suggests that fake news spreads more rapidly than real news—especially when it contains political content. The present article tests three competing theoretical accounts that have been proposed to explain the rise and spread of political (fake) news: (1) the ideology hypothesis—people prefer news that bolsters their values and worldviews; (2) the confirmation bias hypothesis—people prefer news that fits their pre-existing stereotypical knowledge; and (3) the political identity hypothesis—people prefer news that allows their political in-group to fulfill certain social goals. We conducted three experiments in which American participants read news that concerned behaviors perpetrated by their political in-group or out-group and measured the extent to which they believed the news (Exp. 1, Exp. 2, Exp. 3), and were willing to share the news on social media (Exp. 2 and 3). Results revealed that Democrats and Republicans were both more likely to believe news about the value-upholding behavior of their in-group or the value-undermining behavior of their out-group, supporting a political identity hypothesis. However, although belief was positively correlated with willingness to share on social media in all conditions, we also found that Republicans were more likely to believe and want to share apolitical fake news. We discuss the implications for theoretical explanations of political beliefs and the application of these concepts in a polarized political system.


2019 ◽  
Vol 8 (1) ◽  
pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.

