A hybrid model for fake news detection: Leveraging news content and user comments in fake news

2021 ◽  
Vol 15 (2) ◽  
pp. 169-177
Author(s):  
Marwan Albahar
2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low- and high-quality news content, rather than facilitating political bias.
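The discernment measure this abstract turns on — the gap between perceived accuracy of true headlines and of false (or hyperpartisan) headlines — can be sketched in a few lines. The rating scale and data below are illustrative, not the study's:

```python
# Toy truth-discernment score: mean perceived accuracy of true headlines
# minus mean perceived accuracy of false headlines. A participant who
# rates both categories alike scores 0; rating true headlines higher
# yields a positive score.

def discernment(ratings):
    """ratings: list of (label, score) pairs, label in {"true", "false"},
    score = perceived accuracy (here on an illustrative 1-4 scale)."""
    true_scores = [s for lab, s in ratings if lab == "true"]
    false_scores = [s for lab, s in ratings if lab == "false"]
    return (sum(true_scores) / len(true_scores)
            - sum(false_scores) / len(false_scores))

ratings = [("true", 4), ("true", 3), ("false", 2), ("false", 1)]
print(discernment(ratings))  # 3.5 - 1.5 = 2.0
```

The study's finding can be read in these terms: analytic thinking predicted a larger gap, not a shift of both categories in a partisan direction.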


2020 ◽  
Vol 39 (4) ◽  
Author(s):  
Uğur Mertoğlu ◽  
Burkay Genç

The transformation of printed media into the digital environment and the extensive use of social media have changed the concept of media literacy and people's news consumption habits. While this faster, easier, and comparatively cheaper access offers convenience in terms of people's access to information, it comes with a significant problem: fake news. Because large amounts of data are freely produced and consumed, fact-checking systems powered by human effort are not enough to question the credibility of the information provided, or to prevent its virus-like dissemination. Libraries, known for ages as sources of trusted information, face the same problem. Considering that libraries all over the world are undergoing digitisation and providing digital media to their users, it is very likely that unchecked digital content will be served by the world's libraries. The solution is to develop automated mechanisms that can check the credibility of digital content served in libraries without manual validation. For this purpose, we developed an automated fake news detection system based on Turkish digital news content. Our approach can be adapted to any other language for which labelled training material exists. The developed model can be integrated into libraries' digital systems to label served news content as potentially fake whenever necessary, preventing uncontrolled dissemination of falsehoods via libraries.
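The language-agnostic recipe described here — train a classifier on labelled news in the target language — might be sketched as a toy bag-of-words Naive Bayes model. The tokenizer, smoothing, and example headlines below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (text, label) pairs. Returns a bag-of-words
    Naive Bayes model; tokenization is naive whitespace splitting."""
    word_counts = defaultdict(Counter)   # label -> word -> count
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for counts in word_counts.values() for w in counts}
    return word_counts, label_counts, vocab

def classify(model, text):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label in label_counts:
        lp = math.log(label_counts[label] / total)  # class prior
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing keeps unseen words from zeroing the score
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical labelled headlines; real training data would be
# labelled news in the target language (here: Turkish, per the paper).
model = train([
    ("shocking miracle cure revealed", "fake"),
    ("celebrity secret exposed shocking", "fake"),
    ("parliament passes budget bill", "real"),
    ("central bank raises interest rate", "real"),
])
print(classify(model, "shocking cure exposed"))  # fake
```

Nothing in the recipe is language-specific beyond the labelled corpus and the tokenizer, which is the portability property the abstract claims.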


Societies ◽  
2021 ◽  
Vol 11 (4) ◽  
pp. 119
Author(s):  
Robert B. Michael ◽  
Mevagh Sanson

People have access to more news from more sources than ever before. At the same time, they increasingly distrust traditional media and are exposed to more misinformation. To help people better distinguish real news from “fake news,” we must first understand how they judge whether news is real or fake. One possibility is that people adopt a relatively effortful, analytic approach, judging news based on its content. However, another possibility—consistent with psychological research—is that people adopt a relatively effortless, heuristic approach, drawing on cues outside of news content. One such cue is where the news comes from: its source. Beliefs about news sources depend on people’s political affiliation, with U.S. liberals tending to trust sources that conservatives distrust, and vice versa. Therefore, if people take this heuristic approach, then judgments of news from different sources should depend on political affiliation and lead to a confirmation bias of pre-existing beliefs. Similarly, political affiliation could affect the likelihood that people mistake real news for fake news. We tested these ideas in two sets of experiments. In the first set, we asked University of Louisiana at Lafayette undergraduates (Experiment 1a n = 376) and Mechanical Turk workers in the United States (Experiment 1a n = 205; Experiment 1b n = 201) to rate how “real” versus “fake” a series of unfamiliar news headlines were. We attributed each headline to one of several news sources of varying political slant. As predicted, we found that source information influenced people’s ratings in line with their own political affiliation, although this influence was relatively weak. In the second set, we asked Mechanical Turk workers in the United States (Experiment 2a n = 300; Experiment 2b n = 303) and University of Louisiana at Lafayette undergraduates (Experiment 2b n = 182) to watch a highly publicized “fake news” video involving doctored footage of a journalist. 
We found that people’s political affiliation influenced their beliefs about the event, but the doctored footage itself had only a trivial influence. Taken together, these results suggest that adults across a range of ages rely on information other than news content—such as how they feel about its source—when judging whether news is real or fake. Moreover, our findings help explain how people experiencing the same news content can arrive at vastly different conclusions. Finally, efforts aimed at educating the public in combatting fake news need to consider how political affiliation affects the psychological processes involved in forming beliefs about the news.


Big Data ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. 171-188 ◽  
Author(s):  
Kai Shu ◽  
Deepak Mahudeswaran ◽  
Suhang Wang ◽  
Dongwon Lee ◽  
Huan Liu

2021 ◽  
pp. 128-141
Author(s):  
Catherine Sotirakou ◽  
Anastasia Karampela ◽  
Constantinos Mourlas
Author(s):  
Andrea Karnyoto ◽  
Chengjie Sun ◽  
Bingquan Liu ◽  
Xiaolong Wang

The spread of fake news through online media is very dangerous and can lead to casualties, psychological harm, character assassination, distorted elections, and state chaos. Fake news concerning Covid-19 spread massively during the pandemic. Detecting misinformation on the Internet is an essential and challenging task, since even humans have difficulty recognising fake news. We applied BERT and GPT2 as pre-trained models within a BiGRU-Att-CapsuleNet model, with BiGRU-CRF feature augmentation, for the Constraint@AAAI2021 COVID-19 Fake News Detection task (English dataset). This research showed that our hybrid model with augmentation achieved better accuracy than our baseline model. It also showed that BERT gave better results than GPT2 across all models; the highest accuracy we achieved with BERT was 0.9196, and with GPT2, 0.8986.


In this era of globalisation, the spread of fake news across the Internet is difficult to contain. Its dissemination cannot be tolerated, as its impact on society is deeply worrying; it leads to larger problems and potential threats such as confusion, misconceptions, slander, and users being lured into sharing provocative lies fabricated as news through their social media. Within the Malaysian context, there is a lack of platforms for detecting fake news in Malay-language articles, and most Malaysians receive news through social messaging applications. Fake news detection can be addressed with the aid of artificial intelligence, including machine learning algorithms. The objectives of this project are to propose a fake news detection model using Logistic Regression, to evaluate its performance, and to develop a web application that accepts either news content or a news URL. In this study, Logistic Regression was applied to detect fake news, following a standard model development methodology. Based on existing studies, Logistic Regression performs well on classification tasks. In addition, a stance detection approach was added to improve the model's accuracy. Based on the analysis, the model with the stance detection approach yields excellent accuracy using TF-IDF features. The model is integrated with a web service that accepts either a news URL or news content as text, which is then checked for its truth level through the "FAKEBUSTER" application.
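A pipeline of the kind this project describes — TF-IDF features fed to Logistic Regression — might look roughly like the following pure-Python sketch. The tokenisation, learning rate, label coding, and toy Malay headlines are illustrative assumptions, not the FAKEBUSTER implementation:

```python
import math
from collections import Counter

def tfidf_matrix(docs):
    """docs: list of token lists. Returns (idf dict, sparse tf-idf rows)."""
    df = Counter()
    for d in docs:
        df.update(set(d))
    n = len(docs)
    idf = {w: math.log(n / df[w]) + 1 for w in df}  # +1 keeps common words nonzero
    rows = []
    for d in docs:
        tf = Counter(d)
        rows.append({w: (tf[w] / len(d)) * idf[w] for w in tf})
    return idf, rows

def train_logreg(rows, labels, epochs=200, lr=0.5):
    """Sparse logistic regression fit by stochastic gradient descent."""
    w, b = {}, 0.0
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            z = b + sum(w.get(f, 0.0) * v for f, v in x.items())
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                       # gradient of the log loss
            b -= lr * g
            for f, v in x.items():
                w[f] = w.get(f, 0.0) - lr * g * v
    return w, b

def predict(model, idf, tokens):
    w, b = model
    tf = Counter(tokens)
    x = {t: (tf[t] / len(tokens)) * idf.get(t, 0.0) for t in tf}
    z = b + sum(w.get(f, 0.0) * v for f, v in x.items())
    return 1 if z > 0 else 0                # 1 = fake, 0 = real (our coding)

docs = [
    "kerajaan umum bajet baharu".split(),       # real
    "menteri rasmi projek sekolah".split(),     # real
    "ubat ajaib sembuh semua penyakit".split(), # fake
    "kisah ajaib sembuh tanpa ubat".split(),    # fake
]
labels = [0, 0, 1, 1]
idf, rows = tfidf_matrix(docs)
model = train_logreg(rows, labels)
print(predict(model, idf, "ubat ajaib sembuh".split()))  # 1 (fake)
```

A web service of the kind described would wrap `predict` behind an endpoint, fetching and tokenising the article text when a URL rather than raw content is submitted; the stance detection step mentioned in the abstract would add features beyond TF-IDF and is not shown here.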


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Gordon Pennycook ◽  
Jabin Binnendyk ◽  
Christie Newton ◽  
David G. Rand

Coincident with the global rise in concern about the spread of misinformation on social media, there has been an influx of behavioral research on so-called “fake news” (fabricated or false news headlines that are presented as if legitimate) and other forms of misinformation. These studies often present participants with news content that varies on relevant dimensions (e.g., true v. false, politically consistent v. inconsistent) and ask participants to make judgments (e.g., about accuracy) or choices (e.g., whether they would share it on social media). This guide is intended to help researchers navigate the unique challenges that come with this type of research. Principal among these issues is that the nature of the news content being spread on social media (whether false, misleading, or true) is a moving target that reflects current affairs in the context of interest. Steps are required if one wishes to present stimuli that allow generalization from the study to the real-world phenomenon of online misinformation. Furthermore, the selection of content to include can be highly consequential for the study's outcome, and researcher biases can easily produce biases in a stimulus set. As such, we advocate pretesting materials and, to this end, report our own pretest of 224 recent true and false news headlines relating both to U.S. political issues and to the COVID-19 pandemic. These headlines may be of use in the short term, but, more importantly, the pretest is intended to serve as an example of best practices in a quickly evolving area of research.
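One way a researcher might act on such a pretest — pairing true and false headlines with similar pretest plausibility so that stimulus choice itself does not stack the deck — can be sketched as follows. The field names, ratings, and greedy sort-and-zip matching are hypothetical, not the authors' procedure:

```python
def balanced_stimuli(items, n_pairs):
    """items: list of dicts with 'veracity' ('true'/'false') and
    'pretest_accuracy' (mean perceived accuracy from the pretest).
    Greedily pairs true and false headlines at similar plausibility
    levels by sorting each category and zipping them together."""
    trues = sorted((i for i in items if i["veracity"] == "true"),
                   key=lambda i: i["pretest_accuracy"])
    falses = sorted((i for i in items if i["veracity"] == "false"),
                    key=lambda i: i["pretest_accuracy"])
    return list(zip(trues, falses))[:n_pairs]

# Hypothetical pretest results on an illustrative 1-4 scale.
items = [
    {"headline": "A", "veracity": "true",  "pretest_accuracy": 3.1},
    {"headline": "B", "veracity": "true",  "pretest_accuracy": 2.0},
    {"headline": "C", "veracity": "false", "pretest_accuracy": 2.2},
    {"headline": "D", "veracity": "false", "pretest_accuracy": 3.0},
]
pairs = balanced_stimuli(items, n_pairs=2)
# Each pair holds one true and one false headline at a similar
# pretest plausibility level: (B, C) and (A, D).
```

The point of such matching is the one the abstract makes: without it, an unlucky or biased selection (e.g., only implausible false headlines) builds the study's conclusion into its materials.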

