exBAKE: Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from Transformers (BERT)

2019 ◽  
Vol 9 (19) ◽  
pp. 4062 ◽  
Author(s):  
Heejung Jwa ◽  
Dongsuk Oh ◽  
Kinam Park ◽  
Jang Kang ◽  
Hueiseok Lim

News currently spreads rapidly through the internet. Because fake news stories are designed to attract readers, they tend to spread faster. For most readers, detecting fake news is challenging, and many end up believing that a fake news story is fact. Because fake news can be socially problematic, a model that automatically detects it is required. In this paper, we focus on data-driven automatic fake news detection methods. We first apply the Bidirectional Encoder Representations from Transformers (BERT) model to detect fake news by analyzing the relationship between the headline and the body text of a news article. To further improve performance, additional news data are gathered and used to pre-train the model. We determine that the deep-contextualizing nature of BERT is best suited for this task and improves the F-score by 0.14 over older state-of-the-art models.
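
For readers who want a concrete picture of the headline-body setup described above, the sketch below feeds the two texts into a BERT sequence-pair classifier using the Hugging Face transformers library. It is a minimal illustration, not the authors' exBAKE code: the pretrained checkpoint, the FNC-1-style label set, and the absence of fine-tuning are all assumptions.

```python
# Illustrative sketch only: a BERT sequence-pair classifier for headline/body
# stance, in the spirit of exBAKE. Uses the Hugging Face transformers API,
# not the authors' code; model name and label set are assumptions.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["agree", "disagree", "discuss", "unrelated"]  # assumed FNC-1-style labels

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

def classify_stance(headline: str, body: str) -> str:
    """Encode headline and body as one sentence pair and predict their stance."""
    inputs = tokenizer(
        headline,
        body,
        truncation=True,
        max_length=512,
        return_tensors="pt",
    )
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(classify_stance("Scientists confirm X", "A new study reports that X ..."))
```

Without the fine-tuning and additional news pre-training the paper describes, the predictions here are essentially untrained; the snippet only shows how the headline and body enter BERT as a single sentence pair.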

2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Zhongmin Liu ◽  
Zhicai Chen ◽  
Zhanming Li ◽  
Wenjin Hu

In recent years, techniques based on deep detection models have achieved overwhelming improvements in detection accuracy, which makes them well suited for applications such as pedestrian detection. However, speed and accuracy are conflicting goals that have long puzzled researchers, and achieving a good trade-off between them is a problem that must be considered when designing detectors. To this end, we employ the general-purpose detector YOLOv2, a state-of-the-art method for general detection tasks, for pedestrian detection. We then modify the network parameters and structure according to the characteristics of pedestrians, making the method more suitable for detecting them. Experimental results on the INRIA pedestrian detection dataset show that it achieves a fairly high detection speed with only a small precision gap compared with state-of-the-art pedestrian detection methods. Furthermore, we add weak semantic segmentation networks after the shared convolution layers to illuminate pedestrians, and we employ a scale-aware structure in our model to handle the wide range of pedestrian sizes in the Caltech pedestrian detection dataset, which yields further gains over the initial improvements.
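
One concrete way to adapt a general detector such as YOLOv2 to the tall, narrow shapes of pedestrians is to re-estimate its anchor boxes by k-means clustering with an IoU-based distance, as in the original YOLOv2 paper. The sketch below illustrates that generic step only; it is not necessarily the modification the authors made, and the box sizes and cluster count are placeholders.

```python
# Illustrative sketch: re-estimating YOLOv2 anchor boxes for pedestrians by
# k-means clustering with an IoU-based distance (d = 1 - IoU). This is a
# generic technique, not the authors' exact modification; the box data and
# the number of clusters below are placeholders.
import numpy as np

def iou_wh(boxes: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """IoU between (N, 2) width/height boxes and (K, 2) anchors, centers aligned."""
    inter = np.minimum(boxes[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(boxes[:, None, 1], anchors[None, :, 1])
    union = boxes[:, 0:1] * boxes[:, 1:2] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes: np.ndarray, k: int = 5, iters: int = 100) -> np.ndarray:
    """Cluster box shapes into k anchor priors using 1 - IoU as the distance."""
    rng = np.random.default_rng(0)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)  # nearest anchor
        for j in range(k):
            members = boxes[assign == j]
            if len(members):
                anchors[j] = np.median(members, axis=0)
    return anchors

# Pedestrian boxes are typically tall and narrow: (width, height) in pixels.
boxes = np.array([[28, 80], [24, 72], [40, 110], [32, 96], [20, 60]], dtype=float)
print(kmeans_anchors(boxes, k=2))
```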


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 2145 ◽  
Author(s):  
Guoxu Liu ◽  
Joseph Christian Nouaze ◽  
Philippe Lyonel Touko Mbouembe ◽  
Jae Ho Kim

Automatic fruit detection is an essential capability for harvesting robots. However, complicated environmental conditions, such as illumination variation, branch and leaf occlusion, and tomato overlap, make fruit detection very challenging. In this study, an improved tomato detection model called YOLO-Tomato, based on YOLOv3, is proposed to deal with these problems. A dense architecture is incorporated into YOLOv3 to facilitate the reuse of features and help learn a more compact and accurate model. Moreover, the model replaces the traditional rectangular bounding box (R-Bbox) with a circular bounding box (C-Bbox) for tomato localization. The new bounding boxes match the tomatoes more precisely, improving the Intersection-over-Union (IoU) calculation used for Non-Maximum Suppression (NMS), and they also reduce the number of prediction coordinates. An ablation study demonstrated the efficacy of these modifications. YOLO-Tomato was compared with several state-of-the-art detection methods and achieved the best detection performance.
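
The circular bounding box idea is easy to make concrete: a C-Bbox is described by a center and a radius, and the IoU of two circles has a closed form that can drive NMS. The sketch below is a minimal illustration of that idea, not the YOLO-Tomato code; the detections and threshold are placeholders.

```python
# Illustrative sketch: IoU between two circular bounding boxes (C-Bboxes) and a
# simple greedy NMS built on it. This mirrors the idea described in the
# abstract, but it is a minimal re-implementation, not the YOLO-Tomato code.
import math

def circle_iou(c1, c2):
    """IoU of two circles given as (x, y, r)."""
    x1, y1, r1 = c1
    x2, y2, r2 = c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint circles
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle contains the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # partial overlap (lens area)
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                              * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - tri
    union = math.pi * (r1 * r1 + r2 * r2) - inter
    return inter / union

def nms_circles(dets, iou_thr=0.5):
    """Greedy NMS over detections given as (x, y, r, score)."""
    dets = sorted(dets, key=lambda d: d[3], reverse=True)
    keep = []
    for d in dets:
        if all(circle_iou(d[:3], k[:3]) < iou_thr for k in keep):
            keep.append(d)
    return keep

dets = [(50, 50, 20, 0.9), (55, 52, 19, 0.8), (120, 80, 18, 0.7)]
print(nms_circles(dets))
```

The three-parameter (x, y, r) representation, versus four for a rectangle, is what the abstract means by reducing the prediction coordinates.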


2014 ◽  
Vol 2014 ◽  
pp. 1-9
Author(s):  
Guoyang Yan ◽  
Jiangyuan Mei ◽  
Shen Yin ◽  
Hamid Reza Karimi

Fault detection is fundamental to many industrial applications. As system complexity grows, the number of sensors increases, which makes traditional fault detection methods lose efficiency. Metric learning is an efficient way to build the relationship between feature vectors and the categories of instances. In this paper, we first propose a metric learning-based framework for fault detection. Meanwhile, a novel feature extraction method based on the wavelet transform is used to obtain feature vectors from the detection signals. Experiments on the Tennessee Eastman (TE) chemical process dataset demonstrate that the proposed method performs better than existing methods such as principal component analysis (PCA) and Fisher discriminant analysis (FDA).
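
As a rough illustration of the pipeline described above, the sketch below extracts wavelet-band energies as features and learns a metric before nearest-neighbor classification. It uses PyWavelets and scikit-learn's NeighborhoodComponentsAnalysis as a stand-in for the paper's metric-learning method, and synthetic signals in place of the Tennessee Eastman data.

```python
# Illustrative sketch: wavelet-based features plus a learned metric for fault
# detection. NeighborhoodComponentsAnalysis stands in for the paper's
# metric-learning approach; the signals and labels are synthetic placeholders.
import numpy as np
import pywt
from sklearn.neighbors import NeighborhoodComponentsAnalysis, KNeighborsClassifier
from sklearn.pipeline import Pipeline

def wavelet_features(signal: np.ndarray, wavelet: str = "db4", level: int = 3) -> np.ndarray:
    """Energy of each wavelet decomposition band as a compact feature vector."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    return np.array([np.sum(c ** 2) for c in coeffs])

# Synthetic example: class 0 = normal sine, class 1 = faulty (late noise burst).
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256)
features, labels = [], []
for i in range(200):
    fault = i % 2
    s = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.standard_normal(t.size)
    if fault:
        s += 0.8 * rng.standard_normal(t.size) * (t > 0.5)
    features.append(wavelet_features(s))
    labels.append(fault)

X, y = np.array(features), np.array(labels)
clf = Pipeline([
    ("nca", NeighborhoodComponentsAnalysis(random_state=0)),  # learn the metric
    ("knn", KNeighborsClassifier(n_neighbors=3)),             # classify in it
])
clf.fit(X[:150], y[:150])
print("held-out accuracy:", clf.score(X[150:], y[150:]))
```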


Author(s):  
Cameron Martel ◽  
Gordon Pennycook ◽  
David G. Rand

What is the role of emotion in susceptibility to believing fake news? Prior work on the psychology of misinformation has focused primarily on the extent to which reason and deliberation hinder versus help the formation of accurate beliefs. Several studies have suggested that people who engage in more reasoning are less likely to fall for fake news. However, the role of reliance on emotion in belief in fake news remains unclear. To shed light on this issue, we explored the relationship between experiencing specific emotions and believing fake news (Study 1; N = 409). We found that across a wide range of specific emotions, heightened emotionality at the outset of the study was predictive of greater belief in fake (but not real) news posts. Then, in Study 2, we measured and manipulated reliance on emotion versus reason across four experiments (total N = 3884). We found both correlational and causal evidence that reliance on emotion increases belief in fake news: self-reported use of emotion was positively associated with belief in fake (but not real) news, and inducing reliance on emotion resulted in greater belief in fake (but not real) news stories compared to a control or to inducing reliance on reason. These results shed light on the unique role that emotional processing may play in susceptibility to fake news.


2019 ◽  
Vol 13 (2) ◽  
pp. 78-92
Author(s):  
Elise M. Stevens ◽  
Karen McIntyre

The Onion is a satirical news site that has grown in popularity over the last two decades. Drawing on theories of affect and social sharing, the current studies examined the effects of this online satirical news on affective states and online sharing. In Study 1, participants (N = 147) viewed either a satirical or a serious news story (frame) and were then asked about their affective states and sharing behaviors. In Study 2, participants (N = 143) viewed one of the two frames, but on Instagram. In Study 1, results showed that serious news stories increased both positive and negative affect, and only positive affect mediated the relationship between frame and sharing. In Study 2, results showed that satirical Instagram posts were positively associated with negative affect, which mediated the relationship between frame and sharing. This study shows the important implications of online satirical news and illuminates how different platforms can affect audiences.


2020 ◽  
pp. 146144482096989
Author(s):  
Sacha Altay ◽  
Anne-Sophie Hacquin ◽  
Hugo Mercier

In spite of the attractiveness of fake news stories, most people are reluctant to share them. Why? Four pre-registered experiments (N = 3,656) suggest that sharing fake news hurts one's reputation in a way that is difficult to fix, even when the fake news is politically congruent. The decrease in trust a source (media outlet or individual) suffers when sharing one fake news story against a background of real news is larger than the increase in trust a source enjoys when sharing one real news story against a background of fake news. A comparison with real-world media outlets showed that only sources sharing no fake news at all had trust ratings similar to those of mainstream media. Finally, we found that the majority of people declare they would have to be paid to share fake news, even when the news is politically congruent, and more so when their reputation is at stake.


2020 ◽  
Vol 45 (s1) ◽  
pp. 694-717
Author(s):  
Nicoleta Corbu ◽  
Alina Bârgăoanu ◽  
Raluca Buturoiu ◽  
Oana Ștefăniță

This study examines the potential of fake news to produce effects on social media engagement, as well as the moderating role of education and government approval. We report on a 2x2x2 online experiment conducted in Romania (N = 813), in which we manipulated the level of facticity of a news story, its valence, and its intention to deceive. Results show that ideologically driven news with a negative valence (rather than fabricated news or other genres, such as satire and parody) has greater virality potential. However, neither the level of education nor government approval moderates this effect. Additionally, both positive and negative ideologically driven news stories increase the probability that people will sign a document to support the government (i.e., the potential for political engagement on social media). These latter effects are moderated by government approval: lower levels of government approval lead to less support for the government on social media as a consequence of fake news exposure.


2019 ◽  
Author(s):  
Sacha Altay ◽  
Anne-Sophie Hacquin ◽  
Hugo Mercier

In spite of the attractiveness of fake news stories, most people are reluctant to share them. Why? Four pre-registered experiments (N = 3656) suggest that sharing fake news hurts one's reputation in a way that is difficult to fix, even when the fake news is politically congruent. The decrease in trust a source (media outlet or individual) suffers when sharing one fake news story against a background of real news is larger than the increase in trust a source enjoys when sharing one real news story against a background of fake news. A comparison with real-world media outlets showed that only sources sharing no fake news at all had trust ratings similar to those of mainstream media. Finally, we found that the majority of people declare they would have to be paid to share fake news, even when the news is politically congruent, and more so when their reputation is at stake.


2019 ◽  
Author(s):  
Cameron Martel ◽  
Gordon Pennycook ◽  
David Gertler Rand

What is the role of emotion in susceptibility to believing fake news? Prior work on the psychology of misinformation has focused primarily on the extent to which reason and deliberation hinder versus help the formation of accurate beliefs. Several studies have suggested that people who engage in more reasoning are less likely to fall for fake news. However, the role of reliance on emotion in belief in fake news remains unclear. To shed light on this issue, we explored the relationship between experiencing specific emotions and believing fake news (Study 1; N = 409). We found that across a wide range of specific emotions, heightened emotionality was predictive of increased belief in fake (but not real) news. Then, in Study 2, we measured and manipulated reliance on emotion versus reason across four experiments (total N = 3884). We found both correlational and causal evidence that reliance on emotion increases belief in fake news: Self-reported use of emotion was positively associated with belief in fake (but not real) news, and inducing reliance on emotion resulted in greater belief in fake (but not real) news stories compared to a control or to inducing reliance on reason. These results shed light on the unique role that emotional processing may play in susceptibility to fake news.


Information ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 20
Author(s):  
Fantahun Gereme ◽  
William Zhu ◽  
Tewodros Ayall ◽  
Dagmawi Alemu

The need to fight the progressively negative impact of fake news is escalating, as is evident in the efforts to conduct research and develop tools for this task. However, the lack of adequate datasets and good word embeddings has made it challenging to build sufficiently accurate detection methods. These resources are entirely missing for “low-resource” African languages such as Amharic, and alleviating these critical problems should not be postponed. Deep learning methods and word embeddings have contributed greatly to devising automatic fake news detection mechanisms. This work presents several contributions, including an Amharic fake news detection model, a general-purpose Amharic corpus (GPAC), a novel Amharic fake news detection dataset (ETH_FAKE), and an Amharic fastText word embedding (AMFTWE). Our Amharic fake news detection model, evaluated with the ETH_FAKE dataset and using the AMFTWE, performed very well.
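
As a rough sketch of such a pipeline, the example below trains a fastText embedding with gensim on a tiny placeholder corpus and averages word vectors as document features for a simple classifier. It is not the authors' GPAC/ETH_FAKE/AMFTWE setup; the corpus, labels, classifier, and hyperparameters are all assumptions for illustration.

```python
# Illustrative sketch: train a fastText embedding on a (placeholder) Amharic
# corpus and average word vectors as document features for a fake-news
# classifier. Not the authors' pipeline; data and settings are placeholders.
import numpy as np
from gensim.models import FastText
from sklearn.linear_model import LogisticRegression

# Tiny placeholder tokenized corpus; in practice this would be the GPAC text.
corpus = [
    ["ዜና", "እውነት", "ነው"],
    ["ዜና", "ሐሰት", "ነው"],
    ["እውነት", "ዜና"],
] * 20
labels = [0, 1, 0] * 20  # placeholder: 1 = fake, 0 = real

# Subword-aware fastText embeddings help with Amharic's rich morphology.
ft = FastText(sentences=corpus, vector_size=50, window=3, min_count=1, epochs=20)

def doc_vector(tokens):
    """Average the fastText vectors of a document's tokens."""
    return np.mean([ft.wv[t] for t in tokens], axis=0)

X = np.array([doc_vector(doc) for doc in corpus])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(X[:3]))
```

The paper's detection model is a deep learning classifier rather than the logistic regression used here; the snippet only shows how a trained embedding feeds document-level features into a downstream classifier.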

