fake news
Recently Published Documents


TOTAL DOCUMENTS: 5056 (five years: 4191)

H-INDEX: 44 (five years: 23)

2023, Vol 55 (1), pp. 1-35
Author(s): Giannis Bekoulis, Christina Papagiannopoulou, Nikos Deligiannis

We study the fact-checking problem, which aims to identify the veracity of a given claim. Specifically, we focus on the Fact Extraction and VERification (FEVER) task and its accompanying dataset. The task consists of the subtasks of retrieving the relevant documents (and sentences) from Wikipedia and validating whether the information in those documents supports or refutes a given claim. This task is essential and can serve as a building block for applications such as fake news detection and medical claim verification. In this article, we aim at a better understanding of the challenges of the task by presenting the literature in a structured and comprehensive way. We describe the proposed methods by analyzing the technical perspectives of the different approaches and discussing the performance results on the FEVER dataset, which is the most well-studied and formally structured dataset for the fact extraction and verification task. We also conduct the largest experimental study to date on identifying beneficial loss functions for the sentence retrieval component. Our analysis indicates that sampling negative sentences is important for improving performance and decreasing computational complexity. Finally, we describe open issues and future challenges, and we motivate future research on the task.
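
The point about negative sampling can be illustrated with a small, hypothetical sketch (not the authors' code): each gold evidence sentence is trained to outscore a handful of sampled non-evidence sentences under a pairwise margin ranking loss, so the model never has to score a claim against every retrieved sentence. The score function, sample size, and margin below are assumptions for illustration.

```python
import random

import torch
import torch.nn as nn

def sample_negatives(candidates, gold, k=5):
    """Sample k retrieved sentences that are not gold evidence."""
    pool = [s for s in candidates if s not in gold]
    return random.sample(pool, min(k, len(pool)))

def sentence_retrieval_loss(score_fn, claim, gold, candidates, margin=1.0):
    """Pairwise margin loss: every gold sentence should outscore every sampled negative."""
    criterion = nn.MarginRankingLoss(margin=margin)
    negatives = sample_negatives(candidates, gold)
    losses = []
    for pos in gold:
        pos_score = score_fn(claim, pos)                  # scalar tensor from the retriever
        for neg in negatives:
            neg_score = score_fn(claim, neg)              # scalar tensor from the retriever
            target = torch.ones(1)                        # "pos should rank above neg"
            losses.append(criterion(pos_score.view(1), neg_score.view(1), target))
    return torch.stack(losses).mean()
```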


2022, Vol 127, pp. 107032
Author(s): Alexandra Maftei, Andrei-Corneliu Holman, Ioan-Alex Merlici

Author(s): Arkadipta De, Dibyanayan Bandyopadhyay, Baban Gain, Asif Ekbal

Fake news classification is one of the most interesting problems and has attracted considerable attention from researchers in artificial intelligence, natural language processing, and machine learning (ML). Most current work on fake news detection targets the English language, which limits its usability, especially outside the English-literate population. Although multilingual web content has grown, fake news classification in low-resource languages remains a challenge due to the non-availability of annotated corpora and tools. This article proposes an effective neural model based on multilingual Bidirectional Encoder Representations from Transformers (BERT) for domain-agnostic multilingual fake news classification. A wide variety of experiments, including language-specific and domain-specific settings, are conducted. The proposed model achieves high accuracy in domain-specific and domain-agnostic experiments and outperforms the current state-of-the-art models. We perform experiments in zero-shot settings to assess the effectiveness of language-agnostic feature transfer across different languages, showing encouraging results. Cross-domain transfer experiments are also performed to assess language-independent feature transfer of the model. We also offer a multilingual multidomain fake news detection dataset covering five languages and seven different domains that could be useful for research and development in resource-scarce scenarios.
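
As a rough illustration of the setup described above, the sketch below loads multilingual BERT through HuggingFace Transformers as a binary sequence classifier. The checkpoint name, label mapping, and maximum sequence length are assumptions, not details taken from the paper, and the classification head would of course have to be fine-tuned before the predictions mean anything.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed checkpoint and label convention (0 = real, 1 = fake); the head is
# randomly initialized here and must be fine-tuned on labelled news first.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

def classify(texts):
    """Tokenize news texts in any supported language and return predicted labels."""
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=256, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    return logits.argmax(dim=-1).tolist()

# Because the encoder shares one multilingual vocabulary, a head fine-tuned on
# one language can be applied to another in a zero-shot fashion, which is the
# kind of transfer the abstract evaluates.
print(classify(["Breaking: scientists confirm the moon is hollow.",
                "दिल्ली में आज भारी बारिश की चेतावनी जारी की गई।"]))  # Hindi example input
```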


Author(s): Sakshi Dhall, Ashutosh Dhar Dwivedi, Saibal K. Pal, Gautam Srivastava

With social media becoming the most frequently used mode of modern-day communication, the propagation of fake or vicious news through such channels has emerged as a serious problem. The scope of the problem ranges from rumour-mongering with intent to defame someone, to manufacturing false opinions/trends that impact elections and stock exchanges, to the far more alarming and mala fide repercussions of inciting violence by bad actors, especially in sensitive law-and-order situations. Therefore, curbing fake or vicious news and identifying the source of such news to ensure strict accountability is the need of the hour. Researchers have been working on text analysis, labelling, artificial intelligence, and machine learning techniques for detecting fake news, but identifying the source or originator of such news for accountability is still a big challenge for which no concrete approach exists today. There is also another common problematic trend on social media whereby targeted vicious content goes viral to mobilize or instigate people with malicious intent to destabilize normalcy in society. In the proposed solution, we treat the problems of fake news and vicious news together. We propose a blockchain and keyed watermarking-based framework for social media/messaging platforms that preserves the integrity of posted content and ensures accountability of the owner/user of the post. Intrinsic properties of blockchain, such as transparency and immutability, are advantageous for curbing fake or vicious news. Once fake or vicious news is identified, its spread is immediately curbed through backtracking as well as forward tracking. Also, by observing transactions on the blockchain, the density and rate of forwarding of a particular original message can be checked against a threshold, and exceeding that threshold can be flagged as a possible malicious attempt to spread objectionable content. If the content is deemed dangerous or inappropriate, its spread is curbed immediately. The use of the Raft consensus algorithm and bloXroute servers is proposed to enhance throughput and network scalability, respectively. Thus, the framework offers a proactive as well as reactive, practically feasible, and effective solution for curtailing fake or vicious news on social media/messaging platforms. The proposed work is a framework for addressing the spread of fake or vicious news on social media; the complete design specifications are beyond the scope of the current work and will be addressed in the future.
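
The threshold check mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the paper's design: each forward of a watermarked post is recorded as a ledger entry, and a message is flagged when its forwarding rate within a sliding window exceeds a threshold. The window length, threshold value, and in-memory ledger are stand-ins for what a blockchain deployment would actually provide.

```python
import time
from collections import defaultdict, deque

FORWARD_RATE_THRESHOLD = 100   # forwards per window (assumed value)
WINDOW_SECONDS = 600           # 10-minute sliding window (assumed value)

# message_id -> timestamps of forward transactions; in the framework these
# records would be read from the blockchain rather than kept in memory.
forwards = defaultdict(deque)

def record_forward(message_id, timestamp=None):
    """Record a forward transaction for the (watermarked) original message."""
    forwards[message_id].append(timestamp or time.time())

def exceeds_threshold(message_id, now=None):
    """Return True if the message's forwarding rate within the window is suspicious."""
    now = now or time.time()
    window = forwards[message_id]
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()                       # drop forwards outside the sliding window
    return len(window) > FORWARD_RATE_THRESHOLD
```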


Author(s): Ramsha Saeed, Hammad Afzal, Haider Abbas, Maheen Fatima

Increased connectivity has contributed greatly to facilitating rapid access to information and reliable communication. However, uncontrolled information dissemination has also resulted in the spread of fake news. Fake news might be spread by a group of people or organizations to serve ulterior motives such as political or financial gain, or to damage a country's public image. Given the importance of timely detection of fake news, the research area has intrigued researchers from all over the world. Most work on detecting fake news focuses on the English language. However, automated detection of fake news is important irrespective of the language used for spreading false information. Recognizing the importance of boosting research on fake news detection for low-resource languages, this work proposes a novel semantically enriched technique to effectively detect fake news in Urdu, a low-resource language. A model based on deep contextual semantics learned from a convolutional neural network is proposed. The features learned from the convolutional neural network are combined with other n-gram-based features and fed to a conventional majority-voting ensemble classifier fitted with three base learners: Adaptive Boosting, Gradient Boosting, and Multi-Layer Perceptron. Experiments are performed with different models, and the results show that enriching the traditional ensemble learner with deep contextual semantics along with other standard features yields the best results and outperforms the state-of-the-art Urdu fake news detection model.
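
A rough sketch of the ensemble stage described above, assuming scikit-learn: CNN-derived features (represented here by a placeholder extract_cnn_features function, which is not from the paper) are concatenated with TF-IDF n-gram features and passed to a hard-voting ensemble of AdaBoost, Gradient Boosting, and MLP base learners. All hyperparameters are illustrative.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

def build_features(texts, extract_cnn_features, vectorizer):
    """Concatenate deep contextual CNN features with TF-IDF n-gram features."""
    cnn_feats = np.asarray(extract_cnn_features(texts))       # shape (n, d_cnn), placeholder
    ngram_feats = vectorizer.transform(texts).toarray()       # shape (n, d_ngram)
    return np.hstack([cnn_feats, ngram_feats])

vectorizer = TfidfVectorizer(ngram_range=(1, 3), max_features=5000)

ensemble = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier()),
        ("gb", GradientBoostingClassifier()),
        ("mlp", MLPClassifier(hidden_layer_sizes=(128,), max_iter=300)),
    ],
    voting="hard",  # conventional majority voting, as in the abstract
)

# Usage sketch (training data not included here):
# vectorizer.fit(train_texts)
# ensemble.fit(build_features(train_texts, extract_cnn_features, vectorizer), train_labels)
```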


Author(s): Mohammadreza Samadi, Maryam Mousavian, Saeedeh Momtazi

Nowadays, broadcasting news on social media and websites has grown at an ever faster pace, which has had negative impacts on both the general public and governments; this has urged us to build a fake news detection system. Contextualized word embeddings have achieved great success in recent years due to their power to embed both syntactic and semantic features of textual content. In this article, we aim to address the lack of fake news datasets in Persian by introducing a new dataset crawled from different news agencies, and we propose two deep models based on the Bidirectional Encoder Representations from Transformers (BERT) model, a deep contextualized pre-trained model for extracting valuable features. In our proposed models, we benefit from two different settings of BERT, namely pool-based representation, which provides a representation for the whole document, and sequence representation, which provides a representation for each token of the document. In the former, we connect a Single Layer Perceptron (SLP) to BERT to use the embedding directly for detecting fake news. The latter uses a Convolutional Neural Network (CNN) after BERT's embedding layer to extract extra features based on the collocation of words in a corpus. Furthermore, we present the TAJ dataset, a new Persian fake news dataset crawled from news agencies' websites. We evaluate our proposed models on the newly provided TAJ dataset as well as on two different Persian rumor datasets used as baselines. The results indicate the effectiveness of deep contextualized embedding approaches for the fake news detection task. We also show that both the BERT-SLP and BERT-CNN models achieve superior performance to the previous baselines and traditional machine learning models, with 15.58% and 17.1% improvements compared to the results reported by Zamani et al. [30], and 11.29% and 11.18% improvements compared to the results reported by Jahanbakhsh-Nagadeh et al. [9].
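
The two BERT settings described above can be illustrated with a minimal PyTorch/Transformers sketch. The checkpoint name, filter counts, and kernel size are assumptions for illustration, not the paper's configuration: BERT-SLP classifies from the pooled document representation, while BERT-CNN applies a 1-D convolution over the per-token sequence output.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class BertSLP(nn.Module):
    """Pool-based setting: pooled [CLS] representation followed by a single linear layer."""
    def __init__(self, model_name="bert-base-multilingual-cased", num_labels=2):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.classifier(pooled)

class BertCNN(nn.Module):
    """Sequence setting: convolution over per-token BERT outputs, then max-pooling."""
    def __init__(self, model_name="bert-base-multilingual-cased",
                 num_labels=2, num_filters=128, kernel_size=3):
        super().__init__()
        self.bert = AutoModel.from_pretrained(model_name)
        self.conv = nn.Conv1d(self.bert.config.hidden_size, num_filters, kernel_size)
        self.classifier = nn.Linear(num_filters, num_labels)

    def forward(self, input_ids, attention_mask):
        seq = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        feats = torch.relu(self.conv(seq.transpose(1, 2)))  # convolve over token positions
        pooled = feats.max(dim=-1).values                   # max-pool over positions
        return self.classifier(pooled)
```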


Author(s): Rachna Jain, Deepak Kumar Jain, Dharana, Nitika Sharma

Social media can circulate content that reaches millions and has a knack for influencing people, despite the questionable authenticity of the underlying facts. Internet sources are the most convenient and easy way to obtain information these days. Fake news has become a topic of interest for academics and the rest of society. This kind of propaganda has the power to influence general perception, offering political groups the ability to control the results of democratic affairs such as elections. Automatic identification of fake news has emerged as a significant problem due to the high risks involved. It is challenging because of the complexity of accurately interpreting the data. Extensive research has already been performed on English-language news data. Our work presents a comparative analysis of fake news classifiers on the low-resource Bengali-language 'ban fake news' dataset from Kaggle. The analysis compares deep learning techniques such as LSTM (Long Short-Term Memory) and BiLSTM (Bidirectional Long Short-Term Memory) with machine learning methods such as Naive Bayes, the Passive Aggressive Classifier (PAC), and Random Forest. The comparison is drawn on classification metrics such as accuracy, precision, recall, and F1 score. The deep learning method BiLSTM shows 55.92% accuracy, while Random Forest outperforms all the other methods with an accuracy of 62.37%. The work presented in this paper sets a basis for researchers to select the optimum classifiers for their approach to fake news detection.
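
A hedged sketch of this kind of comparison, assuming Keras and scikit-learn: a TF-IDF Random Forest baseline evaluated with the standard classification metrics alongside a small BiLSTM. Dataset loading, tokenization of texts into padded integer sequences, and all hyperparameters are assumptions for illustration; the 'ban fake news' Kaggle data is not bundled here.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import classification_report
from tensorflow.keras import layers, models

def evaluate_random_forest(train_texts, train_y, test_texts, test_y):
    """TF-IDF + Random Forest baseline, reporting accuracy, precision, recall, F1."""
    vec = TfidfVectorizer(max_features=20000)
    clf = RandomForestClassifier(n_estimators=200)
    clf.fit(vec.fit_transform(train_texts), train_y)
    print(classification_report(test_y, clf.predict(vec.transform(test_texts))))

def build_bilstm(vocab_size=30000, embed_dim=128):
    """Small BiLSTM binary classifier; inputs are padded integer token sequences."""
    model = models.Sequential([
        layers.Embedding(vocab_size, embed_dim),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```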


2022, Vol ahead-of-print (ahead-of-print)
Author(s): Brinda Sampat, Sahil Raj

Purpose: "Fake news" or misinformation sharing via social media sites into public discourse or politics has increased dramatically over the last few years, especially during the current COVID-19 pandemic, causing concern. However, this phenomenon is inadequately researched. This study examines fake news sharing through the lens of stimulus-organism-response (SOR) theory, uses and gratifications theory (UGT), and big five personality traits (BFPT) theory to understand the motivations for sharing fake news and the personality traits of those who do so. The stimuli in the model comprise gratifications (pass time, entertainment, socialization, information sharing, and information seeking) and personality traits (agreeableness, conscientiousness, extraversion, openness, and neuroticism). The tendency to authenticate or instantly share news is the organism, leading to sharing fake news, which forms the response in the study.

Design/methodology/approach: The conceptual model was tested with data collected from a sample of 221 social media users in India. The data were analyzed with partial least squares structural equation modeling to determine the effects of UGT and personality traits on fake news sharing. The moderating role of the platform (WhatsApp or Facebook) was studied.

Findings: The results suggest that pass time, information sharing, and socialization gratifications lead to instantly sharing news on social media platforms. Individuals who exhibit extraversion, neuroticism, and openness share news on social media platforms instantly. In contrast, the agreeableness and conscientiousness personality traits lead to authenticating news before sharing it on a social media platform.

Originality/value: This study contributes to the social media literature by identifying the user gratifications and personality traits that lead to sharing fake news on social media platforms. Furthermore, the study sheds light on the moderating influence of the choice of social media platform on fake news sharing.


2022, Vol ahead-of-print (ahead-of-print)
Author(s): Krishnadas Nanath, Supriya Kaitheri, Sonia Malik, Shahid Mustafa

Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling, and linguistic features of news articles to predict the probability of fake news.

Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data efficiently. Lexicon-based emotion analysis identified eight kinds of emotion in the article text. Clusters of topics were extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.

Findings: The results revealed that positive emotions in a text lower the probability of the news being fake. It was also found that sensational content, such as illegal activities and crime-related content, was associated with fake news. Articles whose title and text exhibit similar sentiments were found to have a lower chance of being fake. News titles with more words and content with fewer words were found to significantly impact fake news detection.

Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.

Originality/value: While several studies have explored fake news detection, this study takes a new perspective based on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
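
As a rough, hypothetical illustration of how the feature groups above could be combined (not the authors' pipeline), the sketch below computes a sentiment "resonance" gap between title and body with NLTK's VADER (standing in for whichever lexicon the study used), a five-topic LDA distribution, and simple length-based linguistic features, then feeds the concatenation to a logistic regression. The emotion lexicon and the exact linguistic features of the study are not reproduced here.

```python
import numpy as np
from nltk.sentiment import SentimentIntensityAnalyzer   # requires nltk.download("vader_lexicon")
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

sia = SentimentIntensityAnalyzer()
vectorizer = CountVectorizer(max_features=5000, stop_words="english")
lda = LatentDirichletAllocation(n_components=5, random_state=0)   # five topics, as in the abstract

def build_features(titles, bodies, fit=False):
    """Stack topic weights, title/body sentiment resonance, and length features."""
    counts = vectorizer.fit_transform(bodies) if fit else vectorizer.transform(bodies)
    topics = lda.fit_transform(counts) if fit else lda.transform(counts)      # (n, 5)
    resonance = np.array([[abs(sia.polarity_scores(t)["compound"]
                               - sia.polarity_scores(b)["compound"])]
                          for t, b in zip(titles, bodies)])                   # title/body sentiment gap
    lengths = np.array([[len(t.split()), len(b.split())]
                        for t, b in zip(titles, bodies)])                     # simple linguistic features
    return np.hstack([topics, resonance, lengths])

model = LogisticRegression(max_iter=1000)
# Usage sketch (labelled articles not included here):
# model.fit(build_features(train_titles, train_bodies, fit=True), train_labels)
```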

