WELFake: Word Embedding Over Linguistic Features for Fake News Detection

Author(s):  
Pawan Kumar Verma ◽  
Prateek Agrawal ◽  
Ivone Amorim ◽  
Radu Prodan


Author(s):  
T. V. Divya ◽  
Barnali Gupta Banik

Fake news detection in job advertisements has attracted considerable research attention over the past decade. Classifiers such as Support Vector Machine (SVM), XGBoost, and Random Forest (RF) are widely used to distinguish fake from genuine job advertisement posts on social media. A Bidirectional Long Short-Term Memory (Bi-LSTM) classifier is employed to learn word representations in a lower-dimensional vector space and to capture significant words and terms revealed by a word embedding algorithm. The Bi-LSTM classifier is used to separate fake from real job posts collected from online social media, and its performance is evaluated accordingly. Precision, Recall, F1-score, and Accuracy are reported to assess how effectively fraudulent job posts are detected. The results indicate the effectiveness and prominence of the selected features for detecting false news.
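To make the kind of pipeline described here concrete, the sketch below trains a small Bi-LSTM on top of a learned word-embedding layer and reports precision, recall, F1-score, and accuracy. It is a minimal sketch only: the Keras architecture, hyperparameters, and the assumed CSV file and column names ("description", "fraudulent") are illustrative choices, not the authors' exact setup.

```python
# A minimal sketch of a Bi-LSTM fake-job-post classifier in Keras (illustrative only).
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Assumed CSV layout: a text column "description" and a 0/1 label "fraudulent".
df = pd.read_csv("job_postings.csv")
texts = df["description"].astype(str).to_numpy()
labels = df["fraudulent"].to_numpy()
X_tr, X_te, y_tr, y_te = train_test_split(
    texts, labels, test_size=0.2, stratify=labels, random_state=42)

# Map raw text to integer token ids; the Embedding layer then learns
# lower-dimensional word vectors during training.
vectorize = tf.keras.layers.TextVectorization(max_tokens=20000,
                                              output_sequence_length=200)
vectorize.adapt(X_tr)
X_tr_ids, X_te_ids = vectorize(X_tr), vectorize(X_te)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20000, output_dim=100),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # fake vs. real job post
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_tr_ids, y_tr, validation_split=0.1, epochs=3, batch_size=64)

# Precision, recall, F1-score, and accuracy, as in the evaluation described above.
y_pred = (model.predict(X_te_ids) > 0.5).astype(int)
print(classification_report(y_te, y_pred, digits=3))
```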


2022 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Krishnadas Nanath ◽  
Supriya Kaitheri ◽  
Sonia Malik ◽  
Shahid Mustafa

Purpose
The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling, and linguistic features of news articles to predict the probability of fake news.

Design/methodology/approach
A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data efficiently. Lexicon-based emotion analysis provided eight kinds of emotions used in the article text. The cluster of topics was extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.

Findings
The results revealed that positive emotions in a text lower the probability of news being fake. It was also found that sensational content, such as illegal activities and crime-related content, was associated with fake news. News whose title and text exhibit similar sentiments was found to have a lower chance of being fake. News titles with more words and content with fewer words were found to impact fake news detection significantly.

Practical implications
Several systems and social media platforms today are trying to implement fake news detection methods to filter the content. This research provides exciting parameters from a virality theory perspective that could help develop automated fake news detectors.

Originality/value
While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. This study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
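For readers who want a concrete picture of the methodology described above, the sketch below combines LDA topic proportions (five topics), a title-versus-body sentiment "resonance" score, and simple word-count features in a logistic regression. Library choices (NLTK's VADER, scikit-learn) and the assumed file and column names ("news_articles.csv", "title", "text", "label") are illustrative assumptions, not the authors' exact tooling; the lexicon-based emotion scores used in the paper are omitted here for brevity.

```python
# Illustrative sketch of a topic + sentiment-resonance + linguistic-feature
# pipeline feeding a logistic regression (not the authors' exact setup).
import numpy as np
import pandas as pd
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

nltk.download("vader_lexicon")                         # VADER sentiment lexicon
df = pd.read_csv("news_articles.csv")                  # assumed file and columns
sia = SentimentIntensityAnalyzer()

# Topic proportions: five topics, as in the study.
counts = CountVectorizer(stop_words="english", max_features=5000)
doc_term = counts.fit_transform(df["text"].astype(str))
topics = LatentDirichletAllocation(n_components=5, random_state=0).fit_transform(doc_term)

# Sentiment resonance: how closely the title's sentiment tracks the body's.
title_sent = df["title"].apply(lambda t: sia.polarity_scores(str(t))["compound"])
text_sent = df["text"].apply(lambda t: sia.polarity_scores(str(t))["compound"])
resonance = (title_sent - text_sent).abs()

# Simple linguistic features: title and body word counts.
title_len = df["title"].astype(str).str.split().str.len()
text_len = df["text"].astype(str).str.split().str.len()

X = np.column_stack([topics, resonance, title_len, text_len])
y = df["label"]                                        # 1 = fake, 0 = real (assumed)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), digits=3))
```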


2021 ◽  
pp. 6-14
Author(s):  
Olena Gryshchenko ◽  
Galyna Tsapro

Fake news is a widespread element of today's news websites. The research focuses on students' ability to detect fake news stories; 72% of respondents successfully identified fake texts. The experiment shows that students read texts carefully and check their credibility, the facts, and the pictures that accompany news texts. Students consider the linguistic features that contribute to creating fake news texts to include repetition of lexemes, illogical narrative structure, exaggeration, and confusing numbers. They also pointed out that photos in fake texts do not illustrate the information given.


2020 ◽  
Vol 11 (1) ◽  
pp. 99
Author(s):  
Mohammad Mahyoob ◽  
Jeehaan Algaraady ◽  
Musaad Alrahaili

The tremendous growth and impact of fake news have gained the public's attention and threatened their safety in recent years, making it a hot research field. A wide range of approaches has been developed to detect fake content, both human-based and machine-based; both have shown inadequacies and limitations, especially the fully automatic approaches. The purpose of this analytic study of media news language is to investigate and identify linguistic features and their contribution to analyzing data in order to detect, filter, and differentiate between fake and authentic news texts. This study outlines promising uses of linguistic indicators and adds a rather unconventional outlook to prior literature. It uses qualitative and quantitative data analysis to identify systematic nuances between fake and factual news by detecting and comparing 16 attributes, assigned manually to news texts, under three main categories of linguistic features (lexical, grammatical, and syntactic). The datasets consist of publicly available verified documents from the PolitiFact website and a raw (test) data set collected randomly from news posts on Facebook pages. The results show that linguistic features, especially grammatical features, help determine untrustworthy texts and indicate that most of the test news items tend to be unreliable articles.
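Although the 16 attributes in this study were assigned manually, indicators of the three named categories can also be approximated programmatically. The sketch below is a rough illustration only: it uses spaCy to compute a handful of lexical, grammatical, and syntactic features of the same general kind; the specific features chosen are assumptions, not the study's attribute set.

```python
# A small sketch of automatically computed lexical, grammatical, and syntactic
# indicators (illustrative features only, not the study's 16 manual attributes).
import spacy

nlp = spacy.load("en_core_web_sm")   # assumes the small English model is installed

def linguistic_profile(text: str) -> dict:
    doc = nlp(text)
    words = [t for t in doc if t.is_alpha]
    sents = list(doc.sents)
    n_words = max(len(words), 1)
    return {
        # Lexical: vocabulary richness and word length.
        "type_token_ratio": len({t.lower_ for t in words}) / n_words,
        "avg_word_length": sum(len(t.text) for t in words) / n_words,
        # Grammatical: relative frequency of selected parts of speech.
        "adjective_ratio": sum(t.pos_ == "ADJ" for t in words) / n_words,
        "pronoun_ratio": sum(t.pos_ == "PRON" for t in words) / n_words,
        "modal_ratio": sum(t.tag_ == "MD" for t in words) / n_words,
        # Syntactic: sentence length and passive constructions.
        "avg_sentence_length": n_words / max(len(sents), 1),
        "passive_count": sum(t.dep_ == "nsubjpass" for t in doc),
    }

print(linguistic_profile("The shocking truth they never wanted you to know was revealed."))
```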


2021 ◽  
Vol 174 (10) ◽  
pp. 1-5
Author(s):  
Muhammad Usama Islam ◽  
Md. Mobarak Hossain ◽  
Mohammod Abul Kashem
