Fake News Detection on Social Media Using A Natural Language Inference Approach

Author(s):  
Amir Bidgoly ◽  
Hossein Amirkhani ◽  
Fariba Sadeghi

Abstract: Fake news detection is a challenging problem in online social media, with considerable social and political impacts. Several methods have already been proposed for the automatic detection of fake news, often based on statistical features of the content or context of news. In this paper, we propose a novel fake news detection method based on a Natural Language Inference (NLI) approach. Instead of using only statistical features of the content or context of the news, the proposed method takes a human-like approach, inferring veracity from a set of reliable news. In this method, related and similar news published by reputable news sources serve as auxiliary knowledge for inferring the veracity of a given news item. We also collect and publish the first inference-based fake news detection dataset, called FNID, in two formats: a two-class version (FNID-FakeNewsNet) and a six-class version (FNID-LIAR). We use the NLI approach to boost several classical and deep machine learning models, including Decision Tree, Naïve Bayes, Random Forest, Logistic Regression, k-Nearest Neighbors, Support Vector Machine, BiGRU, and BiLSTM, along with different word embedding methods, including Word2vec, GloVe, fastText, and BERT. The experiments show that the proposed method achieves accuracies of 85.58% and 41.31% on the FNID-FakeNewsNet and FNID-LIAR datasets, respectively, corresponding to absolute improvements of 10.44% and 13.19%.
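The core idea of the abstract, inferring a claim's veracity from its agreement with related reports in reputable outlets, can be illustrated with a toy retrieval-and-support check. This is a minimal stdlib sketch of the general idea, not the paper's FNID pipeline (which uses trained NLI models and learned embeddings); the bag-of-words similarity and the 0.3 support threshold are assumptions made here for illustration.

```python
import math
from collections import Counter

def bow_vector(text):
    """Lowercased bag-of-words term frequencies."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def infer_veracity(claim, reliable_articles, threshold=0.3):
    """Label a claim 'real' if it is sufficiently supported by
    (i.e. similar to) at least one article from a reliable source."""
    claim_vec = bow_vector(claim)
    support = max((cosine(claim_vec, bow_vector(a))
                   for a in reliable_articles), default=0.0)
    return "real" if support >= threshold else "fake"
```

In the actual method, the similarity step would be replaced by an NLI model judging entailment or contradiction between the claim and each retrieved reliable article.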


2020 ◽  
Vol 17 (167) ◽  
pp. 20200020
Author(s):  
Michele Coscia ◽  
Luca Rossi

Many people view news on social media, yet the production of news items online has come under fire because of the common spreading of misinformation. Social media platforms police their content in various ways. Primarily they rely on crowdsourced ‘flags’: users signal to the platform that a specific news item might be misleading and, if they raise enough of them, the item will be fact-checked. However, real-world data show that the most flagged news sources are also the most popular and—supposedly—reliable ones. In this paper, we show that this phenomenon can be explained by the unreasonable assumptions that current content policing strategies make about how the online social media environment is shaped. The most realistic assumption is that confirmation bias will prevent a user from flagging a news item if they share the same political bias as the news source producing it. We show, via agent-based simulations, that a model reproducing our current understanding of the social media environment will necessarily result in the most neutral and accurate sources receiving most flags.
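The mechanism described above, confirmation bias suppressing same-bias flags so that neutral sources accumulate flags from both sides, can be reproduced in a few lines of agent-based code. This is a deterministic toy sketch of the dynamic, not the paper's actual simulation model:

```python
def simulate_flags(user_biases, source_biases):
    """Each user sees one item from each source and flags it only when
    their political bias differs from the source's bias (confirmation
    bias suppresses flags against like-minded sources)."""
    flags = {source: 0 for source in source_biases}
    for user_bias in user_biases:
        for source, bias in source_biases.items():
            if user_bias != bias:
                flags[source] += 1
    return flags

# A balanced population of partisan users and three sources:
# the neutral source is flagged by everyone, the partisan ones
# only by the opposing side.
flags = simulate_flags(
    user_biases=[-1] * 5 + [1] * 5,
    source_biases={"left": -1, "neutral": 0, "right": 1},
)
```

Under these assumptions the neutral source always ends up the most flagged, which is the counterintuitive outcome the paper explains.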



Author(s):  
T. V. Divya ◽  
Barnali Gupta Banik

Fake news detection on job advertisements has attracted the attention of many researchers over the past decade. Classifiers such as Support Vector Machine (SVM), XGBoost, and Random Forest (RF) are widely used to separate fake from real job advertisement posts on social media. A Bi-Directional Long Short-Term Memory (Bi-LSTM) classifier is used to learn word representations in a lower-dimensional vector space from the significant words and terms produced by a word embedding algorithm. Fake and real job posts from online social media are classified with the Bi-LSTM model, and its performance is evaluated using Precision, Recall, F1-score, and Accuracy. The outcome shows the effectiveness and prominence of the selected features for detecting fraudulent job posts.
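The evaluation metrics named in this abstract (Precision, Recall, F1-score, Accuracy) all reduce to counts of true/false positives and negatives. A self-contained sketch of their computation; treating "fake" as the positive class is an assumption for illustration:

```python
def classification_metrics(y_true, y_pred, positive="fake"):
    """Precision, recall, F1, and accuracy from paired label lists."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    tn = len(pairs) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall,
            "f1": f1, "accuracy": (tp + tn) / len(pairs)}
```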



Author(s):  
Dipti Chaudhari ◽  
Krina Rana ◽  
Radhika Tannu ◽  
Snehal Yadav

Most smartphone users prefer to read the news via social media over the internet. News websites publish the news and provide a source of authentication. The question is how to authenticate the news and articles circulated on social media such as WhatsApp groups, Facebook pages, Twitter, and other microblogs and social networking sites. Social media can be considered to have replaced traditional media as one of the main platforms for spreading news. News on social media tends to travel faster and more easily than through traditional news sources, owing to the accessibility and convenience of the internet. It is harmful for society when rumors masquerading as news are believed. The need of the hour, especially in developing countries like India, is to stop such rumors and focus on correct, authenticated news articles. This paper demonstrates a model and methodology for fake news detection. With the help of machine learning, we aggregate news items and then determine whether each is real or fake using a Support Vector Machine. We also present a mechanism to identify significant tweet attributes, and an application architecture to systematically automate the classification of online news.
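The "significant tweet attributes" mentioned above would typically be turned into a numeric vector before being fed to the SVM. A minimal sketch of such feature extraction; the attribute names and choices below are illustrative assumptions, not the paper's actual feature set:

```python
def tweet_features(tweet):
    """Numeric feature vector from a tweet dict (hypothetical features)."""
    text = tweet["text"]
    return [
        len(text.split()),                  # word count
        int("http" in text),                # contains a link
        text.count("!") + text.count("?"),  # emphatic punctuation
        tweet.get("retweets", 0),           # propagation signal
        int(tweet.get("verified", False)),  # source credibility signal
    ]
```

Each tweet becomes a fixed-length row, so a standard SVM implementation can be trained directly on a matrix of such rows.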



2019 ◽  
Vol 8 (2) ◽  
pp. 1139-1143

As social media booms, it is becoming much easier for customers to share their views and comments and express their feelings about any product present on online social media. If these data can be analyzed efficiently, suggestions can be provided to a company on how to improve its product sales. It also becomes easier for the company to understand customers' reactions to product advertisements posted on social media. This research focuses on analyzing customer sentiment based on the comments and reviews of products available on Facebook. Sentiment analysis is performed to classify customer comments as positive, negative, or neutral, after which they are labeled as 0 or 1. After the labeling process, a comparative analysis is performed using different classification algorithms: k-Nearest Neighbors (KNN), Support Vector Machine (SVM), and the Naïve Bayes classifier. The algorithm with the highest accuracy is identified to predict the sales of online products.
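The labeling step the abstract describes (positive/negative/neutral, then 0/1) is often done with a sentiment lexicon before training the classifiers. A minimal sketch under that assumption; the tiny word lists and the mapping positive→1, negative→0 are illustrative, not the study's actual resources:

```python
POSITIVE = {"good", "great", "love", "excellent", "happy"}
NEGATIVE = {"bad", "poor", "hate", "terrible", "disappointed"}

def label_comment(comment):
    """Return (sentiment, numeric label): positive -> 1, negative -> 0,
    neutral -> None, by counting lexicon hits in the comment."""
    words = set(comment.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive", 1
    if neg > pos:
        return "negative", 0
    return "neutral", None
```

The resulting 0/1 labels form the training targets for the KNN, SVM, and Naïve Bayes comparison.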



The spreading of fake news on online social media is a major nuisance to the public, and there is no state-of-the-art tool to detect in an automated manner whether a news item is fake or genuine. Hence, this paper analyses online social media and news feeds for the detection of fake news. The work proposes a solution using Natural Language Processing and Deep Learning techniques for detecting fake news in online social media.



2020 ◽  
Vol 12 (5) ◽  
pp. 87 ◽  
Author(s):  
Hugo Queiroz Abonizio ◽  
Janaina Ignacio de Morais ◽  
Gabriel Marques Tavares ◽  
Sylvio Barbon Junior

Online Social Media (OSM) have been substantially transforming the process of spreading news, improving its speed, and reducing barriers toward reaching out to a broad audience. However, OSM are very limited in providing mechanisms to check the credibility of news propagated through their structure. The majority of studies on automatic fake news detection are restricted to English documents, with few works evaluating other languages, and none comparing language-independent characteristics. Moreover, the spreading of deceptive news tends to be a worldwide problem; therefore, this work evaluates textual features that are not tied to a specific language when describing textual data for detecting news. Corpora of news written in American English, Brazilian Portuguese, and Spanish were explored to study complexity, stylometric, and psychological text features. The extracted features support the detection of fake, legitimate, and satirical news. We compared four machine learning algorithms (k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), and Extreme Gradient Boosting (XGB)) to induce the detection model. Results show our proposed language-independent features are successful in describing fake, satirical, and legitimate news across three different languages, with an average detection accuracy of 85.3% with RF.
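The language-independent features this abstract names (complexity, stylometric, and psychological) are computed from surface properties of the text rather than from any language-specific vocabulary. A stdlib sketch of typical complexity/stylometric measures; the exact feature set here is illustrative, not the paper's:

```python
import re

def stylometric_features(text):
    """Language-independent text statistics of the kind used for
    cross-lingual fake news detection (illustrative selection)."""
    words = re.findall(r"\w+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "avg_word_length": sum(map(len, words)) / len(words),
        "avg_sentence_length": len(words) / len(sentences),
        "type_token_ratio": len(set(words)) / len(words),
    }
```

Because none of these statistics depend on a specific lexicon, the same extractor can be applied unchanged to English, Portuguese, or Spanish corpora before training the k-NN, SVM, RF, or XGB models.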



2019 ◽  
Vol 8 (1) ◽  
pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.



2021 ◽  
Vol 10 (5) ◽  
pp. 170
Author(s):  
Reinald Besalú ◽  
Carles Pont-Sorribes

In the context of the dissemination of fake news and the traditional media outlets’ loss of centrality, the credibility of digital news emerges as a key factor for today’s democracies. The main goal of this paper was to identify the levels of credibility that Spanish citizens assign to political news in the online environment. A national survey (n = 1669) was designed to assess how the news format affected credibility and likelihood of sharing. Four different news formats were assessed, two of them linked to traditional media (digital newspapers and digital television) and two to social media (Facebook and WhatsApp). Four experimental groups assigned a credibility score and a likelihood of sharing score to four different political news items presented in the aforementioned digital formats. The comparison between the mean credibility scores assigned to the same news item presented in different formats showed significant differences among groups, as did the likelihood of sharing the news. News items shown in a traditional media format, especially digital television, were assigned more credibility than news presented in a social media format, and participants were also more likely to share the former, revealing a more cautious attitude towards social media as a source of news.



Author(s):  
Muskan Patidar

Abstract: Social networking platforms have given us more opportunities than ever before, and their benefits are undeniable. Despite these benefits, people may be humiliated, insulted, bullied, and harassed by anonymous users, strangers, or peers. Cyberbullying refers to the use of technology to humiliate and slander other people; it takes the form of hate messages sent through social media and email. With the exponential increase in social media users, cyberbullying has emerged as a form of bullying through electronic messages. We propose a possible solution to this problem: our project aims to detect cyberbullying in tweets using ML classification algorithms such as Naïve Bayes, KNN, Decision Tree, Random Forest, and Support Vector Machine, and we also apply NLTK (Natural Language Toolkit) unigram, bigram, trigram, and n-gram features with Naïve Bayes to check its accuracy. Finally, we compare the results of the proposed and baseline features across the machine learning algorithms. The findings of the comparison indicate the significance of the proposed features in cyberbullying detection. Keywords: Cyberbullying, Machine Learning Algorithms, Twitter, Natural Language Toolkit
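The unigram/bigram/trigram features mentioned above are simply counts of contiguous token sequences, which NLTK's `nltk.util.ngrams` also produces. A dependency-free sketch of that feature extraction, with naive whitespace tokenization as a simplifying assumption:

```python
from collections import Counter

def ngrams(tokens, n):
    """Contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_features(text, max_n=3):
    """Counter over all 1..max_n grams: the kind of unigram/bigram/
    trigram feature set fed to a Naive Bayes classifier."""
    tokens = text.lower().split()
    feats = Counter()
    for n in range(1, max_n + 1):
        feats.update(ngrams(tokens, n))
    return feats
```

A Naive Bayes model then estimates per-class probabilities of these n-gram features from labeled bullying/non-bullying tweets.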



In today’s world, social media is one of the most important tools for communication, helping people interact with each other and share their thoughts, knowledge, or any other information. Some of the most popular social media websites are Facebook, Twitter, WhatsApp, and WeChat. Since it has a large impact on people’s daily lives, it can also be used as a source of fake news or misinformation. It is therefore important that any information presented on social media be evaluated for its genuineness and originality, in terms of the probability of correctness and the reliability of the information exchanged. In this work, we identify features that can help predict whether a given tweet is a rumor or information. Two machine learning algorithms, Decision Tree and Support Vector Machine, are executed using the WEKA tool for classification.
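The kind of split a Decision Tree learns over such tweet features can be illustrated with a hand-written stump. The feature names and thresholds below are purely hypothetical, not those identified in the work; they only show the shape of the learned rules:

```python
def classify_tweet(features):
    """Toy decision stump over tweet features (hypothetical rules):
    verified sources are trusted; unverified, highly emphatic or
    fast-spreading tweets are treated as rumors."""
    if features["source_verified"]:
        return "information"
    if features["exclamation_marks"] > 2 or features["retweet_rate"] > 50:
        return "rumor"
    return "information"
```

In practice WEKA induces such rules automatically from labeled training data rather than having them written by hand.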


