FNDN-LSTM

2021 ◽  
pp. 218-232
Author(s):  
Steni Mol T. S. ◽  
P. S. Sreeja

Social media platforms have become one of the most accessible sources of news, yet the posts they carry are not always truthful and are widely disseminated with little regard for accuracy. Understanding how false news patterns originate and evolve is necessary to promote quality news and combat fake news on social media. This chapter examines the most frequently used social media platform (Facebook) and the types of information exchanged on it, and proposes a novel framework based on the “Fake News Detection Network – Long Short-Term Memory” (FNDN-LSTM) model to discriminate between fake news and real news. A social media news dataset is first preprocessed using the TF BERT technique; the preprocessed data is then passed through a feature selection model, which retains the features most significant for classification, and the selected features are finally fed to the FNDN-LSTM classification model to identify fake news.
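For orientation, here is a minimal Keras sketch of the final LSTM classification stage of such a pipeline. The BERT-based preprocessing and the feature-selection stage described in the chapter are approximated by a plain tokenizer, and all data, layer sizes, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Minimal sketch of an LSTM fake-news classifier (illustrative only).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

texts = ["breaking: miracle cure found", "city council approves budget"]
labels = np.array([1, 0])  # 1 = fake, 0 = real (toy data)

# Plain tokenizer standing in for the chapter's BERT-based preprocessing.
tok = tf.keras.preprocessing.text.Tokenizer(num_words=5000)
tok.fit_on_texts(texts)
x = tf.keras.preprocessing.sequence.pad_sequences(
    tok.texts_to_sequences(texts), maxlen=50)

model = tf.keras.Sequential([
    layers.Embedding(input_dim=5000, output_dim=64),  # learned word vectors
    layers.LSTM(64),                                  # sequence encoder
    layers.Dense(1, activation="sigmoid"),            # fake vs. real
])
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2, verbose=0)
```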

Author(s):  
T. V. Divya ◽  
Barnali Gupta Banik

Fake news detection in job advertisements has attracted many researchers over the past decade. Classifiers such as Support Vector Machine (SVM), XGBoost, and Random Forest (RF) are widely used to distinguish fake from real job-advertisement posts on social media. Here, a Bidirectional Long Short-Term Memory (Bi-LSTM) classifier is employed to learn word representations in a lower-dimensional vector space, with significant words and terms revealed through a word-embedding algorithm. The Bi-LSTM classifier detects fake and real news in job posts from online social media, and its performance is evaluated accordingly. Precision, recall, F1-score, and accuracy are assessed to measure the effectiveness of fraud detection on job posts. The outcome indicates the effectiveness and prominence of the features for detecting false news.
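A hedged sketch of such a Bi-LSTM job-posting classifier follows. The embedding dimension, sequence length, units, and toy data are assumptions, since the abstract does not give the paper's exact configuration.

```python
# Illustrative Bi-LSTM classifier for fraudulent job postings.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

ads = ["work from home, earn $5000 weekly, no experience",
       "senior data engineer, 5+ years experience, on-site"]
y = np.array([1, 0])  # 1 = fraudulent, 0 = genuine (toy labels)

tok = tf.keras.preprocessing.text.Tokenizer(num_words=10000)
tok.fit_on_texts(ads)
x = tf.keras.preprocessing.sequence.pad_sequences(
    tok.texts_to_sequences(ads), maxlen=100)

model = tf.keras.Sequential([
    layers.Embedding(10000, 100),           # low-dimensional word vectors
    layers.Bidirectional(layers.LSTM(64)),  # reads the ad in both directions
    layers.Dense(1, activation="sigmoid"),
])
model.compile("adam", "binary_crossentropy",
              metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])
model.fit(x, y, epochs=2, verbose=0)
```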


Author(s):  
Auliya Rahman Isnain ◽  
Agus Sihabuddin ◽  
Yohanes Suyanto

The discussion about hate speech in Indonesia is currently heated, primarily on social media. Hate speech is communication that disparages a person or group based on characteristics such as race, ethnicity, gender, citizenship, religion, or organization. Twitter is one of the social media platforms people use to express their feelings and opinions through tweets, including tweets that contain expressions of hatred, because Twitter has a significant influence on the success or destruction of a person's image. This study aims to detect whether Indonesian tweets do or do not contain hate speech using the Bidirectional Long Short-Term Memory (BiLSTM) method and word2vec feature extraction with the Continuous Bag-of-Words (CBOW) architecture. The BiLSTM is evaluated by computing accuracy, precision, recall, and F-measure. Using word2vec (CBOW) with BiLSTM, at 10 epochs, a learning rate of 0.001, and 200 neurons in the hidden layer, yields an accuracy of 94.66%, with a precision of 99.08%, a recall of 93.74%, and an F-measure of 96.29%. In contrast, a three-layer BiLSTM achieves an accuracy of 96.93%; adding one layer to the BiLSTM thus increases accuracy by 2.27%.
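The sketch below wires together the two stages the abstract names: CBOW word2vec embeddings feeding a BiLSTM with 200 hidden units, trained at learning rate 0.001 for 10 epochs. The toy tweets, vector size, and sequence length are placeholder assumptions.

```python
# word2vec (CBOW) embeddings -> BiLSTM hate-speech classifier (sketch).
import numpy as np
import tensorflow as tf
from gensim.models import Word2Vec
from tensorflow.keras import layers

tweets = [["kamu", "hebat"], ["dasar", "bodoh", "kamu"]]  # toy token lists
y = np.array([0, 1])  # 1 = hate speech

w2v = Word2Vec(tweets, vector_size=100, sg=0, min_count=1)  # sg=0 -> CBOW
maxlen = 5
x = np.zeros((len(tweets), maxlen, 100), dtype="float32")
for i, t in enumerate(tweets):
    for j, w in enumerate(t[:maxlen]):
        x[i, j] = w2v.wv[w]  # look up the pretrained CBOW vector

model = tf.keras.Sequential([
    tf.keras.Input(shape=(maxlen, 100)),
    layers.Bidirectional(layers.LSTM(200)),  # 200 neurons, as reported
    layers.Dense(1, activation="sigmoid"),
])
model.compile(tf.keras.optimizers.Adam(learning_rate=0.001),
              "binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=10, verbose=0)
```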


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge: it deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn researchers' attention to providing a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus of 1862 previously annotated tweets was used to assess the efficiency of the proposed model. The Bag of Words (BoW) model with different term-weighting schemes is used for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. The reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) scores the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model's performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an improvement of 5% over previous works on the same dataset.
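A baseline sketch of the TF-IDF + Logistic Regression stage the paper reports as the top performer is shown below. The binary Harris Hawks Optimizer used for wrapper feature selection is a population-based metaheuristic not reproduced here; a simple boolean mask marks where its selected-feature vector would be applied. Data and settings are toy assumptions.

```python
# TF-IDF features + Logistic Regression baseline (illustrative sketch).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tweets = ["عاجل خبر مزيف", "تقرير موثق من مصدر رسمي"]  # toy Arabic tweets
y = [1, 0]  # 1 = fake

vec = TfidfVectorizer()
X = vec.fit_transform(tweets).toarray()

# Stand-in for the binary-HHO-selected feature subset (all-ones = keep all).
mask = np.ones(X.shape[1], dtype=bool)
clf = LogisticRegression().fit(X[:, mask], y)
print(clf.predict(X[:, mask]))
```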


Author(s):  
Kristy A. Hesketh

This chapter explores the Spiritualist movement and its rapid growth due to the formation of mass media, and compares these events with the current rise of fake news in the mass media. Cheaper printing technology created a media platform that featured stories about Spiritualist mediums and communications with the spirit world. These articles were published in newspapers next to regular news, blurring the line between real and hoax news stories. Laws were later created to address instances of fraud in the medium industry. Today, social media platforms provide a similar vessel for the spread of fake news: online fake news is published alongside legitimate news reports, leaving readers unable to differentiate between real and fake articles. Around the world, countries are launching initiatives to address the proliferation of false news and prevent the spread of misinformation. This chapter traces the parallels between these events, how hoaxes and fake news begin and spread, and examines the measures governments are taking to curb the growth of misinformation.


2018 ◽  
Vol 10 (11) ◽  
pp. 113 ◽  
Author(s):  
Yue Li ◽  
Xutao Wang ◽  
Pengjian Xu

Text classification is important in natural language processing, as massive amounts of text containing valuable information need to be classified into different categories for further use. To better classify text, this paper builds a deep learning model that achieves better classification results on Chinese text than other researchers' models. After comparing different methods, long short-term memory (LSTM) and convolutional neural network (CNN) methods were selected as the deep learning components. LSTM is a special kind of recurrent neural network (RNN) capable of processing serialized information through its recurrent structure; CNN, by contrast, has shown its ability to extract features from visual imagery. Therefore, two layers of LSTM and one layer of CNN were integrated into the new model, BLSTM-C (BLSTM stands for bidirectional long short-term memory, C for CNN). The LSTM layers produce a sequence output based on past and future contexts, which is then fed to the convolutional layer for feature extraction. In the experiments, the proposed BLSTM-C model was evaluated in several ways and exhibited remarkable performance in text classification, especially on Chinese texts.
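The following Keras sketch mirrors the described BLSTM-C layout: two stacked bidirectional LSTM layers emit a context-aware sequence that a 1-D convolution then scans for local features. Vocabulary size, sequence length, layer widths, and the number of categories are illustrative assumptions.

```python
# BLSTM-C architecture sketch: stacked BiLSTM layers followed by a CNN.
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 10  # e.g., Chinese news categories (assumption)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(200,)),
    layers.Embedding(20000, 128),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.Conv1D(128, 5, activation="relu"),  # extract local n-gram features
    layers.GlobalMaxPooling1D(),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```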


2021 ◽  
Vol 13 (10) ◽  
pp. 244
Author(s):  
Mohammed N. Alenezi ◽  
Zainab M. Alqenaei

Social media platforms such as Facebook, Instagram, and Twitter are an inevitable part of our daily lives. They are effective tools for disseminating news, photos, and other types of information, but alongside these conveniences they are often used to propagate malicious data or information. Such misinformation may misguide users and even have a dangerous impact on society's culture, economics, and healthcare, and its enormous volume makes its propagation difficult to counter. Hence, the spread of misinformation related to the COVID-19 pandemic and its treatment and vaccination may pose severe challenges for each country's frontline workers. It is therefore essential to build an effective machine-learning (ML) misinformation-detection model for identifying misinformation regarding COVID-19. This paper proposes three effective misinformation detection models: a long short-term memory (LSTM) network, which is a special type of recurrent neural network (RNN); a multichannel convolutional neural network (MC-CNN); and k-nearest neighbors (KNN). Simulations were conducted to evaluate the performance of the proposed models in terms of various evaluation metrics, and the proposed models obtained superior results to those from the literature.
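Of the three models, the multichannel CNN is the least standard, so a sketch of that idea follows: parallel convolution branches with different kernel sizes read the same embedded text, and their pooled features are concatenated. All sizes are assumptions; the paper's exact configuration is not given in the abstract.

```python
# Multichannel CNN (MC-CNN) sketch for misinformation detection.
import tensorflow as tf
from tensorflow.keras import layers

inp = layers.Input(shape=(100,))
emb = layers.Embedding(10000, 64)(inp)
branches = []
for k in (3, 4, 5):  # one channel per kernel size
    c = layers.Conv1D(64, k, activation="relu")(emb)
    branches.append(layers.GlobalMaxPooling1D()(c))
merged = layers.Concatenate()(branches)
out = layers.Dense(1, activation="sigmoid")(merged)  # misinformation vs. not

model = tf.keras.Model(inp, out)
model.compile("adam", "binary_crossentropy", metrics=["accuracy"])
model.summary()
```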


2021 ◽  
Vol 4 (1) ◽  
pp. 121-128
Author(s):  
A Iorliam ◽  
S Agber ◽  
MP Dzungwe ◽  
DK Kwaghtyo ◽  
S Bum

Social media allows individuals to anonymously communicate and express hateful feelings and opinions from the comfort of their rooms. This anonymity has become a shield for many individuals or groups who use social media to express deep hatred for other individuals or groups, tribes or races, religions, genders, and belief systems. In this study, a comparative analysis of Long Short-Term Memory (LSTM) and Convolutional Neural Network (CNN) deep learning techniques is performed for hate speech classification. The analysis shows that the LSTM classifier achieved an accuracy of 92.47%, while the CNN classifier achieved an accuracy of 92.74%. These results show that deep learning techniques can effectively classify hate speech as distinct from normal speech.
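The comparison reduces to training two models on the same split and reading off accuracy, as in the rough sketch below. The random toy data and layer sizes are assumptions standing in for the study's dataset and settings.

```python
# Comparative LSTM vs. CNN evaluation on a shared split (toy sketch).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

x = np.random.randint(1, 1000, size=(64, 50))  # toy token ids
y = np.random.randint(0, 2, size=(64,))        # 1 = hate speech

def build(core):
    m = tf.keras.Sequential([layers.Embedding(1000, 32), core,
                             layers.Dense(1, activation="sigmoid")])
    m.compile("adam", "binary_crossentropy", metrics=["accuracy"])
    return m

cnn_core = tf.keras.Sequential([layers.Conv1D(32, 5, activation="relu"),
                                layers.GlobalMaxPooling1D()])
for name, core in [("LSTM", layers.LSTM(32)), ("CNN", cnn_core)]:
    m = build(core)
    m.fit(x, y, epochs=1, verbose=0)
    print(name, "accuracy:", m.evaluate(x, y, verbose=0)[1])
```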


PLoS ONE ◽  
2022 ◽  
Vol 17 (1) ◽  
pp. e0261559
Author(s):  
Ali Ghaddar ◽  
Sanaa Khandaqji ◽  
Zeinab Awad ◽  
Rawad Kansoun

Background: The massive, free, and unrestricted exchange of information on social media during the COVID-19 pandemic set fertile ground for fear, uncertainty, and the rise of fake news related to the virus. This “viral” spread of fake news created an “infodemic” that threatened compliance with public health guidelines and recommendations.
Objective: This study aims to describe trust in social media platforms and exposure to fake news about COVID-19 in Lebanon, and to explore their association with vaccination intent.
Methods: In this cross-sectional study conducted in Lebanon during July–August 2020, a random sample of 1052 participants selected from a mobile-phone database responded to an anonymous structured questionnaire after giving informed consent (response rate = 40%). The questionnaire was conducted by telephone and measured socio-demographics, sources of information and trust in those sources, exposure to fake news, social media activity, perceived threat, and vaccination intent.
Results: The majority of participants (82%) believed that COVID-19 is a threat, and 52% intended to vaccinate. Exposure to fake/unverified news was high (19.7% were often and 63.8% were sometimes exposed, mainly to fake news shared through WhatsApp and Facebook). Trust in certain information sources (WHO, MoPH, and TV) increased vaccination intent against COVID-19, while trust in others (WhatsApp, Facebook) reduced it. Believing in the man-made theory and the business control theory significantly reduced the likelihood of vaccination intent (Beta = 0.43, p = 0.01 and Beta = -0.29, p = 0.05, respectively).
Conclusion: In the context of the infodemic, understanding the role of exposure to fake news and of conspiracy beliefs in shaping health behavior is important for increasing vaccination intent and planning an adequate response to the COVID-19 pandemic.
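For readers unfamiliar with how such Beta coefficients arise, here is an illustrative logistic regression of vaccination intent on trust and exposure variables. The column names and data are hypothetical, not the study's dataset, and the model is only a sketch of the general approach.

```python
# Hypothetical logistic regression of vaccination intent (sketch only).
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "intent":        [1, 0, 1, 0, 1, 0, 0, 1],  # 1 = intends to vaccinate
    "trust_who":     [1, 0, 1, 0, 1, 0, 1, 0],  # trusts WHO as a source
    "fake_exposure": [0, 1, 0, 1, 1, 1, 0, 0],  # often exposed to fake news
})
X = sm.add_constant(df[["trust_who", "fake_exposure"]])
model = sm.Logit(df["intent"], X).fit(disp=0)
print(model.params)  # coefficient signs indicate direction of association
```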


Author(s):  
Alberto Ardèvol-Abreu ◽  
Patricia Delponti ◽  
Carmen Rodríguez-Wangüemert

The main social media platforms have been implementing strategies to minimize the dissemination of fake news. These include identifying, labeling, and penalizing (via news feed ranking algorithms) fake publications. Part of the rationale behind this approach is that the negative effects of fake content arise only when social media users are deceived; once debunked, fake posts and news stories should therefore become harmless. Unfortunately, the literature shows that the effects of misinformation are more complex and tend to persist, and even backfire, after correction. Furthermore, we still do not know much about how social media users evaluate content that has been fact-checked and flagged as false. More worryingly, previous findings suggest that some people may intentionally share made-up news on social media, although their motivations are not fully explained. To better understand users' interaction with social media content identified or recognized as false, we analyze qualitative and quantitative data from five focus groups and a sub-national online survey (N = 350). Findings suggest that the “false news” label plays a role, although not necessarily a central one, in social media users' evaluation of the content and their decision whether to share it. Some participants showed distrust in fact-checkers and a lack of knowledge about the fact-checking process. We also found that fake news sharing is a two-dimensional phenomenon comprising intentional and unintentional behaviors. We discuss some of the reasons why some social media users may choose to distribute fake news content intentionally.

