Terrorism And Fake News Detection

Author(s):  
Divya Tiwari ◽  
Surbhi Thorat

Fake news dissemination is a critical issue in today’s fast-changing network environment. Online fake news has attained increasing prominence in the diffusion and shaping of news stories online. This paper addresses categorical cyber-terrorism threats on social media and a preventive approach to minimize their impact. Misleading or unreliable information in the form of videos, posts, articles, and URLs is extensively disseminated through popular social media platforms such as Facebook and Twitter. As a result, editors and journalists need new tools that can help them speed up the verification process for content originating from social media. Existing classification models for fake news detection have not completely stopped the spread because of their inability to accurately classify news, leading to a high false alarm rate. This study proposes a model that can accurately identify and classify deceptive news articles injected into social media by malicious users. The news content, social-context features, and the respective classification of reported news were extracted from the PHEME dataset using entropy-based feature selection. The selected features were normalized using Min-Max normalization. The model was simulated and its performance was evaluated by benchmarking against an existing model using detection accuracy, sensitivity, and precision as metrics. The evaluation showed 17.25% higher detection accuracy and 15.78% higher sensitivity, but 0.2% lower precision, than the existing model. Thus, the proposed model detects more fake news instances accurately from both news-content and social-content perspectives, indicating a better detection rate and a reduced false alarm rate for news instances.
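The Min-Max normalization step described in this abstract is a standard scaling technique; a minimal sketch follows (the retweet-count feature is hypothetical, not taken from the PHEME dataset, and this is a generic implementation rather than the authors' code):

```python
def min_max_normalize(values):
    """Scale a list of numeric feature values into the range [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant feature: no spread, map everything to 0.0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

# Example: normalizing a hypothetical retweet-count feature
retweets = [3, 10, 45, 120]
scaled = min_max_normalize(retweets)
```

Scaling all selected features into a common [0, 1] range prevents features with large raw magnitudes (e.g. follower counts) from dominating distance-based or gradient-based classifiers.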

2021 ◽  
Vol 13 (9) ◽  
pp. 1703
Author(s):  
He Yan ◽  
Chao Chen ◽  
Guodong Jin ◽  
Jindong Zhang ◽  
Xudong Wang ◽  
...  

The traditional method of constant false-alarm rate detection is based on the assumption of an echo statistical model. Against sea clutter and other interference, it suffers from low target recognition accuracy and a high false-alarm rate. Therefore, computer vision technology is widely discussed as a way to improve detection performance. However, the majority of studies have focused on synthetic aperture radar because of its high resolution; for defense radar, with its low resolution, detection performance remains unsatisfactory. To this end, we herein propose a novel target detection method for coastal defense radar based on the faster region-based convolutional neural network (Faster R-CNN). The main processing steps are as follows: (1) Faster R-CNN is selected as the sea-surface target detector because of its high target detection accuracy; (2) a modified Faster R-CNN is employed, adapted to the sparsity and small target size that characterize the data set; and (3) soft non-maximum suppression is exploited to eliminate possible overlapping detection boxes. Furthermore, detailed comparative experiments based on a real coastal defense radar data set are performed. The mean average precision of the proposed method is improved by 10.86% compared with that of the original Faster R-CNN.
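Soft non-maximum suppression, the post-processing step named in (3), decays the scores of overlapping boxes rather than discarding them outright. A minimal Gaussian-decay sketch (box coordinates and parameter values are illustrative, not taken from the paper):

```python
import math

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, threshold=0.001):
    """Gaussian soft-NMS: keep the top box, decay overlapping scores."""
    dets = sorted(zip(boxes, scores), key=lambda d: -d[1])
    kept = []
    while dets:
        best = dets.pop(0)
        kept.append(best)
        # decay each remaining score by its overlap with the kept box
        dets = [(b, s * math.exp(-iou(best[0], b) ** 2 / sigma))
                for b, s in dets]
        dets = [(b, s) for b, s in dets if s > threshold]
        dets.sort(key=lambda d: -d[1])
    return kept
```

Compared with hard NMS, this avoids deleting a genuine neighboring target whose box merely overlaps a stronger detection, which matters when targets are small and closely spaced.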


Symmetry ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 556
Author(s):  
Thaer Thaher ◽  
Mahmoud Saheb ◽  
Hamza Turabieh ◽  
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This drew the attention of researchers to providing a safe online environment free of misleading information. This paper aims to propose a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus composed of 1862 previously annotated tweets was utilized by this research to assess the efficiency of the proposed model. The Bag of Words (BoW) model is utilized with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. Reported results showed that Logistic Regression (LR) with the Term Frequency-Inverse Document Frequency (TF-IDF) model scores the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model’s performance for fake news detection. Interestingly, the proposed BHHO-LR model yields an enhancement of 5% compared with previous works on the same dataset.
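The TF-IDF weighting scheme that performed best here can be sketched in a few lines; this is a common unsmoothed variant shown for illustration (the toy token lists are hypothetical, and the paper's exact weighting scheme may differ):

```python
import math
from collections import Counter

def tf_idf(docs):
    """Compute TF-IDF weights for a list of tokenized documents.

    TF is the in-document term frequency; IDF is log(N / document frequency),
    so a term appearing in every document gets weight 0."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights
```

A wrapper-based selector such as binary HHO would then search over subsets of these weighted features, scoring each subset by the downstream classifier's accuracy.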


2018 ◽  
Vol 41 (5) ◽  
pp. 689-707
Author(s):  
Tanya Notley ◽  
Michael Dezuanni

Social media use has redefined the production, experience and consumption of news media. These changes have made verifying and trusting news content more complicated and this has led to a number of recent flashpoints for claims and counter-claims of ‘fake news’ at critical moments during elections, natural disasters and acts of terrorism. Concerns regarding the actual and potential social impact of fake news led us to carry out the first nationally representative survey of young Australians’ news practices and experiences. Our analysis finds that while social media is one of young people’s preferred sources of news, they are not confident about spotting fake news online and many rarely or never check the source of news stories. Our findings raise important questions regarding the need for news media literacy education – both in schools and in the home. Therefore, we consider the historical development of news media literacy education and critique the relevance of dominant frameworks and pedagogies currently in use. We find that news media has become neglected in media literacy education in Australia over the past three decades, and we propose that current media literacy frameworks and pedagogies in use need to be rethought for the digital age.


2019 ◽  
Author(s):  
Robert M Ross ◽  
David Gertler Rand ◽  
Gordon Pennycook

Why is misleading partisan content believed and shared? An influential account posits that political partisanship pervasively biases reasoning, such that engaging in analytic thinking exacerbates motivated reasoning and, in turn, the acceptance of hyperpartisan content. Alternatively, it may be that susceptibility to hyperpartisan misinformation is explained by a lack of reasoning. Across two studies using different subject pools (total N = 1977), we had participants assess true, false, and hyperpartisan headlines taken from social media. We found no evidence that analytic thinking was associated with increased polarization for either judgments about the accuracy of the headlines or willingness to share the news content on social media. Instead, analytic thinking was broadly associated with an increased capacity to discern between true headlines and either false or hyperpartisan headlines. These results suggest that reasoning typically helps people differentiate between low and high quality news content, rather than facilitating political bias.


2012 ◽  
Vol 2012 ◽  
pp. 1-10 ◽  
Author(s):  
S. Ganapathy ◽  
P. Yogesh ◽  
A. Kannan

Intrusion detection systems have long been used, along with various techniques, to detect intrusions in networks effectively. However, most of these systems can detect intruders only with a high false alarm rate. In this paper, we propose a new intelligent agent-based intrusion detection model for mobile ad hoc networks using a combination of attribute selection, outlier detection, and enhanced multiclass SVM classification methods. For this purpose, an effective preprocessing technique is proposed that improves the detection accuracy and reduces the processing time. Moreover, two new algorithms, namely an Intelligent Agent Weighted Distance Outlier Detection algorithm and an Intelligent Agent-based Enhanced Multiclass Support Vector Machine algorithm, are proposed for detecting intruders in a distributed database environment that uses intelligent agents for trust management and coordination in transaction processing. The experimental results show that the proposed model detects anomalies with a low false alarm rate and a high detection rate when tested with the KDD Cup 99 data set.
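Distance-based outlier detection, the first of the two algorithm families named above, can be illustrated with a much-simplified unweighted sketch: score each point by its mean distance to all others, so isolated points score highest (the sample points are hypothetical; the paper's weighted, agent-based variant is more elaborate):

```python
import math

def distance_outlier_scores(points):
    """Score each point by its mean Euclidean distance to every other point.

    Higher scores indicate points far from the bulk of the data, i.e.
    candidate outliers/anomalies."""
    def dist(p, q):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

    n = len(points)
    return [sum(dist(points[i], points[j]) for j in range(n) if j != i) / (n - 1)
            for i in range(n)]
```

In an IDS pipeline, records scoring above a threshold would be flagged before (or instead of) being passed to the multiclass SVM stage.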


2018 ◽  
Vol 10 (9) ◽  
pp. 3301 ◽  
Author(s):  
Honglyun Park ◽  
Jaewan Choi ◽  
Wanyong Park ◽  
Hyunchun Park

This study aims to reduce the false alarm rate caused by relief displacement and seasonal effects in change detection with high-spatial-resolution multitemporal satellite images. Cross-sharpened images were used to increase the accuracy of unsupervised change detection results. A cross-sharpened image is defined as a combination of synthetically pan-sharpened images obtained from the pan-sharpening of multitemporal images (two panchromatic and two multispectral images) acquired before and after the change. A total of four cross-sharpened images were generated and used in combination for change detection. Sequential spectral change vector analysis (S2CVA), which comprises the magnitude and direction information of the difference image of the multitemporal images, was applied to the cross-sharpened images. Specifically, the direction information of S2CVA was used to minimize the false alarm rate. We improved the change detection accuracy by integrating the magnitude and direction information obtained from S2CVA for the cross-sharpened images. In the experiment using KOMPSAT-2 satellite imagery, the false alarm rate of the change detection results decreased with the use of cross-sharpened images compared to that with the use of only the magnitude information from the original S2CVA.
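The magnitude and direction pair at the heart of change vector analysis can be sketched per pixel: magnitude is the length of the spectral difference vector, and direction is its angle against a reference axis (here the first band, an assumption for illustration; S2CVA's actual direction variable is defined differently and the pixel values are hypothetical):

```python
import math

def change_vectors(before, after):
    """Per-pixel spectral change vectors as (magnitude, direction angle).

    `before` and `after` are lists of per-pixel band-value tuples from the
    two acquisition dates. Direction is measured against the first band axis."""
    out = []
    for b, a in zip(before, after):
        diff = [x - y for x, y in zip(a, b)]
        mag = math.sqrt(sum(d * d for d in diff))
        # angle between the change vector and the first-band axis
        cos_t = (diff[0] / mag) if mag > 0 else 1.0
        out.append((mag, math.acos(max(-1.0, min(1.0, cos_t)))))
    return out
```

Thresholding on magnitude alone flags every large spectral shift, including seasonal ones; combining it with direction lets genuinely changed pixels be separated from consistent, change-irrelevant shifts.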


Author(s):  
P. Manoj Kumar ◽  
M. Parvathy ◽  
C. Abinaya Devi

Intrusion Detection Systems (IDS) are one of the important aspects of cyber security, able to detect anomalies in network traffic. IDS form part of a system's second line of defense and can be deployed alongside other security measures such as access control, authentication mechanisms, and encryption techniques to secure systems against cyber-attacks. However, IDS suffer from the problems of handling large volumes of data and of detecting zero-day attacks (new types of attacks) in a real-time traffic environment. To overcome this, an intelligent Deep Learning approach for Intrusion Detection is proposed based on a Convolutional Neural Network (CNN-IDS). Initially, the model is trained and tested on a new real-time traffic dataset, the CSE-CIC-IDS 2018 dataset. Then, the performance of the CNN-IDS model is studied using three important performance metrics: accuracy / training time, detection rate, and false alarm rate. Finally, the experimental results are compared with those of various deep discriminative models, including the Recurrent Neural Network (RNN), Deep Neural Network (DNN), etc., proposed for IDS on the same dataset. The comparative results show that the proposed CNN-IDS model is well suited to both binary and multi-class classification, with a higher detection rate and accuracy and a lower false alarm rate. The CNN-IDS model improves the accuracy of intrusion detection and provides a new research direction for intrusion detection.
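The three evaluation metrics named above follow directly from confusion-matrix counts; a minimal sketch (the counts in the example are invented for illustration, not results from the paper):

```python
def ids_metrics(tp, fp, tn, fn):
    """IDS evaluation metrics from confusion-matrix counts.

    tp/fn: attacks correctly/incorrectly classified;
    fp/tn: benign traffic incorrectly/correctly classified."""
    detection_rate = tp / (tp + fn)      # recall on the attack class
    false_alarm_rate = fp / (fp + tn)    # benign traffic flagged as attack
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return detection_rate, false_alarm_rate, accuracy

# Example with hypothetical counts: 90 attacks caught, 10 missed,
# 5 false alarms out of 100 benign flows
dr, far, acc = ids_metrics(tp=90, fp=5, tn=95, fn=10)
```

Reporting detection rate and false alarm rate separately matters because plain accuracy can look high on imbalanced traffic even when most attacks are missed.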


Author(s):  
Kristy A. Hesketh

This chapter explores the Spiritualist movement and its rapid growth due to the formation of mass media, and compares these events with the current rise of fake news in the mass media. The technology of cheaper publications created a media platform that featured stories about Spiritualist mediums and communications with the spirit world. These articles were published in newspapers next to regular news, creating a blurred line between real and hoax news stories. Laws were later created to address instances of fraud that occurred in the medium industry. Today, social media platforms provide a similar vessel for the spread of fake news. Online fake news is published alongside legitimate news reports, leaving readers unable to differentiate between real and fake articles. Around the world, countries are launching initiatives to address the proliferation of false news and prevent the spread of misinformation. This chapter compares the parallels between these events, examining how hoaxes and fake news begin and spread, and the measures governments are taking to curb the growth of misinformation.


2019 ◽  
Vol 6 (2) ◽  
pp. 205316801984855 ◽  
Author(s):  
Hunt Allcott ◽  
Matthew Gentzkow ◽  
Chuan Yu

In recent years, there has been widespread concern that misinformation on social media is damaging societies and democratic institutions. In response, social media platforms have announced actions to limit the spread of false content. We measure trends in the diffusion of content from 569 fake news websites and 9540 fake news stories on Facebook and Twitter between January 2015 and July 2018. User interactions with false content rose steadily on both Facebook and Twitter through the end of 2016. Since then, however, interactions with false content have fallen sharply on Facebook while continuing to rise on Twitter, with the ratio of Facebook engagements to Twitter shares decreasing by 60%. In comparison, interactions with other news, business, or culture sites have followed similar trends on both platforms. Our results suggest that the relative magnitude of the misinformation problem on Facebook has declined since its peak.


2020 ◽  
pp. 009365022092132
Author(s):  
Mufan Luo ◽  
Jeffrey T. Hancock ◽  
David M. Markowitz

This article focuses on message credibility and detection accuracy of fake and real news as represented on social media. We developed a deception detection paradigm for news headlines and conducted two online experiments to examine the extent to which people (1) perceive news headlines as credible, and (2) accurately distinguish fake and real news across three general topics (i.e., politics, science, and health). Both studies revealed that people often judged news headlines as fake, suggesting a deception-bias for news in social media. Across studies, we observed an average detection accuracy of approximately 51%, a level consistent with most research using this deception detection paradigm with equal lie-truth base-rates. Study 2 evaluated the effects of endorsement cues in social media (e.g., Facebook likes) on message credibility and detection accuracy. Results showed that headlines associated with a high number of Facebook likes increased credibility, thereby enhancing detection accuracy for real news but undermining accuracy for fake news. These studies introduce truth-default theory to the context of news credibility and advance our understanding of how biased processing of news information can impact detection accuracy with social media endorsement cues.

