Anomaly detection optimization using big data and deep learning to reduce false-positive

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
Khloud Al Jallad ◽  
Mohamad Aljnidi ◽  
Mohammad Said Desouki


2022 ◽  
pp. 678-707
Author(s):  
Valliammal Narayan ◽  
Shanmugapriya D.

Information is vital for any organization that communicates through a network, and the growth of internet utilization and the number of web users has increased cyber threats. Cyber-attacks change the traffic flow of each system in the network. Anomaly detection techniques have been developed for different types of cyber-attack and anomaly strategies. Conventional anomaly detection systems (ADS) protect information transferred through the network from cyber attackers. Machine- and deep-learning algorithms are applied in cyber-security for the stable prevention of anomalies. Big data solutions handle voluminous data in a short span of time. Big data management is the organization and manipulation of huge volumes of structured, semi-structured, and unstructured data, but it does not by itself address the data-imbalance problem that arises during training. Big data-based machine- and deep-learning algorithms for anomaly detection learn a decision boundary that separates normal traffic flow from anomalous traffic flow, and different algorithms can markedly improve anomaly-detection performance.
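A minimal sketch of the core idea in this abstract: learning a decision boundary between normal and anomalous traffic when the classes are heavily imbalanced. The synthetic flow features and the scikit-learn classifier below are illustrative assumptions, not the chapter's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic "flow" features with roughly 1% anomalies, mimicking the
# data-imbalance problem the abstract mentions.
X, y = make_classification(n_samples=20_000, n_features=20,
                           weights=[0.99, 0.01], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# class_weight="balanced" re-weights the loss so the rare anomaly class
# still shapes the learned decision boundary during training.
clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["normal", "anomaly"]))
```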


2019 ◽  
Vol 6 (1) ◽  
Author(s):  
Khloud Al Jallad ◽  
Mohamad Aljnidi ◽  
Mohammad Said Desouki

Abstract With the growing use of information technology in all life domains, hacking has become more damaging than ever before. As technologies develop, the number of attacks grows exponentially every few months and attacks become more sophisticated, so that traditional IDS become inefficient at detecting them. This paper proposes a solution that detects not only new threats, with a higher detection rate and a lower false-positive rate than existing IDS, but also collective and contextual security attacks. We achieve these results by using a networking chatbot: a deep recurrent neural network, Long Short-Term Memory (LSTM), on top of the Apache Spark framework, whose input is flow traffic and traffic aggregation and whose output is a language of two words, normal or abnormal. We propose merging the concepts of language processing, contextual analysis, distributed deep learning, big data, and anomaly detection of flow analysis. We propose a model that learns the abstract normal behavior of the network from a sequence of millions of packets within their context and analyzes them in near real time to detect point, collective, and contextual anomalies. Experiments on the MAWI dataset show a better detection rate not only than signature-based IDS but also than traditional anomaly-based IDS, with a lower false-positive rate, a higher detection rate, and better detection of point anomalies. As for proof of contextual and collective anomaly detection, we discuss our claim and the reasoning behind our hypothesis. Because of hardware limitations, however, the experiments were run on small random subsets of the dataset, so we share our experiment and our vision for future work, in the hope that a full validation will be carried out by other interested researchers with better hardware infrastructure than ours.
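A hedged sketch of the architecture the abstract describes: an LSTM that reads a sequence of flow-feature vectors and emits one of two "words", normal or abnormal. The sequence length, feature count, and all hyperparameters below are assumptions, and the distributed Apache Spark layer the paper runs on top of is omitted entirely.

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, N_FEATURES = 50, 8          # e.g. 50 aggregated flows, 8 features each

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(64),        # summarizes the flow sequence in context
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P("abnormal")
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Toy stand-in for aggregated traffic; the real input would be built from
# MAWI flow records rather than random numbers.
X = np.random.rand(1024, SEQ_LEN, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(1024, 1))
model.fit(X, y, epochs=2, batch_size=64, verbose=1)
```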


2021 ◽  
Author(s):  
Kanimozhi V ◽  
T. Prem Jacob

Abstract Although numerous deep-learning models have been proposed, this research article investigates artificial deep-learning models for protecting IoT network traffic on practical IoT devices, using the realistic IoT-23 dataset. This dataset is a recent network-traffic dataset generated from real-time network traffic of IoT appliances. IoT products are used in various applications such as home and commercial automation and various forms of wearable technology. IoT security is more critical than conventional network security because of the massive attack surface and the multiplied weak spots of IoT devices. Globally, the total number of IoT devices deployed by 2025 is foreseen to reach 41.6 billion. Hence, this research article implements and executes IoT anomaly-detection systems for detecting IoT-based attacks on the realistic IoT-23 big data, using the artificial neural networks Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Multilayer Perceptron (MLP). As a result, the Convolutional Neural Network produces outstanding performance, with an accuracy score of 0.998234 and a minimal loss of 0.008842, compared to the Multilayer Perceptron and the Recurrent Neural Network in IoT anomaly detection. Well-displayed plots of model accuracy and the learning curves of the deep-learning algorithms MLP, CNN, and RNN are also generated.
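A minimal sketch of the three-way comparison the abstract reports: an MLP, a 1-D CNN, and an RNN trained as binary attack detectors on the same feature matrix. The feature width, layer sizes, and the random stand-in data are assumptions; the article trains on the real IoT-23 dataset.

```python
import numpy as np
import tensorflow as tf

N_FEATURES = 20
X = np.random.rand(2048, N_FEATURES).astype("float32")  # stand-in for IoT-23
y = np.random.randint(0, 2, size=(2048, 1))
X_seq = X[..., np.newaxis]           # (samples, features, 1) for CNN/RNN

def mlp():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_FEATURES,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid")])

def cnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_FEATURES, 1)),
        tf.keras.layers.Conv1D(32, 3, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(1, activation="sigmoid")])

def rnn():
    return tf.keras.Sequential([
        tf.keras.Input(shape=(N_FEATURES, 1)),
        tf.keras.layers.SimpleRNN(32),
        tf.keras.layers.Dense(1, activation="sigmoid")])

# Train each model briefly and report its final training accuracy; the
# per-epoch history is what the article plots as a learning curve.
for name, build, data in [("MLP", mlp, X), ("CNN", cnn, X_seq), ("RNN", rnn, X_seq)]:
    m = build()
    m.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    hist = m.fit(data, y, epochs=2, batch_size=64, verbose=0)
    print(name, "accuracy:", hist.history["accuracy"][-1])
```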


2019 ◽  
Vol 53 (3) ◽  
pp. 281-294
Author(s):  
Jean-Michel Foucart ◽  
Augustin Chavanne ◽  
Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for some years in X-ray imaging (automated cephalometric analysis, automated airway analysis) or for a few months (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, both in terms of digitization and of segmentation. The comparison of model-analysis results obtained automatically and through several orthodontists demonstrates the reliability of the automated analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is non-significant and comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives regarding the contribution of AI to orthodontics, which, based on deep learning and big data, should in the medium term enable a shift toward a more preventive and more predictive orthodontics.
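A hypothetical sketch of the kind of reliability check the study performs: comparing automated dental-model measurements against several human observers. All numbers below are invented placeholders, not study data.

```python
import numpy as np

automated = np.array([24.1, 31.7, 8.9, 42.3])          # mm, per measurement
observers = np.array([[24.0, 31.6, 9.1, 42.5],
                      [24.2, 31.9, 8.8, 42.1],
                      [23.9, 31.5, 9.0, 42.6]])         # one row per observer

# Error of the automated analysis against the observer mean, per measurement.
auto_error = np.abs(automated - observers.mean(axis=0))
# Inter-observer spread (range across observers), the reference the study
# compares against.
inter_obs = observers.max(axis=0) - observers.min(axis=0)
print("automated error (mm):", auto_error.round(2))
print("inter-observer range (mm):", inter_obs.round(2))
```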


2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are generally helpful to the healthcare and biomedical sectors for predicting disease. For trivial symptoms, the difficulty lies in being able to consult a doctor in the hospital at any time; big data thus provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health-care decisions. Conversely, the conventional medical-care model offers structured input that requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improvised deep-learning concept. Different datasets pertaining to diabetes, hepatitis, lung cancer, liver tumor, heart disease, Parkinson's disease, and Alzheimer's disease are gathered from the benchmark UCI repository for conducting the experiment. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. Initially, the dataset is normalized in order to bring the attributes into a common range. Then, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge the deviation between features. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to the hybrid deep-learning algorithms Deep Belief Network (DBN) and Recurrent Neural Network (RNN). As a modification to the hybrid deep-learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. Finally, a comparative evaluation of the proposed prediction approach over existing models certifies its effectiveness through various performance measures.
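A minimal sketch of phases (a) and (b) described above: min-max normalization followed by multiplying each attribute by a per-attribute weight. The weights here are random placeholders; in the paper they are optimized by the hybrid JA-MVO algorithm, which is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 100, size=(6, 4))        # stand-in patient attributes

# (a) Data normalization: scale every attribute into [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

# (b) Weighted feature extraction: one weight per attribute, multiplied in
# to enlarge the deviation between informative and uninformative features.
w = rng.uniform(0.1, 1.0, size=X.shape[1])  # placeholder for JA-MVO output
X_weighted = X_norm * w
print(X_weighted.round(3))
```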

