An empirical model in intrusion detection systems using principal component analysis and deep learning models

Author(s):
Hariharan Rajadurai,
Usha Devi Gandhi

Author(s):
Feyzan Saruhan-Ozdag,
Derya Yiltas-Kaplan,
Tolga Ensari

Intrusion detection systems are among the most important tools used against threats to network security in ever-evolving network structures. As technology evolves, it has become necessary to design powerful intrusion detection systems and integrate them into network infrastructures. The main purpose of this research is to develop a new method that combines several techniques to increase attack detection rates. The negative selection algorithm, a type of artificial immune system algorithm, is used and improved at the detector generation stage. In the data preparation phase, information gain is used for feature selection and principal component analysis for dimensionality reduction. Two detector generation methods are compared: random detector generation, and a method developed by combining information gain, principal component analysis, and a genetic algorithm. The methods were tested on the KDD CUP 99 data set; several performance metrics are measured, and the results are compared with those of different machine learning algorithms.
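The detector-generation step above is specific to the paper's negative selection and genetic algorithm design, but the data preparation stage it describes (information gain for feature selection followed by PCA for dimensionality reduction) can be sketched with standard tooling. The Python snippet below is a minimal illustration under assumed parameters (20 selected features, 10 principal components); it is not the paper's implementation, and the synthetic data merely stands in for preprocessed KDD CUP 99 records.

```python
# Sketch of the data-preparation stage only: information gain (mutual
# information) ranks features, then PCA reduces dimensionality.
# The negative-selection / genetic-algorithm detector generation is
# paper-specific and not reproduced here.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

def prepare_features(X, y, n_selected=20, n_components=10):
    # 1. Information gain: rank features by mutual information with the label.
    scores = mutual_info_classif(X, y, random_state=0)
    top_idx = np.argsort(scores)[::-1][:n_selected]
    X_sel = X[:, top_idx]

    # 2. Standardise the selected features, then reduce with PCA.
    X_std = StandardScaler().fit_transform(X_sel)
    pca = PCA(n_components=n_components)
    X_reduced = pca.fit_transform(X_std)
    return X_reduced, top_idx, pca

if __name__ == "__main__":
    # Synthetic data standing in for preprocessed KDD CUP 99 records.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 41))       # KDD CUP 99 has 41 features
    y = rng.integers(0, 2, size=1000)     # 0 = normal, 1 = attack
    X_reduced, kept, pca = prepare_features(X, y)
    print(X_reduced.shape, "variance explained:",
          pca.explained_variance_ratio_.sum())
```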


2021, Vol. 1(2), pp. 252-273
Author(s):
Pavlos Papadopoulos,
Oliver Thornewill von Essen,
Nikolaos Pitropakis,
Christos Chrysoulas,
Alexios Mylonas,
...

As the internet continues to be populated with new devices and emerging technologies, the attack surface grows exponentially. Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought. Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy. Machine learning intrusion detection systems have proven successful at identifying unknown attacks with high precision. Nevertheless, machine learning models are themselves vulnerable to attack. Adversarial examples can be used to evaluate the robustness of a model before it is deployed, and working with them is critical to building a model that remains robust in an adversarial environment. Our work evaluates the robustness of both traditional machine learning and deep learning models using the Bot-IoT dataset. Our methodology comprises two main approaches: label poisoning, which causes the model to classify incorrectly, and the fast gradient sign method, which evades detection measures. The experiments demonstrate that an attacker can manipulate or circumvent detection with significant probability.
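For readers unfamiliar with the two attack approaches, the sketch below illustrates them in Python against a toy classifier: label poisoning flips a fraction of training labels, and the fast gradient sign method (FGSM) perturbs inputs along the sign of the loss gradient. The model architecture, feature dimension, and epsilon are illustrative placeholders, not the paper's Bot-IoT setup.

```python
# Minimal sketches of the two adversarial approaches described above.
import torch
import torch.nn as nn

def poison_labels(y, fraction=0.25):
    """Label-poisoning sketch: flip a random fraction of binary training labels."""
    idx = torch.randperm(y.numel())[: int(fraction * y.numel())]
    y_poisoned = y.clone()
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

def fgsm_attack(model, x, y, epsilon=0.05):
    """FGSM sketch: perturb inputs in the direction that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Move each feature by epsilon in the sign of its gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy detector standing in for the IDS classifier.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
    x = torch.randn(8, 10)            # 8 flows, 10 features (placeholder data)
    y = torch.randint(0, 2, (8,))     # 0 = benign, 1 = attack
    x_adv = fgsm_attack(model, x, y)
    y_bad = poison_labels(y)
    print("max perturbation:", (x_adv - x).abs().max().item())
    print("labels flipped:", int((y_bad != y).sum()))
```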


Author(s):  
Mohsen Moshki ◽  
Mehran Garmehi ◽  
Peyman Kabiri

This chapter investigates the application of Principal Component Analysis (PCA) and one of its extensions to intrusion detection. The extended version of PCA is modified to address an important shortcoming of traditional PCA. The benefit of these modifications is first proved mathematically and then verified experimentally on the well-known DARPA99 dataset. To establish a baseline, traditional PCA is first used to preprocess the dataset, and the effectiveness of multiclass classification is studied using a simple classifier such as KNN. In the reported work, a revised version of PCA named Weighted PCA (WPCA) is then used for feature extraction instead of traditional PCA. The results of applying this method to the DARPA99 dataset show that it achieves better accuracy than traditional PCA when the number of features is limited, the number of classes is large, and the class populations are unbalanced. In some situations WPCA outperforms traditional PCA by more than 1% in accuracy.
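The chapter's exact WPCA formulation is not reproduced here, but the sketch below shows one plausible weighting scheme, in which each sample's contribution to the covariance matrix is weighted inversely to its class frequency so that minority classes are not dominated, followed by the KNN evaluation step described above. The toy data and parameters are assumptions standing in for DARPA99-style unbalanced traffic classes.

```python
# Sketch of a weighted-PCA variant plus KNN evaluation. The inverse
# class-frequency weighting is an illustrative assumption, not the
# chapter's exact WPCA definition.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def weighted_pca(X, sample_weights, n_components=5):
    """PCA on a weighted covariance matrix; returns the projected data."""
    w = sample_weights / sample_weights.sum()
    mean = (w[:, None] * X).sum(axis=0)
    Xc = X - mean
    cov = (w[:, None] * Xc).T @ Xc                    # weighted covariance
    eigvals, eigvecs = np.linalg.eigh(cov)            # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :n_components]   # top eigenvectors
    return Xc @ components

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Unbalanced toy data standing in for DARPA99-style traffic classes.
    X = np.vstack([rng.normal(0, 1, (900, 20)), rng.normal(2, 1, (100, 20))])
    y = np.array([0] * 900 + [1] * 100)
    counts = np.bincount(y)
    weights = 1.0 / counts[y]                         # inverse class frequency
    Z = weighted_pca(X, weights, n_components=5)
    knn = KNeighborsClassifier(n_neighbors=5).fit(Z, y)
    print("training accuracy:", knn.score(Z, y))
```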

