Semi-supervised Learning with Concept Drift Using Particle Dynamics Applied to Network Intrusion Detection Data

Author(s): Fabricio Breve, Liang Zhao
2020, Vol 9, pp. e00497
Author(s): J. Olamantanmi Mebawondu, Olufunso D. Alowolodu, Jacob O. Mebawondu, Adebayo O. Adetunmbi

2019, Vol 37 (5)
Author(s): Seongchul Park, Sanghyun Seo, Changhoon Jeong, Juntae Kim

2005, Vol 13 (2), pp. 179-212
Author(s): Matthew Glickman, Justin Balthrop, Stephanie Forrest

ARTIS is an artificial immune system framework which contains several adaptive mechanisms. LISYS is a version of ARTIS specialized for the problem of network intrusion detection. The adaptive mechanisms of LISYS are characterized in terms of their machine-learning counterparts, and a series of experiments is described, each of which isolates a different mechanism of LISYS and studies its contribution to the system's overall performance. The experiments were conducted on a new data set, which is more recent and realistic than earlier data sets. The network intrusion detection problem is challenging because it requires one-class learning in an on-line setting with concept drift. The experiments confirm earlier experimental results with LISYS, and they study in detail how LISYS achieves success on the new data set.
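LISYS builds on the negative-selection idea from artificial immune systems: candidate detectors are generated at random and discarded if they match normal "self" traffic, so the surviving detectors respond only to nonself patterns. The following is a minimal sketch of that one-class mechanism in Python, assuming a binary connection encoding and r-contiguous-bit matching; the string length, the value of r, and the detector count are illustrative choices, not the paper's parameters, and the sketch omits LISYS's on-line tolerization and adaptation to concept drift.

```python
import random

BITS = 49  # illustrative length of the binary connection encoding
R = 10     # illustrative r-contiguous-bit matching threshold


def matches(detector: str, sample: str, r: int = R) -> bool:
    """True if the two strings agree on at least r contiguous bit positions."""
    run = 0
    for d, s in zip(detector, sample):
        run = run + 1 if d == s else 0
        if run >= r:
            return True
    return False


def random_bits(n: int = BITS) -> str:
    return "".join(random.choice("01") for _ in range(n))


def negative_selection(self_set, n_detectors):
    """Keep only candidate detectors that match nothing in the self set."""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = random_bits()
        if not any(matches(candidate, s) for s in self_set):
            detectors.append(candidate)
    return detectors


def is_anomalous(sample, detectors):
    """Flag a sample if any mature detector matches it (one-class decision)."""
    return any(matches(d, sample) for d in detectors)


# Toy usage: tolerize against a small "self" set, then probe with new traffic.
random.seed(0)
self_set = [random_bits() for _ in range(20)]
detectors = negative_selection(self_set, n_detectors=200)
print(any(is_anomalous(s, detectors) for s in self_set))  # False by construction
print(is_anomalous(random_bits(), detectors))             # usually True for nonself
```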


2022, Vol 2161 (1), pp. 012030
Author(s): R Garg, S Mukherjee

Abstract: A user connects to hundreds of remote networks daily, some of which can be compromised by malicious sources. To address this problem, a variety of Network Intrusion Detection Systems (NIDS) have been built, which aim to detect harmful traffic before it establishes a connection with the user's local system. This paper proposes a model for anomaly-based NIDS by comparing various supervised learning algorithms on the metric of accuracy. Two datasets were used and analysed, each with different properties in terms of the volume of data and the intended use cases. Feature engineering was performed to identify the most informative features of both datasets, and only the top 25% of features were used to build the models; a smaller feature subset not only reduces the cost of collecting the data but also removes redundant and noisy information. Two different data-splitting (splicing) methods were used to train the models, and each method showed different trends across the ML algorithms.
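A minimal sketch of the kind of pipeline the abstract describes: keep the top 25% of features by a univariate score, then compare several supervised classifiers on held-out accuracy. The synthetic dataset, the mutual-information scoring function, the 70/30 split, and the three classifiers are illustrative assumptions, not the datasets, splicing schemes, or models evaluated in the paper.

```python
# Illustrative pipeline only: synthetic data stands in for the NIDS datasets,
# and the three classifiers are examples, not the models compared in the paper.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for a labelled intrusion dataset (rows = connections, y = attack/benign).
X, y = make_classification(n_samples=5000, n_features=40, n_informative=10,
                           random_state=0)

# Keep only the top 25% of features, ranked here by mutual information with the
# label (fit on the training split in practice; done on the full set for brevity).
selector = SelectPercentile(mutual_info_classif, percentile=25)
X_sel = selector.fit_transform(X, y)

# One possible train/test split; the paper compares two different schemes.
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.3, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    print(f"{name}: test accuracy = {acc:.3f}")
```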

