Generative adversarial attacks against intrusion detection systems using active learning

Author(s): Dule Shu, Nandi O. Leslie, Charles A. Kamhoua, Conrad S. Tucker
Author(s): Jun Long, Wentao Zhao, Fangzhou Zhu, Zhiping Cai

Intrusion detection systems play an important role in computer security. To make intrusion detection systems adaptive to changing environments, supervised learning techniques have been applied to intrusion detection. However, supervised learning requires a large number of training instances to obtain classifiers with high accuracy. Because high-quality labeled instances are scarce, some researchers have turned to semi-supervised learning, which exploits unlabeled instances to enhance classification. Involving unlabeled instances in the learning process, however, also introduces a vulnerability: an attacker can generate fake unlabeled instances that mislead the final classifier so that certain intrusions go undetected. In this paper we show that an attacker can mislead a semi-supervised intrusion detection classifier by poisoning the unlabeled instances, and we propose a defense method based on active learning to defeat this poisoning attack. Experiments show that the poisoning attack reduces the accuracy of the semi-supervised classifier, and that the proposed active-learning defense achieves higher accuracy than the original semi-supervised learner under the presented poisoning attack.
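
The sketch below is an illustrative aid, not the authors' implementation: it reproduces the workflow the abstract describes using scikit-learn. A self-training semi-supervised classifier is trained on a pool that contains attacker-injected unlabeled instances, and an uncertainty-sampling active-learning loop then queries an oracle (e.g., a human analyst) for labels of the most ambiguous points and retrains. The synthetic dataset, logistic-regression base classifier, noise-based poisoning, and query budget are all assumptions made for the example.

```python
# A minimal sketch, assuming scikit-learn (not the authors' method): poisoning of a
# self-training semi-supervised classifier, followed by an uncertainty-sampling
# active-learning defense that queries an oracle and retrains.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for network-traffic features (benign vs. intrusion).
X, y = make_classification(n_samples=3000, n_features=20, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Hide 90% of the training labels; -1 marks an unlabeled instance.
y_semi = y_tr.copy()
hidden = rng.choice(len(y_semi), size=int(0.9 * len(y_semi)), replace=False)
y_semi[hidden] = -1

# Poisoning attack: the attacker injects fake unlabeled instances, here noisy
# copies of real points pushed away from their true class region.
n_poison = 300
src = rng.choice(len(X_tr), size=n_poison, replace=False)
X_pool = np.vstack([X_tr,
                    X_tr[src] + rng.normal(scale=2.0, size=(n_poison, X_tr.shape[1]))])
y_pool = np.concatenate([y_semi, np.full(n_poison, -1)])
y_oracle = np.concatenate([y_tr, y_tr[src]])  # labels an analyst would assign

def fit_semi(X, y):
    """Self-training wrapper around a logistic-regression base classifier."""
    return SelfTrainingClassifier(LogisticRegression(max_iter=1000),
                                  threshold=0.8).fit(X, y)

clf = fit_semi(X_pool, y_pool)
print("accuracy under poisoning:", clf.score(X_te, y_te))

# Active-learning defense: each round, query the oracle for the unlabeled points
# the current model is least certain about (smallest probability margin), retrain.
for _ in range(5):
    idx = np.flatnonzero(y_pool == -1)
    proba = clf.predict_proba(X_pool[idx])
    margin = np.abs(proba[:, 1] - proba[:, 0])
    queried = idx[np.argsort(margin)[:20]]
    y_pool[queried] = y_oracle[queried]
    clf = fit_semi(X_pool, y_pool)

print("accuracy after active-learning defense:", clf.score(X_te, y_te))
```

In this toy setting the active-learning loop concentrates its query budget on the region the poisoned instances distort, which is why a small number of oracle labels is enough to recover most of the lost accuracy; the specific budget and batch size are illustrative choices.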


2006, Vol. 65 (10), pp. 929-936
Author(s): A. V. Agranovskiy, S. A. Repalov, R. A. Khadi, M. B. Yakubets
