A comparative analysis of CGAN‐based oversampling for anomaly detection

Author(s):  
Rahbar Ahsan ◽  
Wei Shi ◽  
Xiangyu Ma ◽  
William Lee Croft
Author(s):  
Matteo Olivato ◽  
Omar Cotugno ◽  
Lorenzo Brigato ◽  
Domenico Bloisi ◽  
Alessandro Farinelli ◽  
...  

2007 ◽  
Author(s):  
Stefania Matteoli ◽  
Francesca Carnesecchi ◽  
Marco Diani ◽  
Giovanni Corsini ◽  
Leandro Chiarantini

Author(s):  
Kotikapaludi Sriram ◽  
Oliver Borchert ◽  
Okhee Kim ◽  
Patrick Gleichmann ◽  
Doug Montgomery

Author(s):  
Andriy Lishchytovych ◽  
Volodymyr Pavlenko ◽  
Alexander Shmatok ◽  
Yuriy Finenko

This paper provides a description and comparative analysis of several commonly used approaches to the analysis of system logs and of the streaming data massively generated by company IT infrastructure, with a focus on unattended anomaly detection. The importance of anomaly detection is dictated by the growing cost of system downtime caused by events that could have been predicted from log entries reporting abnormal data. Anomaly detection systems follow a standard workflow of data collection, parsing, information extraction, and detection. Most of the document concerns the detection step and algorithms such as regression, decision trees, SVM, clustering, principal component analysis, invariants mining, and the hierarchical temporal memory (HTM) model. Model-based anomaly detection algorithms and the HTM algorithm were used to process the HDFS, BGL, and NAB datasets, comprising roughly 16 million log messages and 365,000 streaming data points. The data was manually labeled to enable model training and accuracy calculation. According to the results, supervised anomaly detection systems achieve high precision but require significant training effort, while the HTM-based algorithm shows the highest detection precision with zero training. Detection of abnormal system behavior plays an important role in large-scale incident management systems: timely detection allows IT administrators to quickly identify issues and resolve them immediately, reducing system downtime dramatically. Most IT systems generate logs with detailed information about their operations, so logs are an ideal data source for anomaly detection solutions; however, their volume makes manual analysis impossible and requires automated approaches.
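As a minimal sketch of one of the detection techniques named in the abstract, the snippet below applies principal component analysis to per-session log event count vectors and flags sessions with large reconstruction residuals. The synthetic data, the number of components, the percentile threshold, and the use of scikit-learn are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# X: one row per session/block, one column per log event template (event counts).
# Synthetic stand-in for a parsed log dataset such as HDFS.
rng = np.random.default_rng(0)
X = rng.poisson(lam=3.0, size=(1000, 20)).astype(float)

# Fit the "normal" subspace on data assumed to be mostly normal.
pca = PCA(n_components=5).fit(X)

# Project onto the normal subspace and back; large residuals suggest anomalies.
reconstructed = pca.inverse_transform(pca.transform(X))
residual = np.linalg.norm(X - reconstructed, axis=1) ** 2

# Flag sessions whose residual exceeds a simple percentile threshold.
threshold = np.percentile(residual, 99)
anomalies = np.flatnonzero(residual > threshold)
print(f"flagged {anomalies.size} of {X.shape[0]} sessions")
```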


Author(s):  
Srijan Das ◽  
Arpita Dutta ◽  
Saurav Sharma ◽  
Sangharatna Godboley

Anomaly detection is an important research domain of pattern recognition due to its effects on classification and clustering problems. In this paper, an anomaly detection algorithm is proposed using different primitive criterion functions, namely the normal perceptron, the relaxation criterion, Mean Square Error (MSE), and Ho-Kashyap. These criterion functions are minimized to locate a decision boundary in the data space that separates normal data objects from anomalous data objects. The authors' proposed algorithm uses the concept of supervised classification, although it differs from solving ordinary supervised classification problems. The proposed algorithm, with each of the criterion functions, is compared against the accuracy of a Neural Network (NN) in order to provide a comparative analysis and discuss its advantages.
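As a minimal sketch of one of the criterion functions mentioned in the abstract, the snippet below minimizes the MSE criterion in closed form via the pseudoinverse to place a linear decision boundary between normal and anomalous samples. The toy Gaussian data and the +1/-1 label convention are illustrative assumptions rather than the paper's experimental setup.

```python
import numpy as np

# Toy dataset: a dense normal cluster and a small, shifted anomalous cluster.
rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
anomalous = rng.normal(loc=4.0, scale=1.0, size=(20, 2))

X = np.vstack([normal, anomalous])
y = np.concatenate([np.ones(len(normal)), -np.ones(len(anomalous))])

# Augment with a bias term and minimize the MSE criterion ||Xa - y||^2
# via the pseudoinverse solution a = X^+ y.
X_aug = np.hstack([X, np.ones((len(X), 1))])
a = np.linalg.pinv(X_aug) @ y

# Classify by the sign of the linear discriminant; -1 means anomalous.
predictions = np.sign(X_aug @ a)
accuracy = np.mean(predictions == y)
print(f"training accuracy: {accuracy:.3f}")
```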

