Deep Learning Anomaly Detection methods to passively detect COVID-19 from Audio

Author(s):  
Shreesha Narasimha Murthy ◽  
Emmanuel Agu


Sensors ◽  
2019 ◽  
Vol 19 (11) ◽  
pp. 2451 ◽  
Author(s):  
Mohsin Munir ◽  
Shoaib Ahmed Siddiqui ◽  
Muhammad Ali Chattha ◽  
Andreas Dengel ◽  
Sheraz Ahmed

The need for robust unsupervised anomaly detection in streaming data is increasing rapidly in the current era of smart devices, where enormous volumes of data are gathered from numerous sensors. These sensors record the internal state of a machine, the external environment, and the interaction of machines with other machines and humans. It is of prime importance to leverage this information in order to minimize machine downtime, or even avoid it completely through constant monitoring. Since each device generates a different type of streaming data, it is normally the case that a specific kind of anomaly detection technique performs better than the others depending on the data type. For some types of data and use-cases, statistical anomaly detection techniques work better, whereas for others, deep-learning-based techniques are preferred. In this paper, we present a novel anomaly detection technique, FuseAD, which takes advantage of both statistical and deep-learning-based approaches by fusing them together in a residual fashion. The obtained results show an increase in area under the curve (AUC) compared to state-of-the-art anomaly detection methods when FuseAD is tested on a publicly available dataset (the Yahoo Webscope benchmark). These results suggest that the fusion-based technique can obtain the best of both worlds by combining the strengths of the two approaches and compensating for their weaknesses. We also perform an ablation study to quantify the contribution of the individual components in FuseAD, i.e., the statistical ARIMA model and the deep-learning-based convolutional neural network (CNN) model.
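
Below is a minimal Python sketch of the residual-fusion idea described in this abstract, not the authors' implementation: an ARIMA model produces a one-step baseline forecast, a small CNN predicts a correction from the recent window, and each observation is scored by its deviation from the fused forecast. The window length, ARIMA order, and network size are illustrative assumptions.

```python
# Sketch of statistical + deep-learning residual fusion (assumed hyperparameters).
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

WINDOW = 24  # length of the history window fed to the CNN (assumption)

class ResidualCorrector(nn.Module):
    """Tiny 1-D CNN that predicts a correction on top of the ARIMA forecast."""
    def __init__(self, window: int = WINDOW):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(8 * window, 1),
        )

    def forward(self, window_batch):               # (batch, 1, WINDOW)
        return self.net(window_batch).squeeze(-1)  # (batch,)

def fused_forecast(history: np.ndarray, corrector: ResidualCorrector) -> float:
    """One-step forecast: ARIMA baseline plus learned residual correction."""
    arima_pred = ARIMA(history, order=(2, 0, 1)).fit().forecast(steps=1)[0]
    x = torch.tensor(history[-WINDOW:], dtype=torch.float32).view(1, 1, -1)
    with torch.no_grad():
        correction = corrector(x).item()
    return arima_pred + correction

def anomaly_score(observed: float, forecast: float) -> float:
    """Score each point by its absolute forecast error."""
    return abs(observed - forecast)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.standard_normal(400)
    series[300] += 3.0                    # inject an obvious anomaly
    corrector = ResidualCorrector()       # untrained here, for illustration only;
                                          # in practice it is fit on past forecast errors
    for t in (250, 300):
        pred = fused_forecast(series[:t], corrector)
        print(t, anomaly_score(series[t], pred))
```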


2021 ◽  
Author(s):  
Ali Moradi Vartouni ◽  
Matin Shokri ◽  
Mohammad Teshnehlab

Protecting websites and applications from cyber-threats is vital for any organization. A web application firewall (WAF) prevents attacks from damaging applications by filtering and monitoring network traffic. A WAF solution based on anomaly detection can identify zero-day attacks. Deep learning is the state-of-the-art method widely used to detect attacks in the anomaly-based WAF area. Although deep learning has demonstrated excellent results on anomaly detection tasks in web requests, there is a trade-off between the false-positive and missed-attack rates, which is a key problem in WAF systems. Moreover, anomaly detection methods struggle with adjusting the threshold level that distinguishes attack from normal traffic. In this paper, we first propose a model based on Deep Support Vector Data Description (Deep SVDD) and compare two feature extraction strategies, one-hot and bigram, on the raw requests. Second, to overcome the threshold challenge, we introduce a novel end-to-end algorithm, Auto-Threshold Deep SVDD (ATDSVDD), that determines an appropriate threshold during the learning process. Finally, we compare our model with other deep models on the CSIC-2010 and ECML/PKDD-2007 datasets. Results show that ATDSVDD on bigram features achieves better performance in terms of accuracy and generalization.
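
The following Python sketch illustrates the Deep SVDD objective this work builds on, not the authors' ATDSVDD: an encoder maps feature vectors (for example, one-hot or bigram encodings of HTTP requests) close to a fixed center, and requests whose distance to that center exceeds a threshold are flagged as attacks. Since the abstract does not detail the learned threshold, a simple quantile of training distances stands in for it here; all dimensions and hyperparameters are assumptions.

```python
# Sketch of a Deep SVDD-style one-class detector with a quantile threshold.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim: int, rep_dim: int = 32):
        super().__init__()
        # Bias-free layers are the usual Deep SVDD choice to avoid a trivial
        # collapsed solution.
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128, bias=False), nn.ReLU(),
            nn.Linear(128, rep_dim, bias=False),
        )

    def forward(self, x):
        return self.net(x)

def train_deep_svdd(normal_x, epochs=50, lr=1e-3, quantile=0.95):
    enc = Encoder(normal_x.shape[1])
    with torch.no_grad():
        c = enc(normal_x).mean(dim=0)          # center = mean initial embedding
    opt = torch.optim.Adam(enc.parameters(), lr=lr)
    for _ in range(epochs):
        dist = ((enc(normal_x) - c) ** 2).sum(dim=1)
        loss = dist.mean()                     # one-class Deep SVDD objective
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        dist = ((enc(normal_x) - c) ** 2).sum(dim=1)
        threshold = torch.quantile(dist, quantile)  # stand-in for the auto-threshold
    return enc, c, threshold

def is_attack(enc, c, threshold, x):
    with torch.no_grad():
        return ((enc(x) - c) ** 2).sum(dim=1) > threshold

if __name__ == "__main__":
    normal = torch.randn(512, 100)             # stand-in for encoded normal requests
    enc, c, thr = train_deep_svdd(normal)
    print(is_attack(enc, c, thr, torch.randn(4, 100) * 5))
```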


2021 ◽  
Vol 54 (5) ◽  
pp. 1-36
Author(s):  
Yuan Luo ◽  
Ya Xiao ◽  
Long Cheng ◽  
Guojun Peng ◽  
Danfeng (Daphne) Yao

Anomaly detection is crucial to ensure the security of cyber-physical systems (CPS). However, due to the increasing complexity of CPSs and more sophisticated attacks, conventional anomaly detection methods, which struggle with the growing volume of data and require domain-specific knowledge, cannot be applied directly to address these challenges. To this end, deep learning-based anomaly detection (DLAD) methods have been proposed. In this article, we review state-of-the-art DLAD methods in CPSs. We propose a taxonomy in terms of the type of anomalies, strategies, implementation, and evaluation metrics to understand the essential properties of current methods. Further, we utilize this taxonomy to identify and highlight new characteristics and designs in each CPS domain. We also discuss the limitations and open problems of these methods. Moreover, to give users insights into choosing proper DLAD methods in practice, we experimentally explore the characteristics of typical neural models, the workflow of DLAD methods, and the running performance of DL models. Finally, we discuss the deficiencies of DL approaches, our findings, and possible directions to improve DLAD methods and motivate future research.


2021 ◽  
Vol 2132 (1) ◽  
pp. 012012
Author(s):  
Jiaqi Zhou

Abstract Time series anomaly detection has always been an important research direction. Early time series anomaly detection methods were mainly statistical and machine learning methods. As researchers continue to uncover the capabilities of deep neural networks, their performance on anomaly detection tasks has become significantly better than that of traditional methods. Given the recent development and application of deep neural networks such as the transformer and the graph neural network (GNN) to time series anomaly detection, the body of research still lacks a comparative evaluation of these recent deep learning methods. This paper studies various deep neural networks suitable for time series, divided into three categories according to their anomaly detection approach. The evaluation is conducted on public datasets. By analyzing the evaluation criteria, this paper discusses the performance of each model, as well as open problems and future directions in the field of time series anomaly detection. The study finds that, in the time series anomaly detection task, the transformer is well suited to long-horizon time series prediction, and that modeling the graph structure of time series may be the most promising way to handle time series anomaly detection in the future.


Author(s):  
Gabriel Rodriguez Garcia ◽  
Gabriel Michau ◽  
Mélanie Ducoffe ◽  
Jayant Sen Gupta ◽  
Olga Fink

The ability to detect anomalies in time series is considered highly valuable in numerous application domains. The sequential nature of time series objects is responsible for an additional feature complexity, ultimately requiring specialized approaches in order to solve the task. Essential characteristics of time series, situated outside the time domain, are often difficult to capture with state-of-the-art anomaly detection methods when no transformations have been applied to the time series. Inspired by the success of deep learning methods in computer vision, several studies have proposed transforming time series into image-like representations, used as inputs for deep learning models, and have led to very promising results in classification tasks. In this paper, we first review the signal to image encoding approaches found in the literature. Second, we propose modifications to some of their original formulations to make them more robust to the variability in large datasets. Third, we compare them on the basis of a common unsupervised task to demonstrate how the choice of the encoding can impact the results when used in the same deep learning architecture. We thus provide a comparison between six encoding algorithms with and without the proposed modifications. The selected encoding methods are Gramian Angular Field, Markov Transition Field, recurrence plot, grey scale encoding, spectrogram, and scalogram. We also compare the results achieved with the raw signal used as input for another deep learning model. We demonstrate that some encodings have a competitive advantage and might be worth considering within a deep learning framework. The comparison is performed on a dataset collected and released by Airbus SAS, containing highly complex vibration measurements from real helicopter flight tests. The different encodings provide competitive results for anomaly detection.
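
As a concrete illustration of the signal-to-image idea, the following NumPy sketch implements one of the reviewed encodings, the Gramian Angular Summation Field (GASF); it shows only the basic transformation and does not include the robustness modifications proposed in the paper.

```python
# Sketch of the Gramian Angular Summation Field (GASF) encoding of a 1-D series.
import numpy as np

def gramian_angular_field(series: np.ndarray) -> np.ndarray:
    """Encode a 1-D series as a GASF image of shape (len(series), len(series))."""
    x = np.asarray(series, dtype=float)
    # Rescale to [-1, 1] so the values can be interpreted as cosines.
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1
    x = np.clip(x, -1.0, 1.0)           # guard against floating-point overshoot
    phi = np.arccos(x)                  # polar-coordinate angles
    # GASF[i, j] = cos(phi_i + phi_j)
    return np.cos(phi[:, None] + phi[None, :])

if __name__ == "__main__":
    t = np.linspace(0, 4 * np.pi, 64)
    image = gramian_angular_field(np.sin(t))
    print(image.shape)                  # (64, 64), ready to feed a 2-D CNN
```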


Author(s):  
M. N. Favorskaya ◽  
L. C. Jain

Introduction: Saliency detection is a fundamental task of computer vision. Its ultimate aim is to localize the objects of interest that grab human visual attention with respect to the rest of the image. A great variety of saliency models based on different approaches has been developed since the 1990s. In recent years, saliency detection has become one of the most actively studied topics in the theory of the convolutional neural network (CNN). Many original solutions using CNNs have been proposed for salient object detection and even event detection. Purpose: A detailed survey of saliency detection methods in the deep learning era makes it possible to understand the current capabilities of the CNN approach for visual analysis conducted through human eye tracking and digital image processing. Results: The survey reflects the recent advances in saliency detection using CNNs. Different models available in the literature, such as static and dynamic 2D CNNs for salient object detection and 3D CNNs for salient event detection, are discussed in chronological order. It is worth noting that automatic salient event detection in long videos became possible using the recently introduced 3D CNNs combined with 2D CNNs for salient audio detection. We also present a short description of public image and video datasets with annotated salient objects or events, as well as the metrics commonly used to evaluate results. Practical relevance: This survey contributes to the study of rapidly developing deep learning methods for saliency detection in images and videos.


Author(s):  
Gunjan Saraogi ◽  
Deepa Gupta ◽  
Lavanya Sharma ◽  
Ajay Rana

Background: Backorders are a common problem affecting supply chains and logistics, sales, customer service, and manufacturing, and they often lead to low sales and low customer satisfaction. A predictive model can identify which products are most likely to experience backorders, giving the organization information and time to adjust and thereby take action to maximize its profit. Objective: To address the issue of predicting backorders, this paper proposes an unsupervised approach to backorder prediction using a deep autoencoder. Method: Artificial intelligence paradigms are researched in order to introduce a predictive model for the present imbalanced data problem, where the number of products going on backorder is rare. Result: Unsupervised anomaly detection using deep autoencoders has shown better area under the receiver operating characteristic and precision-recall curves than supervised classification techniques combined with resampling techniques for imbalanced data problems. Conclusion: We demonstrate that unsupervised anomaly detection methods, specifically deep autoencoders, can be used to learn a good representation of the data. The method can be used as a predictive model for inventory management and can help reduce the bullwhip effect, raise customer satisfaction, and improve operational management in the organization. This technology is expected to create the sentient supply chain of the future – able to feel, perceive, and react to situations at an extraordinarily granular level.
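
The Python sketch below illustrates the general unsupervised idea described in this abstract, not the authors' exact architecture: a deep autoencoder is trained only on non-backorder records, and records with high reconstruction error are flagged as likely backorders. Layer sizes, epochs, and the use of a score ranking rather than a fixed threshold are illustrative assumptions.

```python
# Sketch of reconstruction-error anomaly scoring with a deep autoencoder.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(), nn.Linear(16, 4), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_x, epochs=100, lr=1e-3):
    """Fit the autoencoder on non-backorder (normal) records only."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        loss = loss_fn(model(normal_x), normal_x)
        opt.zero_grad(); loss.backward(); opt.step()
    return model

def backorder_scores(model, x):
    """Per-record reconstruction error, used as the backorder-risk score."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

if __name__ == "__main__":
    normal = torch.randn(1000, 20)            # stand-in for non-backorder records
    model = train(AutoEncoder(20), normal)
    scores = backorder_scores(model, torch.randn(5, 20))
    print(scores)                              # rank or threshold these scores
```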

