Crowd anomaly detection with LSTMs using optical features and domain knowledge for improved inferring

Author(s):  
Mohammad Sabih ◽  
Dinesh Kumar Vishwakarma
Electronics ◽  
2020 ◽  
Vol 9 (7) ◽  
pp. 1164
Author(s):  
João Henriques ◽  
Filipe Caldeira ◽  
Tiago Cruz ◽  
Paulo Simões

Computing and networking systems traditionally record their activity in log files, which have been used for multiple purposes, such as troubleshooting, accounting, post-incident analysis of security breaches, capacity planning and anomaly detection. In earlier systems those log files were processed manually by system administrators, or with the support of basic applications for filtering, compiling and pre-processing the logs for specific purposes. However, as the volume of these log files continues to grow (more logs per system, more systems per domain), it is becoming increasingly difficult to process those logs using traditional tools, especially for less straightforward purposes such as anomaly detection. On the other hand, as systems continue to become more complex, the potential of using large datasets built of logs from heterogeneous sources for detecting anomalies without prior domain knowledge becomes higher. Anomaly detection tools for such scenarios face two challenges: first, devising appropriate data analysis solutions for effectively detecting anomalies in large data sources, possibly without prior domain knowledge; second, adopting data processing platforms able to cope with the large datasets and complex data analysis algorithms required for such purposes. In this paper we address those challenges by proposing an integrated, scalable framework that aims at efficiently detecting anomalous events in large amounts of unlabeled data logs. Detection is supported by clustering and classification methods that take advantage of parallel computing environments. We validate our approach using the well-known NASA Hypertext Transfer Protocol (HTTP) log datasets. Fourteen features were extracted to train a k-means model that separates anomalous and normal events into highly coherent clusters. A second model, built with the XGBoost gradient tree boosting system, uses the binary clustered data to produce a set of simple, interpretable rules. These rules provide the rationale for generalizing the classification to a massive number of unseen events in a distributed computing environment. The classified anomaly events produced by our framework can be used, for instance, as candidates for further forensic and compliance auditing analysis in security management.
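The two-stage pipeline described above, unsupervised k-means clustering to obtain binary normal/anomalous labels followed by an XGBoost gradient tree boosting model that learns compact rules from those labels, can be illustrated with a minimal sketch. The synthetic feature matrix, the smaller-cluster-is-anomalous convention, and all hyperparameters below are illustrative assumptions, not the exact fourteen features or settings used in the paper.

```python
# Minimal sketch of the two-stage pipeline: k-means produces binary
# "normal vs. anomalous" labels, then XGBoost learns rules from them.
# Features and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

# X: one row per log event, columns are numeric features extracted from
# the HTTP log lines (placeholder random data stands in for 14 features).
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 14))

# Stage 1: unsupervised separation into two clusters.
X_std = StandardScaler().fit_transform(X)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_std)
labels = kmeans.labels_

# Assumed convention: treat the smaller cluster as the anomalous one.
anomalous_cluster = np.argmin(np.bincount(labels))
y = (labels == anomalous_cluster).astype(int)

# Stage 2: gradient tree boosting on the cluster-derived labels, yielding
# a compact tree model whose splits act as interpretable rules.
clf = XGBClassifier(n_estimators=50, max_depth=3, learning_rate=0.1)
clf.fit(X_std, y)

# The fitted trees can be dumped as human-readable if/else rules.
print(clf.get_booster().get_dump()[0])
```

The point of the second stage is that the boosted trees, once fitted on the cluster-derived labels, are cheap to evaluate and can be applied in a distributed environment to classify massive numbers of unseen events without re-running the clustering.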


2011 ◽  
Vol 55 (5) ◽  
pp. 11:1-11:11 ◽  
Author(s):  
M. S. Beigi ◽  
S.-F. Chang ◽  
S. Ebadollahi ◽  
D. C. Verma

2018 ◽  
Vol 16 (1) ◽  
pp. 27-39 ◽  
Author(s):  
Yu Hao ◽  
Zhi-Jie Xu ◽  
Ying Liu ◽  
Jing Wang ◽  
Jiu-Lun Fan

Author(s):  
Junjie Ma ◽  
Yaping Dai ◽  
Kaoru Hirota

Population growth has made the probability of incidents at large-scale crowd events higher than ever. In the past decades, automated crowd scene analysis based on computer vision has attracted considerable attention. However, severe occlusions and complex crowd behaviors make such analysis challenging. As a key aspect of crowd scene analysis, many computer vision approaches to dense crowd anomaly detection have been proposed. This work is a survey of computer vision techniques for analyzing dense crowd scenes. It covers two aspects: crowd density estimation and abnormal event detection. Open problems and perspectives are discussed at the end.
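As a concrete illustration of the motion cues that such vision-based methods typically rely on (and that the optical-feature LSTM approach named in the title above builds upon), the sketch below computes a dense optical-flow magnitude histogram per frame pair with OpenCV. The video path, bin count, magnitude range, and flow parameters are assumptions made only for illustration.

```python
# Hedged sketch: per-frame optical-flow features for crowd video.
# Uses OpenCV's Farneback dense optical flow; parameters are illustrative.
import cv2
import numpy as np

def flow_histogram(prev_gray, curr_gray, bins=16, max_mag=20.0):
    """Return a normalized histogram of optical-flow magnitudes."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hist, _ = np.histogram(mag, bins=bins, range=(0.0, max_mag))
    return hist / max(hist.sum(), 1)

cap = cv2.VideoCapture("crowd.mp4")   # hypothetical input video
ok, frame = cap.read()
if not ok:
    raise RuntimeError("could not read video")
prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
features = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    features.append(flow_histogram(prev, curr))
    prev = curr
cap.release()
# `features` is a per-frame sequence that a temporal model (e.g. an LSTM)
# could consume for abnormal-event detection.
```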


2021 ◽  
Vol 14 (10) ◽  
pp. 1717-1729
Author(s):  
Paul Boniol ◽  
John Paparrizos ◽  
Themis Palpanas ◽  
Michael J. Franklin

With the increasing demand for real-time analytics and decision making, anomaly detection methods need to operate over streams of values and handle drifts in data distribution. Unfortunately, existing approaches have severe limitations: they either require prior domain knowledge or become cumbersome and expensive to use in situations with recurrent anomalies of the same type. In addition, subsequence anomaly detection methods usually require access to the entire dataset and are not able to learn and detect anomalies in streaming settings. To address these problems, we propose SAND, a novel online method suitable for domain-agnostic anomaly detection. SAND aims to detect anomalies based on their distance to a model that represents normal behavior. SAND relies on a novel streaming methodology to incrementally update this model, which adapts to distribution drifts and omits obsolete data. The experimental results on several real-world datasets demonstrate that SAND correctly identifies single and recurrent anomalies without prior knowledge of the characteristics of these anomalies. SAND outperforms the current state-of-the-art algorithms by a large margin in terms of accuracy while achieving orders-of-magnitude speedups.
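The core idea, scoring each incoming subsequence by its distance to an incrementally maintained model of normal behavior, can be sketched in a much simplified form. The sketch below uses plain Euclidean distance to a small set of running centroids with a fixed decay; SAND itself relies on shape-based clustering and a more elaborate update scheme, so this illustrates only the streaming, distance-to-model principle, not the actual algorithm.

```python
# Simplified sketch of streaming, distance-to-model anomaly scoring.
# This is NOT the SAND algorithm itself; window size, number of
# centroids, and the decayed update rule are assumptions.
import numpy as np

class StreamingDistanceDetector:
    def __init__(self, window=64, n_centroids=4, decay=0.05, seed=0):
        self.window = window
        self.decay = decay
        rng = np.random.default_rng(seed)
        self.centroids = rng.normal(size=(n_centroids, window))

    def update_and_score(self, subseq):
        # Z-normalize the incoming subsequence.
        subseq = (subseq - subseq.mean()) / (subseq.std() + 1e-8)
        # Anomaly score = distance to the closest "normal" centroid.
        dists = np.linalg.norm(self.centroids - subseq, axis=1)
        nearest = dists.argmin()
        # Incremental update: pull the nearest centroid toward the new
        # subsequence, so the model tracks distribution drift and slowly
        # forgets obsolete behavior.
        self.centroids[nearest] = (
            (1 - self.decay) * self.centroids[nearest] + self.decay * subseq)
        return dists[nearest]

# Usage on a synthetic stream: higher scores flag candidate anomalies.
rng = np.random.default_rng(1)
stream = np.sin(np.linspace(0, 200, 20_000)) + 0.1 * rng.normal(size=20_000)
det = StreamingDistanceDetector()
scores = [det.update_and_score(stream[i:i + 64])
          for i in range(0, len(stream) - 64, 64)]
```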


2017 ◽  
Vol 77 (14) ◽  
pp. 17755-17777 ◽  
Author(s):  
Joelmir Ramos ◽  
Nadia Nedjah ◽  
Luiza de Macedo Mourelle ◽  
Brij B. Gupta
