Anomaly Detection from Log Files Using Multidimensional Analysis Model

Author(s): Yassine Azizi, Mostafa Azizi, Mohamed Elboukhari
2021, Vol 12 (8)
Author(s): David Della-Morte, Francesca Pacifici, Camillo Ricordi, Renato Massoud, Valentina Rovella, ...

Abstract: The pathophysiology of coronavirus disease 2019 (COVID-19), caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and especially of its complications, is still not fully understood. In fact, a very high number of patients with COVID-19 die of thromboembolic causes. A role for plasminogen, as a precursor of fibrinolysis, has been hypothesized. In this study, we aimed to investigate the association between plasminogen levels and COVID-19-related outcomes in a population of 55 infected Caucasian patients (mean age: 69.8 ± 14.3, 41.8% female). Low levels of plasminogen were significantly associated with inflammatory markers (CRP, PCT, and IL-6), markers of coagulation (D-dimer, INR, and APTT), and markers of organ dysfunction (high fasting blood glucose and a decrease in the glomerular filtration rate). A multidimensional analysis model, including the correlation of coagulation with inflammatory parameters, indicated that plasminogen tended to cluster together with IL-6, suggesting a common pathway of activation during disease complications. Moreover, low levels of plasminogen strongly correlated with mortality in COVID-19 patients even after multiple adjustments for confounding factors. These data suggest that plasminogen may play a pivotal role in controlling the complex mechanisms behind COVID-19 complications, and may be useful both as a prognostic biomarker and as a therapeutic target against this extremely aggressive infection.


Electronics, 2020, Vol 9 (7), pp. 1164
Author(s): João Henriques, Filipe Caldeira, Tiago Cruz, Paulo Simões

Computing and networking systems traditionally record their activity in log files, which have been used for multiple purposes, such as troubleshooting, accounting, post-incident analysis of security breaches, capacity planning, and anomaly detection. In earlier systems those log files were processed manually by system administrators, or with the support of basic applications for filtering, compiling, and pre-processing the logs for specific purposes. However, as the volume of these log files continues to grow (more logs per system, more systems per domain), it is becoming increasingly difficult to process those logs using traditional tools, especially for less straightforward purposes such as anomaly detection. On the other hand, as systems continue to become more complex, the potential of using large datasets built from logs of heterogeneous sources for detecting anomalies without prior domain knowledge becomes higher. Anomaly detection tools for such scenarios face two challenges. First, devising appropriate data analysis solutions for effectively detecting anomalies from large data sources, possibly without prior domain knowledge. Second, adopting data processing platforms able to cope with the large datasets and complex data analysis algorithms required for such purposes. In this paper we address those challenges by proposing an integrated scalable framework that aims at efficiently detecting anomalous events in large amounts of unlabeled data logs. Detection is supported by clustering and classification methods that take advantage of parallel computing environments. We validate our approach using the well-known NASA Hypertext Transfer Protocol (HTTP) logs datasets. Fourteen features were extracted in order to train a k-means model for separating anomalous and normal events into highly coherent clusters.
A second model, based on the XGBoost system, which implements a gradient tree boosting algorithm, uses the binary cluster labels from the first stage to produce a set of simple, interpretable rules. These rules represent the rationale for generalizing its application over a massive number of unseen events in a distributed computing environment. The anomaly events classified by our framework can be used, for instance, as candidates for further forensic and compliance auditing analysis in security management.
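The two-stage pipeline described above (unsupervised k-means clustering followed by a supervised tree-boosting model trained on the cluster labels) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature values are synthetic stand-ins for the fourteen extracted log features, and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
# Sketch of the two-stage pipeline: k-means separates events into two
# clusters, then a boosted-tree classifier learns to reproduce (and
# generalize) the cluster assignment on unseen events.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
# Synthetic stand-in for the 14 per-event features: normal events
# cluster near 0, anomalous events near 5.
normal = rng.normal(0.0, 1.0, size=(200, 14))
anomalous = rng.normal(5.0, 1.0, size=(20, 14))
X = np.vstack([normal, anomalous])

# Stage 1: unsupervised separation into two clusters (k = 2).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Treat the smaller cluster as the anomalous one.
anomaly_cluster = np.argmin(np.bincount(labels))
y = (labels == anomaly_cluster).astype(int)

# Stage 2: supervised tree boosting trained on the cluster labels
# (the paper uses XGBoost; sklearn's gradient boosting stands in here).
clf = GradientBoostingClassifier(random_state=0).fit(X, y)

# The fitted trees generalize the cluster assignment to unseen events.
new_event = rng.normal(5.0, 1.0, size=(1, 14))
print(clf.predict(new_event)[0])
```

In a production setting, stage 2 is what scales: once the trees are fitted, scoring new events is embarrassingly parallel across a distributed environment, which is the rationale the abstract gives for deriving rules from the clustered data.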


Author(s): M.A. Ganzhur, A.P. Ganzhur, D.L. Romanov

The article provides a multidimensional analysis of the anomaly detection process in "smart field" data processing systems. Simulation of anomaly …


2012, Vol 1 (1)
Author(s): Anass El haddadi, Bernard Dousset, Ilham Berrada

The strategy concept has changed dramatically: from long-range planning to strategic planning, and then to strategic responsiveness. This responsiveness implies moving from a concept of change to a concept of continuous evolution. In our context, the competitive intelligence system presented aims to improve decision-making in all aspects of business life, particularly for offensive and innovative decisions. In this paper we present XPlor EveryWhere, our competitive intelligence system based on a multidimensional analysis model for mobile devices. The objective of this system is to capture the information environment in all dimensions of a decision problem, and to exploit that information by analyzing how the interactions among those dimensions evolve.


2022, Vol 16 (1)

Anomaly detection is a very important step in building a secure and trustworthy system. Analyzing logs and detecting failures and anomalies manually is daunting. In this paper, we propose an approach that leverages the pattern-matching capabilities of Convolutional Neural Networks (CNNs) for anomaly detection in system logs. Features are extracted from log files using a windowing technique. From these features, a one-dimensional image (1×n) is generated, whose pixel values correspond to the features of the logs. A 1D convolution operation is applied to these images, followed by max pooling. After the convolution layers, a multi-layer feed-forward neural network is used as a classifier that learns to distinguish normal from abnormal logs based on the representation created by the convolution layers. The model learns the variation in log patterns for normal and abnormal behavior. The proposed approach achieved improved accuracy compared to existing approaches for anomaly detection in Hadoop Distributed File System (HDFS) logs.
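The convolution-and-pooling representation step can be illustrated with a minimal NumPy sketch. The window features and the kernel weights below are purely hypothetical (in the proposed approach the filter weights are learned during training and the features come from the windowing step); the point is only to show how a 1×n feature "image" is reduced to a pooled feature map.

```python
# Minimal illustration of one 1-D convolution + max-pooling stage
# applied to a 1×n "image" of log-window features.
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation, as in CNN layers)."""
    k = len(kernel)
    return np.array([np.dot(x[i:i + k], kernel) for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    """Non-overlapping max pooling over windows of `size`."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

# Hypothetical 1×8 feature vector for one log window
# (e.g. event counts per log template).
window = np.array([0., 1., 3., 0., 7., 7., 1., 0.])

# Hypothetical edge-detecting kernel: responds to abrupt changes
# in event frequency (learned, in the actual model).
kernel = np.array([-1., 0., 1.])

feature_map = conv1d(window, kernel)   # length 8 - 3 + 1 = 6
pooled = max_pool(feature_map)         # length 3

print(pooled)
```

The pooled map is what the downstream feed-forward classifier consumes: pooling keeps only the strongest local response, so the classifier sees where the log pattern changes sharply rather than the raw feature values.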

