log file
Recently Published Documents

TOTAL DOCUMENTS: 325 (FIVE YEARS: 114)
H-INDEX: 18 (FIVE YEARS: 3)

2022 ◽  
Vol 13 (1) ◽  
pp. 0-0

There is a need for an automatic log-file template detection tool that can discover all log message templates in the search space. At the same time, the template detection tool must cope with two constraints: (i) it should not be too general and (ii) it should not be too specific. These constraints contradict one another, so template detection can be treated as a multi-objective optimization problem. Thus, a novel multi-objective optimization based log-file template detection approach named LTD-MO is proposed in this paper. It uses a new multi-objective swarm intelligence algorithm, chicken swarm optimization, to solve this hard optimization problem. Moreover, it analyzes all templates in the search space and selects a Pareto-optimal solution set to balance the competing objectives. The proposed approach is implemented and evaluated on eight publicly available benchmark log datasets. The empirical analysis shows that LTD-MO detects a large number of appropriate templates, significantly outperforming existing techniques on all datasets.
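
As an illustrative sketch only (the generality/specificity heuristics and the Template interface below are assumptions, not LTD-MO's actual objective functions or its chicken swarm optimizer), the following Python snippet shows how candidate templates could be scored on two conflicting objectives and reduced to a Pareto-optimal set:

```python
# Sketch: two-objective template scoring and Pareto-front selection.
# The scoring heuristics are illustrative assumptions, not the objectives
# defined by LTD-MO.

def generality(template, messages):
    """Fraction of log messages the template matches (higher = more general)."""
    return sum(template.matches(m) for m in messages) / len(messages)

def specificity(template):
    """Fraction of template tokens that are literals rather than wildcards."""
    literals = sum(1 for tok in template.tokens if tok != "<*>")
    return literals / len(template.tokens)

def pareto_front(candidates, messages):
    """Keep every template that is not dominated on both objectives at once."""
    scored = [(t, generality(t, messages), specificity(t)) for t in candidates]
    front = []
    for t, g, s in scored:
        dominated = any(g2 >= g and s2 >= s and (g2 > g or s2 > s)
                        for _, g2, s2 in scored)
        if not dominated:
            front.append(t)
    return front
```

In a full implementation the swarm optimizer would generate and refine the candidate templates; the Pareto-front step above only illustrates how the two objectives are traded off rather than collapsed into a single score.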


Data in Brief ◽  
2021 ◽  
Vol 39 ◽  
pp. 107672 ◽  
Author(s):  
Michal Munk ◽  
Anna Pilkova ◽  
Ľubomír Benko ◽  
Petra Blazekova ◽  
Peter Svec

Author(s):  
Rasim M. Alguliyev ◽  
Fargana J. Abdullayeva ◽  
Sabira S. Ojagverdiyeva

2021 ◽  
Vol 4 ◽  
Author(s):  
Rashid Zaman ◽  
Marwan Hassani ◽  
Boudewijn F. Van Dongen

In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log file conforms to an existing process model; this inspection additionally quantifies the conformance in an explainable manner. Online conformance checking processes streaming event logs, maintaining precise insight into the running cases and mitigating any non-conformance in a timely manner. State-of-the-art online conformance checking approaches bound memory either by delimiting the storage of events per case or by limiting the number of cases to a specific window width. The former technique still requires unbounded memory because the number of cases to store is unlimited, while the latter technique forgets running, not yet concluded, cases in order to respect the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity according to the process model but whose relevant case has been forgotten; we refer to these as orphan events. The naïve way to cope with an orphan event is to either neglect its relevant case for conformance checking or treat it as an altogether new case. However, this might result in misleading process insights, for instance overestimated non-conformance. In order to bound memory yet effectively incorporate orphan events into processing, we propose a missing-prefix imputation approach for such orphan events. Our approach utilizes the existing process model to impute the missing prefix. Furthermore, we leverage case storage management to increase the accuracy of the prefix prediction: we propose a systematic forgetting mechanism that distinguishes and forgets the cases that can be reliably regenerated as a prefix upon receipt of a future orphan event. We evaluate the efficacy of the proposed approach through multiple experiments with synthetic and three real event logs while simulating a streaming setting. Our approach achieves considerably more realistic conformance statistics than the state of the art while requiring the same storage.
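
A minimal sketch, under simplifying assumptions (the process model is reduced to a hand-written table mapping each activity to its shortest possible prefix, and all names are hypothetical, not the authors' implementation), of how an orphan event's missing prefix might be imputed and how a case could be forgotten only when it is regenerable:

```python
from collections import defaultdict

# Assumed, simplified process model: activity -> shortest activity sequence
# that can precede it according to the model.
SHORTEST_PREFIX = {
    "register": [],
    "check": ["register"],
    "decide": ["register", "check"],
    "notify": ["register", "check", "decide"],
}

cases = defaultdict(list)   # case id -> stored activity sequence
forgotten = set()           # case ids evicted to bound memory

def observe(case_id, activity):
    """Handle one streamed event; impute a model-based prefix for orphan events."""
    if case_id in forgotten and case_id not in cases:
        # Orphan event: its earlier events were forgotten, so regenerate the
        # missing prefix from the process model instead of starting a new case.
        cases[case_id] = list(SHORTEST_PREFIX.get(activity, []))
        forgotten.discard(case_id)
    cases[case_id].append(activity)

def forget(case_id):
    """Evict a case only if its current prefix can be reliably regenerated."""
    last = cases[case_id][-1]
    regenerable = cases[case_id] == SHORTEST_PREFIX.get(last, []) + [last]
    if regenerable:
        del cases[case_id]
        forgotten.add(case_id)
    return regenerable
```

The actual conformance statistics would then be computed over the (partly imputed) case sequences; the sketch only shows the memory-bounding and imputation bookkeeping.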


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Daniel Hofer ◽  
Markus Jäger ◽  
Aya Khaled Youssef Sayed Mohamed ◽  
Josef Küng

Purpose
For aiding computer security experts in their work, log files are a crucial piece of information. The time domain in particular is very important, because in most cases timestamps are the only link between events caused by attackers, faulty systems or simple errors and their corresponding entries in log files. With the idea of storing and analyzing this log information in graph databases, we need a suitable model for storing and connecting timestamps and their events. This paper aims to find and evaluate different approaches to storing timestamps in graph databases, along with their individual benefits and drawbacks.

Design/methodology/approach
We analyse three different approaches to how timestamp information can be represented and stored in graph databases. To check the models, we set up four typical questions that are important for log file analysis and tested them for each of the models. During the evaluation, we used performance and other properties as metrics for how suitable each model is for representing the log files’ timestamp information. In the last part, we try to improve one promising-looking model.

Findings
We conclude that the simplest model, with the fewest graph database-specific concepts in use, is also the one yielding the simplest and fastest queries.

Research limitations/implications
Limitations of this research are that only one graph database was studied and that improvements to the query engine might change future results.

Originality/value
In this study, we addressed the issue of storing timestamps in graph databases in a meaningful, practical and efficient way. The results can be used as a pattern for similar scenarios and applications.
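
A minimal sketch, using the Neo4j Python driver and Cypher with assumed labels and property names (LogEvent, Day, ts; these are not necessarily the models compared in the paper), of two ways to attach timestamps to events and of one typical analysis question, a time-window query:

```python
from neo4j import GraphDatabase

# Connection details are placeholders.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

# Model A: the timestamp is a plain property on each log event node.
CREATE_PROPERTY_MODEL = """
CREATE (:LogEvent {message: $message, ts: datetime($ts)})
"""

# Model B: event nodes are additionally linked to shared time nodes
# (one node per day here), giving the timestamp its own graph structure.
CREATE_TIME_NODE_MODEL = """
MERGE (d:Day {date: date(datetime($ts))})
CREATE (e:LogEvent {message: $message, ts: datetime($ts)})
CREATE (e)-[:OCCURRED_ON]->(d)
"""

# Typical log-analysis question: all events inside a time window (model A).
EVENTS_IN_WINDOW = """
MATCH (e:LogEvent)
WHERE e.ts >= datetime($start) AND e.ts < datetime($end)
RETURN e.message AS message, e.ts AS ts
ORDER BY ts
"""

with driver.session() as session:
    session.run(CREATE_PROPERTY_MODEL,
                message="login failed", ts="2021-06-01T12:00:00Z")
    for record in session.run(EVENTS_IN_WINDOW,
                              start="2021-06-01T00:00:00Z",
                              end="2021-06-02T00:00:00Z"):
        print(record["message"], record["ts"])
driver.close()
```

The property-only model roughly corresponds to the "simplest model with the fewest graph database-specific concepts" that the authors found to yield the simplest and fastest queries; the time-node model trades extra structure for the ability to traverse from time units to their events.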


2021 ◽  
Vol 161 ◽  
pp. S627-S628
Author(s):  
S. Cilla ◽  
P. Viola ◽  
V.E. Morabito ◽  
C. Romano ◽  
M. Craus ◽  
...  

Author(s):  
Seng Boh Lim ◽  
Paola Godoy Scripes ◽  
Mary Napolitano ◽  
Ergys Subashi ◽  
Neelam Tyagi ◽  
...  
