Process Discovery
Recently Published Documents


Total documents: 252 (last five years: 99)
H-index: 21 (last five years: 3)

2022, Vol. 183 (3-4), pp. 293-317
Author(s): Anna Kalenkova, Josep Carmona, Artem Polyvyanyy, Marcello La Rosa

State-of-the-art process discovery methods construct free-choice process models from event logs. Consequently, the constructed models do not take into account indirect dependencies between events. Whenever the input behaviour is not free-choice, these methods fail to provide a precise model. In this paper, we propose a novel approach for enhancing free-choice process models by adding non-free-choice constructs discovered a posteriori via region-based techniques. This allows us to benefit from the performance of existing process discovery methods and the accuracy of the employed fundamental synthesis techniques. We prove that the proposed approach preserves fitness with respect to the event log while improving precision when indirect dependencies exist. The approach has been implemented and tested on both synthetic and real-life datasets. The results show its effectiveness in repairing models discovered from event logs.
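The region-based synthesis itself is beyond a short snippet, but the indirect dependencies the abstract refers to can be illustrated with a small check over an event log: flag activity pairs where one activity is (almost) always eventually followed by another without the two ever being directly adjacent. This is only an illustrative heuristic, not the authors' method; the function name, threshold, and toy log below are assumptions.

from collections import defaultdict

def long_distance_dependencies(traces, threshold=0.9):
    # Flag ordered activity pairs (a, b) where b eventually follows a in at
    # least `threshold` of the traces containing a, even though a and b are
    # never directly adjacent -- a rough signal of an indirect dependency
    # that a purely free-choice model cannot express.
    follows = defaultdict(int)   # a is eventually followed by b
    adjacent = set()             # a is directly followed by b
    occurs = defaultdict(int)    # number of traces containing a

    for trace in traces:
        seen = set()
        for i, act in enumerate(trace):
            for earlier in seen:
                follows[(earlier, act)] += 1
            if i > 0:
                adjacent.add((trace[i - 1], act))
            seen.add(act)
        for act in seen:
            occurs[act] += 1

    return {
        pair: follows[pair] / occurs[pair[0]]
        for pair in follows
        if pair not in adjacent and follows[pair] / occurs[pair[0]] >= threshold
    }

# Toy log: the later choice between d and e depends on the earlier b or c.
log = [list("abmd"), list("acme"), list("abmd"), list("acme")]
print(long_distance_dependencies(log))
# {('a', 'm'): 1.0, ('b', 'd'): 1.0, ('c', 'e'): 1.0}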


Author(s): Yutika Amelia Effendi, Riyanarto Sarno, Danica Virlianda Marsha

2021
Author(s): Dominique Sommers, Vlado Menkovski, Dirk Fahland

2021, Vol. 5 (4), pp. 1-13
Author(s): Muhammad Faizan, Megat F. Zuhairi, Shahrinaz Ismail

The potential of process mining is steadily growing with the increasing amount of event data. Process mining techniques use event logs to automatically discover process models, recommend improvements, predict processing times, check conformance, and recognize anomalies, deviations, and bottlenecks. However, proper handling of event logs when evaluating and using them as input is crucial to any process mining technique. When process mining techniques are applied to flexible systems with a large number of decisions taken at runtime, the outcome is often an unstructured or semi-structured process model that is hard to comprehend. Existing approaches are good at discovering and visualizing structured processes but often struggle with less structured ones. Surprisingly, process mining is most useful precisely in domains where flexibility is desired. A good illustration is the "patient treatment" process in a hospital, where the ability to deviate in order to deal with changing conditions is crucial and insights into actual operations are valuable. However, such logs contain a significant amount of diversity, which leads to complicated, difficult-to-understand models. Trace clustering is a method for decreasing the complexity of process models in this context while also increasing their comprehensibility and accuracy. This paper discusses process mining and event logs, and presents a clustering approach to pre-process event logs: homogeneous subsets of the event log are created and a process model is generated for each subset. These homogeneous subsets are then evaluated independently of each other, which significantly improves the quality of mining results in flexible environments. The presented approach improves the fitness and precision of the discovered models while reducing their complexity, resulting in well-structured and easily understandable process discovery results.
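As a rough sketch of the pre-processing step described above, the traces of an event log can be encoded as bag-of-activities frequency vectors and clustered into homogeneous sub-logs, each of which is then mined separately. This is not the paper's exact clustering procedure; the use of k-means, the number of clusters, and all names below are illustrative assumptions.

from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def cluster_traces(traces, n_clusters=3, seed=0):
    # Split an event log (list of traces, each a list of activity names)
    # into homogeneous sub-logs by clustering bag-of-activities profiles.
    activities = sorted({a for t in traces for a in t})
    index = {a: i for i, a in enumerate(activities)}

    # One frequency vector per trace: how often each activity occurs in it.
    profiles = np.zeros((len(traces), len(activities)))
    for row, trace in enumerate(traces):
        for act, count in Counter(trace).items():
            profiles[row, index[act]] = count

    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(profiles)

    sublogs = [[] for _ in range(n_clusters)]
    for trace, label in zip(traces, labels):
        sublogs[label].append(trace)
    return sublogs  # each sub-log can now be mined into its own model

# Example: two behavioural variants end up in separate sub-logs.
log = [list("abce")] * 5 + [list("adfe")] * 5
for i, sub in enumerate(cluster_traces(log, n_clusters=2)):
    print(i, len(sub), "traces")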


2021
Author(s): Zhenyu Zhang, Ryan Hildebrant, Fatemeh Asgarinejad, Nalini Venkatasubramanian, Shangping Ren

2021, Vol. 10 (09), pp. 116-121
Author(s): Huiling LI, Shuaipeng ZHANG, Xuan SU

Information systems collect large numbers of business process event logs, and process discovery aims to discover process models from these logs. Many process discovery methods have been proposed, but most of them still have problems when processing event logs, such as low mining efficiency and poor process model quality. Trace clustering decomposes the original log and can effectively mitigate these problems. Many trace clustering methods exist, such as clustering based on vector space approaches, context-aware trace clustering, and model-based sequence clustering, and different methods often yield different clustering results. This paper therefore proposes a trace-clustering-based preprocessing method to improve the performance of process discovery. First, the event log is decomposed into a set of sub-logs by trace clustering; then a process model is discovered from each sub-log by a process mining method. Experimental analysis on the datasets shows that the proposed method not only effectively improves the time performance of process discovery but also improves the quality of the process models.
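A minimal sketch of the two-step pipeline described above, assuming a deliberately simple clustering criterion (traces sharing the same activity set form one sub-log) and a directly-follows graph as a stand-in for whatever discovery algorithm is applied to each sub-log; all names are illustrative, not the paper's implementation.

from collections import defaultdict

def discover_dfg(sublog):
    # Discover a directly-follows graph: edge (a, b) carries the number of
    # times activity b immediately follows activity a in the sub-log.
    dfg = defaultdict(int)
    for trace in sublog:
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dict(dfg)

def mine_per_sublog(traces):
    # Step 1: group traces that share the same activity set (a very simple
    # clustering criterion). Step 2: discover one model per sub-log.
    sublogs = defaultdict(list)
    for trace in traces:
        sublogs[frozenset(trace)].append(trace)
    return {key: discover_dfg(sub) for key, sub in sublogs.items()}

log = [list("abcd"), list("abcd"), list("aefd"), list("aefd")]
for variant, dfg in mine_per_sublog(log).items():
    print(sorted(variant), dfg)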


2021, Vol. 8 (1)
Author(s): Riyanarto Sarno, Kelly Rossa Sungkono, Muhammad Taufiqulsa’di, Hendra Darmawan, Achmad Fahmi, ...

Process discovery helps companies automatically discover their existing business processes from the vast stored event logs. Process discovery algorithms have developed rapidly to discover several types of relations, i.e., choice relations and non-free-choice relations with invisible tasks. Invisible tasks in non-free choice, introduced by the α$ method, are a construct that combines non-free choice and invisible tasks. α$ proposed rules over the ordering relations of two activities for determining invisible tasks in non-free choice. Because the event log records sequences of activities, the rules of α$ check combinations of invisible tasks within non-free-choice constructs; this checking process is time-consuming and results in high computing times for α$. This research proposes the Graph-based Invisible Task (GIT) method to efficiently discover invisible tasks in non-free choice. The GIT method represents sequences of business activities as graphs and defines rules to discover invisible tasks in non-free choice based on relationships in those graphs. Analysing graph relationships with the GIT rules is more efficient than the iterative checking of combined activities performed by α$. This research measures the time efficiency of storing the event log and discovering a process model to evaluate the GIT algorithm. The graph database has the highest storage time for batch event logs but a low storage time for streaming event logs. Furthermore, on an event log with 99 traces, the GIT algorithm discovers a process model 42 times faster than α++ and 43 times faster than α$. The GIT algorithm can also handle 981 traces, while α++ and α$ can handle at most 99 traces. Discovering a process model with GIT has lower time complexity than with α$: GIT runs in O(n³), whereas α$ runs in O(n⁴). These evaluation results show a significant improvement of the GIT method in terms of time efficiency.
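The GIT rules themselves are not reproduced here, but the graph representation they operate on can be sketched: a directed graph whose edges record which cases take each directly-follows step, mimicking a graph-database view of the event log. The use of networkx, the function name, and the toy log are assumptions made for illustration, not the paper's implementation.

import networkx as nx

def build_event_graph(event_log):
    # event_log: dict mapping case id -> ordered list of activities.
    # Builds a directed graph whose edges list the cases taking each
    # directly-follows step, so later rules can query relationships
    # between activities instead of re-scanning the raw traces.
    g = nx.DiGraph()
    for case_id, trace in event_log.items():
        for a, b in zip(trace, trace[1:]):
            if g.has_edge(a, b):
                g[a][b]["cases"].append(case_id)
            else:
                g.add_edge(a, b, cases=[case_id])
    return g

log = {
    "case1": ["register", "check", "approve", "archive"],
    "case2": ["register", "check", "reject", "archive"],
}
g = build_event_graph(log)
for a, b, data in g.edges(data=True):
    print(f"{a} -> {b}: {data['cases']}")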

