Conformance Checking of Dwelling Time Using a Token-based Method

Author(s):  
Bambang Jokonowo ◽  
Nenden Siti Fatonah ◽  
Emelia Akashah Patah Akhir

Background: A standard operating procedure (SOP) is a series of business activities carried out to achieve organisational goals; each activity is recorded, together with its location, in an information system (e.g., SCM, ERP, LMS, CRM). These recorded activities are known as event data and are stored in a database known as an event log. Objective: Based on the event log, we can calculate fitness to determine whether the business process SOP matches the actual business process. Methods: This study obtains the event log from a terminal operating system (TOS), which records dwelling time at a container port. Conformance checking with the token-based replay method calculates fitness by comparing the event log with the process model. Results: Discovery with the Alpha algorithm yielded (a, b, n, o, p) as the most frequently traversed trace. The fitness calculation returned 1.0 when the produced, missing, and remaining tokens were replayed for each of the other traces. Conclusion: Thus, if process mining produces a fitness of more than 0.80, the process model reflects the actual business process. Keywords: Conformance Checking, Dwelling Time, Event Log, Fitness, Process Discovery, Process Mining
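The fitness value reported in the abstract above can be reproduced with the standard token-based replay formula, fitness = ½(1 − m/c) + ½(1 − r/p), where m, c, r, p are the missing, consumed, remaining, and produced token counts. A minimal sketch (the function name and the counts are illustrative, not taken from the paper):

```python
def replay_fitness(missing, remaining, consumed, produced):
    """Standard token-based replay fitness:
    fitness = 1/2 * (1 - m/c) + 1/2 * (1 - r/p)."""
    return 0.5 * (1 - missing / consumed) + 0.5 * (1 - remaining / produced)

# A trace that replays perfectly leaves no missing or remaining tokens:
print(replay_fitness(missing=0, remaining=0, consumed=6, produced=6))  # 1.0
# Any missing or remaining tokens pull the fitness below 1.0:
print(replay_fitness(missing=1, remaining=1, consumed=4, produced=4))  # 0.75
```

A fitness of 1.0 therefore corresponds exactly to the perfect-replay case the abstract describes.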

2021 ◽  
Vol 11 (22) ◽  
pp. 10556
Author(s):  
Heidy M. Marin-Castro ◽  
Edgar Tello-Leal

Process Mining allows organizations to obtain actual business process models from event logs (discovery), to compare the event log or the discovered process model with an existing reference model of the same process (conformance), and to detect issues in the executed process in order to improve it (enhancement). An essential element in all three process mining tasks is data cleaning, used to reduce the complexity inherent to real-world event data so that it can be easily interpreted, manipulated, and processed. Thus, new techniques and algorithms for event data preprocessing have been of interest to the business process research community. In this paper, we conduct a systematic literature review and provide, for the first time, a survey of relevant approaches to event data preprocessing for business process mining tasks. The aim of this work is to construct a categorization of techniques and methods for event data preprocessing and to identify relevant challenges around these techniques. We present a quantitative and qualitative analysis of the most popular techniques for event log preprocessing. We also study and present findings about how a preprocessing technique can improve a process mining task, and we discuss emerging challenges in data preprocessing in the context of process mining. The results of this study reveal that preprocessing techniques have a high impact on the performance of process mining tasks. Data cleaning requirements depend on the characteristics of the event logs (voluminous logs, high variability in trace sizes, changes in activity durations). In this scenario, most of the surveyed works use more than a single preprocessing technique to improve the quality of the event log. Trace clustering and trace/event-level filtering turned out to be the most commonly used preprocessing techniques due to their ease of implementation, and they adequately manage noise and incompleteness in event logs.
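Trace-level filtering of the kind the survey highlights can be sketched as dropping infrequent trace variants, i.e. cases whose activity sequence occurs too rarely to be trusted. The function name, threshold, and toy log below are illustrative, not from any surveyed work:

```python
from collections import Counter

def filter_infrequent_variants(log, min_count=2):
    """Keep only cases whose activity sequence (variant) occurs
    at least min_count times in the event log."""
    variants = Counter(tuple(trace) for trace in log)
    return [trace for trace in log if variants[tuple(trace)] >= min_count]

log = [["a", "b", "c"], ["a", "b", "c"], ["a", "x", "c"], ["a", "b", "c"]]
# The singleton variant ["a", "x", "c"] is treated as noise and dropped:
print(filter_infrequent_variants(log))
```

Real preprocessing pipelines typically combine such a filter with other techniques (e.g. trace clustering), in line with the survey's finding that most works use more than one technique.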


2021 ◽  
Author(s):  
Ashok Kumar Saini ◽  
Ruchi Kamra ◽  
Utpal Shrivastava

Conformance Checking (CC) techniques enable us to quantify the deviation between modelled behavior and actual execution behavior. Most organizations have Process-Aware Information Systems that record the details of system execution, along with a process model showing how the process should be executed. The key intention of Process Mining is to extract facts from the event log and use them for analysis, ratification, improvement, and redesign of a process. Researchers have proposed various CC techniques for specific applications and process models. This paper provides a detailed study of the key concepts and contributions of Process Mining and of how it helps in achieving business goals. The current challenges and opportunities in Process Mining are also discussed. The survey covers CC techniques proposed by researchers, organized by key criteria such as quality parameters, perspective, algorithm type, tools, and achievements.


2021 ◽  
Vol 16 ◽  
pp. 1-14
Author(s):  
Zineb Lamghari

Process discovery techniques aim at automatically generating a process model that accurately describes a Business Process (BP) based on event data. Existing discovery algorithms assume that recorded events result only from an operational BP type, whereas the management community defines three BP types: Management, Support, and Operational. Each BP type is distinguished by different properties, such as the main business process objective, treated as domain knowledge. This highlights a gap in process discovery techniques: obtaining process models according to the Management and Support business process types. In this paper, we demonstrate that business process types can guide process discovery in generating process models. Special attention is given to the use of process mining to address this challenge.


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6630
Author(s):  
Marcin Szpyrka ◽  
Edyta Brzychczy ◽  
Aneta Napieraj ◽  
Jacek Korski ◽  
Grzegorz J. Nalepa

Conformance checking is a process mining technique that compares a process model with an event log of the same process to check whether the current execution stored in the log conforms to the model and vice versa. This paper deals with the conformance checking of a longwall shearer process. The approach uses place-transition Petri nets with inhibitor arcs for modeling purposes. We use event log files collected from a few coal mines located in Poland by Famur S.A., one of the global suppliers of coal mining machines. One of the main advantages of the approach is the possibility for both offline and online analysis of the log data. The paper presents a detailed description of the longwall process, an original formal model we developed, selected elements of the approach’s implementation and the results of experiments.
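Replaying a trace on a place-transition net with inhibitor arcs, the modeling formalism used in the paper, takes only a few lines: a transition is enabled iff every input place holds a token and every inhibitor place is empty. The net encoding and activity names below are an illustrative toy, not the authors' longwall model:

```python
def replays(net, initial_marking, trace):
    """Replay a trace on a place-transition net with inhibitor arcs.
    net maps each activity to (input places, output places, inhibitor places);
    returns True iff every event in the trace fires an enabled transition."""
    marking = dict(initial_marking)
    for activity in trace:
        consume, produce, inhibit = net[activity]
        enabled = (all(marking.get(p, 0) >= 1 for p in consume)
                   and all(marking.get(p, 0) == 0 for p in inhibit))
        if not enabled:
            return False  # non-conformant: transition not enabled here
        for p in consume:
            marking[p] -= 1
        for p in produce:
            marking[p] = marking.get(p, 0) + 1
    return True

# Toy net: "cut" may only fire while the "fault" place is empty.
net = {
    "start": ([], ["ready"], []),
    "cut":   (["ready"], ["done"], ["fault"]),
}
print(replays(net, {}, ["start", "cut"]))            # True
print(replays(net, {"fault": 1}, ["start", "cut"]))  # False: inhibitor blocks "cut"
```

The same replay loop supports both offline analysis of a full log file and online analysis of a stream, one event at a time, which is the advantage the abstract emphasizes.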


2021 ◽  
Vol 7 ◽  
pp. e731
Author(s):  
Miguel Morales-Sandoval ◽  
José A. Molina ◽  
Heidy M. Marin-Castro ◽  
Jose Luis Gonzalez-Compean

In an Inter-Organizational Business Process (IOBP), independent organizations (collaborators) exchange messages to perform business transactions. With process mining, the collaborators could learn what they are actually doing from process execution data and take actions to improve the underlying business process. However, process mining assumes that knowledge of the entire process is available, something that is difficult to achieve in IOBPs since process execution data generally is not shared among the collaborating entities due to regulations and confidentiality policies (exposure of customers’ data or business secrets). Additionally, there is an inherent lack-of-trust problem in IOBPs, as the collaborators are mutually untrusting and an executed IOBP can be subject to disputes over counterfeiting actions. Recently, Blockchain has been suggested for IOBP execution management to mitigate the lack-of-trust problem. Independently, some works have suggested the use of Blockchain to support process mining tasks. In this paper, we study and address the problem of mining IOBPs whose management and execution are supported by Blockchain. As a contribution, we present an approach that takes advantage of Blockchain capabilities to tackle, at the same time, the lack-of-trust problem (management and execution) and the trustworthy collection of execution data for process mining (discovery and conformance) of IOBPs. We present a method that (i) ensures the business rules for the correct execution and monitoring of the IOBP by collaborators, (ii) creates the event log, with data cleaning integrated, at the time the IOBP executes, and (iii) produces useful event logs in XES and CSV formats for the discovery and conformance checking tasks in process mining. Through a set of experiments on real IOBPs, we validate our method and evaluate its impact on the resulting discovered models (fitness and precision metrics). Results revealed the effectiveness of our method in coping with the lack-of-trust problem in IOBPs while also collecting the data needed for process mining. Our method was implemented as a software tool available to the community as open-source code.


2021 ◽  
Vol 4 ◽  
Author(s):  
Rashid Zaman ◽  
Marwan Hassani ◽  
Boudewijn F. Van Dongen

In the context of process mining, event logs consist of process instances called cases. Conformance checking is a process mining task that inspects whether a log file conforms to an existing process model and, additionally, quantifies the conformance in an explainable manner. Online conformance checking processes streaming event logs, maintaining precise insights into the running cases and mitigating non-conformance, if any, in a timely manner. State-of-the-art online conformance checking approaches bound memory by either delimiting the storage of events per case or limiting the number of cases to a specific window width. The former technique still requires unbounded memory, as the number of cases to store is unlimited, while the latter forgets running, not yet concluded, cases to conform to the limited window width. Consequently, the processing system may later encounter events that represent some intermediate activity as per the process model but whose relevant case has been forgotten; we refer to these as orphan events. The naïve approach to cope with an orphan event is to either neglect its relevant case for conformance checking or treat it as an altogether new case. However, this might result in misleading process insights, for instance, overestimated non-conformance. In order to bound memory yet effectively incorporate orphan events into processing, we propose a missing-prefix imputation approach for such orphan events. Our approach utilizes the existing process model to impute the missing prefix. Furthermore, we leverage case storage management to increase the accuracy of the prefix prediction. We propose a systematic forgetting mechanism that distinguishes and forgets the cases that can be reliably regenerated as a prefix upon receipt of a future orphan event. We evaluate the efficacy of our proposed approach through multiple experiments with synthetic and three real event logs while simulating a streaming setting. Our approach achieves considerably more realistic conformance statistics than the state of the art while requiring the same storage.
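The missing-prefix imputation idea can be illustrated with a toy stand-in in which the process model is represented as a set of allowed traces, and the imputed prefix is the shortest model prefix ending at the orphan activity. The paper's actual predictor is model-based and more elaborate; function and activity names here are hypothetical:

```python
def impute_prefix(model_traces, orphan_activity):
    """For an orphan event whose case has been forgotten, impute a plausible
    missing prefix: the shortest model prefix that leads to the orphan
    activity. Returns None if the activity never occurs in the model."""
    candidates = []
    for trace in model_traces:
        if orphan_activity in trace:
            i = trace.index(orphan_activity)
            candidates.append(trace[:i])
    return min(candidates, key=len, default=None)

model = [["receive", "check", "pay"],
         ["receive", "escalate", "check", "pay"]]
print(impute_prefix(model, "pay"))  # ['receive', 'check']
```

With such an imputed prefix, the orphan event's case can be replayed as if it had never been forgotten, instead of being discarded or counted as a brand-new (and spuriously non-conformant) case.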


Author(s):  
Bruna Brandão ◽  
Flávia Santoro ◽  
Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, making it difficult to handle changes, analyze processes for improvement, or check crosscutting impacts. These scattered elements are called aspects. As in the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This modularization facilitates the management of the process (reuse, maintenance, and understanding). Current approaches identify aspects manually, which leads to subjectivity and a lack of systematization. This paper proposes a method to automatically identify aspects in a business process from its event logs. The method is based on mining techniques and aims to eliminate the subjectivity of identification performed by specialists. Initial results from a preliminary evaluation showed evidence that the method correctly identified the aspects present in the process model.


2021 ◽  
Vol 10 (9) ◽  
pp. 144-147
Author(s):  
Huiling LI ◽  
Xuan SU ◽  
Shuaipeng ZHANG

Massive amounts of business process event logs are collected and stored by modern information systems. Model discovery aims to discover a process model from such event logs; however, most existing approaches still suffer from low efficiency when facing large-scale event logs. Event log sampling techniques provide an effective scheme to improve the efficiency of process discovery, but existing techniques still cannot guarantee the quality of model mining. Therefore, a sampling approach based on a set-coverage algorithm, named the set-coverage sampling approach, is proposed. The proposed sampling approach has been implemented in the open-source process mining toolkit ProM. Furthermore, experiments on a real event log data set, covering conformance checking and time-performance analysis, show that the proposed event log sampling approach can greatly improve the efficiency of log sampling while ensuring the quality of model mining.
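A greedy set-coverage sampler in the spirit of the proposed approach can be sketched as repeatedly picking the trace that covers the most still-uncovered behavior, until every directly-follows pair observed in the full log appears in the sample. This is an illustrative reconstruction under that assumption, not the authors' algorithm:

```python
def set_cover_sample(log):
    """Greedy set-coverage sampling: select traces until every
    directly-follows pair in the full log is covered by the sample."""
    def df_pairs(trace):
        return {(a, b) for a, b in zip(trace, trace[1:])}

    uncovered = set().union(*(df_pairs(t) for t in log))
    sample = []
    while uncovered:
        # Pick the trace covering the most still-uncovered pairs.
        best = max(log, key=lambda t: len(df_pairs(t) & uncovered))
        sample.append(best)
        uncovered -= df_pairs(best)
    return sample

log = [["a", "b", "c"], ["a", "b", "c"], ["a", "c"], ["a", "b", "c"]]
# Two traces suffice to cover all three directly-follows pairs:
print(set_cover_sample(log))
```

Because the sample preserves the log's directly-follows behavior, a discovery algorithm run on it can mine the same model far more cheaply, which is the efficiency/quality trade-off the abstract addresses.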


2018 ◽  
Vol 27 (02) ◽  
pp. 1850002
Author(s):  
Sung-Hyun Sim ◽  
Hyerim Bae ◽  
Yulim Choi ◽  
Ling Liu

In Big Data and IoT environments, process execution generates huge volumes of data, some of which is subsequently obtained from sensors. The main issue in such areas has been the necessity of analyzing data in order to suggest process enhancements. In this regard, evaluating the conformance of a process model to the execution log is of great importance. For this purpose, previous process mining approaches have advocated conformance checking by fitness measure, which uses token replay and node-arc relations based on Petri nets. However, fitness measures so far have not considered statistical significance, offering only a numeric ratio. We herein propose a statistical verification method based on the Kolmogorov–Smirnov (K–S) test to judge whether two different log datasets follow the same process model. Our method can easily be extended to determine whether process execution actually follows a process model, by playing out the model and generating event log data from it. Additionally, in order to solve the trade-off between model abstraction and process conformance, we also propose the new concepts of Confidence Interval of Abstraction Value (CIAV) and Maximum Confidence Abstraction Value (MCAV). We showed that our method can be applied to any process mining algorithm (e.g., heuristic mining, fuzzy mining) that has parameters related to model abstraction. We expect that our method will come to be widely utilized in many applications dealing with business process enhancement involving process-model and execution-log analyses.
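The two-sample K–S statistic underlying such a test is simply the maximum vertical distance between the two empirical CDFs; the test then compares this distance against a critical value. A stdlib-only sketch of the statistic itself (in practice one would use a library routine such as scipy.stats.ks_2samp, which also returns a p-value):

```python
def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum absolute
    difference between the empirical CDFs of the two samples."""
    xs, ys = sorted(xs), sorted(ys)

    def ecdf(sample, v):
        return sum(1 for s in sample if s <= v) / len(sample)

    points = sorted(set(xs) | set(ys))
    return max(abs(ecdf(xs, v) - ecdf(ys, v)) for v in points)

# Identical samples (e.g. activity durations from the same process) give D = 0;
# completely disjoint samples give D = 1.
print(ks_statistic([1, 2, 3], [1, 2, 3]))      # 0.0
print(ks_statistic([1, 2, 3], [10, 20, 30]))   # 1.0
```

A small D (below the critical value at the chosen significance level) supports the hypothesis that the two log datasets follow the same process model.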


Author(s):  
Pavlos Delias ◽  
Kleanthi Lakiotaki

Automated discovery of a process model is a major task of Process Mining that aims to produce a process model from an event log without any a priori information. However, when an event log contains a large number of distinct activities, process discovery can be really challenging. The goal of this article is to facilitate process discovery in cases where a process is expected to contain a large set of unique activities. To this end, the article proposes a clustering approach that recommends horizontal boundaries for the process. The proposed approach ultimately partitions the event log in a way that decomposes the human interpretation effort. In addition, it makes automated discovery more efficient as well as effective by simultaneously considering two quality criteria: the informativeness and robustness of the derived groups of activities. The authors conducted several experiments to test the behavior of the algorithm under different settings and to compare it against other techniques. Finally, they provide a set of recommendations that may help process analysts during the process discovery endeavor.

