An Integrated Approach for Discovering Process Models According to Business Process Types

2021 ◽  
Vol 16 ◽  
pp. 1-14
Author(s):  
Zineb Lamghari

The process discovery technique aims to automatically generate a process model that accurately describes a Business Process (BP) based on event data. Related discovery algorithms assume that recorded events result only from an operational BP type, whereas the management community defines three BP types: Management, Support and Operational. Each BP type is distinguished by different properties, such as the main business process objective, used as domain knowledge. This highlights the inability of existing process discovery techniques to obtain process models according to business process types (Management and Support). In this paper, we demonstrate that business process types can guide the process discovery technique in generating process models. A special interest is given to the use of process mining to deal with this challenge.

2020 ◽  
Vol 17 (3) ◽  
pp. 927-958
Author(s):  
Mohammadreza Sani ◽  
Sebastiaan van Zelst ◽  
Wil van der Aalst

Process discovery algorithms automatically discover process models based on event data captured during the execution of business processes. These algorithms tend to use all of the event data to discover a process model. When dealing with large event logs, this is no longer feasible on standard hardware within a limited time. A straightforward way to overcome this problem is to down-size the event data by means of sampling. However, little research has been conducted on selecting the right sample given the available time and the characteristics of the event data. This paper evaluates various subset selection methods and their performance on real event data. The proposed methods have been implemented in both the ProM and the RapidProM platforms. Our experiments show that discovery can be sped up considerably using instance selection strategies. Furthermore, the results show that biased selection of process instances, compared to random sampling, results in simpler process models of higher quality.
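To make the contrast concrete, the following sketch compares uniform random sampling with a simple frequency-biased selection of trace variants. It is a minimal illustration, not the paper's ProM/RapidProM implementation: the representation of a log as a list of activity tuples, the function name, and the "keep the most frequent variants" bias are all assumptions.

```python
import random
from collections import Counter

def sample_log(log, k, biased=True, seed=42):
    """Down-size an event log to k traces.

    log: list of traces, each trace a tuple of activity names.
    biased=True keeps traces from the most frequent variants first,
    which implicitly drops rare (often noisy) behavior;
    biased=False draws a uniform random sample.
    """
    if biased:
        variants = Counter(log)
        kept = []
        for variant, freq in variants.most_common():
            kept.extend([variant] * min(freq, k - len(kept)))
            if len(kept) >= k:
                break
        return kept
    rng = random.Random(seed)
    return rng.sample(log, k)

log = [("a", "b", "c")] * 80 + [("a", "c", "b")] * 15 + [("a", "x", "c")] * 5
sample = sample_log(log, 20)
print(sample[0])  # the most frequent variant dominates the biased sample
```

Feeding such a reduced log to any discovery algorithm cuts its runtime roughly in proportion to the number of traces removed, which is the effect the paper measures.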


Process models are analytical illustrations of an organization's activities. They are essential for mapping out the current business processes of an organization, establishing a baseline for process enhancement, and constructing future processes in which the enhancements are incorporated. To this end, in the field of process mining, algorithms have been proposed to build process models from the information recorded in event logs. However, for complex process configurations, these algorithms cannot correctly build complex process structures, namely invisible tasks, non-free-choice constructs, and short loops. Discovery algorithms differ in their ability to discover these constructs. In this work, we propose a framework responsible for detecting, from event logs, the complex constructs present in the data. By identifying the existing constructs, one can choose the process discovery technique best suited to the event data in question. The proposed framework has been implemented as a ProM plugin. The evaluation results demonstrate that the constructs can be identified correctly.
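One of the named constructs, the length-two short loop, leaves a recognizable trace-level signature: the pattern a, b, a with a ≠ b. The sketch below scans a log for that signature; it is a simplified stand-in for one check of the paper's ProM plugin, and the log representation and function name are assumptions.

```python
def find_short_loops(log):
    """Scan traces for the length-two-loop pattern a, b, a (a != b).

    log: list of traces (tuples of activity names). Returns the set of
    (a, b) pairs observed in that pattern -- a hint that a discovery
    algorithm able to handle short loops is required for this log.
    """
    loops = set()
    for trace in log:
        for i in range(len(trace) - 2):
            a, b, a2 = trace[i], trace[i + 1], trace[i + 2]
            if a == a2 and a != b:
                loops.add((a, b))
    return loops

log = [("a", "b", "a", "c"), ("a", "c")]
print(find_short_loops(log))  # {('a', 'b')}
```

Analogous pattern checks can flag invisible tasks and non-free-choice constructs, and the union of detected constructs then narrows the set of suitable discovery algorithms.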


2021 ◽  
Vol 11 (22) ◽  
pp. 10556
Author(s):  
Heidy M. Marin-Castro ◽  
Edgar Tello-Leal

Process Mining allows organizations to obtain actual business process models from event logs (discovery), to compare the event log or the discovered process model with an existing reference model of the same process (conformance), and to detect issues in the executed process in order to improve it (enhancement). An essential element in all three process mining tasks (discovery, conformance, and enhancement) is data cleaning, used to reduce the complexity inherent in real-world event data so that it can be easily interpreted, manipulated, and processed. Thus, new techniques and algorithms for event data preprocessing have been of interest to the business process research community. In this paper, we conduct a systematic literature review and provide, for the first time, a survey of relevant approaches to event data preprocessing for business process mining tasks. The aim of this work is to construct a categorization of techniques and methods for event data preprocessing and to identify the relevant challenges around them. We present a quantitative and qualitative analysis of the most popular techniques for event log preprocessing, and we study and present findings on how a preprocessing technique can improve a process mining task. We also discuss emerging future challenges in the domain of data preprocessing in the context of process mining. The results of this study reveal that preprocessing techniques have a high impact on the performance of process mining tasks. Data cleaning requirements depend on the characteristics of the event logs (volume, high variability in trace sizes, changes in the duration of activities). In this scenario, most of the surveyed works use more than a single preprocessing technique to improve the quality of the event log. Trace clustering and trace/event-level filtering turned out to be the most commonly used preprocessing techniques due to their ease of implementation, and they adequately handle noise and incompleteness in event logs.
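Trace-level filtering, one of the two techniques the survey found most common, can be illustrated with a minimal frequency-based sketch: variants whose relative frequency falls below a support threshold are treated as noise and removed. The log representation, function name, and threshold are assumptions for illustration only.

```python
from collections import Counter

def filter_infrequent_variants(log, min_support=0.05):
    """Trace-level filtering: drop trace variants whose relative
    frequency in the log falls below min_support, a common way to
    remove noise before running a discovery algorithm.

    log: list of traces, each a tuple of activity names.
    """
    variants = Counter(log)
    total = len(log)
    keep = {v for v, f in variants.items() if f / total >= min_support}
    return [t for t in log if t in keep]

# 100 traces: two legitimate variants plus two noise traces
log = [("a", "b", "c")] * 90 + [("a", "z", "c")] * 8 + [("q",)] * 2
filtered = filter_infrequent_variants(log)
print(len(filtered))  # 98 -- the two ('q',) noise traces are dropped
```

Event-level filtering works analogously on individual events within a trace rather than on whole trace variants.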


Computing ◽  
2021 ◽  
Author(s):  
Mohammadreza Fani Sani ◽  
Sebastiaan J. van Zelst ◽  
Wil M. P. van der Aalst

With process discovery algorithms, we discover process models based on event data captured during the execution of business processes. Process discovery algorithms tend to use the whole event data. When dealing with large event data, this is no longer feasible on standard hardware within a limited time. A straightforward approach to overcome this problem is to down-size the data using a sampling method. However, little research has been conducted on selecting the right sample given the available time and the characteristics of the event data. This paper systematically evaluates various biased sampling methods and their performance on different datasets using four different discovery techniques. Our experiments show that biased sampling can considerably speed up discovery techniques without reducing the quality of the resulting process model. Furthermore, due to the implicit filtering (removal of outliers) obtained by applying the sampling technique, the model quality may even improve.


Author(s):  
Bruna Brandão ◽  
Flávia Santoro ◽  
Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, making it difficult to handle changes, analyze processes for improvement, or check crosscutting impacts. These scattered elements are named Aspects. Similar to the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This process modularization facilitates the management of the process (reuse, maintenance and understanding). Current approaches for aspect identification are manual, resulting in subjectivity and a lack of systematization. This paper proposes a method to automatically identify aspects in a business process from its event logs. The method is based on mining techniques and aims to eliminate the subjectivity of identification performed by specialists. Initial results from a preliminary evaluation showed evidence that the method correctly identified the aspects present in the process model.
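A crude intuition for the idea: an activity that recurs in the logs of several distinct processes is a candidate crosscutting concern. The sketch below is a hypothetical simplification of that intuition, not the paper's method; the data layout, function name, and threshold are all invented for illustration.

```python
from collections import defaultdict

def candidate_aspects(logs_by_process, min_processes=2):
    """Hypothetical aspect-candidate finder: flag activities that appear
    in the event logs of at least min_processes different processes as
    potential crosscutting concerns (aspects).

    logs_by_process: dict mapping a process name to its log
    (a list of traces, each a tuple of activity names).
    """
    seen_in = defaultdict(set)
    for process, log in logs_by_process.items():
        for trace in log:
            for activity in trace:
                seen_in[activity].add(process)
    return {a for a, procs in seen_in.items() if len(procs) >= min_processes}

logs = {
    "order":  [("check_auth", "pick", "ship")],
    "refund": [("check_auth", "approve", "pay")],
}
print(candidate_aspects(logs))  # {'check_auth'}
```

A real method would also need to compare the behavioral context of each occurrence, since a shared activity name alone does not guarantee a shared concern.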


2020 ◽  
Vol 10 (4) ◽  
pp. 1493 ◽  
Author(s):  
Kwanghoon Pio Kim

In this paper, we propose an integrated approach for seamlessly and effectively providing the mining and analyzing functionalities for the redesigning work on very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach especially aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and is concretized into a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered process model. Finally, this paper provides a detailed description of the system architecture with its functional integration of process mining and process analyzing. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.


Author(s):  
Kwanghoon Kim

Process (or business process) management systems support the definition, execution, monitoring and management of process models deployed in process-aware enterprises. Accordingly, the functional formation of these systems comprises three subsystems: a modeling subsystem, an enacting subsystem and a mining subsystem. In recent times, the mining subsystem has become essential. Many enterprises have successfully introduced and applied process automation technology through the modeling and enacting subsystems. As the time has come to redesign and reengineer the deployed process models, it is now important for the mining subsystem to cooperate with the analyzing subsystem; the essential cooperation capability is to provide seamless integration between the designing work in the modeling subsystem and the redesigning work in the mining subsystem. In other words, we need to seamlessly integrate the discovery functionality of the mining subsystem with the analyzing functionality of the modeling subsystem. This integrated approach is particularly suitable when the deployed process models discovered by the mining subsystem are complex and very large-scale. In this paper, we propose an integrated approach for seamlessly and effectively providing the mining and analyzing functionalities for the redesigning work on very large-scale and massively parallel process models discovered from their enactment event logs. The integrated approach especially aims at analyzing not only their structural complexity and correctness but also their animation-based behavioral properness, and is concretized into a sophisticated analyzer. The core function of the analyzer is to discover a very large-scale and massively parallel process model from a process log dataset and to validate the structural complexity and the syntactical and behavioral properness of the discovered process model. Finally, this paper provides a detailed description of the system architecture with its functional integration of process mining and process analyzing. More precisely, we devise a series of functional algorithms for extracting the structural constructs and for visualizing the behavioral properness of the discovered very large-scale and massively parallel process models. As experimental validation, we apply the proposed approach and analyzer to a couple of process enactment event log datasets available on the website of the 4TU.Centre for Research Data.


2018 ◽  
Vol 7 (4) ◽  
pp. 2446
Author(s):  
Muktikanta Sahu ◽  
Rupjit Chakraborty ◽  
Gopal Krishna Nayak

Building process models from the data available in event logs is the primary objective of process discovery. The Alpha algorithm is one of the popular algorithms for deriving a process model from event logs in process mining. The steps involved in the Alpha algorithm are computationally rigorous, and the problem is further compounded by exponentially growing event log data. In this work, we exploit task parallelism in the Alpha algorithm for process discovery using the MPI programming model. The proposed work relies on the distributed-memory parallelism available in MPI for performance improvement. Independent and computationally intensive steps in the Alpha algorithm are identified, and task parallelism is exploited. The execution times of the serial and parallel implementations of the Alpha algorithm are measured and used to calculate the speedup achieved. The maximum and minimum speedups obtained are 3.97x and 3.88x, respectively, with an average speedup of 3.94x.
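The computational core of the Alpha algorithm is building the footprint of ordering relations from the log, and the per-trace extraction of directly-follows pairs is exactly the kind of independent step that can be distributed across workers. The serial sketch below shows that footprint computation; the log representation and relation symbols are standard for the Alpha algorithm, but the code is an illustration, not the paper's MPI implementation.

```python
from itertools import product

def footprint(log):
    """Compute the Alpha-algorithm ordering relations from a log:
    a -> b  causality   (a directly followed by b, never the reverse)
    a <- b  reverse causality
    a || b  parallel    (both orders observed)
    a #  b  choice      (never adjacent in either order)

    log: list of traces, each a tuple of activity names. The per-trace
    directly-follows scan is independent per trace, so it could be
    partitioned across MPI ranks and merged with a set union.
    """
    follows, acts = set(), set()
    for trace in log:
        acts.update(trace)
        follows.update(zip(trace, trace[1:]))
    rel = {}
    for a, b in product(sorted(acts), repeat=2):
        ab, ba = (a, b) in follows, (b, a) in follows
        rel[(a, b)] = ("->" if ab and not ba else
                       "<-" if ba and not ab else
                       "||" if ab and ba else "#")
    return rel

log = [("a", "b", "c"), ("a", "c", "b")]
rel = footprint(log)
print(rel[("a", "b")], rel[("b", "c")])  # -> ||
```

Because the merged `follows` set is a plain union of per-trace results, the speedup reported in the paper comes essentially for free once the trace partitions are balanced across processes.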


Author(s):  
Alessandro Marchetto ◽  
Chiara Di Francescomarino

Web Applications (WAs) have often been used to expose business processes to users. WA modernization and evolution are complex and time-consuming activities that can be supported by software documentation (e.g., process models). When, as often happens, documentation is missing or incomplete, documentation recovery and mining represent an important opportunity for reconstructing or completing it. Existing process-mining approaches, however, tend to recover models that are quite complex, rich, and intricate, and thus difficult for analysts and developers to understand and use. Model refinement approaches have been presented in the literature to reduce model complexity and intricateness while preserving the capability of representing the relevant information. In this chapter, the authors summarize approaches that first mine and later refine business process models from existing WAs. In particular, they present two process model refinement approaches: (1) re-modularization and (2) reduction. The authors introduce the techniques and show how to apply them to WAs.


2015 ◽  
Vol 21 (4) ◽  
pp. 820-836 ◽  
Author(s):  
Jantima Polpinij ◽  
Aditya Ghose ◽  
Hoa Khanh Dam

Purpose – Business processes have become core assets of many organizations, and it is increasingly common for medium to large organizations to maintain collections of hundreds or even thousands of business process models. The purpose of this paper is to explore an alternative dimension of process mining in which the objective is to extract process constraints (or business rules) as opposed to business process models. It also focuses on an alternative data set – process models as opposed to process instances (i.e. event logs). Design/methodology/approach – The authors present a new knowledge discovery method that finds sequential patterns of business activities embedded in process model repositories. The extracted sequential patterns are treated as business rules. Findings – The authors find significant knowledge hidden in business process model repositories, considered here as business rules. The business rules extracted from process models are significant and valid sequential correlations among business activities belonging to a particular organization; they represent business constraints encoded in the process models. Experimental results indicate the effectiveness and accuracy of the approach in extracting business rules from repositories of business process models. Social implications – This research will assist organizations in extracting business rules from their existing business process models. The discovered business rules are very important for any organization, as rules can help organizations better achieve goals, remove obstacles to market growth, reduce costly mistakes, improve communication, comply with legal requirements, and increase customer loyalty. Originality/value – There has been very little work on mining business process models themselves, despite the increasing number of very large collections of business process models. This work fills that gap, focusing on extracting business rules.
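The core idea of mining rules from models rather than logs can be sketched as follows: treat each model as an ordered sequence of activities and keep ordered pairs whose support across the repository clears a threshold, reading each surviving pair as a candidate rule "a must precede b". This is a deliberately simplified stand-in for the paper's method; the flattening of models to sequences, the function name, and the threshold are assumptions.

```python
from collections import Counter

def sequential_rules(models, min_support=0.6):
    """Extract ordered activity pairs (a before b) whose support across
    a repository of process models meets min_support; each surviving
    pair is a candidate business rule 'a must precede b'.

    models: list of process models, each flattened to an ordered
    list of activity names.
    """
    pairs = Counter()
    for seq in models:
        seen = set()
        for i, a in enumerate(seq):
            for b in seq[i + 1:]:
                seen.add((a, b))          # count each pair once per model
        pairs.update(seen)
    n = len(models)
    return {p for p, c in pairs.items() if c / n >= min_support}

models = [
    ["receive", "approve", "pay"],
    ["receive", "check", "approve", "pay"],
    ["receive", "reject"],
]
rules = sequential_rules(models)
print(("approve", "pay") in rules)  # True: 'approve' precedes 'pay' in 2 of 3 models
```

A production method would additionally need to handle branching and parallelism in the models instead of a flat activity order.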

