Process Mining Based Software for Automated Business Process Discovery

Automated business process discovery is a rising field that depends heavily on computer software. Such software automatically analyses documents such as audit trails and event logs and extracts useful, novel, hidden, and interesting information from them. The information produced by the software recognizes the process model, investigates its variants, and gives users a much clearer picture of what a particular business process looks like and how changes would affect the business as a whole. This paper presents the common framework activities of process mining in the context of well-known software. The paper also describes open-source process mining software together with its operational characteristics. Finally, the paper discusses the role of process mining software in various prominent industries.

Author(s):  
Bruna Brandão ◽  
Flávia Santoro ◽  
Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, making it difficult to handle changes, analyze processes for improvements, or check crosscutting impacts. These scattered elements are called aspects. Similar to the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This modularization facilitates the management of the process (reuse, maintenance, and understanding). Current approaches identify aspects manually, which introduces subjectivity and lacks systematization. This paper proposes a method to automatically identify aspects in a business process from its event logs. The method is based on mining techniques and aims to eliminate the subjectivity of identification performed by specialists. The initial results from a preliminary evaluation showed evidence that the method correctly identified the aspects present in the process model.
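
As a rough illustration of the underlying idea (not the authors' method), the sketch below scans the event logs of several processes and flags activities that recur across more than one process as candidate crosscutting aspects; all process and activity names are hypothetical.

```python
from collections import defaultdict

# Hypothetical event logs: one list of traces per process,
# each trace being a sequence of activity names.
logs = {
    "claim_handling": [["Register", "Check Policy", "Notify Customer"]],
    "loan_approval":  [["Register", "Assess Risk", "Notify Customer"]],
    "complaint_mgmt": [["Register", "Investigate", "Archive"]],
}

# Record, for every activity, the set of processes in which it occurs.
occurrence = defaultdict(set)
for process, traces in logs.items():
    for trace in traces:
        for activity in trace:
            occurrence[activity].add(process)

# Activities scattered across more than one process are candidate aspects.
candidate_aspects = {a for a, procs in occurrence.items() if len(procs) > 1}
print(candidate_aspects)  # e.g. {'Register', 'Notify Customer'}
```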


2018 ◽  
Vol 7 (4) ◽  
pp. 2446
Author(s):  
Muktikanta Sahu ◽  
Rupjit Chakraborty ◽  
Gopal Krishna Nayak

Building process models from the data available in event logs is the primary objective of process discovery. The Alpha algorithm is one of the popular algorithms for deriving a process model from event logs in process mining. The steps involved in the Alpha algorithm are computationally intensive, and the problem is compounded by exponentially growing event log data. In this work, we exploit task parallelism in the Alpha algorithm for process discovery using the MPI programming model. The proposed work relies on the distributed-memory parallelism available in MPI programming for performance improvement. Independent and computationally intensive steps in the Alpha algorithm are identified and task parallelism is exploited. The execution times of the serial and parallel implementations of the Alpha algorithm are measured and used to calculate the speedup achieved. The maximum and minimum speedups obtained are 3.97x and 3.88x respectively, with an average speedup of 3.94x.
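
The abstract does not detail the exact task decomposition, so the following is only a minimal sketch of the general idea rather than the authors' implementation: it partitions the traces of a hypothetical log across MPI ranks (using mpi4py, assumed to be available) to build the directly-follows relation in parallel, from which the Alpha algorithm's ordering relations can then be derived.

```python
from mpi4py import MPI  # assumes mpi4py and an MPI runtime are installed

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Hypothetical event log as a list of traces (activity sequences).
log = [["a", "b", "c", "d"], ["a", "c", "b", "d"], ["a", "e", "d"]] * 1000

# Each rank processes an interleaved slice of the traces.
local_pairs = set()
for trace in log[rank::size]:
    for x, y in zip(trace, trace[1:]):
        local_pairs.add((x, y))  # directly-follows relation x > y

# Gather the partial relations on rank 0 and merge them.
gathered = comm.gather(local_pairs, root=0)
if rank == 0:
    follows = set().union(*gathered)
    # From the merged '>' relation the Alpha algorithm derives
    # causality (->), parallelism (||) and exclusiveness (#).
    causal = {(x, y) for (x, y) in follows if (y, x) not in follows}
    print(len(follows), len(causal))
```

A run such as `mpiexec -n 4 python alpha_mpi.py` (file name assumed) would execute the sketch on four ranks.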


2021 ◽  
Vol 16 ◽  
pp. 1-14
Author(s):  
Zineb Lamghari

Process discovery aims at automatically generating a process model that accurately describes a Business Process (BP) based on event data. Related discovery algorithms assume that recorded events result only from an operational BP type, whereas the management community defines three BP types: Management, Support, and Operational. Each BP type is distinguished by different properties, such as its main business objective, used as domain knowledge. This highlights the lack of process discovery techniques for obtaining process models according to the Management and Support business process types. In this paper, we demonstrate that business process types can guide process discovery in generating process models. Special attention is given to the use of process mining to address this challenge.


2009 ◽  
Vol 19 (6) ◽  
pp. 1091-1124 ◽  
Author(s):  
NADIA BUSI ◽  
G. MICHELE PINNA

The aim of the research domain known as process mining is to use process discovery to construct a process model as an abstract representation of event logs. The goal is to build a model (in terms of a Petri net) that can reproduce the logs under consideration and does not allow behaviours different from those shown in the logs. In particular, process mining aims to verify the accuracy of the model design (represented as a Petri net), essentially checking whether the same net can be rediscovered. However, the main mining methods proposed in the literature have some drawbacks: the classical α-algorithm is unable to rediscover various nets, while the region-based approach, which can mine them correctly, is too complex. In this paper, we compare different approaches and propose some ideas to counter the weaknesses of the region-based approach.
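
To make the α-algorithm's limitation concrete, here is a small self-contained sketch (illustrative toy log, not taken from the paper) that computes the classical footprint relations: the length-one loop on activity 'b' yields b || b rather than a causal relation, which is one of the net structures the classical α-algorithm cannot rediscover and which motivates region-based alternatives.

```python
from itertools import product

# Toy log containing a length-one loop on activity 'b'.
log = [["a", "b", "d"], ["a", "b", "b", "d"]]

activities = sorted({act for trace in log for act in trace})
follows = {(x, y) for trace in log for x, y in zip(trace, trace[1:])}

def relation(x, y):
    if (x, y) in follows and (y, x) not in follows:
        return "->"   # causality
    if (y, x) in follows and (x, y) not in follows:
        return "<-"   # reverse causality
    if (x, y) in follows and (y, x) in follows:
        return "||"   # both orders observed (parallelism / loop)
    return "#"        # the two activities never directly follow each other

for x, y in product(activities, repeat=2):
    print(x, relation(x, y), y)
# The self-loop yields b || b, so the classical alpha-algorithm derives no
# causal relation for 'b' and cannot rediscover the net with the short loop.
```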


Process models are analytical illustrations of an organization's activities. They are essential for mapping out the current business processes of an organization, establishing a baseline for process enhancement, and constructing future processes in which the enhancements are incorporated. To achieve this, algorithms have been proposed in the field of process mining that build process models using the information recorded in event logs. However, for complex process configurations, these algorithms cannot correctly build complex process structures, namely invisible tasks, non-free-choice constructs, and short loops. Each discovery algorithm differs in its ability to discover these process constructs. In this work, we propose a framework responsible for detecting, from event logs, the complex constructs present in the data. By identifying the existing constructs, one can choose the process discovery techniques suitable for the event data in question. The proposed framework has been implemented as a ProM plugin. The evaluation results demonstrate that the constructs can be correctly identified.
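
The paper's detection framework is implemented in ProM; purely as an illustration of the simplest case, the Python sketch below flags length-one and length-two loop patterns in a toy log (detecting invisible tasks and non-free-choice constructs requires considerably more machinery and is not shown). The log and activity names are hypothetical.

```python
def detect_short_loops(log):
    """Flag length-one (a, a) and length-two (a, b, a) loop patterns in a log."""
    length_one, length_two = set(), set()
    for trace in log:
        for i in range(len(trace) - 1):
            if trace[i] == trace[i + 1]:
                length_one.add(trace[i])
        for i in range(len(trace) - 2):
            if trace[i] == trace[i + 2] and trace[i] != trace[i + 1]:
                length_two.add((trace[i], trace[i + 1]))
    return length_one, length_two

# Hypothetical log: 'b' loops on itself, ('c', 'd') forms a length-two loop.
log = [["a", "b", "b", "e"], ["a", "c", "d", "c", "e"]]
print(detect_short_loops(log))  # ({'b'}, {('c', 'd')})
```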


2021 ◽  
Vol 10 (09) ◽  
pp. 116-121
Author(s):  
Huiling LI ◽  
Shuaipeng ZHANG ◽  
Xuan SU

Information systems collect large numbers of business process event logs, and process discovery aims to discover process models from these logs. Many process discovery methods have been proposed, but most of them still have problems when processing event logs, such as low mining efficiency and poor process model quality. Trace clustering decomposes the original log and can effectively mitigate these problems. There are many existing trace clustering methods, such as clustering based on vector-space approaches, context-aware trace clustering, and model-based sequence clustering, and different trace clustering methods often produce different clustering results. Therefore, this paper proposes a trace-clustering-based preprocessing method to improve the performance of process discovery. First, the event log is decomposed into a set of sub-logs by trace clustering; second, a process model is generated for each sub-log by a process mining method. The experimental analysis on the datasets shows that the proposed method not only improves the time performance of process discovery but also improves the quality of the process model.
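
As a minimal sketch of the vector-space flavour of trace clustering (not the paper's specific method), the code below builds an activity-frequency profile for each trace of a hypothetical log, clusters the profiles with k-means (assuming scikit-learn and NumPy are available), and splits the log into sub-logs that would each be mined separately.

```python
import numpy as np
from sklearn.cluster import KMeans  # assumes scikit-learn is installed

# Hypothetical log: each trace is a sequence of activity labels.
log = [["a", "b", "c"], ["a", "b", "b", "c"], ["x", "y", "z"], ["x", "z"]]

# Vector-space profile: one activity-frequency vector per trace.
activities = sorted({act for trace in log for act in trace})
X = np.array([[trace.count(act) for act in activities] for trace in log])

# Cluster the traces and split the log into homogeneous sub-logs.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
sub_logs = {}
for trace, label in zip(log, labels):
    sub_logs.setdefault(label, []).append(trace)

# Each sub-log would then be passed to a discovery algorithm separately.
for label, traces in sub_logs.items():
    print(label, traces)
```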


2021 ◽  
Vol 11 (22) ◽  
pp. 10556
Author(s):  
Heidy M. Marin-Castro ◽  
Edgar Tello-Leal

Process mining allows organizations to obtain actual business process models from event logs (discovery), to compare the event log or the resulting process model from the discovery task with an existing reference model of the same process (conformance), and to detect issues in the executed process in order to improve it (enhancement). An essential element in the three process mining tasks (discovery, conformance, and enhancement) is data cleaning, used to reduce the complexity inherent in real-world event data so that it can be easily interpreted, manipulated, and processed. Thus, new techniques and algorithms for event data preprocessing have attracted interest in the business process research community. In this paper, we conduct a systematic literature review and provide, for the first time, a survey of relevant approaches to event data preprocessing for business process mining tasks. The aim of this work is to construct a categorization of techniques and methods related to event data preprocessing and to identify relevant challenges around these techniques. We present a quantitative and qualitative analysis of the most popular techniques for event log preprocessing. We also study and report findings about how a preprocessing technique can improve a process mining task, and we discuss the emerging future challenges in the domain of data preprocessing in the context of process mining. The results of this study reveal that preprocessing techniques in process mining have a high impact on the performance of process mining tasks. The data cleaning requirements depend on the characteristics of the event logs (volume, high variability in trace sizes, changes in the duration of activities). In this scenario, most of the surveyed works use more than one preprocessing technique to improve the quality of the event log. Trace clustering and trace/event-level filtering turned out to be the most commonly used preprocessing techniques owing to their ease of implementation, and they adequately manage noise and incompleteness in event logs.
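
Trace-level filtering, one of the techniques most often covered by this survey, can be as simple as the following sketch: it discards incomplete traces (missing the expected start or end activity) and infrequent variants from a hypothetical log. The activity names and the 30% frequency threshold are illustrative assumptions, not values from the paper.

```python
from collections import Counter

# Hypothetical raw log with an incomplete trace and a rare, noisy variant.
log = [
    ["register", "check", "approve", "archive"],
    ["register", "check", "approve", "archive"],
    ["register", "check"],                       # incomplete: no end activity
    ["register", "archive", "check", "approve"], # infrequent variant (noise?)
]

START, END = "register", "archive"

# Trace-level filtering: keep only traces with the expected start/end events.
complete = [t for t in log if t and t[0] == START and t[-1] == END]

# Variant filtering: keep variants whose relative frequency exceeds a threshold.
variants = Counter(tuple(t) for t in complete)
threshold = 0.3
frequent = [list(v) for v, n in variants.items() if n / len(complete) >= threshold]
print(frequent)
```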


2021 ◽  
Vol 5 (4) ◽  
pp. 1-13
Author(s):  
Muhammad Faizan ◽  
Megat F. Zuhairi ◽  
Shahrinaz Ismail

The potential of process mining is growing steadily due to the increasing amount of event data. Process mining strategies use event logs to automatically discover process models, recommend improvements, predict processing times, check conformance, and recognize anomalies, deviations, and bottlenecks. However, proper handling of event logs while evaluating and using them as input is crucial to any process mining technique. When process mining techniques are applied to flexible systems with a large number of decisions to take at runtime, the outcome is often an unstructured or semi-structured process model that is hard to comprehend. Existing approaches are good at discovering and visualizing structured processes but often struggle with less structured ones. Surprisingly, process mining is most useful in domains where flexibility is desired. A good illustration is the "patient treatment" process in a hospital, where the ability to deviate in order to deal with changing conditions is crucial. It is useful to have insights into actual operations; however, there is a significant amount of diversity, which leads to complicated, difficult-to-understand models. Trace clustering is a method for decreasing the complexity of process models in this context while also increasing their comprehensibility and accuracy. This paper discusses process mining and event logs, and presents a clustering approach to preprocess event logs, i.e., homogeneous subsets of the event log are created and a process model is generated for each subset. These homogeneous subsets are then evaluated independently of each other, which significantly improves the quality of mining results in flexible environments. The presented approach improves the fitness and precision of a discovered model while reducing its complexity, resulting in well-structured and easily understandable process discovery results.
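
The sketch below illustrates the overall pipeline rather than the authors' exact approach: using pm4py's simplified interface (assumed to be installed), it groups the cases of an event log into homogeneous subsets, here simply by activity-sequence variant, discovers a Petri net for each subset, and replays each subset against its own model to obtain fitness and precision. The file name and the grouping criterion are placeholders.

```python
import pm4py  # assumes pm4py is installed; the file name below is a placeholder
from collections import defaultdict

# Load an event log and convert it to a dataframe with the standard XES columns.
log = pm4py.read_xes("hospital_log.xes")
df = pm4py.convert_to_dataframe(log)

# Group cases into homogeneous subsets; here, simply by activity-sequence variant.
variants = (df.sort_values("time:timestamp")
              .groupby("case:concept:name")["concept:name"].agg(tuple))
subsets = defaultdict(list)
for case_id, variant in variants.items():
    subsets[variant].append(case_id)

# Mine and evaluate each subset independently of the others.
for variant, case_ids in subsets.items():
    sub_df = df[df["case:concept:name"].isin(case_ids)]
    net, im, fm = pm4py.discover_petri_net_inductive(sub_df)
    fitness = pm4py.fitness_token_based_replay(sub_df, net, im, fm)
    precision = pm4py.precision_token_based_replay(sub_df, net, im, fm)
    print(variant, fitness, precision)
```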


Workflow management systems help to execute, monitor, and manage work process flows. As they execute, these systems keep a record of who does what and when (i.e., a log of events). The activity of using computer software to examine these records and derive various structural results is called workflow mining. Workflow mining, in general, needs to encompass the behavioral (process/control-flow), social, informational (data-flow), and organizational perspectives, as well as other perspectives, because workflow systems are "people systems" that must be designed, deployed, and understood within their social and organizational contexts. This paper focuses in particular on mining the behavioral aspect of workflows from XML-based workflow enactment event logs, which are vertically (semantic-driven distribution) or horizontally (syntactic-driven distribution) distributed over the networked workflow enactment components. That is, this paper proposes distributed workflow mining approaches that are able to rediscover ICN-based structured workflow process models by incrementally amalgamating a series of vertically or horizontally fragmented temporal workcases. Each of the approaches consists of a temporal fragment discovery algorithm, which discovers a set of temporal fragment models from the fragmented workflow enactment event logs, and a workflow process mining algorithm, which rediscovers a structured workflow process model from the discovered temporal fragment models. Here, the temporal fragment model represents the concrete model of the XML-based distributed workflow fragment event log.
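
The paper does not publish its log schema, so the sketch below uses a purely hypothetical XML layout to show the first step such an approach needs: parsing fragmented enactment events and amalgamating them into per-workcase temporal sequences that a fragment discovery algorithm could consume. All element and attribute names are assumptions.

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical fragment of an XML-based workflow enactment event log; the
# element and attribute names are assumptions for illustration only.
xml_fragment = """
<workcases>
  <event workcase="wc-1" activity="Receive Order" timestamp="2020-01-01T09:00"/>
  <event workcase="wc-1" activity="Check Stock"   timestamp="2020-01-01T09:05"/>
  <event workcase="wc-2" activity="Receive Order" timestamp="2020-01-01T09:10"/>
</workcases>
"""

root = ET.fromstring(xml_fragment)

# Amalgamate fragmented events into per-workcase temporal sequences.
workcases = defaultdict(list)
for event in root.findall("event"):
    workcases[event.get("workcase")].append(
        (event.get("timestamp"), event.get("activity")))

for wc_id, events in workcases.items():
    trace = [activity for _, activity in sorted(events)]
    print(wc_id, trace)  # input for the temporal fragment discovery algorithm
```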


2021 ◽  
Vol 11 (8) ◽  
pp. 3438
Author(s):  
Jorge Fernandes ◽  
João Reis ◽  
Nuno Melão ◽  
Leonor Teixeira ◽  
Marlene Amorim

This article addresses the evolution of Industry 4.0 (I4.0) in the automotive industry, exploring its contribution to a shift in the maintenance paradigm. To this end, we first present the concepts of predictive maintenance (PdM) and condition-based maintenance (CBM) and their applications, to increase awareness of why and how these concepts are revolutionizing the automotive industry. Then, we introduce the business process management (BPM) and business process model and notation (BPMN) methodologies, as well as their relationship with maintenance. Finally, we present the case study of Renault Cacia, which is developing and implementing the concepts mentioned above.

