Mining Users' Intents from Logs

Author(s):  
Ghazaleh Khodabandelou ◽  
Charlotte Hug ◽  
Camille Salinesi

Intentions play a key role in information systems engineering. Research on process modeling has highlighted that specifying intentions can mitigate many problems encountered in process modeling, such as lack of flexibility or adaptation. Process mining approaches mine processes in terms of tasks and branching. To identify and formalize intentions from event logs, this work presents a novel process mining approach, called the Map Miner Method (MMM), which automates the construction of intentional process models from event logs. First, MMM estimates users' strategies (i.e., the different ways to fulfill intentions) in terms of their activities. These estimated strategies are then used to infer users' intentions at different levels of abstraction, using two tailored algorithms. MMM constructs intentional process models following the Map metamodel formalism. MMM is applied to a real-world dataset, i.e., the event logs of developers from the Eclipse UDC (Usage Data Collector). The resulting Map process model provides a valuable understanding of the processes followed by the developers, and also provides feedback on the effectiveness and scalability of MMM.

Author(s):  
Bruna Brandão ◽  
Flávia Santoro ◽  
Leonardo Azevedo

In business process models, elements can be scattered (repeated) across different processes, making it difficult to handle changes, analyze processes for improvement, or check crosscutting impacts. These scattered elements are called aspects. As in the aspect-oriented paradigm in programming languages, aspect handling in BPM aims to modularize the crosscutting concerns spread across the models. This modularization facilitates the management of the process (reuse, maintenance, and understanding). Current approaches identify aspects manually, which leads to subjectivity and a lack of systematization. This paper proposes a method to automatically identify aspects in a business process from its event logs. The method is based on mining techniques and aims to remove the subjectivity of identification performed by specialists. Initial results from a preliminary evaluation show evidence that the method correctly identified the aspects present in the process model.
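As a rough illustration of the idea (not the paper's actual algorithm, which the abstract does not detail), a mining-based identifier can flag activities that recur across several process models as aspect candidates:

```python
def candidate_aspects(processes, min_models=2):
    """Flag activities that appear in at least min_models distinct process
    models as aspect candidates -- a crude stand-in for mining-based,
    non-manual aspect identification."""
    seen = {}
    for name, activities in processes.items():
        for act in set(activities):
            seen.setdefault(act, set()).add(name)
    return {act for act, models in seen.items() if len(models) >= min_models}

# hypothetical models: 'log' is scattered across both processes
processes = {
    'hire': ['screen', 'log', 'interview', 'log'],
    'purchase': ['request', 'log', 'pay'],
}
candidates = candidate_aspects(processes)  # {'log'}
```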


2018 ◽  
Vol 7 (4) ◽  
pp. 2446
Author(s):  
Muktikanta Sahu ◽  
Rupjit Chakraborty ◽  
Gopal Krishna Nayak

Building process models from the data available in event logs is the primary objective of process discovery. The Alpha algorithm is one of the popular algorithms for deriving a process model from event logs in process mining. The steps involved in the Alpha algorithm are computationally intensive, and this problem is further compounded by exponentially growing event log data. In this work, we exploit task parallelism in the Alpha algorithm for process discovery using the MPI programming model. The proposed work relies on the distributed-memory parallelism available in MPI for performance improvement. Independent and computationally intensive steps in the Alpha algorithm are identified, and task parallelism is exploited. The execution times of the serial and parallel implementations of the Alpha algorithm are measured and used to calculate the speedup achieved. The maximum and minimum speedups obtained are 3.97x and 3.88x, respectively, with an average speedup of 3.94x.
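To make those computationally intensive steps concrete, here is a minimal sequential sketch of the ordering-relation (footprint) computation at the heart of the Alpha algorithm; the MPI distribution itself is omitted, and each pairwise classification is exactly the kind of independent work that can be parallelized:

```python
from itertools import product

def directly_follows(log):
    """Compute the directly-follows relation a > b from a list of traces."""
    df = set()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df.add((a, b))
    return df

def footprint(log):
    """Classify each activity pair as causality (->/<-), parallel (||),
    or choice (#) -- the footprint table used by the Alpha algorithm."""
    df = directly_follows(log)
    acts = {a for trace in log for a in trace}
    rel = {}
    for a, b in product(acts, acts):
        if (a, b) in df and (b, a) not in df:
            rel[(a, b)] = '->'
        elif (a, b) in df and (b, a) in df:
            rel[(a, b)] = '||'
        elif (a, b) not in df and (b, a) not in df:
            rel[(a, b)] = '#'
        else:
            rel[(a, b)] = '<-'
    return rel

log = [['a', 'b', 'c', 'd'], ['a', 'c', 'b', 'd']]
rel = footprint(log)  # e.g. b and c are parallel, a causally precedes b
```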


2014 ◽  
Vol 2014 ◽  
pp. 1-8
Author(s):  
Weidong Zhao ◽  
Xi Liu ◽  
Weihui Dai

Process mining is the automated acquisition of process models from event logs. Although many process mining techniques have been developed, most of them are based on control flow. Meanwhile, existing role-oriented process mining methods focus on the correctness and integrity of roles while ignoring the role complexity of the process model, which directly impacts its understandability and quality. To address these problems, we propose a genetic programming approach to mine simplified process models. Using a new role-based metric of process complexity as the fitness function, we can find simpler process models. The new role complexity metric is derived from role cohesion and coupling, and is applied to discover roles in process models. Moreover, the higher fitness derived from the role complexity metric also provides a guideline for redesigning process models. Finally, we conduct a case study and experiments to show that the proposed method is more effective at streamlining processes than related approaches.
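The abstract does not spell out the metric itself, but a cohesion/coupling-style measure can be sketched as follows; the role names and the handover representation here are hypothetical, not taken from the paper:

```python
def role_coupling(roles, handovers):
    """Toy role coupling metric: the share of activity handovers that cross
    role boundaries. Lower values mean more cohesive roles, so a fitness
    function can reward low coupling when evolving process models."""
    owner = {act: r for r, acts in roles.items() for act in acts}
    cross = sum(1 for a, b in handovers if owner[a] != owner[b])
    return cross / len(handovers) if handovers else 0.0

# hypothetical role assignment and observed handovers between activities
roles = {'clerk': {'receive', 'check'}, 'manager': {'approve', 'archive'}}
handovers = [('receive', 'check'), ('check', 'approve'), ('approve', 'archive')]
coupling = role_coupling(roles, handovers)  # 1 of 3 handovers crosses roles
```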


2018 ◽  
Vol 25 (6) ◽  
pp. 711-725
Author(s):  
Anna A. Kalenkova ◽  
Danil A. Kolesnikov

Finding the graph edit distance (graph similarity) is an important task in many areas of computer science, such as image analysis, machine learning, and chemoinformatics. Recently, with the development of process mining techniques, it has become important to adapt and apply existing graph analysis methods to examine process models (annotated graphs) discovered from event data. In particular, graph edit distance techniques can be used to reveal patterns (subprocesses) and to compare discovered process models. As has been shown experimentally and justified theoretically, exact methods for finding graph edit distances between discovered process models (and graphs in general) are computationally expensive and can be applied to small models only. In this paper, we present and assess the accuracy and performance characteristics of an inexact genetic algorithm applied to find distances between process models discovered from event logs. In particular, we find distances between BPMN (Business Process Model and Notation) models discovered from event logs by different process discovery algorithms. We show that the genetic algorithm dramatically reduces comparison time and produces results close to the optimal solutions (the minimal graph edit distances calculated by the exact search algorithm).
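An inexact genetic algorithm of this kind can be sketched as follows; this toy version searches only over node-to-node mappings (equal node counts, substitutions only), which is far simpler than what the paper evaluates:

```python
import random

def mapping_cost(mapping, edges1, edges2):
    """Edit cost of a node mapping: edges of graph 1 whose image is missing
    in graph 2, plus edges of graph 2 not covered by the mapping."""
    mapped = {(mapping[a], mapping[b]) for a, b in edges1}
    return len(mapped - edges2) + len(edges2 - mapped)

def ga_ged(nodes1, edges1, nodes2, edges2, pop=30, gens=50, seed=0):
    """Approximate graph edit distance with a tiny genetic algorithm over
    node-to-node mappings (permutations of nodes2), using elitist selection
    and swap mutation."""
    rng = random.Random(seed)
    population = [rng.sample(nodes2, len(nodes2)) for _ in range(pop)]
    cost = lambda p: mapping_cost(dict(zip(nodes1, p)), edges1, edges2)
    for _ in range(gens):
        population.sort(key=cost)
        survivors = population[:pop // 2]       # elitism: keep the best half
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # mutation: swap targets
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return min(cost(p) for p in population)

# two structurally identical graphs should be at distance 0
d = ga_ged(['a', 'b', 'c'], {('a', 'b'), ('b', 'c')},
           ['x', 'y', 'z'], {('x', 'y'), ('y', 'z')})
```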


2021 ◽  
Vol 10 (09) ◽  
pp. 116-121
Author(s):  
Huiling LI ◽  
Shuaipeng ZHANG ◽  
Xuan SU

Information systems collect large numbers of business process event logs, and process discovery aims to discover process models from these logs. Many process discovery methods have been proposed, but most still have problems when processing event logs, such as low mining efficiency and poor process model quality. Trace clustering decomposes the original log to address these problems effectively. Many trace clustering methods exist, such as clustering based on vector space approaches, context-aware trace clustering, and model-based sequence clustering, and different methods often yield different clustering results. This paper therefore proposes trace clustering as a preprocessing method to improve the performance of process discovery. First, the event log is decomposed into a set of sub-logs by trace clustering; second, a process model is mined from each sub-log. Experimental analysis on the datasets shows that the proposed method not only effectively improves the time performance of process discovery, but also improves the quality of the process models.
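As a minimal illustration of the decomposition step (a real vector-space approach would cluster by distances between activity-profile vectors), traces can simply be grouped by the set of activities they contain:

```python
from collections import defaultdict

def cluster_traces(log):
    """Split an event log into sub-logs by grouping traces that share the
    same set of activities -- a deliberately simple stand-in for
    vector-space trace clustering."""
    sublogs = defaultdict(list)
    for trace in log:
        sublogs[frozenset(trace)].append(trace)
    return list(sublogs.values())

log = [
    ['register', 'check', 'approve'],
    ['register', 'approve', 'check'],   # same activities, different order
    ['register', 'reject'],             # a different process variant
]
sublogs = cluster_traces(log)           # two homogeneous sub-logs
```

Each sub-log can then be fed to any discovery algorithm independently.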


2021 ◽  
Vol 11 (22) ◽  
pp. 10556
Author(s):  
Heidy M. Marin-Castro ◽  
Edgar Tello-Leal

Process mining allows organizations to obtain actual business process models from event logs (discovery), to compare the event log or the discovered process model with an existing reference model of the same process (conformance), and to detect issues in the executed process in order to improve it (enhancement). An essential element in all three process mining tasks is data cleaning, used to reduce the complexity inherent in real-world event data so that it can be easily interpreted, manipulated, and processed. Thus, new techniques and algorithms for event data preprocessing have drawn interest in the business process research community. In this paper, we conduct a systematic literature review and provide, for the first time, a survey of relevant approaches to event data preprocessing for business process mining tasks. The aim of this work is to construct a categorization of techniques and methods for event data preprocessing and to identify the relevant challenges around them. We present a quantitative and qualitative analysis of the most popular techniques for event log preprocessing, and we study how a preprocessing technique can improve a process mining task. We also discuss emerging future challenges in the domain of data preprocessing in the context of process mining. The results of this study reveal that preprocessing techniques have a high impact on the performance of process mining tasks. The data cleaning requirements depend on the characteristics of the event logs (volume, high variability in trace sizes, changes in activity durations). In this scenario, most of the surveyed works use more than one preprocessing technique to improve the quality of the event log. Trace clustering and trace/event-level filtering turned out to be the most commonly used preprocessing techniques due to their ease of implementation, and they adequately manage noise and incompleteness in event logs.
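As one concrete example of event-level filtering, infrequent activities can be dropped from every trace; the function name and threshold value here are illustrative, not from any specific surveyed tool:

```python
from collections import Counter

def filter_infrequent_events(log, min_share=0.05):
    """Event-level filtering: drop activities whose relative frequency
    across the whole log falls below min_share -- a common way to remove
    noise before discovery."""
    counts = Counter(a for trace in log for a in trace)
    total = sum(counts.values())
    keep = {a for a, c in counts.items() if c / total >= min_share}
    return [[a for a in trace if a in keep] for trace in log]

# 'glitch' occurs once out of 8 events (12.5%) and is filtered out
log = [['a', 'b'], ['a', 'b'], ['a', 'b'], ['a', 'glitch']]
clean = filter_infrequent_events(log, min_share=0.2)
```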


2021 ◽  
Vol 5 (4) ◽  
pp. 1-13
Author(s):  
Muhammad Faizan ◽  
Megat F. Zuhairi ◽  
Shahrinaz Ismail

The potential of process mining is growing steadily with the increasing amount of event data. Process mining strategies use event logs to automatically discover process models, recommend improvements, predict processing times, check conformance, and recognize anomalies, deviations, and bottlenecks. However, proper handling of event logs while evaluating and using them as input is crucial to any process mining technique. When process mining techniques are applied to flexible systems with a large number of decisions to take at runtime, the outcome is often an unstructured or semi-structured process model that is hard to comprehend. Existing approaches are good at discovering and visualizing structured processes but often struggle with less structured ones. Surprisingly, process mining is most useful precisely in domains where flexibility is desired. A good illustration is the "patient treatment" process in a hospital, where the ability to deviate in response to changing conditions is crucial and insight into actual operations is valuable. However, such processes exhibit a significant amount of diversity, which leads to complicated, difficult-to-understand models. Trace clustering is a method for decreasing the complexity of process models in this context while increasing their comprehensibility and accuracy. This paper discusses process mining and event logs, and presents a clustering approach that preprocesses the event log into homogeneous subsets. A process model is generated for each subset, and these subsets are evaluated independently of each other, which significantly improves the quality of mining results in flexible environments. The presented approach improves the fitness and precision of a discovered model while reducing its complexity, resulting in well-structured and easily understandable process discovery results.
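The fitness gain can be illustrated with a deliberately crude replay-fitness measure over a directly-follows model; real evaluations use alignment-based fitness and precision, so this is only a sketch of the concept:

```python
def dfg_fitness(model_edges, log):
    """Crude replay fitness: the share of traces whose every
    directly-follows step is allowed by the model's edge set."""
    def fits(trace):
        return all((a, b) in model_edges for a, b in zip(trace, trace[1:]))
    return sum(fits(t) for t in log) / len(log)

# a model mined from one homogeneous subset, replayed against a mixed log
model = {('triage', 'exam'), ('exam', 'discharge')}
log = [['triage', 'exam', 'discharge'],
       ['triage', 'discharge']]          # deviating trace from another variant
fitness = dfg_fitness(model, log)        # 0.5
```

Clustering the deviating traces into their own subset lets each subset's model score high fitness on its own traces.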


2019 ◽  
Vol 25 (5) ◽  
pp. 995-1019 ◽  
Author(s):  
Anna Kalenkova ◽  
Andrea Burattin ◽  
Massimiliano de Leoni ◽  
Wil van der Aalst ◽  
Alessandro Sperduti

Purpose: The purpose of this paper is to demonstrate that process mining techniques can help to discover process models from event logs using conventional high-level process modeling languages, such as Business Process Model and Notation (BPMN), leveraging their representational bias. Design/methodology/approach: The integrated discovery approach presented in this work aims to mine the control, data, and resource perspectives within one process diagram and, if possible, to construct a hierarchy of subprocesses that improves the model's readability. The proposed approach is defined as a sequence of steps performed to discover a model that contains these perspectives and presents a holistic view of a process. The approach was implemented within the open-source process mining framework ProM and proved applicable to the analysis of real-life event logs. Findings: This paper shows that the proposed integrated approach can be applied to real-life event logs of information systems from different domains. The multi-perspective process diagrams obtained with the approach are of good quality, better than models discovered using a technique that does not consider hierarchy. Moreover, thanks to the decomposition methods applied, the proposed approach can deal with large event logs that cannot be handled by methods without decomposition. Originality/value: The paper consolidates various process mining techniques that were never integrated before and presents a novel approach for the discovery of multi-perspective hierarchical BPMN models. This approach bridges the gap between well-known process mining techniques and a wide range of BPMN-compliant tools.


Workflow management systems help to execute, monitor, and manage work process flow and execution. As they execute, these systems keep a record of who does what and when (e.g., a log of events). The activity of using computer software to examine these records and derive structural results from the data is called workflow mining. Workflow mining, in general, needs to encompass the behavioral (process/control-flow), social, informational (data-flow), and organizational perspectives, as well as others, because workflow systems are "people systems" that must be designed, deployed, and understood within their social and organizational contexts. This paper focuses on mining the behavioral aspect of workflows from XML-based workflow enactment event logs, which are vertically (semantic-driven distribution) or horizontally (syntactic-driven distribution) distributed over the networked workflow enactment components. That is, this paper proposes distributed workflow mining approaches that can rediscover ICN-based structured workflow process models by incrementally amalgamating a series of vertically or horizontally fragmented temporal workcases. Each approach consists of a temporal fragment discovery algorithm, which discovers a set of temporal fragment models from the fragmented workflow enactment event logs, and a workflow process mining algorithm, which rediscovers a structured workflow process model from the discovered temporal fragment models. Here, the temporal fragment model represents the concrete model of the XML-based distributed workflow fragment event log.
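The two-phase structure described above (fragment discovery, then amalgamation) can be sketched with precedence edges standing in for the ICN-based fragment models; the data here is hypothetical:

```python
def fragment_model(workcases):
    """Phase 1 -- discover a temporal fragment model: the precedence edges
    observed in one component's fragmented workcases."""
    return {(a, b) for wc in workcases for a, b in zip(wc, wc[1:])}

def amalgamate(fragment_logs):
    """Phase 2 -- rediscover a workflow model by incrementally amalgamating
    the fragment models (a toy sketch of the paper's two-algorithm design)."""
    model = set()
    for workcases in fragment_logs:
        model |= fragment_model(workcases)
    return model

# two components, each seeing only a horizontal fragment of every workcase
comp1 = [['start', 'review', 'approve']]
comp2 = [['approve', 'archive', 'end']]
model = amalgamate([comp1, comp2])   # the rediscovered end-to-end ordering
```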


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Cong Liu ◽  
Huiling Li ◽  
Qingtian Zeng ◽  
Ting Lu ◽  
Caihong Li

To support effective emergency disposal, organizations need to collaborate with each other to complete emergency missions that cannot be handled by a single organization. In general, emergency disposal involving multiple organizations is typically organized as a group of interacting processes, known as cross-organization emergency response processes (CERPs). The construction of CERPs is a time-consuming and error-prone task that requires practitioners to have extensive experience and business background. Process mining aims to construct process models by analyzing event logs. However, existing process mining techniques cannot be applied directly to discover CERPs, since we have to consider the complexity of the various collaborations among organizations, e.g., message exchange and resource sharing patterns. To tackle this challenge, a CERP model mining method is proposed in this paper. More specifically, we first extend classical Petri nets with resource and message attributes, yielding resource- and message-aware Petri nets (RMPNs). Then, intra-organization emergency response process (IERP) models, represented as RMPNs, are discovered from emergency drilling event logs. Next, collaboration patterns among emergency organizations are formally defined and discovered. Finally, CERP models are obtained by merging IERP models and collaboration patterns. Through comparative experimental evaluation using a fire emergency drilling event log, we show that the proposed approach discovers higher-quality CERP models than existing state-of-the-art approaches.
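A sketch of the resource/message extension is shown below, assuming hypothetical attribute names (the abstract does not define RMPNs' exact structure), together with one possible collaboration-pattern check:

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    """A Petri-net transition extended with resource and message attributes,
    loosely modeled on the paper's RMPNs (attribute names are hypothetical)."""
    name: str
    resources: set = field(default_factory=set)  # resources the step occupies
    sends: set = field(default_factory=set)      # messages emitted to partners
    receives: set = field(default_factory=set)   # messages awaited from partners

def shared_resources(net_a, net_b):
    """A resource-sharing pattern check: resources used by the transitions of
    both organizations' process models."""
    used_a = set().union(*(t.resources for t in net_a))
    used_b = set().union(*(t.resources for t in net_b))
    return used_a & used_b

# hypothetical IERP fragments from a fire-drill scenario
fire_brigade = [Transition('extinguish', resources={'water_truck'})]
medical_team = [Transition('evacuate', resources={'water_truck', 'ambulance'})]
shared = shared_resources(fire_brigade, medical_team)  # {'water_truck'}
```

A merger of IERP models would then connect transitions through such shared resources and matching send/receive message pairs.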

