Soundness Verification of Data-Aware Process Models with Variable-to-Variable Conditions

2021 ◽  
Vol 182 (1) ◽  
pp. 1-29
Author(s):  
Paolo Felli ◽  
Massimiliano de Leoni ◽  
Marco Montali

Traditionally, business process modeling has focused only on the control-flow perspective, allowing process designers to specify constraints on the activities of the process: the order and potential concurrency of their execution, their mutual exclusivity, the possibility of being repeated, etc. However, activities are executed by different resources, manipulate data objects, and are constrained by the state of such objects. This requires that the traditional notion of soundness, typically introduced for control-flow-only models, be extended so as to consider data. Intuitively, a (data-aware) process model is sound if (1) it does not contain deadlocks, (2) no activities remain enabled when the process instance is marked as completed, and (3) there are no parts of the model that cannot be executed. Although several data-aware notations have been introduced in the literature, not all of them are given a formal semantics. In this paper, we propose a technique for checking data-aware soundness for a specific class of such integrated models, with a simple syntax and semantics, building on Data Petri Nets (DPNs). These are Petri nets enriched with case variables, where transitions are guarded by formulas that inspect and update such variables and are of the form variable-operator-variable or variable-operator-constant. Even though DPNs are less expressive than Petri nets where data are carried by tokens, they elegantly capture business processes operating over simple case data, allowing one to model complex data-aware decisions. We show that, if a DPN is data-aware sound, then its constraint graph (a symbolic abstraction of its reachable state space) is a finite-state automaton; conversely, a finite-state constraint graph does not guarantee data-aware soundness, but it provides a finite structure through which this property can be checked. Finally, we investigate further properties beyond data-aware soundness, such as the problem of verifying that an actor participating in the business process can unilaterally enforce data-aware soundness by restricting the possible executions of a bounded DPN, assuming this actor is able to control the firing of some transitions and to decide the values of some of the case variables whenever these are updated.
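
To fix intuitions, here is a minimal sketch (ours, not the paper's formalization; the names and the deterministic update policy are simplifying assumptions) of firing a DPN-style transition whose guard has the variable-operator-variable or variable-operator-constant shape described above:

```python
import operator

# Guards compare a case variable against another variable or a constant;
# firing may update (write) some of the case variables.
OPS = {"<": operator.lt, "<=": operator.le, "=": operator.eq,
       "!=": operator.ne, ">": operator.gt, ">=": operator.ge}

def guard_holds(guard, variables):
    """guard = (lhs_var, op, rhs): rhs is a variable name or a constant."""
    lhs_var, op, rhs = guard
    rhs_val = variables.get(rhs, rhs) if isinstance(rhs, str) else rhs
    return OPS[op](variables[lhs_var], rhs_val)

def fire(transition, marking, variables):
    """Fire a transition if its input places are marked and its guard holds."""
    if all(marking.get(p, 0) > 0 for p in transition["pre"]) and \
       guard_holds(transition["guard"], variables):
        for p in transition["pre"]:
            marking[p] -= 1
        for p in transition["post"]:
            marking[p] = marking.get(p, 0) + 1
        # Real DPNs write nondeterministically; we fix concrete values here.
        variables.update(transition["writes"])
        return True
    return False

# Hypothetical example: a check that may only fire when amount <= limit.
t = {"pre": ["p0"], "post": ["p1"],
     "guard": ("amount", "<=", "limit"), "writes": {"approved": True}}
marking, variables = {"p0": 1}, {"amount": 500, "limit": 1000}
print(fire(t, marking, variables), marking, variables)
```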

2016 ◽  
pp. 25-33
Author(s):  
Jean-François Aubry ◽  
Nicolae Brinzei ◽  
Mohammed-Habib Mazouni

Author(s):  
Ishak H. A. Meddah ◽  
Khaled Belkadi

MapReduce is a solution for processing large data: with it, one can analyze and process data at scale by distributing the computation over a large set of machines. Process mining provides an important bridge between data mining and business process analysis; this technique allows information to be extracted from event logs. Firstly, the chapter mines small patterns from log traces; these patterns represent the execution of the traces of a business process. The authors use existing techniques: the patterns are represented by finite state automata, and the final model is the combination of only two types of patterns, each captured by a regular expression. Secondly, the authors compute these patterns in parallel and then combine them using MapReduce, in two parts: in the map step, the authors mine patterns from execution traces; in the reduce step, they combine these small patterns. The results are promising; they show that the approach is scalable, general, and precise, and that it minimizes execution time through the use of MapReduce.
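
As a rough illustration of the map/reduce split described above (our Python sketch, not the chapter's code; the trace data are hypothetical), the map step can tag each trace with the small patterns it matches, and the reduce step can fold the tags into one aggregate model. The two regular expressions, (ab)* and (ab*c)*, are the ones named in the related papers below:

```python
import re
from collections import Counter
from functools import reduce

PATTERNS = {"(ab)*": re.compile(r"^(ab)*$"),
            "(ab*c)*": re.compile(r"^(ab*c)*$")}

def map_step(trace):
    """Emit (pattern, 1) for every small pattern the trace matches."""
    return [(name, 1) for name, rx in PATTERNS.items() if rx.fullmatch(trace)]

def reduce_step(model, pairs):
    """Fold per-trace matches into a global pattern count."""
    for name, n in pairs:
        model[name] += n
    return model

traces = ["abab", "abbbc", "ababab", "ac"]
model = reduce(reduce_step, map(map_step, traces), Counter())
print(model)  # Counter({'(ab)*': 2, '(ab*c)*': 2})
```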


2014 ◽  
Vol 20 (1) ◽  
pp. 127-141 ◽  
Author(s):  
José N. Pereira ◽  
Porfírio Silva ◽  
Pedro U. Lima ◽  
Alcherio Martinoli

The work described is part of a long-term program of introducing institutional robotics, a novel framework for the coordination of robot teams that stems from institutional economics concepts. Under the framework, institutions are cumulative sets of persistent artificial modifications made to the environment or to the internal mechanisms of a subset of agents, thought to be functional for the collective order. In this article we introduce a formal model of institutional controllers based on Petri nets. We define executable Petri nets—an extension of Petri nets that takes into account robot actions and sensing—to design, program, and execute institutional controllers. We use a generalized stochastic Petri net view of the robot team controlled by the institutional controllers to model and analyze the stochastic performance of the resulting distributed robotic system. The ability of our formalism to replicate results obtained using other approaches is assessed through realistic simulations of up to 40 e-puck robots. In particular, we model a robot swarm and its institutional controller with the goal of maintaining wireless connectivity, and successfully compare our model predictions and simulation results with previously reported results, obtained by using finite state automaton models and controllers.
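
As a rough illustration of the executable Petri nets mentioned above (our sketch, with hypothetical place, sensor, and action names, not the authors' formalism), a transition can be gated jointly by the marking and by a sensing predicate, and can trigger a robot action when it fires:

```python
# A minimal executable-Petri-net sketch: transitions carry a sensor guard
# and a robot action in addition to the usual pre/post places.
class ExecutablePetriNet:
    def __init__(self, transitions, marking):
        # transitions: name -> (pre places, post places, sensor guard, action)
        self.transitions = transitions
        self.marking = marking

    def enabled(self, name, sensors):
        pre, _, guard, _ = self.transitions[name]
        return all(self.marking.get(p, 0) > 0 for p in pre) and guard(sensors)

    def step(self, sensors):
        """Fire the first enabled transition, executing its robot action."""
        for name, (pre, post, guard, action) in self.transitions.items():
            if self.enabled(name, sensors):
                for p in pre:
                    self.marking[p] -= 1
                for p in post:
                    self.marking[p] = self.marking.get(p, 0) + 1
                action()
                return name
        return None

# Hypothetical example: leave "roam" and start relaying when the link weakens.
net = ExecutablePetriNet(
    {"hold_position": (["roam"], ["relay"],
                       lambda s: s["signal"] < 0.3,
                       lambda: print("stopping to act as relay"))},
    marking={"roam": 1})
print(net.step({"signal": 0.2}))  # -> hold_position
```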


2016 ◽  
Vol 3 (4) ◽  
pp. 21-31 ◽  
Author(s):  
Ishak Meddah ◽  
Belkadi Khaled

Process mining provides an important bridge between data mining and business process analysis; its techniques allow information to be extracted from event logs. In general, there are two steps in process mining: correlation definition (or discovery) and then process inference (or composition). Firstly, the authors' work mines small patterns from the log traces of two applications, SKYPE and VIBER; these patterns represent the execution traces of a business process. In this step, the authors use existing techniques: the patterns are represented by finite state automata or their regular expressions, and the final model is the combination of only two types of small patterns, captured by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel and then combine the small patterns using composition rules. The work has two parts: the first is the mining, in which patterns are discovered from execution traces, and the second is the combination of these small patterns. Both the pattern mining and the composition are illustrated with existing automaton techniques. The execution traces are the different actions performed by users in SKYPE and VIBER. The results are general and precise; the approach minimizes execution time and the loss of information.
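
Since the two small pattern types are given by the regular expressions (ab)* and (ab*c)*, each can also be written directly as a deterministic finite state automaton; the following is our own sketch, with hypothetical state names:

```python
# Each DFA is a transition table plus a set of accepting states; an
# undefined move means the trace does not follow the pattern.
DFA_AB = {"start": "q0", "accept": {"q0"},
          "delta": {("q0", "a"): "q1", ("q1", "b"): "q0"}}   # (ab)*
DFA_ABC = {"start": "q0", "accept": {"q0"},
           "delta": {("q0", "a"): "q1", ("q1", "b"): "q1",
                     ("q1", "c"): "q0"}}                     # (ab*c)*

def accepts(dfa, trace):
    """Run the DFA over a trace of events; reject on an undefined move."""
    state = dfa["start"]
    for event in trace:
        state = dfa["delta"].get((state, event))
        if state is None:
            return False
    return state in dfa["accept"]

print(accepts(DFA_AB, "abab"))    # True
print(accepts(DFA_ABC, "abbbc"))  # True
print(accepts(DFA_AB, "abc"))     # False
```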


Author(s):  
Ishak H.A. Meddah ◽  
Khaled Belkadi ◽  
Mohamed Amine Boudia

Hadoop MapReduce arrived to solve the problem of processing big data in parallel: with this framework, the authors can analyze and process data of large size. It is based on distributing the work over a cluster, or large set, of machines in two big steps, the map step and the reduce step. The authors apply the MapReduce framework to problems in the domain of process mining, which provides a bridge between data mining and business process analysis; this technique mines a wealth of information from process traces. In process mining there are two steps: correlation definition and process inference. The work consists, first, of mining patterns, which form the workflow of the process, from execution traces; each pattern represents the work, or history, of one part of the process. These small patterns are represented by finite state automata or their regular expressions; only two patterns are used, to keep the process simple, and the general representation of the process is the combination of the small mined patterns. The patterns are captured by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute the patterns and combine them using the Hadoop MapReduce framework, again in two general steps: in the map step they mine small patterns (small models) from the business process, and in the reduce step they combine those models. The authors use the business processes of two web applications, SKYPE and VIBER. The general result shows that the parallel, distributed process using the Hadoop MapReduce framework is scalable and minimizes execution time.
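
One conventional way to run such a two-step job is Hadoop Streaming, where the mapper and reducer are plain scripts that read stdin and write tab-separated key/value pairs. The sketch below is our illustration under that assumption, not the authors' implementation:

```python
#!/usr/bin/env python3
# pattern_job.py (hypothetical name): run as `pattern_job.py map` for the
# mapper and `pattern_job.py reduce` for the reducer; Hadoop Streaming
# would wire the two together with a sort on the keys in between.
import re
import sys

PATTERNS = {"(ab)*": re.compile(r"^(ab)*$"),
            "(ab*c)*": re.compile(r"^(ab*c)*$")}

def map_stdin():
    # Map step: one execution trace per input line; emit "pattern<TAB>1"
    # for every small pattern the trace matches.
    for line in sys.stdin:
        trace = line.strip()
        for name, rx in PATTERNS.items():
            if rx.fullmatch(trace):
                print(f"{name}\t1")

def reduce_stdin():
    # Reduce step: input arrives sorted by key, so summing runs of equal
    # keys combines the small models into per-pattern totals.
    current, total = None, 0
    for line in sys.stdin:
        key, value = line.rstrip("\n").split("\t")
        if key != current and current is not None:
            print(f"{current}\t{total}")
            total = 0
        current = key
        total += int(value)
    if current is not None:
        print(f"{current}\t{total}")

if __name__ == "__main__":
    map_stdin() if sys.argv[1:] == ["map"] else reduce_stdin()
```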


2017 ◽  
Vol 9 (1) ◽  
pp. 49-60
Author(s):  
Ishak H.A. Meddah ◽  
Khaled Belkadi ◽  
Mohamed Amine Boudia

Hadoop MapReduce is one of the solutions for processing large and big data: with it, the authors can analyze and process data by distributing the computation over a large set of machines. Process mining provides an important bridge between data mining and business process analysis; its techniques allow information to be mined from event logs. Firstly, the work mines small patterns from log traces; these patterns form the workflow of the execution traces of a business process. The authors' work improves on existing techniques, which mine only one general workflow; here the workflow represents the general traces of two web applications. They use existing techniques: the patterns are represented by finite state automata, and the final model is the combination of only two types of patterns, captured by regular expressions. Secondly, the authors compute these patterns in parallel and then combine the patterns using MapReduce, in two parts: the first is the map step, in which they mine patterns from execution traces, and the second is the combination of these small patterns as the reduce step. The results are promising; they show that the approach is scalable, general, and precise, and that it reduces execution time through the use of the Hadoop MapReduce framework.


Author(s):  
Ishak H. A. Meddah ◽  
Khaled Belkadi

Process mining provides an important bridge between data mining and business process analysis. This technique allows for the extraction of information from event logs. In general, there are two steps in process mining: correlation definition (or discovery) and then process inference (or composition). Firstly, the authors mine small patterns from the log traces of two applications; these patterns represent the execution traces of a business process. In this step, the authors use existing techniques: the patterns are represented by finite state automata or their regular expressions, and the final model is the combination of only two types of small patterns, captured by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel and then combine the small patterns using composition rules. The work has two parts: the first is the mining, where the authors discover patterns from execution traces, and the second is the combination of these small patterns. Both the pattern mining and the composition build on existing automaton techniques.
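
The abstract does not spell out the composition rules, so the following is only a toy sketch (ours, with hypothetical helper names) of two natural ways to compose the small regular-expression patterns: sequencing concatenates the expressions, while choice is their alternation.

```python
import re

def sequence(p, q):
    """Model in which pattern p runs to completion and pattern q follows."""
    return f"(?:{p})(?:{q})"

def choice(p, q):
    """Model in which each trace follows either pattern p or pattern q."""
    return f"(?:{p})|(?:{q})"

AB, ABC = "(ab)*", "(ab*c)*"
print(bool(re.fullmatch(sequence(AB, ABC), "ababbc")))  # True: "ab" + "abbc"
print(bool(re.fullmatch(choice(AB, ABC), "ac")))        # True: follows (ab*c)*
```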


Author(s):  
Ishak H. A. Meddah ◽  
Khaled Belkadi

The processing of large data is proving more difficult along several axes, but the arrival of the MapReduce framework offers a solution to this problem. With it we can analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in a cloud, or a large set of machines, while process mining provides an important bridge between data mining and business process analysis. Process mining techniques allow information to be extracted from event logs. In general, there are two steps in process mining: correlation definition (or discovery) and process inference (or composition). Firstly, the authors' work mines small patterns from log traces; these patterns represent the execution of traces from the log file of a business process. In this step, they use existing techniques: the patterns are represented by finite state automata or their regular expressions, and the final model is the combination of only two types of small patterns, captured by the regular expressions (ab)* and (ab*c)*. Secondly, the authors compute these patterns in parallel and then combine the small patterns using the MapReduce framework, in two parts: the first is the map step, in which they mine patterns from execution traces; the second is the combination of these small patterns as the reduce step. The authors' results are promising in that they show their approach is scalable, general, and precise, and that it minimizes execution time through the use of the MapReduce framework.


2019 ◽  
Vol 6 (3) ◽  
pp. 1-17
Author(s):  
Ishak H.A. Meddah ◽  
Nour El Houda REMIL

Processing large data is difficult, and the arrival of the MapReduce framework looks like a solution to this problem. The framework can be used to analyze and process vast amounts of data by distributing the computational work across a cluster of virtual servers running in a cloud, or across a large set of machines. Process mining provides an important bridge between data mining and business process analysis; its techniques allow information to be extracted from event logs. Generally, there are two steps in process mining: correlation definition (or discovery) and inference (or composition). First of all, the work mines small patterns from log traces; these patterns represent the execution of traces from the log file of a business process. In this step, the authors use existing techniques: the patterns are represented by finite state automata or their regular expressions, and the final model is the combination of only two types of patterns, captured by the regular expressions (ab)* and (ab*c)*. Second, they compute these patterns in parallel and then combine the small patterns using the Hadoop framework, in two steps: the first is the map step, through which they mine patterns from execution traces, and the second is the combination of these small patterns as a reduce step. The results show that the approach is scalable, general, and precise, and that it minimizes execution time through the use of the Hadoop framework.
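
To make the mine-then-combine pipeline concrete without a cluster, here is a small local sketch (ours; the trace data are hypothetical) that parallelizes the map side with worker processes and merges the partial results in a reduce step, mimicking what the Hadoop framework does across machines:

```python
import re
from collections import Counter
from multiprocessing import Pool

PATTERNS = {"(ab)*": r"^(ab)*$", "(ab*c)*": r"^(ab*c)*$"}

def mine_chunk(traces):
    """Map side: count the small patterns within one chunk of the log."""
    counts = Counter()
    for trace in traces:
        for name, rx in PATTERNS.items():
            if re.fullmatch(rx, trace):
                counts[name] += 1
    return counts

def combine(partials):
    """Reduce side: merge the per-chunk models into one global model."""
    return sum(partials, Counter())

if __name__ == "__main__":
    log = ["abab", "ac", "abbc", "ababab", "abc", "c"]
    chunks = [log[:3], log[3:]]              # one chunk per worker
    with Pool(processes=2) as pool:
        print(combine(pool.map(mine_chunk, chunks)))
        # Counter({'(ab*c)*': 3, '(ab)*': 2}); "c" matches neither pattern
```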

