Methods of structural analysis of business processes

2018
Vol 210
pp. 04016
Author(s):  
Jarosław Koszela

The article outlines selected methods for analyzing business processes: their definitions and instances. Methods for the analytical processing of processes constitute a component of the Business Intelligence environment known as process warehouses, which includes methods for the analytical processing and exploration of the collected process definitions and instances, i.e. process mining. One of the main elements of process analysis is determining the similarity between processes. In systems that analyze large sets of elements, the method for determining similarity should be efficient, because it is the basis for other analysis methods, e.g. clustering and classification. A method for analyzing the structural similarity of business processes, based on the similarity of sequences of genetic tags of such processes, is presented using similarity analysis based on the edit distance and the developed structural similarity methods: GNM, DNM, GCM, DCM. The presented similarity methods were used to cluster processes and to determine the central element of each cluster. The developed methods form the basis for similarity methods extended to aspects of semantic similarity of business processes and for further methods of process analysis and exploration.
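
The edit-distance ingredient of such a structural comparison can be sketched as follows; this is a minimal illustration assuming each process has already been reduced to a sequence of genetic tags (the specific GNM, DNM, GCM and DCM constructions are not reproduced here, and the tag sequences are hypothetical):

```python
# Sketch: edit-distance similarity between genetic-tag sequences and
# selection of a cluster's central element (medoid). Hypothetical tags.

def edit_distance(a, b):
    """Classic Levenshtein distance between two tag sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (x != y)))   # substitution
        prev = curr
    return prev[-1]

def similarity(a, b):
    """Normalize the distance to a [0, 1] similarity score."""
    longest = max(len(a), len(b)) or 1
    return 1.0 - edit_distance(a, b) / longest

# Hypothetical genetic-tag sequences of three process definitions.
cluster = {
    "P1": ["A", "B", "C", "D"],
    "P2": ["A", "B", "D"],
    "P3": ["A", "C", "C", "D"],
}

# Central element of the cluster: the process with the highest total
# similarity (lowest total edit distance) to the other cluster members.
central = max(cluster, key=lambda p: sum(similarity(cluster[p], q)
                                         for q in cluster.values()))
print(central)
```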

2018
Vol 9 (1)
pp. 64-77
Author(s):  
A.D.N. Sarma

In recent years, Operational Business Intelligence has emerged as an important trend in the Business Intelligence (BI) market. The majority of BI application architectures are bespoke in nature and suffer from several architectural limitations: they are tightly coupled, static, historic, and subjective, offer no performance measurement of business processes, and provide limited user access and limited analytical processing, querying, and reporting features. In this article, a generic functional architecture for Operational BI systems based on software architecture principles is presented. All functional modules of the system are derived from its key features by applying a top-down approach based on software design principles. Similar functional modules are grouped into sub-systems, and the set of these sub-systems constitutes the overall functional architecture. The proposed architecture overcomes the limitations of traditional BI architectures.
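
As a purely illustrative sketch of grouping functional modules into sub-systems, the following uses hypothetical sub-system and module names rather than the decomposition actually proposed in the article:

```python
# Illustrative grouping of functional modules into sub-systems; the
# names below are placeholders, not the paper's functional architecture.
SUBSYSTEMS = {
    "data_integration": ["change_data_capture", "stream_ingestion"],
    "processing":       ["event_processing", "kpi_computation"],
    "analytics":        ["ad_hoc_query", "operational_reporting"],
    "presentation":     ["dashboards", "alerts"],
}

def subsystem_of(module):
    """Reverse lookup: which sub-system owns a given functional module."""
    for subsystem, modules in SUBSYSTEMS.items():
        if module in modules:
            return subsystem
    raise KeyError(f"unknown module: {module}")

print(subsystem_of("kpi_computation"))  # -> processing
```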


2021
Vol 24 (2) ◽  
Author(s):  
Daniel Calegari
Andrea Delgado
Alexis Artus
Andrés Borges

Organizations require a unified view of business processes and organizational data to improve their daily operations. However, the two kinds of data are rarely unified consistently. Organizational data (e.g., clients, orders, and payments) is usually stored in many different data sources. Process data (e.g., cases, activity instances, and variables) is generally handled manually or kept implicit in information systems, coupled with organizational data without clear separation. This impairs the combined application of process mining and data mining techniques for a complete evaluation of business process execution. In this paper, we deal with the integration of both kinds of data into a unified view. First, we analyze data integration scenarios and data matching problems considering intra-organizational and inter-organizational collaborative business processes. We also propose a model-driven approach to integrate several data sources, generating a unified model for evidence-based business intelligence.
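
A minimal sketch of such a unified view, assuming process data and organizational data share a matchable identifier; the tables, column names, and values below are hypothetical and do not reproduce the paper's model-driven mapping:

```python
import pandas as pd

# Hypothetical process data: activity instances recorded per case.
process_data = pd.DataFrame({
    "case_id":   ["c1", "c1", "c2"],
    "activity":  ["Create Order", "Receive Payment", "Create Order"],
    "timestamp": pd.to_datetime(["2021-03-01", "2021-03-05", "2021-03-02"]),
    "order_id":  ["o10", "o10", "o11"],
})

# Hypothetical organizational data: orders and their clients.
org_data = pd.DataFrame({
    "order_id": ["o10", "o11"],
    "client":   ["ACME", "Globex"],
    "amount":   [1200.0, 340.0],
})

# Unified view: every activity instance enriched with the organizational
# entities it touches, joined on the matched identifier.
unified = process_data.merge(org_data, on="order_id", how="left")
print(unified)
```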


Author(s):  
Harkiran Kaur
Kawaljeet Singh
Tejinder Kaur

Background: Numerous E-Migrants databases assist migrants in locating their peers in various countries, thereby contributing largely to the communication of migrants staying overseas. Presently, these traditional E-Migrants databases face the issues of non-scalability, difficult search mechanisms, and burdensome information update routines. Furthermore, the analysis of migrants’ profiles in these databases has remained unaddressed to date, so they do not generate any knowledge. Objective: To design and develop an efficient and multidimensional knowledge discovery framework for E-Migrants databases. Method: In the proposed technique, the results of complex calculations related to the most probable On-Line Analytical Processing (OLAP) operations required by end users are stored in the form of decision trees at the pre-processing stage of data analysis. While browsing the cube, these pre-computed results are retrieved, thus offering a Dynamic Cubing feature to end users at runtime. This data-tuning step reduces query processing time and increases the efficiency of the required data warehouse operations. Results: Experiments conducted with a data warehouse of around 1000 migrants’ profiles confirm the knowledge discovery power of this proposal. Using the proposed methodology, the authors have designed a framework efficient enough to incorporate the amendments made to the E-Migrants data warehouse at regular intervals, which was totally missing in the traditional E-Migrants databases. Conclusion: The proposed methodology facilitates migrants in generating dynamic knowledge and visualizing it in the form of dynamic cubes. By applying Business Intelligence mechanisms and blending them with tuned OLAP operations, the authors have managed to transform traditional datasets into an intelligent migrants data warehouse.
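
The pre-compute-then-reuse idea behind Dynamic Cubing can be sketched as follows; this simplified illustration caches aggregates in a plain dictionary rather than in the decision-tree form described above, and the dimensions and figures are hypothetical:

```python
import pandas as pd

# Hypothetical migrants data warehouse fact table.
facts = pd.DataFrame({
    "country":    ["UK", "UK", "Canada", "Canada"],
    "profession": ["nurse", "engineer", "nurse", "engineer"],
    "year":       [2019, 2019, 2020, 2020],
    "migrants":   [120, 80, 60, 90],
})

# Pre-processing step: materialize the aggregates for the OLAP group-bys
# that end users are most likely to request.
PRECOMPUTED = {
    dims: facts.groupby(list(dims))["migrants"].sum()
    for dims in [("country",), ("profession",), ("country", "year")]
}

def browse_cube(*dims):
    """Answer a cube query from the cache when possible, else compute it."""
    key = tuple(dims)
    if key in PRECOMPUTED:
        return PRECOMPUTED[key]                              # pre-computed
    return facts.groupby(list(dims))["migrants"].sum()       # fallback

print(browse_cube("country", "year"))
```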


2019
Vol 30 (3)
pp. 325-329
Author(s):  
Mirosław Kwiatkowski
Dimitrios Kalderis

This paper presents the results of an analysis of the porous structure of biochars produced from biomass, namely eucalyptus, wood chips, pruning waste and rice husk. The structural analysis was carried out using the BET, t-plot, NLDFT and LBET methods, which yielded not only complementary information on the adsorptive properties of the biochars obtained from these materials, but also information on the usefulness of these structural analysis methods for research into the effect of the carbonaceous adsorbent preparation technology.
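
For background, the first of these methods fits adsorption data to the linearized BET isotherm, a standard relation stated here only as context and not specific to this paper:

```latex
\frac{1}{v\left[(p_0/p) - 1\right]} = \frac{c - 1}{v_m c}\,\frac{p}{p_0} + \frac{1}{v_m c}
```

where v is the quantity of gas adsorbed at relative pressure p/p0, v_m is the monolayer capacity from which the specific surface area is computed, and c is the BET constant.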


Open Biology
2012
Vol 2 (7)
pp. 120099
Author(s):  
Owen R. Davies
Joseph D. Maman
Luca Pellegrini

The successful completion of meiosis is essential for all sexually reproducing organisms. The synaptonemal complex (SC) is a large proteinaceous structure that holds together homologous chromosomes during meiosis, providing the structural framework for meiotic recombination and crossover formation. Errors in SC formation are associated with infertility, recurrent miscarriage and aneuploidy. The current lack of molecular information about the dynamic process of SC assembly severely restricts our understanding of its function in meiosis. Here, we provide the first biochemical and structural analysis of an SC protein component and propose a structural basis for its function in SC assembly. We show that human SC proteins SYCE2 and TEX12 form a highly stable, constitutive complex, and define the regions responsible for their homotypic and heterotypic interactions. Biophysical analysis reveals that the SYCE2–TEX12 complex is an equimolar hetero-octamer, formed from the association of an SYCE2 tetramer and two TEX12 dimers. Electron microscopy shows that biochemically reconstituted SYCE2–TEX12 complexes assemble spontaneously into filamentous structures that resemble the known physical features of the SC central element (CE). Our findings can be combined with existing biological data in a model of chromosome synapsis driven by growth of SYCE2–TEX12 higher-order structures within the CE of the SC.


2021
Vol 17 (4)
pp. 1-28
Author(s):  
Waqas Ahmed
Esteban Zimányi
Alejandro A. Vaisman
Robert Wrembel

Data warehouses (DWs) evolve in both their content and schema due to changes in user requirements, business processes, or external sources, to name a few. Although multiple approaches using temporal and/or multiversion DWs have been proposed to handle these changes, an efficient solution for this problem is still lacking. The authors' approach is to separate concerns and use temporal DWs to deal with content changes and multiversion DWs to deal with schema changes. To address the former, they have previously proposed a temporal multidimensional (MD) model. In this paper, they propose a multiversion MD model for schema evolution to tackle the latter problem. The two models complement each other and allow managing both content and schema evolution. This paper gives the semantics of the schema modification operators (SMOs) used to derive the various schema versions. It also shows how online analytical processing (OLAP) operations such as roll-up work on the model. Finally, the mapping from the multiversion MD model to a relational schema is given, along with the OLAP operations in standard SQL.
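
A generic illustration of a roll-up over a relational mapping of a versioned fact table is sketched below; the table, the schema_version column, and the data are hypothetical and do not reproduce the paper's SMO-derived schemas:

```python
import sqlite3

# Hypothetical relational mapping of a (versioned) sales fact table; the
# schema_version column stands in for the multiversion aspect only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (
        schema_version INTEGER,
        region  TEXT,
        city    TEXT,
        month   TEXT,
        amount  REAL
    );
    INSERT INTO sales VALUES
        (1, 'North', 'Oslo',   '2021-01', 100.0),
        (1, 'North', 'Bergen', '2021-01',  50.0),
        (2, 'South', 'Rome',   '2021-02',  75.0);
""")

# Roll-up from the city level to the region level within one schema
# version: drop the finer dimension from GROUP BY and re-aggregate.
rollup = conn.execute("""
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    WHERE schema_version = 1
    GROUP BY region
""").fetchall()
print(rollup)  # [('North', 150.0)]
```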


2021
Vol 2021
pp. 1-7
Author(s):  
Shabnam Shahzadi
Xianwen Fang
David Anekeya Alilah

Process mining is used to exploit and extract event data from the event log that carries vital information related to the process. There are three basic types of process mining, distinguished by their input and output: process discovery, conformance checking, and enhancement. Process discovery is one of the most challenging process mining activities based on the event log. Business process and system performance play a vital role in modelling, analysis, and prediction. Recently, memoryless models such as the exponentially distributed stochastic Petri net (SPN) have gained much attention in research and industry. This paper uses the time perspective for modelling and analysis and uses stochastic Petri nets to check the performance, evolution, stability, and reliability of the model. To assess the effect of time delays in firing the transitions, a stochastic reward net (SRN) model is used, which can also be used to check the reliability of the model, whereas the generalized stochastic Petri net (GSPN) is used to evaluate and check its performance. The SPN is used to analyze the probability of state transitions and the stability from one state to another. In process mining, logs are used by linking the log sequence with the states; on this basis, modelling can be done and its relation to the stability of the model can be established.
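
A minimal sketch of the exponential race that governs transition firing in such an SPN is shown below; the net structure and rates are illustrative, not taken from the paper:

```python
import random

# Two transitions compete for the token in place p0; the race between
# their exponentially distributed firing delays decides the next state.
RATES = {"t_approve": 2.0, "t_reject": 0.5}          # firing rates
OUTPUT_PLACE = {"t_approve": "p_approved", "t_reject": "p_rejected"}

def step(marking):
    """Fire the enabled transition that samples the smallest delay."""
    enabled = [t for t in RATES if marking.get("p0", 0) > 0]
    if not enabled:
        return marking, None
    delays = {t: random.expovariate(RATES[t]) for t in enabled}
    winner = min(delays, key=delays.get)
    return {"p0": marking["p0"] - 1, OUTPUT_PLACE[winner]: 1}, winner

# For exponential delays the race is memoryless, so each transition wins
# with probability rate / sum_of_rates (here 0.8 vs. 0.2); the simulation
# below estimates exactly that state-transition probability.
wins = {"t_approve": 0, "t_reject": 0}
for _ in range(10_000):
    _, winner = step({"p0": 1})
    wins[winner] += 1
print({t: count / 10_000 for t, count in wins.items()})
```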


2020
Author(s):  
Yaghoub Rashnavadi
Sina Behzadifard
Reza Farzadnia
Sina Zamani

Communication has never been more accessible than today. With the help of instant messengers and email services, millions of people can transfer information with ease, and this trend has affected organizations as well. Billions of organizational emails are sent or received daily, and their main goal is to facilitate the daily operation of organizations. Behind this vast corpus of human-generated content, there is much implicit information that can be mined and used to improve or optimize an organization’s operations. Business processes are one of those implicit knowledge areas that can be discovered from the email logs of an organization, as much of the related communication takes place over email. The purpose of this research is to propose an approach to discovering process models in the email log. In this approach, we combine two tools: supervised machine learning and process mining. With the help of a supervised machine learning model, the fastText classifier, we classify the body text of emails into activity-related categories. The generated event log is then mined with process mining techniques to find process models. We illustrate the approach with a case study of a company from the oil and gas sector.
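
A minimal sketch of this two-step idea, with a keyword rule standing in for the trained fastText classifier and an invented email log; the directly-follows counts computed at the end are the raw material that process discovery techniques would consume:

```python
from collections import Counter, defaultdict

# (1) A classifier maps each email body to an activity label; the keyword
# rule below is a toy stand-in for the fastText model used in the paper.
KEYWORDS = {"quote": "Prepare Quote", "order": "Place Order", "invoice": "Send Invoice"}

def classify(body):
    """Toy stand-in for predicting an activity label from an email body."""
    for word, activity in KEYWORDS.items():
        if word in body.lower():
            return activity
    return "Other"

emails = [  # (thread_id, timestamp, body) -- hypothetical email log
    ("T1", "09:00", "Please send a quote for 50 units"),
    ("T1", "11:30", "We confirm the order"),
    ("T1", "15:00", "Invoice attached"),
    ("T2", "10:00", "Requesting a quote"),
    ("T2", "16:00", "Order placed"),
]

# (2) Build one trace per email thread, ordered by time, then count the
# directly-follows relation used by process discovery algorithms.
traces = defaultdict(list)
for thread, ts, body in sorted(emails, key=lambda e: (e[0], e[1])):
    traces[thread].append(classify(body))

directly_follows = Counter(
    (a, b) for trace in traces.values() for a, b in zip(trace, trace[1:])
)
print(directly_follows)
```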


2013
Vol 3 (3)
pp. 08-15
Author(s):  
Mostafa Medhat Nazier
Dr. Ayman Khedr
Assoc. Prof. Mohamed Haggag

As every organization, small or large, requires information to promote its business by forecasting future trends, information is now the primary tool for understanding market trends and an organization's own position in the market in comparison to its competitors. Business intelligence is the use of an organization's disparate data to provide meaningful information and analyses to employees, customers, suppliers, and partners for more efficient and effective decision-making. BI applications include the activities of decision support systems, querying and reporting, online analytical processing (OLAP), data warehousing (DW), statistical analysis, forecasting, and data mining.

