event log
Recently Published Documents


TOTAL DOCUMENTS

220
(FIVE YEARS 110)

H-INDEX

11
(FIVE YEARS 4)

2022 ◽  
Vol 183 (3-4) ◽  
pp. 203-242
Author(s):  
Dirk Fahland ◽  
Vadim Denisov ◽  
Wil M.P. van der Aalst

To identify the causes of performance problems or to predict process behavior, it is essential to have correct and complete event data. This is particularly important for distributed systems with shared resources, e.g., one case can block another case competing for the same machine, leading to inter-case dependencies in performance. However, for a variety of reasons, real-life systems often record only a subset of all events taking place. To understand and analyze the behavior and performance of processes with shared resources, we aim to reconstruct bounds for the timestamps of events in a case that must have happened but were not recorded, by inference over events in other cases in the system. We formulate and solve the problem by systematically introducing multi-entity concepts in event logs and process models. We introduce a partial-order-based model of a multi-entity event log and a corresponding compositional model for multi-entity processes. We define PQR-systems as a special class of multi-entity processes with shared resources and queues. We then study the problem of inferring from an incomplete event log unobserved events and their timestamps that are globally consistent with a PQR-system. We solve the problem by reconstructing unobserved traces of resources and queues according to the PQR-model and derive bounds for their timestamps using a linear program. While the problem is illustrated for material handling systems such as baggage handling systems in airports, the approach can be applied to other settings where recording is incomplete. The ideas have been implemented in ProM and were evaluated using both synthetic and real-life event logs.
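The linear-programming step can be illustrated with a toy sketch. All event names, times, and constraints below are invented for illustration, and SciPy stands in for the authors' ProM implementation:

```python
# Toy sketch (not the paper's PQR implementation): bound the timestamp of an
# unobserved event with a linear program over precedence constraints.
from scipy.optimize import linprog

# Variables: [t_load, t_scan, t_unload]; t_scan was not recorded.
# Observed in this case: t_load = 10.0, t_unload = 25.0.
# Precedence: t_load <= t_scan <= t_unload. Inter-case constraint: the shared
# scanner was busy with another case until t = 14.0, so t_scan >= 14.0.
A_ub = [
    [1.0, -1.0, 0.0],   # t_load - t_scan <= 0
    [0.0, 1.0, -1.0],   # t_scan - t_unload <= 0
]
b_ub = [0.0, 0.0]
bounds = [(10.0, 10.0), (14.0, None), (25.0, 25.0)]  # fix observed, floor unobserved

def bound(sign):
    # sign=+1 minimises t_scan (lower bound); sign=-1 maximises it (upper bound)
    res = linprog(c=[0.0, sign, 0.0], A_ub=A_ub, b_ub=b_ub, bounds=bounds)
    return res.x[1]

lower, upper = bound(+1), bound(-1)
print(lower, upper)  # t_scan must lie in [14.0, 25.0]
```

Solving the same LP twice, once minimising and once maximising the unobserved variable, yields the tightest interval consistent with all constraints.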


2022 ◽  
Vol 12 (2) ◽  
pp. 553
Author(s):  
Minyeol Yang ◽  
Junhyung Moon ◽  
Jongpil Jeong ◽  
Seokho Sin ◽  
Jimin Kim

Recently, the production environment has been changing rapidly, and accordingly, correct mid-term and short-term decision-making for production is considered more important. Reliable indicators are required for correct decision-making, and the manufacturing cycle time plays an important role in manufacturing. Methods using digital twin technology are being studied to implement accurate prediction, and an approach utilizing process discovery was recently proposed. This paper proposes a digital twin discovery framework using process transition technology. The generated digital twin reveals its characteristics from the event log. The proposed method was applied to actual manufacturing data, and the experimental results demonstrate that it is effective at discovering digital twins.
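As a minimal illustration of the process-discovery step such a framework builds on, the sketch below (with invented activity names) derives a directly-follows graph from a toy manufacturing event log:

```python
# Minimal process-discovery sketch: count directly-follows relations
# (activity a immediately followed by activity b) across all cases.
from collections import Counter

event_log = [  # hypothetical traces, one list of activities per manufacturing case
    ["cut", "drill", "polish", "inspect"],
    ["cut", "drill", "inspect"],
    ["cut", "polish", "inspect"],
]

dfg = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

print(dfg[("cut", "drill")])  # 2: "drill" directly follows "cut" in two cases
```

The resulting counts are the raw material most discovery algorithms, and by extension a discovered digital twin, start from.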


eLearn ◽  
2021 ◽  
Vol 2021 (12) ◽  
Author(s):  
José Francisco dos Santos Neto ◽  
Sarajane Marques Peres ◽  
Paulo Correia ◽  
Marcelo Fantinato

The flipped classroom is an active learning method that encourages students to access study material prior to class time. Ensuring that the flipping process took place, understanding how it occurred, and verifying whether it produced positive results have been challenges for lecturers. In this article, we analyze a flipped classroom scenario through process mining techniques. Process mining was applied to an event log provided by a learning management system that supported a particular undergraduate course offering. The outcomes provide evidence for the flip of the classroom, adding precision and reliability to lecturer analyses.
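A hypothetical sketch of the core check such an analysis rests on, namely whether material access preceded class attendance for each student (all names, activities, and timestamps here are invented):

```python
# Toy check on an LMS-style event log: did the student view the study
# material before attending the class session?
from datetime import datetime

events = [  # (student, activity, timestamp), as an LMS event log might record
    ("alice", "view_material", datetime(2021, 3, 1, 20, 0)),
    ("bob",   "view_material", datetime(2021, 3, 2, 10, 0)),
    ("alice", "attend_class",  datetime(2021, 3, 2, 9, 0)),
    ("bob",   "attend_class",  datetime(2021, 3, 2, 9, 0)),
]

def flipped(student):
    views = [t for s, a, t in events if s == student and a == "view_material"]
    classes = [t for s, a, t in events if s == student and a == "attend_class"]
    return bool(views) and min(views) < min(classes)

print(flipped("alice"), flipped("bob"))  # True False
```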


Author(s):  
Sven Weinzierl ◽  
Verena Wolf ◽  
Tobias Pauli ◽  
Daniel Beverungen ◽  
Martin Matzner

2021 ◽  
pp. 116274
Author(s):  
Niels Martin ◽  
Greg Van Houdt ◽  
Gert Janssenswillen

2021 ◽  
Author(s):  
Ashok Kumar Saini ◽  
Ruchi Kamra ◽  
Utpal Shrivastava

Conformance Checking (CC) techniques enable us to quantify the deviation between modelled behavior and actual execution behavior. The majority of organizations have Process-Aware Information Systems that record insights into how the system runs, alongside process models showing how the process should be executed. The key intention of Process Mining is to extract facts from the event log and use them for the analysis, verification, improvement, and redesign of a process, which also helps in achieving business goals. Researchers have proposed various CC techniques for specific applications and process models. This paper presents a detailed study of the key concepts and contributions of Process Mining, and also discusses current challenges and opportunities. The survey covers CC techniques proposed by researchers, organized by key objectives such as quality parameters, perspectives, algorithm types, tools, and achievements.
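As a toy illustration of the idea behind conformance checking (not any specific CC algorithm from the survey), the sketch below scores a trace against a model given as a set of allowed directly-follows pairs:

```python
# Toy conformance score: the fraction of a trace's directly-follows pairs
# that the model allows (1.0 = fully conforming, 0.0 = fully deviating).
allowed = {("register", "check"), ("check", "approve"), ("check", "reject")}

def fitness(trace):
    pairs = list(zip(trace, trace[1:]))
    if not pairs:
        return 1.0  # an empty or single-event trace has nothing to violate
    ok = sum(1 for p in pairs if p in allowed)
    return ok / len(pairs)

print(fitness(["register", "check", "approve"]))  # 1.0
print(fitness(["register", "approve"]))           # 0.0, "check" was skipped
```

Real CC techniques such as token replay or alignments are considerably more sophisticated, but they answer the same question: how far does recorded behavior deviate from the model?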


2021 ◽  
Vol 8 (6) ◽  
pp. 1227
Author(s):  
Angelina Prima Kurniati ◽  
Gede Agung Ary Wisudiawan

Computer-based Learning Management Systems (LMS) are commonly used to manage learning in educational institutions, including universities. An LMS automatically records and manages user access in the form of an event log. Data in the event log can be analysed to understand patterns in LMS usage and to inform recommendations for improvements. One promising method is process mining, a process-based data analytics approach that works on event logs. Process mining aims to discover the process models recorded in the LMS, check the conformance of process execution to the defined procedure, and suggest future improvements. This paper explores the feasibility of analysing Telkom University LMS usage data using process mining. To the best of our knowledge, no previous research has performed process-based data analytics on this LMS. This paper contributes an exploration of opportunities to analyse learning processes and enhance LMS-based learning methods. The feasibility analysis is based on a checklist of the data components required for process mining, and the paper follows the main stages of the Process Mining Project Methodology (PM²). We explore a case study of the learning process of one course in one semester, based on an event log extracted from the LMS. The results show that data analytics on this LMS can be used to analyse learning performance at Telkom University across different user groups and can be extended to larger case studies. The feasibility study concludes with a discussion of the feasibility of analysing the LMS with process mining, an evaluation by the LMS expert team, and recommendations for future LMS improvements.
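A minimal sketch of the kind of data-readiness check such a feasibility study implies (the column names are illustrative, not the LMS's actual schema): an event log is minable only if it carries at least a case identifier, an activity, and a timestamp:

```python
# Toy readiness check against the minimal column set process mining needs.
REQUIRED = {"case_id", "activity", "timestamp"}

def is_minable(columns):
    """Return (ready?, sorted list of missing required columns)."""
    missing = REQUIRED - set(columns)
    return (not missing, sorted(missing))

print(is_minable(["case_id", "activity", "timestamp", "user_role"]))  # (True, [])
print(is_minable(["activity", "timestamp"]))  # (False, ['case_id'])
```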


Techno Com ◽  
2021 ◽  
Vol 20 (4) ◽  
pp. 579-587
Author(s):  
Afina Lina Nurlaili ◽  
- Muhsin ◽  
Eristya Maya Safitri

Undocumented processes can become a problem when no underlying procedure exists, especially for processes involving many activities. Procedures are needed to organize activities toward a particular goal, and to that end organizations write guidance in the form of Standard Operating Procedures (SOP), which benefit any organization that applies them. An SOP is also the basis for evaluating actual practice in the field against the documented procedure, so a technique is needed to compare the SOP with reality. Process mining is such an evaluation technique, comparing a process model with the event data, or event log, recorded in an information system. Various process mining methods have been introduced, including the Alpha algorithm and the Heuristic Miner. In this study, fitness and precision values are computed for the Alpha and Heuristic Miner algorithms. Fitness measures the agreement between the event log and the process model; precision measures whether an algorithm is appropriate for solving a particular case. Based on the evaluation, the Alpha algorithm was unable to describe the process according to the event log of the PKL (field work practice) programme, because the case variants contain loops. This also indicates that the event log of the PKL process does not yet follow the PKL SOP. The Heuristic Miner, meanwhile, ignores minor behaviour, so infrequent processes are not represented in the process model. Overall, the process model produced by the Alpha algorithm is the closest to reality, with a fitness of 0.96.
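The Heuristic Miner's tolerance of infrequent ("minor") behaviour comes from its dependency measure, (|a>b| - |b>a|) / (|a>b| + |b>a| + 1), where |a>b| counts how often a is directly followed by b. A sketch on a toy log:

```python
# Heuristic Miner dependency measure: values near 1 indicate a strong
# a -> b dependency; values near 0 suggest noise to be ignored.
from collections import Counter

def dependency(log, a, b):
    follows = Counter((x, y) for trace in log for x, y in zip(trace, trace[1:]))
    ab, ba = follows[(a, b)], follows[(b, a)]
    return (ab - ba) / (ab + ba + 1)

# 9 cases follow the expected order, 1 noisy case reverses it.
log = [["submit", "review", "accept"]] * 9 + [["review", "submit", "accept"]]
print(round(dependency(log, "submit", "review"), 2))  # 0.73
```

Thresholding this measure is what lets the Heuristic Miner drop loops and rare variants that defeat the Alpha algorithm.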


2021 ◽  
Author(s):  
Paul Radford

Event log messages are currently the only genuine interface through which computer systems administrators can effectively monitor their systems and assemble a mental perception of system state. The popularisation of the Internet and the accompanying meteoric growth of business-critical systems has resulted in an overwhelming volume of event log messages, channeled through mechanisms whose designers could not have envisaged the scale of the problem. Messages regarding intrusion detection, hardware status, operating system status changes, database tablespaces, and so on, are being produced at the rate of many gigabytes per day for a significant computing environment. Filtering technologies have not been able to keep up. Most messages go unnoticed; no filtering whatsoever is performed on them, at least in part due to the difficulty of implementing and maintaining an effective filtering solution. The most commonly-deployed filtering alternatives rely on regular expressions to match pre-defined strings, with 100% accuracy, which can then become ineffective as the code base for the software producing the messages 'drifts' away from those strings. The exactness requirement means all possible failure scenarios must be accurately anticipated and their events catered for with regular expressions, in order to make full use of this technique. Alternatives to regular expressions remain largely academic. Data mining, automated corpus construction, and neural networks, to name the highest-profile ones, only produce probabilistic results and are either difficult or impossible to alter in any deterministic way. Policies are therefore not supported under these alternatives. This thesis explores a new architecture which utilises rich metadata in order to avoid the burden of message interpretation. The metadata itself is based on an intention to improve end-to-end communication and reduce ambiguity.
A simple yet effective filtering scheme is also presented which filters log messages through a short and easily-customisable set of rules. With such an architecture, it is envisaged that systems administrators could significantly improve their awareness of their systems while avoiding many of the false-positives and -negatives which plague today's filtering solutions.
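A hedged sketch of the rule-based, metadata-driven filtering the thesis argues for (the field names and rule format below are invented for illustration, not the thesis's actual scheme): rules match on structured metadata fields instead of running regexes over free text, so they survive wording drift in the message body.

```python
# Toy rule-based router for structured log messages: the first rule whose
# metadata fields all match decides the action; unmatched messages get a default.
rules = [
    {"match": {"severity": "debug"}, "action": "drop"},
    {"match": {"subsystem": "raid", "severity": "error"}, "action": "page"},
]

def route(message, default="store"):
    for rule in rules:
        if all(message.get(k) == v for k, v in rule["match"].items()):
            return rule["action"]
    return default

print(route({"subsystem": "raid", "severity": "error", "text": "disk 3 failed"}))  # page
print(route({"subsystem": "httpd", "severity": "info", "text": "GET /"}))          # store
```

Because the rules never inspect the free-text `text` field, rewording a message does not break the policy, which is the drift problem regex filters suffer from.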

