First Steps Towards Process Mining in Distributed Health Information Systems

2015 ◽  
Vol 61 (2) ◽  
pp. 137-142 ◽  
Author(s):  
Emmanuel Helm ◽  
Ferdinand Paster

Abstract Business Intelligence approaches such as process mining can be applied to the healthcare domain in order to gain insight into the complex processes taking place. Disclosing as-is processes helps identify room for improvement and answers questions from medical professionals. Existing approaches are based on proprietary log data as input for mining algorithms. Integrating the Healthcare Enterprise (IHE) defines in its Audit Trail and Node Authentication (ATNA) profile how real-world events must be recorded. Since IHE is used by many healthcare providers throughout the world, an extensive amount of log data is produced. In our research we investigate whether audit trails generated from an IHE test system carry enough content to successfully apply process mining techniques. Furthermore, we assess the quality of the recorded events in accordance with the maturity level scoring system. A simplified simulation of the organizational workflow in a radiological practice is presented. Based on this simulation, a process mining task is conducted.
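As a minimal sketch of the idea (the field names and event names below are invented for illustration, not taken from the ATNA schema), audit records can be grouped into per-case traces and reduced to a directly-follows relation, the starting point of many process discovery algorithms:

```python
# Hypothetical sketch: turning ATNA-style audit records into an event log
# and computing the directly-follows relation that discovery algorithms
# (e.g. the alpha miner) build on. Record fields are illustrative.
from collections import defaultdict

audit_records = [
    {"patient": "P1", "time": 1, "event": "PatientRegistered"},
    {"patient": "P1", "time": 2, "event": "ImagesAcquired"},
    {"patient": "P1", "time": 3, "event": "ReportCreated"},
    {"patient": "P2", "time": 1, "event": "PatientRegistered"},
    {"patient": "P2", "time": 2, "event": "ImagesAcquired"},
]

def directly_follows(records):
    """Return the set of (a, b) pairs where b directly follows a in some case."""
    traces = defaultdict(list)
    for rec in sorted(records, key=lambda r: (r["patient"], r["time"])):
        traces[rec["patient"]].append(rec["event"])
    pairs = set()
    for trace in traces.values():
        pairs.update(zip(trace, trace[1:]))
    return pairs

print(directly_follows(audit_records))
```

In a real setting, a library such as pm4py would typically replace this hand-rolled step, but the grouping-then-pairing logic is the same.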

2021 ◽  
pp. 251604352199026
Author(s):  
Peter Isherwood ◽  
Patrick Waterson

Patient safety, staff morale and system performance are at the heart of healthcare delivery. Investigation of adverse outcomes is one strategy that enables organisations to learn and improve. Healthcare is now understood as a complex, possibly the most complex, socio-technological system. Despite this, the use of a 20th-century linear investigation model is still recommended for the investigation of adverse outcomes. In this review the authors use data gathered from the investigation of a real-life healthcare near-miss incident and apply three different methodologies to the analysis of these data. They compare both the methodologies themselves and the outputs generated. This illustrates how different methodologies generate different system-level recommendations. The authors conclude that system-based models generate the strongest barriers to improve future performance. Healthcare providers and their regulatory bodies need to embrace system-based methodologies if they are to effectively learn from, and reduce, future adverse outcomes.


2021 ◽  
Vol 11 (22) ◽  
pp. 10686
Author(s):  
Syeda Amna Sohail ◽  
Faiza Allah Bukhsh ◽  
Maurice van Keulen

Healthcare providers are legally bound to ensure the privacy preservation of healthcare metadata. Usually, privacy-related research focuses on providing technical and inter-/intra-organizational solutions in a fragmented manner. As a result, an overarching evaluation of the fundamental (technical, organizational, and third-party) privacy-preserving measures in healthcare metadata handling is missing. Thus, this research work provides a multilevel privacy assurance evaluation of the privacy-preserving measures of the Dutch healthcare metadata landscape. The normative and empirical evaluation comprises content analysis together with process mining discovery and conformance checking techniques applied to real-world healthcare datasets. For clarity, we illustrate our evaluation findings using conceptual modeling frameworks, namely e3-value modeling and the REA ontology. The conceptual modeling frameworks highlight the financial aspect of metadata sharing with a clear description of the vital stakeholders, their mutual interactions, and the respective exchange of information resources. The frameworks are further verified using experts’ opinions. Based on our empirical and normative evaluations, we provide the multilevel privacy assurance evaluation, indicating where privacy increases or decreases. Furthermore, we verify that the privacy-utility trade-off is crucial in shaping these increases and decreases, because data utility in healthcare is vital for efficient, effective healthcare services and the financial facilitation of healthcare enterprises.
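To make the conformance checking step concrete (the model and traces below are invented, not drawn from the Dutch datasets in the study), each observed trace can be replayed against a simple transition model, with fitness reported as the fraction of conforming traces:

```python
# Illustrative conformance check: a process model as a map from state to
# allowed next activities; each trace is replayed and either conforms or not.
allowed = {
    "start": {"request_data"},
    "request_data": {"consent_check"},
    "consent_check": {"share_metadata", "deny"},
    "share_metadata": set(),
    "deny": set(),
}

def conforms(trace, model):
    """Replay a trace through the model; False as soon as a step is not allowed."""
    state = "start"
    for act in trace:
        if act not in model.get(state, set()):
            return False
        state = act
    return True

traces = [
    ["request_data", "consent_check", "share_metadata"],
    ["request_data", "share_metadata"],  # deviation: consent check skipped
]
fitness = sum(conforms(t, allowed) for t in traces) / len(traces)
print(fitness)  # 0.5
```

Full-blown conformance checking (e.g. alignment-based fitness) is more nuanced, but this replay idea is its core.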


2009 ◽  
Vol 133 (11) ◽  
pp. 1841-1849 ◽  
Author(s):  
Christel Daniel ◽  
Marcial García Rojo ◽  
Karima Bourquard ◽  
Dominique Henin ◽  
Thomas Schrader ◽  
...  

Abstract Context.—Integrating anatomic pathology information—text and images—into electronic health care records is a key challenge for enhancing clinical information exchange between anatomic pathologists and clinicians. The aim of the Integrating the Healthcare Enterprise (IHE) international initiative is precisely to ensure interoperability of clinical information systems by using existing widespread industry standards such as Digital Imaging and Communication in Medicine (DICOM) and Health Level Seven (HL7). Objective.—To define standard-based informatics transactions to integrate anatomic pathology information to the Healthcare Enterprise. Design.—We used the methodology of the IHE initiative. Working groups from IHE, HL7, and DICOM, with special interest in anatomic pathology, defined consensual technical solutions to provide end-users with improved access to consistent information across multiple information systems. Results.—The IHE anatomic pathology technical framework describes a first integration profile, “Anatomic Pathology Workflow,” dedicated to the diagnostic process including basic image acquisition and reporting solutions. This integration profile relies on 10 transactions based on HL7 or DICOM standards. A common specimen model was defined to consistently identify and describe specimens in both HL7 and DICOM transactions. Conclusion.—The IHE anatomic pathology working group has defined standard-based informatics transactions to support the basic diagnostic workflow in anatomic pathology laboratories. In further stages, the technical framework will be completed to manage whole-slide images and semantically rich structured reports in the diagnostic workflow and to integrate systems used for patient care and those used for research activities (such as tissue bank databases or tissue microarrayers).
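The value of a common specimen model is that one description can be rendered consistently in both message families. The following is a purely hypothetical illustration of that idea; the attribute names, the pipe-delimited segment (loosely modeled on an HL7 SPM segment), and the key/value rendering are invented and are not the actual IHE mapping:

```python
# Hypothetical "common specimen model": one specimen object rendered both as
# an HL7 v2-style pipe-delimited segment and as a DICOM-style attribute dict.
# Names and layout are illustrative, not the real IHE/HL7/DICOM definitions.
from dataclasses import dataclass

@dataclass
class Specimen:
    specimen_id: str
    container_id: str
    collection_procedure: str

    def to_hl7_like(self):
        # Illustrative pipe-delimited segment, loosely modeled on SPM
        return f"SPM|1|{self.specimen_id}|{self.container_id}|{self.collection_procedure}"

    def to_dicom_like(self):
        # Illustrative attribute names, not real DICOM tags
        return {
            "SpecimenIdentifier": self.specimen_id,
            "ContainerIdentifier": self.container_id,
            "CollectionProcedure": self.collection_procedure,
        }

s = Specimen("S-001", "C-17", "biopsy")
print(s.to_hl7_like())  # SPM|1|S-001|C-17|biopsy
```

Keeping a single source object ensures the identifiers in both transaction types can never drift apart, which is the interoperability point the framework makes.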


Author(s):  
Tuğba Gürgen ◽  
Ayça Tarhan ◽  
N. Alpay Karagöz

The verification of process implementations against specifications is a critical step of process management. This verification must be practiced according to objective criteria and evidence. This study describes an integrated infrastructure that utilizes process mining for software process verification, together with case studies carried out using this infrastructure. Software that applies process mining algorithms to software process verification was developed as a plug-in to the open-source EPF Composer tool, which supports the management of software and systems engineering processes. In three case studies, bug management, task management, and defect management processes are verified against defined and established process models (modeled using EPF Composer) by applying this plug-in to real process data. Among these, the results of the case study performed in a large, leading IT solutions company in Turkey are remarkable in demonstrating the opportunities for process improvement.
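One simple way such verification can work (the activities and constraints below are invented for illustration, not taken from the EPF Composer models in the study) is to express the defined process as ordering constraints and report each constraint a recorded trace breaks:

```python
# Sketch of verifying recorded bug-management traces against a defined process
# expressed as ordering constraints: each (a, b) means "a must precede b".
constraints = [("open", "assign"), ("assign", "resolve"), ("resolve", "close")]

def violations(trace, constraints):
    """Report each constraint (a, b) that the trace breaks."""
    broken = []
    for a, b in constraints:
        # A constraint applies only when b actually occurs in the trace.
        if b in trace and (a not in trace or trace.index(a) > trace.index(b)):
            broken.append((a, b))
    return broken

print(violations(["open", "assign", "resolve", "close"], constraints))  # []
print(violations(["open", "resolve", "close"], constraints))  # [('assign', 'resolve')]
```

An empty report means the recorded instance conforms to the specification; any non-empty report is objective evidence of a deviation, which is exactly the kind of output a verification plug-in can surface.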


2021 ◽  
pp. 258-270
Author(s):  
André Filipe Domingos Gomes ◽  
Ana Cristina Wanzeller Guedes de Lacerda ◽  
Joana Rita da Silva Fialho

2021 ◽  
pp. 27-43
Author(s):  
André Filipe Domingos Gomes ◽  
Ana Cristina Wanzeller Guedes de Lacerda ◽  
Joana Rita da Silva Fialho

Author(s):  
Hung Son Nguyen ◽  
Andrzej Jankowski ◽  
James F. Peters ◽  
Andrzej Skowron ◽  
Jaroslaw Stepaniuk ◽  
...  

The rapid expansion of the Internet has resulted not only in the ever-growing amount of data stored therein, but also in the burgeoning complexity of the concepts and phenomena pertaining to those data. This issue has been vividly compared by the renowned Stanford statistician J. H. Friedman (Friedman, 1997) to the advances in human mobility from the era of travel on foot to the era of jet travel. These essential changes in data have brought about new challenges in the discovery of data mining methods, especially for data that increasingly involve complex processes eluding classic modeling paradigms. “Hot” datasets such as biomedical, financial, or net user behavior data are just a few examples. Mining such temporal or stream data is a focal point in the agenda of many research centers and companies worldwide (see, e.g., (Roddick et al., 2001; Aggarwal, 2007)). In the data mining community, there is a rapidly growing interest in developing methods for process mining, e.g., for the discovery of structures of temporal processes from observed sample data. Research on process mining (e.g., (Unnikrishnan et al., 2006; de Medeiros et al., 2007; Wu, 2007; Borrett et al., 2007)) has been undertaken by many renowned centers worldwide. This research is also related to functional data analysis (see, e.g., (Ramsay & Silverman, 2002)), cognitive networks (see, e.g., (Papageorgiou & Stylios, 2008)), and dynamical system modeling, e.g., in biology (see, e.g., (Feng et al., 2007)). We outline an approach to the discovery of processes from data and domain knowledge. The proposed approach to the discovery of process models is based on rough-granular computing. In particular, we discuss how changes along trajectories of such processes can be discovered from sample data and domain knowledge.
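As a toy illustration of discovering change along a process trajectory (the traces and windows below are invented; the actual approach uses rough-granular computing, which this sketch does not attempt to reproduce), one can compare the directly-follows structure of two time windows of a trace stream and flag where the process drifted:

```python
# Sketch: detect a structural change between two windows of a trace stream by
# comparing their directly-follows pairs; the symmetric difference marks drift.
def dfg(traces):
    """Directly-follows pairs occurring in any trace of the window."""
    pairs = set()
    for t in traces:
        pairs.update(zip(t, t[1:]))
    return pairs

window1 = [["a", "b", "c"], ["a", "b", "c"]]
window2 = [["a", "c", "b"], ["a", "c", "b"]]  # later on, b and c are swapped
drift = dfg(window1) ^ dfg(window2)
print(sorted(drift))
```

A non-empty symmetric difference signals that the process structure changed between the windows, the simplest instance of the trajectory-change question the abstract raises.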

