Coping with interoperability in the development of a federated research infrastructure: achievements, challenges and recommendations from the JA-InfAct

2021 ◽  
Vol 79 (1) ◽  
Author(s):  
Juan González-García ◽  
Francisco Estupiñán-Romero ◽  
Carlos Tellería-Orriols ◽  
Javier González-Galindo ◽  
Luigi Palmieri ◽  
...  

Abstract Background Information for Action! is a Joint Action (JA-InfAct) on Health Information promoted by the EU Member States and funded by the European Commission within the Third EU Health Programme (2014–2020) to create and develop a solid, sustainable infrastructure on EU health information. The main objective of the JA-InfAct is to build an EU health information system infrastructure and strengthen its core elements by a) establishing a sustainable research infrastructure to support population health and health system performance assessment, b) enhancing the European health information and knowledge bases, as well as health information research capacities, to reduce health information inequalities, and c) supporting health information interoperability and innovative health information tools and data sources. Methods Following a federated analysis approach, JA-InfAct developed an ad hoc federated infrastructure based on distributing a well-defined process-mining analysis methodology, deployed at each participating partner's systems, to reproduce the analysis and pool the aggregated results from the analyses. To overcome the legal interoperability issues around international data sharing, data linkage and management, the partners (EU regions) participating in the case studies worked in coordination to query their real-world healthcare data sources in compliance with a common data model, executed the process-mining analysis pipeline on their own premises, and shared the results, enabling international comparison and the identification of best practices in stroke care. Results The ad hoc federated infrastructure was designed and built upon open source technologies, providing partners with the capacity to exploit their data and generate dashboards exploring the stroke care pathways. These dashboards can be shared among the participating partners or with a coordination hub without legal issues, enabling the comparative evaluation of caregiving activities for acute stroke across regions. Nonetheless, the approach faced a number of challenges that were solved during the project, as well as new challenges that would need to be addressed in the eventual case of scaling up. For that eventual case, 12 recommendations covering the different layers of interoperability have been provided. Conclusion The proposed approach, when successfully deployed as a federated analysis infrastructure such as the one developed within the JA-InfAct, can tackle all levels of interoperability requirements, from organisational to technical interoperability, supported by the close collaboration of the partners participating in the study. Any proposal for extension would require further thinking on how to deal with new interoperability challenges.
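The abstract stops short of implementation detail, but the federated pattern it describes (each partner runs the shared analysis pipeline on its own premises and only aggregated results travel to a coordination hub) can be illustrated with a minimal sketch. The partner/hub structure, the directly-follows counting and all names below are assumptions for illustration, not the JA-InfAct pipeline.

```python
# Minimal sketch of the federated pooling idea: each partner aggregates
# locally and only shares aggregate counts with the coordination hub.
# All names (PartnerResult, run_local_analysis, pool_results) are illustrative.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class PartnerResult:
    """Aggregate output produced on a partner's premises (no record-level data)."""
    region: str
    # Counts of directly-follows transitions in the stroke pathway,
    # e.g. ("ER admission", "hospital admission") -> 812.
    transition_counts: Counter = field(default_factory=Counter)


def run_local_analysis(region: str, episodes: list[list[str]]) -> PartnerResult:
    """Run the shared pipeline locally: count event transitions per care
    episode and return only the aggregated counts."""
    counts = Counter()
    for trace in episodes:
        counts.update(zip(trace, trace[1:]))
    return PartnerResult(region=region, transition_counts=counts)


def pool_results(results: list[PartnerResult]) -> dict[str, Counter]:
    """Coordination hub: pool per-region aggregates for cross-region comparison."""
    return {r.region: r.transition_counts for r in results}


if __name__ == "__main__":
    # Toy episodes; in practice these would come from each region's
    # real-world data mapped to the common data model.
    region_a = run_local_analysis(
        "Region A",
        [["ER admission", "hospital admission", "hospital discharge"],
         ["ER admission", "ER discharge"]],
    )
    region_b = run_local_analysis(
        "Region B",
        [["hospital admission", "hospital discharge"]],
    )
    for region, counts in pool_results([region_a, region_b]).items():
        print(region, dict(counts))
```

Because only aggregate transition counts leave each region, record-level data never crosses organisational boundaries, which is the property that sidesteps the legal interoperability issues described above.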

2021 ◽  
Author(s):  
Juan González-García ◽  
Francisco Estupiñán-Romero ◽  
Javier González-Galindo ◽  
Carlos Telleria-Orriols ◽  
Luigi Palmieri ◽  
...  

Abstract Background: Information for Action! is a Joint Action on Health Information (JA-InfAct) promoted by the EU Member States and funded by the European Commission within the Third EU Health Programme (2014-2020) to create and develop a solid, sustainable infrastructure on EU health information. The main objective of the JA-InfAct is to build an EU health information system infrastructure and strengthen its core elements by a) establishing a sustainable research infrastructure to support population health and health system performance assessment, b) strengthening the European health information and knowledge bases, as well as health information research capacities, to reduce health information inequalities, and c) supporting health information interoperability and innovative health information tools and data sources. Methods: Following a federated analysis approach, JA-InfAct developed an ad hoc federated infrastructure based on distributing a well-defined process-mining analysis methodology, deployed at each participating partner's systems, to reproduce the analysis and pool the aggregated results from the analyses. To overcome the legal interoperability issues around international data sharing, data linkage and management, the partners (EU regions) participating in the case studies worked in coordination to query their real-world healthcare data sources in compliance with a common data model, executed the process-mining analysis pipeline on their own premises, and shared the analysis results, enabling international comparison and the identification of best practices in stroke care. Results: The ad hoc federated analysis infrastructure was designed and built upon open source technologies, providing partners with the capacity to exploit their data and generate stroke care pathway analysis dashboards. These dashboards can be shared among the participating partners or with a coordination hub without legal issues, enabling the comparative evaluation of caregiving activities for acute stroke across regions. Nonetheless, the approach faced a number of challenges that were solved during the project, as well as new challenges that would need to be addressed in the eventual case of scaling up. For that eventual case, 12 recommendations on the different layers of interoperability have been provided. Conclusion: The JA-InfAct federated analysis infrastructure has been able to cope with all levels of interoperability: legal, organisational, semantic and technological. Any proposal for extension would require upgrades to deal with new interoperability challenges such as the large-scale application of GDPR principles, developing high-profile resource capacity across Europe, developing a common strategy for data quality assurance, and preparedness for high-performance computing and federated learning.


2020 ◽  
Vol 30 (Supplement_5) ◽  
Author(s):  
A Faragalli ◽  
A Bucci ◽  
R Gesuita ◽  
B Unim ◽  
L Spazzafumo ◽  
...  

Abstract Background The Institute for Health Sciences in Aragon developed a process-mining-based algorithm, deployed using a container solution, to describe the Stroke Care Pathway in a way that is reproducible across different Health Systems. Our aim is to apply the proposed solution to administrative data from an Italian region to explore interoperability in the case of stroke care; the study was developed in the context of the Joint Action on Health Information 'InfAct' funded by the European Commission. Methods The target population included 1,538,055 residents in the Marche Region who received National Health System (NHS) assistance in 2017. The collected information comprised sex, date of birth and death, and residence from the NHS Beneficiaries database; admission and discharge date/time, hospital code, admission type, discharge status, and up to 6 diagnoses from the Hospital Discharge database; and admission date/time, urgent care facility code, discharge status, and diagnosis from the Emergency Care database. Patients without hospital/urgent care events were excluded from the NHS Beneficiaries database. The outputs obtained were care pathway traces, a pathway map with frequency information, timing across the pathway, and timeline plots with time information. Results Data included 17,752 patients, 10,390 hospital events, and 18,329 urgent care events. Results were represented graphically and were easy to read and disseminate: 500 episodes of stroke were detected; 66% of patients were hospitalised after emergency room (ER) admission, 19% were hospitalised directly, and 10% were discharged after ER admission. The most frequent access and exit points of the pathway were ER admission and hospital discharge. Conclusions The application of the stroke case to Italian data demonstrates the feasibility and usefulness of interoperability in analysing healthcare administrative data for public health purposes. A European health information system is crucial to build capacity among researchers across Europe and improve health data for Health Monitoring and Health System Performance Assessment. Key messages Sharing a process-mining methodology deployed using a container solution is a useful strategy to describe care pathways and ensures the reliability of results across European Health Systems. Developing and testing tools to encourage interoperability among researchers across Europe is essential for public health policymakers in Europe.
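The container-based pipeline itself is not described in the abstract, but its core step (ordering administrative events per patient into pathway traces and summarising access and exit points) can be sketched as follows. The column names and toy data are assumptions, not the Marche Region dataset or the actual InfAct code.

```python
# Illustrative sketch (not the InfAct container pipeline): build per-patient
# care pathway traces from administrative event tables and summarise the most
# frequent access and exit points. Column names are assumptions.
import pandas as pd

# Toy event data; real inputs would come from the Hospital Discharge and
# Emergency Care databases mapped to a common data model.
events = pd.DataFrame({
    "patient_id": [1, 1, 2, 3, 3],
    "event":      ["ER admission", "hospital admission",
                   "ER admission",
                   "hospital admission", "hospital discharge"],
    "timestamp":  pd.to_datetime([
        "2017-03-01 10:00", "2017-03-01 14:30",
        "2017-05-12 22:10",
        "2017-07-02 08:00", "2017-07-10 12:00",
    ]),
})

# Order events within each patient to obtain pathway traces.
events = events.sort_values(["patient_id", "timestamp"])
traces = events.groupby("patient_id")["event"].apply(list)

# Most frequent access (first event) and exit (last event) points.
access_points = traces.apply(lambda t: t[0]).value_counts()
exit_points = traces.apply(lambda t: t[-1]).value_counts()

print(traces.to_dict())
print("Access points:\n", access_points)
print("Exit points:\n", exit_points)
```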


2007 ◽  
Vol 46 (04) ◽  
pp. 476-483 ◽  
Author(s):  
M. Marschollek ◽  
K.-H. Wolf ◽  
R. Haux ◽  
O. J. Bott

Summary Objectives: To analyze the utilization of sensor technology in telemonitoring and home care and to discuss concepts and challenges of sensor-enhanced regional health information systems (rHIS). Methods: The study is based upon experience in sensor-based telemedicine and rHIS projects, and on an analysis of HIS-related journal publications from 2003 to 2005 conducted in the context of publishing the IMIA Yearbook of Medical Informatics. Results: Health-related parameters that are subject to sensor-based measurement in home care and telemonitoring are identified. Publications related to telemonitoring, home care and smart houses are analyzed concerning scope and utilization of sensor technology. Current approaches for integrating sensor technology in rHIS based on a corresponding eHealth infrastructure are identified. Based on a coarse architecture of home care and telemonitoring systems, ten challenges for sensor-enhanced rHIS are identified and discussed: integration of home and health telematic platforms towards a sensor-enhanced telematic platform, transmission rate guarantees, ad hoc connectivity, cascading data analysis, remote configuration, message and alert logistics, sophisticated user interfaces, unobtrusiveness, data safety and security, and electronic health record integration. Conclusions: Utilization of sensor technology in health care is an active field of research. Currently, few research projects and standardization initiatives focus on general architectural considerations towards suitable telematic platforms for establishing sensor-enhanced rHIS. Further research, followed by corresponding standardization, is needed. Part 2 of this paper will present experiences with a research prototype for a sensor-enhanced rHIS telematic platform.


Author(s):  
Heiko Paulheim ◽  
Christian Bizer

Linked Data on the Web is either created from structured data sources (such as relational databases), from semi-structured sources (such as Wikipedia), or from unstructured sources (such as text). In the latter two cases, the generated Linked Data will likely be noisy and incomplete. In this paper, we present two algorithms that exploit statistical distributions of properties and types to enhance the quality of incomplete and noisy Linked Data sets: SDType adds missing type statements, and SDValidate identifies faulty statements. Neither of the algorithms uses external knowledge, i.e., they operate only on the data itself. We evaluate the algorithms on the DBpedia and NELL knowledge bases, showing that they are both accurate and scalable. Both algorithms have been used for building the DBpedia 3.9 release: with SDType, 3.4 million missing type statements have been added, while using SDValidate, 13,000 erroneous RDF statements have been removed from the knowledge base.
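The abstract only names the algorithms, but the statistical intuition behind SDType (the type distribution observed for a property's objects acts as a weighted vote for the types of resources lacking type statements) can be sketched roughly as follows. This is a simplification for illustration, not the published algorithm; the data structures and the simple normalised voting are assumptions.

```python
# Rough sketch of the statistical type-inference idea behind SDType:
# the observed distribution of object types for each property acts as a vote
# for the types of resources that appear as that property's object.
# The real algorithm uses a more refined weighting; this is a simplification.
from collections import Counter, defaultdict

# Toy triples (subject, property, object) and known types for some resources.
triples = [
    ("Berlin", "capitalOf", "Germany"),
    ("Paris", "capitalOf", "France"),
    ("Rome", "capitalOf", "Italy"),
    ("Einstein", "bornIn", "Germany"),
]
known_types = {
    "Germany": {"Country"},
    "France": {"Country"},
    "Berlin": {"City"},
    "Einstein": {"Person"},
}

# 1. Learn, per property, the distribution of types of its objects.
object_type_dist: dict[str, Counter] = defaultdict(Counter)
for _, prop, obj in triples:
    for t in known_types.get(obj, ()):
        object_type_dist[prop][t] += 1

# 2. Vote: a resource with missing types inherits the normalised type
#    distribution of every property it appears as the object of.
def infer_types(resource: str) -> Counter:
    votes: Counter = Counter()
    for _, prop, obj in triples:
        if obj != resource:
            continue
        dist = object_type_dist[prop]
        total = sum(dist.values())
        for t, n in dist.items():
            votes[t] += n / total
    return votes

# "Italy" has no type statement; the capitalOf distribution suggests Country.
print(infer_types("Italy").most_common())
```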


2016 ◽  
Vol 31 (2) ◽  
pp. 97-123 ◽  
Author(s):  
Alfred Krzywicki ◽  
Wayne Wobcke ◽  
Michael Bain ◽  
John Calvo Martinez ◽  
Paul Compton

Abstract Data mining techniques for extracting knowledge from text have been applied extensively to applications including question answering, document summarisation, event extraction and trend monitoring. However, current methods have mainly been tested on small-scale customised data sets for specific purposes. The availability of large volumes of data and high-velocity data streams (such as social media feeds) motivates the need to automatically extract knowledge from such data sources and to generalise existing approaches to more practical applications. Recently, several architectures have been proposed for what we call knowledge mining: integrating data mining for knowledge extraction from unstructured text (possibly making use of a knowledge base), and at the same time, consistently incorporating this new information into the knowledge base. After describing a number of existing knowledge mining systems, we review the state-of-the-art literature on both current text mining methods (emphasising stream mining) and techniques for the construction and maintenance of knowledge bases. In particular, we focus on mining entities and relations from unstructured text data sources, entity disambiguation, entity linking and question answering. We conclude by highlighting general trends in knowledge mining research and identifying problems that require further research to enable more extensive use of knowledge bases.


2019 ◽  
Vol 21 (6) ◽  
pp. 1937-1953 ◽  
Author(s):  
Jussi Paananen ◽  
Vittorio Fortino

Abstract The drug discovery process starts with the identification of a disease-modifying target. This critical step traditionally begins with manual investigation of scientific literature and biomedical databases to gather evidence linking a molecular target to disease, and to evaluate the efficacy, safety and commercial potential of the target. The high throughput and affordability of current omics technologies, allowing quantitative measurements of many putative targets (e.g. DNA, RNA, protein, metabolite), have exponentially increased the volume of scientific data available for this arduous task. Therefore, computational platforms identifying and ranking disease-relevant targets from existing biomedical data sources, including omics databases, are needed. To date, more than 30 drug target discovery (DTD) platforms exist. They provide information-rich databases and graphical user interfaces to help scientists identify putative targets and pre-evaluate their therapeutic efficacy and potential side effects. Here we survey and compare a set of popular DTD platforms that utilize multiple data sources and omics-driven knowledge bases (either directly or indirectly) for identifying drug targets. We also provide a description of omics technologies and related data repositories which are important for DTD tasks.


1991 ◽  
Vol 56 (1) ◽  
pp. 276-294 ◽  
Author(s):  
Arnon Avron

Many-valued logic in general, and 3-valued logic in particular, is an old subject which had its beginning in the work of Łukasiewicz [Łuk]. Recently there has been a revived interest in this topic, both for its own sake (see, for example, [Ho]), and also because of its potential applications in several areas of computer science, such as proving correctness of programs [Jo], knowledge bases [CP] and artificial intelligence [Tu]. There are, however, a huge number of 3-valued systems which logicians have studied throughout the years. The motivation behind them and their properties are not always clear, and their proof theory is frequently not well developed. This state of affairs makes both the use of 3-valued logics and fruitful research on them rather difficult. Our first goal in this work is, accordingly, to identify and characterize a class of 3-valued logics which might be called natural. For this we use the general framework for characterizing and investigating logics which we have developed in [Av1]. Not many 3-valued logics appear as natural within this framework, but it turns out that those that do include some of the best known ones. These include the 3-valued logics of Łukasiewicz, Kleene and Sobociński, the logic LPF used in the VDM project, the logic RM3 from the relevance family and the paraconsistent 3-valued logic of [dCA]. Our presentation provides justifications for the introduction of certain connectives in these logics which are often regarded as ad hoc. It also shows that they are all closely related to each other. It is shown, for example, that Łukasiewicz 3-valued logic and RM3 (the strongest logic in the family of relevance logics) are in a strong sense dual to each other, and that both are derivable by the same general construction from, respectively, Kleene 3-valued logic and the 3-valued paraconsistent logic.
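As a concrete illustration of how closely two of the systems named above are related (this is standard background, not taken from the abstract), the strong Kleene and Łukasiewicz 3-valued connectives over the values {0, 1/2, 1} agree on negation, conjunction and disjunction and differ only in the implication:

```latex
% Strong Kleene (K3) and Łukasiewicz (L3) connectives over {0, 1/2, 1}
% agree on negation, conjunction and disjunction:
\[
  v(\neg A) = 1 - v(A), \qquad
  v(A \wedge B) = \min\bigl(v(A), v(B)\bigr), \qquad
  v(A \vee B) = \max\bigl(v(A), v(B)\bigr),
\]
% and differ only in the implication:
\[
  v_{K_3}(A \to B) = \max\bigl(1 - v(A),\, v(B)\bigr), \qquad
  v_{L_3}(A \to B) = \min\bigl(1,\, 1 - v(A) + v(B)\bigr),
\]
% so that for v(A) = v(B) = 1/2, Kleene yields 1/2 while Łukasiewicz yields 1,
% which is precisely where the two logics part ways.
```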


Diacronia ◽  
2021 ◽  
Author(s):  
Anna-Maria Totomanova

The paper traces the history of the Histdict system, which became the basis for the new Electronic Research Infrastructure for Bulgarian Medieval Written Heritage, included in the National Research Roadmap at the end of 2020. Through this act the state declares its support for our resources, which have so far been created and sustained through project funding, and it is a significant recognition of our efforts and achievements. This act coincided with two other events: the inclusion of RESILIENCE (Research Infrastructure on Religious Studies), in which Histdict takes part, in the European Research Infrastructures Roadmap, and the start of the updating and upgrading of the system. Given this situation, the Infrastructure now faces new challenges: not only the successful improvement of the services it offers, but also the inclusion of the Orthodox Cultural Heritage in European research exchange, which will promote and popularize the history and culture of Southeastern Europe.


Author(s):  
Zhaohong Sun

In recent years, a number of new challenges have been observed in the application of matching theory. One of the most pressing problems concerns how to allocate refugees to hosts safely and in a timely manner. Currently, this placement is implemented on an ad hoc basis where the preferences of both refugees and hosts are not taken into account. Another important realization is that real-life matching markets are often subject to various distributional constraints. For example, there has been increased attention to school choice models that take account of affirmative action and diversity concerns. The objective of this research is to design efficient algorithms while satisfying desirable properties for these new emerging problems.
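The abstract names the problem class without giving an algorithm, but the usual starting point for preference-based allocation with capacities, proposer-side deferred acceptance, can be sketched as below. The refugee/host framing, the toy data and the absence of distributional constraints are simplifying assumptions for illustration, not the research described above.

```python
# Illustrative sketch of preference-based allocation with capacities
# (refugee-proposing deferred acceptance). Names and data are hypothetical;
# real systems add distributional constraints on top of this core loop.
def deferred_acceptance(refugee_prefs, host_prefs, capacity):
    """refugee_prefs: refugee -> ordered list of hosts.
    host_prefs: host -> ordered list of refugees (earlier = higher priority).
    capacity: host -> number of places."""
    rank = {h: {r: i for i, r in enumerate(p)} for h, p in host_prefs.items()}
    next_choice = {r: 0 for r in refugee_prefs}   # next host to propose to
    assigned = {h: [] for h in host_prefs}        # tentative assignments
    free = list(refugee_prefs)

    while free:
        r = free.pop()
        prefs = refugee_prefs[r]
        if next_choice[r] >= len(prefs):
            continue                              # r has exhausted all hosts
        h = prefs[next_choice[r]]
        next_choice[r] += 1
        assigned[h].append(r)
        assigned[h].sort(key=lambda x: rank[h].get(x, float("inf")))
        if len(assigned[h]) > capacity[h]:
            free.append(assigned[h].pop())        # reject lowest-priority overflow
    return assigned


refugee_prefs = {"r1": ["h1", "h2"], "r2": ["h1"], "r3": ["h1", "h2"]}
host_prefs = {"h1": ["r2", "r1", "r3"], "h2": ["r1", "r3"]}
capacity = {"h1": 1, "h2": 2}
print(deferred_acceptance(refugee_prefs, host_prefs, capacity))
```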


2020 ◽  
Vol 26 (4) ◽  
Author(s):  
Leonardo Scharth Loureiro Silva ◽  
Silvana Philippi Camboim

Abstract: Cartographic data represents the main and most basic component of a Spatial Data Infrastructure (SDI). The SDI, in turn, has the role of supporting the most diverse political and economic actions with strategic information for the management and planning of public actions. Thus, this work aims, initially, to present an overview of cartography in Brazil through an analysis of the evolution of topographic mapping coverage in the country. For each of the main scales used, a coverage map was created. The analyses cover three different periods (until 1997, between 1998 and 2007, and after 2008) in order to assess how, and to what degree, the creation of the Brazilian National SDI (in 2008) had an impact on mapping production in the country. Given the current panorama, as a final objective, this paper presents proposals to leverage the coverage of this reference data. One of them is the use of new data sources such as Volunteered Geographic Information, especially in areas with outdated mapping or no mapping at all, as has already been done in some countries. Another proposal is to share the responsibility for mapping through partnerships with other levels of government, which would result in the decentralization and optimization of cartographic production.

