The German Corona Consensus Dataset (GECCO): A standardized dataset for COVID-19 research in university medicine and beyond

2020 ◽  
Author(s):  
Julian Sass ◽  
Alexander Bartschke ◽  
Moritz Lehne ◽  
Andrea Essenwanger ◽  
Eugenia Rinaldi ◽  
...  

Abstract Background: The current COVID-19 pandemic has led to a surge of research activity. While this research provides important insights, the multitude of studies results in an increasing fragmentation of information. To ensure comparability across projects and institutions, standard datasets are needed. Here, we introduce the “German Corona Consensus Dataset” (GECCO), a uniform dataset that uses international terminologies and health IT standards to improve interoperability of COVID-19 data, in particular for university medicine. Methods: Based on previous work (e.g., the ISARIC-WHO COVID-19 case report form) and in coordination with experts from university hospitals, professional associations and research initiatives, data elements relevant for COVID-19 research were collected, prioritized and consolidated into a compact core dataset. The dataset was mapped to international terminologies, and the Fast Healthcare Interoperability Resources (FHIR) standard was used to define interoperable, machine-readable data formats. Results: A core dataset consisting of 81 data elements with 281 response options was defined, including information about, for example, demography, medical history, symptoms, therapy, medications or laboratory values of COVID-19 patients. Data elements and response options were mapped to SNOMED CT, LOINC, UCUM, ICD-10-GM and ATC, and FHIR profiles for interoperable data exchange were defined. Conclusion: GECCO provides a compact, interoperable dataset that can help to make COVID-19 research data more comparable across studies and institutions. The dataset will be further refined in the future by adding domain-specific extension modules for more specialized use cases.
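To illustrate the kind of machine-readable representation such FHIR profiles enable, here is a minimal, hypothetical sketch of a COVID-19 laboratory value encoded as a FHIR Observation with LOINC and UCUM codes. The profile URL and the choice of analyte are illustrative assumptions, not taken from the published GECCO profiles.

```python
import json

# Hypothetical sketch: a laboratory value as a FHIR R4 Observation, coded with
# LOINC (analyte) and UCUM (unit). The profile URL is a placeholder and does
# not point to the actual GECCO StructureDefinition.
observation = {
    "resourceType": "Observation",
    "meta": {"profile": ["https://example.org/fhir/StructureDefinition/covid19-lab-value"]},
    "status": "final",
    "category": [{"coding": [{
        "system": "http://terminology.hl7.org/CodeSystem/observation-category",
        "code": "laboratory"
    }]}],
    "code": {"coding": [{
        "system": "http://loinc.org",
        "code": "1988-5",  # C reactive protein [Mass/volume] in Serum or Plasma
        "display": "C reactive protein [Mass/volume] in Serum or Plasma"
    }]},
    "subject": {"reference": "Patient/example"},
    "effectiveDateTime": "2020-04-01",
    "valueQuantity": {
        "value": 42.0,
        "unit": "mg/L",
        "system": "http://unitsofmeasure.org",  # UCUM
        "code": "mg/L"
    }
}

print(json.dumps(observation, indent=2))
```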

2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Julian Sass ◽  
Alexander Bartschke ◽  
Moritz Lehne ◽  
Andrea Essenwanger ◽  
Eugenia Rinaldi ◽  
...  

Abstract Background: The current COVID-19 pandemic has led to a surge of research activity. While this research provides important insights, the multitude of studies results in an increasing fragmentation of information. To ensure comparability across projects and institutions, standard datasets are needed. Here, we introduce the “German Corona Consensus Dataset” (GECCO), a uniform dataset that uses international terminologies and health IT standards to improve interoperability of COVID-19 data, in particular for university medicine. Methods: Based on previous work (e.g., the ISARIC-WHO COVID-19 case report form) and in coordination with experts from university hospitals, professional associations and research initiatives, data elements relevant for COVID-19 research were collected, prioritized and consolidated into a compact core dataset. The dataset was mapped to international terminologies, and the Fast Healthcare Interoperability Resources (FHIR) standard was used to define interoperable, machine-readable data formats. Results: A core dataset consisting of 81 data elements with 281 response options was defined, including information about, for example, demography, medical history, symptoms, therapy, medications or laboratory values of COVID-19 patients. Data elements and response options were mapped to SNOMED CT, LOINC, UCUM, ICD-10-GM and ATC, and FHIR profiles for interoperable data exchange were defined. Conclusion: GECCO provides a compact, interoperable dataset that can help to make COVID-19 research data more comparable across studies and institutions. The dataset will be further refined in the future by adding domain-specific extension modules for more specialized use cases.


2020 ◽  
Author(s):  
Julian Sass ◽  
Alexander Bartschke ◽  
Moritz Lehne ◽  
Andrea Essenwanger ◽  
Eugenia Rinaldi ◽  
...  

Background: The current COVID-19 pandemic has led to a surge of research activity. While this research provides important insights, the multitude of studies results in an increasing fragmentation of information. To ensure comparability across projects and institutions, standard datasets are needed. Here, we introduce the "German Corona Consensus Dataset" (GECCO), a uniform dataset that uses international terminologies and health IT standards to improve interoperability of COVID-19 data. Methods: Based on previous work (e.g., the ISARIC-WHO COVID-19 case report form) and in coordination with experts from university hospitals, professional associations and research initiatives, data elements relevant for COVID-19 research were collected, prioritized and consolidated into a compact core dataset. The dataset was mapped to international terminologies, and the Fast Healthcare Interoperability Resources (FHIR) standard was used to define interoperable, machine-readable data formats. Results: A core dataset consisting of 81 data elements with 281 response options was defined, including information about, for example, demography, anamnesis, symptoms, therapy, medications or laboratory values of COVID-19 patients. Data elements and response options were mapped to SNOMED CT, LOINC, UCUM, ICD-10-GM and ATC, and FHIR profiles for interoperable data exchange were defined. Conclusion: GECCO provides a compact, interoperable dataset that can help to make COVID-19 research data more comparable across studies and institutions. The dataset will be further refined in the future by adding domain-specific extension modules for more specialized use cases.


2020 ◽  
Author(s):  
Sylvia Thun ◽  
Julian Sass ◽  
Alexander Bartschke ◽  
Moritz Lehne ◽  
Andrea Essenwanger ◽  
...  

Abstract Background: The current COVID-19 pandemic has led to a surge of research activity. While this research provides important insights, the multitude of studies results in an increasing fragmentation of information. To ensure comparability across projects and institutions, standard datasets are needed. Here, we introduce the “German Corona Consensus Dataset” (GECCO), a uniform dataset that uses international terminologies and health IT standards to improve interoperability of COVID-19 data. Methods: Based on previous work (e.g., the ISARIC-WHO COVID-19 case report form) and in coordination with experts from university hospitals, professional associations and research initiatives, data elements relevant for COVID-19 research were collected, prioritized and consolidated into a compact core dataset. The dataset was mapped to international terminologies, and the Fast Healthcare Interoperability Resources (FHIR) standard was used to define interoperable, machine-readable data formats. Results: A core dataset consisting of 81 data elements with 281 response options was defined, including information about, for example, demography, anamnesis, symptoms, therapy, medications or laboratory values of COVID-19 patients. Data elements and response options were mapped to SNOMED CT, LOINC, UCUM, ICD-10-GM and ATC, and FHIR profiles for interoperable data exchange were defined. Conclusion: GECCO provides a compact, interoperable dataset that can help to make COVID-19 research data more comparable across studies and institutions. The dataset will be further refined in the future by adding domain-specific extension modules for more specialized use cases.


Author(s):  
Eugenia Rinaldi ◽  
Sylvia Thun

HiGHmed is a German consortium in which eight university hospitals have agreed to cross-institutional data exchange through novel medical informatics solutions. The HiGHmed Use Case Infection Control group has modelled a set of infection-related data in the openEHR format. In order to establish interoperability with the other German consortia belonging to the same national initiative, we mapped the openEHR information to the Fast Healthcare Interoperability Resources (FHIR) format recommended within the initiative. FHIR enables fast exchange of data thanks to the discrete and independent data elements into which information is organized. Furthermore, to explore the possibility of maximizing analysis capabilities for our dataset, we subsequently mapped the FHIR elements to the Observational Medical Outcomes Partnership Common Data Model (OMOP CDM). The OMOP data model is designed to support research that identifies and evaluates associations between interventions and the outcomes caused by these interventions. Mapping across standards makes it possible to exploit the strengths of each standard while establishing and/or maintaining interoperability. This article provides an overview of our experience in mapping infection-control-related data across three different standards: openEHR, FHIR and OMOP CDM.
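As a rough illustration of what mapping the same clinical fact across standards can look like, the sketch below transforms a simplified FHIR Condition coded in SNOMED CT into a row shaped like the OMOP CDM CONDITION_OCCURRENCE table. The concept-ID lookup is stubbed and all values are illustrative; this is not the consortium's actual templates or ETL.

```python
from datetime import date

# Simplified FHIR Condition (assumed to have been derived from an openEHR
# composition); only the fields needed for this illustration are shown.
fhir_condition = {
    "resourceType": "Condition",
    "subject": {"reference": "Patient/123"},
    "code": {"coding": [{
        "system": "http://snomed.info/sct",
        "code": "840539006",  # Disease caused by SARS-CoV-2 (SNOMED CT)
        "display": "COVID-19"
    }]},
    "onsetDateTime": "2020-03-15",
}

def lookup_concept_id(system: str, code: str) -> int:
    # Stub: a real ETL would resolve the source code against the OMOP
    # vocabulary (CONCEPT / CONCEPT_RELATIONSHIP tables). The concept_id
    # below is illustrative.
    demo_vocabulary = {("http://snomed.info/sct", "840539006"): 37311061}
    return demo_vocabulary.get((system, code), 0)  # 0 = no matching concept

def fhir_condition_to_omop(condition: dict, person_id: int) -> dict:
    """Map a FHIR Condition to a simplified OMOP condition_occurrence row."""
    coding = condition["code"]["coding"][0]
    return {
        "person_id": person_id,
        "condition_concept_id": lookup_concept_id(coding["system"], coding["code"]),
        "condition_start_date": date.fromisoformat(condition["onsetDateTime"]),
        "condition_source_value": coding["code"],
    }

print(fhir_condition_to_omop(fhir_condition, person_id=123))
```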


2021 ◽  
Vol 7 (4) ◽  
pp. 70
Author(s):  
David Jones ◽  
Jianyin Shao ◽  
Heidi Wallis ◽  
Cody Johansen ◽  
Kim Hart ◽  
...  

As newborn screening programs transition from paper-based data exchange toward automated, electronic methods, significant data exchange challenges must be overcome. This article outlines a data model that maps newborn screening data elements associated with patient demographic information, birthing facilities, laboratories, result reporting, and follow-up care to the LOINC, SNOMED CT, ICD-10-CM, and HL7 healthcare standards. The described framework lays the foundation for the implementation of standardized electronic data exchange across newborn screening programs, leading to greater data interoperability. The use of this model can accelerate the implementation of electronic data exchange between healthcare providers and newborn screening programs, which would ultimately improve health outcomes for all newborns and standardize data exchange across programs.
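A hedged sketch of what such a data-element-to-terminology mapping could look like as a simple lookup structure is shown below. The element names are illustrative and, apart from the birth-weight LOINC code, the code values are placeholders rather than codes taken from the published model.

```python
# Illustrative mapping of newborn-screening data elements to standard
# terminologies. Only the birth-weight LOINC code is a real code; the other
# code values are placeholders to be filled in from the published data model.
ELEMENT_MAP = {
    "birth_weight":        {"standard": "LOINC",     "code": "8339-4"},  # Birth weight Measured
    "gestational_age":     {"standard": "LOINC",     "code": "<LOINC code>"},
    "screened_condition":  {"standard": "SNOMED CT", "code": "<SNOMED CT code>"},
    "confirmed_diagnosis": {"standard": "ICD-10-CM", "code": "<ICD-10-CM code>"},
}

def to_coded_element(name: str, value) -> dict:
    """Attach the terminology binding to a raw value so it can be exchanged,
    e.g. inside an HL7 FHIR Observation or an HL7 v2 OBX segment."""
    return {"element": name, "value": value, **ELEMENT_MAP[name]}

print(to_coded_element("birth_weight", 3.2))  # kilograms, for illustration
```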


Author(s):  
Aaron Williamon ◽  
Jane Ginsborg ◽  
Rosie Perkins ◽  
George Waddell

Chapter 3 of Performing Music Research explores the guiding principles on which ethical codes are based. These can be summarized as follows: people should not be harmed, nor their rights and dignity compromised, and research must be of scientific value and carried out with integrity. These issues must be considered and addressed in the earliest stages of research and in light of the potential benefits of the findings of the research to society. The chapter reflects on the philosophical underpinnings of ethical research and outlines the process whereby ethical approval is typically sought and obtained, with reference to a selection of codes of research ethics published by professional associations and regulatory bodies that guide and inform research activity.


2018 ◽  
Vol 28 (1) ◽  
pp. 39-47 ◽  
Author(s):  
Karen A Monsen ◽  
Joyce M Rudenick ◽  
Nicole Kapinos ◽  
Kathryn Warmbold ◽  
Siobhan K McMahon ◽  
...  

Background: Electronic health records (EHRs) are a promising new source of population health data that may improve health outcomes. However, little is known about the extent to which social and behavioral determinants of health (SBDH) are currently documented in EHRs, including how SBDH are documented, and by whom. Standardized nursing terminologies have been developed to assess and document SBDH. Objective: We examined the documentation of SBDH in EHRs with and without standardized nursing terminologies. Methods: We carried out a review of the literature for SBDH phrases organized by topic, which were used for analyses. Key informant interviews were conducted regarding SBDH phrases. Results: In nine EHRs (six acute care, three community care), 107 SBDH phrases were documented using free text, structured text, and standardized terminologies in diverse screens and by multiple clinicians, admitting personnel, and other staff. SBDH phrases were documented using one of three standardized terminologies (N = average number of phrases per terminology per EHR): ICD-9/10 (N = 1); SNOMED CT (N = 1); Omaha System (N = 79). Most often, standardized terminology data were documented by nurses or other clinical staff versus receptionists or other non-clinical personnel. Documentation of ‘unknown’ differed significantly between EHRs with and without the Omaha System (mean = 26.0 (standard deviation (SD) = 8.7) versus mean = 74.5 (SD = 16.5)) (p = .005). SBDH documentation in EHRs differed based on the presence of a nursing terminology. Conclusions: The Omaha System enabled a more comprehensive, holistic assessment and documentation of interoperable SBDH data. Further research is needed to determine the SBDH data elements that are needed across settings and the uses of SBDH data in practice, and to examine patient perspectives related to SBDH assessments.


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e18074-e18074
Author(s):  
Anna E. Schorer ◽  
Jacob Koskimaki ◽  
Robert S. Miller ◽  
Wendy S. Rubinstein ◽  
Elmer Victor Bernstam ◽  
...  

Background: Physician reimbursement for care delivered to Medicare beneficiaries fundamentally changed with the 2015 MACRA legislation, which requires eligible clinicians to report quality measures in the Merit-Based Incentive Payment System (MIPS). To determine whether structured data in electronic health records (EHRs) were adequate to report MIPS results, EHR data ingested by ASCO’s CancerLinQ (CLQ) were analyzed. Methods: Nineteen MIPS measures specified for medical oncology, including 8 shared with other specialties, were retrieved from qpp.cms.gov and systematically evaluated to determine the data elements necessary to satisfy each measure. The existence of corresponding data fields and the completion of these fields with clinical data were analyzed according to EHR implementation in de-identified and aggregated CLQ data. Results: Five clinician informaticists reviewed the 19 oncology MIPS measures and identified a consensus list of 52 discrete EHR data elements (DEs) that would be needed. CLQ-processed data from 4 commercial EHR systems implemented at 47 CLQ practices showed structured data fields for 84% (43 of 52) of the DEs, but fewer than half (46%) of these fields were ever populated, and only 32% (17 of 52) of DEs were recorded for > 20% of cases. Only 3-5 of 19 MIPS measures could be reliably reported by most practices in this sample set, based on data element availability. There were minimal differences among the EHRs in their ability to encode MIPS DEs. The elements most likely to be encoded were those for registration (birthdate, gender), billing (diagnosis, medications), vital signs and smoking status, while those seldom or never encoded related to care plans (tobacco, alcohol, pain management). Other DEs rarely encoded were patient events occurring outside the oncology practice (receipt/completion of consultations, dates of hospice enrollment and death), which would depend on data exchange between work units and practice entities or, more likely, re-entry by oncology practices. Conclusions: Only a minority of the DEs required to satisfy MIPS criteria are available as discrete data fields in current EHRs, limiting automated reporting efforts. Improved data quality and completeness are needed to satisfy mandated reporting.
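The completeness analysis described above can be pictured with a small, invented example: for each required data element, check whether a structured field exists at all and in what fraction of cases it is populated. The element names, records, and thresholds here are illustrative only, not the study's actual list.

```python
# Toy illustration of the data-element completeness check: which elements have
# a structured field at all, and how often is that field actually populated?
cases = [
    {"birth_date": "1950-01-01", "diagnosis": "C50.9", "pain_plan": None},
    {"birth_date": "1962-07-14", "diagnosis": "C61",   "pain_plan": None},
    {"birth_date": None,         "diagnosis": "C34.1", "pain_plan": "documented"},
]
required_elements = ["birth_date", "diagnosis", "pain_plan", "hospice_enrollment"]

for de in required_elements:
    has_field = any(de in case for case in cases)                             # structured field exists
    populated = sum(case.get(de) is not None for case in cases) / len(cases)  # population rate
    print(f"{de:20s} field exists: {has_field!s:5s} populated: {populated:.0%}")
```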


2010 ◽  
Vol 49 (02) ◽  
pp. 186-195 ◽  
Author(s):  
P. Hanzlícek ◽  
P. Precková ◽  
A. Ríha ◽  
M. Dioszegi ◽  
L. Seidl ◽  
...  

Summary Objectives: Data interchange in the Czech healthcare environment is mostly based on national standards. This paper describes the use of international standards and nomenclatures for building a pilot semantic interoperability platform (SIP) that would serve to exchange information among electronic health record systems (EHR-Ss) in Czech healthcare. The work was performed within a national research project of the “Information Society” program. Methods: At the beginning of the project, a set of requirements that the SIP should meet was formulated. Several communication standards (openEHR, HL7 v3, DICOM) were analyzed, and HL7 v3 was selected for exchanging health records in our solution. Two systems were included in our pilot environment: WinMedicalc 2000 and ADAMEKj EHR. Results: HL7-based local information models were created to describe the information content of both systems. The concepts from our original information models were mapped to coding systems supported by HL7 (LOINC, SNOMED CT and ICD-10), and data exchange via HL7 v3 messages was implemented and tested by querying patient administration data. As a gateway between the local EHR systems and the HL7 message-based infrastructure, a configurable HL7 Broker was developed. Conclusions: A nationwide implementation of a full-scale SIP based on HL7 v3 would include adopting and translating appropriate international coding systems and nomenclatures, and developing implementation guidelines to facilitate the migration from national standards to international ones. Our pilot study showed that our approach is feasible, but it would demand a huge effort to fully integrate the Czech healthcare system into the European e-health context.
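The gateway role of such an HL7 Broker can be sketched, very loosely, as a code-translation step: locally coded items from an EHR system are mapped to international codes (here LOINC) before being packaged into HL7 v3 messages. The local code and the mapping table below are invented for illustration; the actual broker is configurable and handles full HL7 v3 message construction.

```python
# Loose sketch of the translation step a broker/gateway performs before
# building an HL7 v3 message: local (national-standard) codes are mapped to
# international ones. The local code and the table are invented.
LOCAL_TO_LOINC = {
    "GLU_S": ("2345-7", "Glucose [Mass/volume] in Serum or Plasma"),
}

def translate(local_code: str) -> dict:
    mapping = LOCAL_TO_LOINC.get(local_code)
    if mapping is None:
        # Unmapped items would be queued for manual terminology mapping.
        return {"system": "local", "code": local_code}
    code, display = mapping
    return {"system": "http://loinc.org", "code": code, "display": display}

print(translate("GLU_S"))
```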

