Hypocrisy Around Medical Patient Data: Issues of Access for Biomedical Research, Data Quality, Usefulness for the Purpose and Omics Data as Game Changer

2019 ◽  
Vol 11 (2) ◽  
pp. 189-207
Author(s):  
Erwin Tantoso ◽  
Wing-Cheong Wong ◽  
Wei Hong Tay ◽  
Joanne Lee ◽  
Swati Sinha ◽  
...  
2021 ◽  
Author(s):  
Craig Barnes ◽  
Binam Bajracharya ◽  
Matthew Cannalte ◽  
Zakir Gowani ◽  
Will Haley ◽  
...  

Objective. The objective was to develop and operate a cloud-based federated system for managing, analyzing and sharing patient data for research purposes, while allowing each resource that shares patient data to operate its component under its own governance rules. The federated system is called the Biomedical Research Hub (BRH). Methods. The BRH is a cloud-based federated system built over a core set of software services called framework services. BRH framework services include authentication and authorization, services for generating and assessing FAIR data, and services for importing and exporting bulk clinical data. The BRH includes data resources operated by different entities, and workspaces that can access and analyze data from one or more of those data resources. Results. The BRH contains multiple data commons that in aggregate provide access to over 6 PB of research data from over 400,000 research participants. Discussion and conclusion. With the growing acceptance of public cloud computing platforms for biomedical research, and the growing use of opaque persistent digital identifiers for datasets, data objects, and other entities, there is now a foundation for systems that federate data from multiple independently operated data resources, each exposing FAIR APIs and using a separate data model. Applications can then be built that access data from one or more of these data resources.
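The federation pattern described above rests on opaque persistent identifiers that can be resolved to whichever data resource hosts an object. The following is a minimal illustrative sketch of that idea, not the actual BRH implementation: the class name, example GUID prefix, and base URL are all hypothetical, while the `/ga4gh/drs/v1/objects/{id}` path follows the GA4GH DRS convention for data-object access.

```python
# Hypothetical sketch of GUID resolution in a federated data hub.
# MiniResolver, the prefix "dg.EXAMPLE", and the base URL are illustrative;
# only the DRS-style object path follows a published convention.

class MiniResolver:
    """Maps a GUID prefix to the base URL of the hosting data resource."""

    def __init__(self):
        self._prefixes = {}

    def register(self, prefix, base_url):
        self._prefixes[prefix] = base_url

    def resolve(self, guid):
        # Opaque GUIDs like "dg.EXAMPLE/0131b5f2" carry a prefix that
        # names the independently operated resource holding the object.
        prefix, _, suffix = guid.partition("/")
        base = self._prefixes.get(prefix)
        if base is None:
            raise KeyError(f"no data resource registered for prefix {prefix!r}")
        return f"{base}/ga4gh/drs/v1/objects/{suffix}"

resolver = MiniResolver()
resolver.register("dg.EXAMPLE", "https://data.example-commons.org")
url = resolver.resolve("dg.EXAMPLE/0131b5f2")
print(url)  # -> https://data.example-commons.org/ga4gh/drs/v1/objects/0131b5f2
```

An application built on top of the hub would resolve each identifier this way and then fetch the object from whichever resource the URL points at, without needing to know in advance where the data lives.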


2020 ◽  
Author(s):  
Oliver Maassen ◽  
Sebastian Fritsch ◽  
Julia Gantner ◽  
Saskia Deffge ◽  
Julian Kunze ◽  
...  

BACKGROUND The increasing development of artificial intelligence (AI) systems in medicine, driven by researchers and entrepreneurs, goes along with enormous expectations for the advancement of medical care. AI might change the clinical practice of physicians from almost all medical disciplines and in most areas of healthcare. While expectations for AI in medicine are high, practical implementations of AI for clinical practice are still scarce in Germany. Moreover, physicians' requirements and expectations of AI in medicine, and their opinion on the usage of anonymized patient data for clinical and biomedical research, have not been investigated widely in German university hospitals. OBJECTIVE To evaluate physicians' requirements and expectations of AI in medicine and their opinion on the secondary usage of patient data for (bio)medical research, e.g. for the development of machine learning (ML) algorithms, in university hospitals in Germany. METHODS A web-based survey was conducted addressing physicians of all medical disciplines in 8 German university hospitals. Answers were given on Likert scales, supplemented by general demographic questions. Physicians were invited to participate locally via email in the respective hospitals. RESULTS 121 (39.9%) female and 173 (57.1%) male physicians (N=303) from a wide range of medical disciplines and work experience levels completed the online survey. The majority of respondents had either a positive (130/303, 42.9%) or a very positive attitude (82/303, 27.1%) towards AI in medicine. A vast majority of physicians expected the future of medicine to be a mix of human and artificial intelligence (273/303, 90.1%) but also requested a scientific evaluation before the routine implementation of AI-based systems (276/303, 91.1%). Physicians were most optimistic that AI applications would identify drug interactions (280/303, 92.4%) to substantially improve patient care, but were quite reserved regarding AI-supported diagnosis of psychiatric diseases (62/303, 20.5%). 82.5% of respondents (250/303) agreed that there should be open access to anonymized patient databases for medical and biomedical research. CONCLUSIONS Physicians in inpatient care in German university hospitals show a generally positive attitude towards using most AI applications in medicine. Along with this optimism come several expectations and hopes that AI will assist physicians in clinical decision making. Especially in fields of medicine where huge amounts of data are processed (e.g., imaging procedures in radiology and pathology) or where data are collected continuously (e.g., cardiology and intensive care medicine), physicians' expectations of substantially improved future patient care are high. However, for the practical usage of AI in healthcare, regulatory and organizational challenges still have to be mastered.
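The "n/303 (x%)" figures in the results above follow the standard tabulation of Likert responses as counts and percentages. A small sketch of that computation, using the attitude question as a worked example (the function name and option labels are illustrative, not the authors' analysis code):

```python
# Illustrative sketch (not the authors' analysis code): tabulating
# Likert-scale survey answers as (count, percent) pairs, matching the
# "n/303 (x%)" reporting style used in the abstract.
from collections import Counter

def summarize_likert(answers, n_total=None):
    """Return {option: (count, percent of n_total)} for a list of answers."""
    n_total = n_total or len(answers)
    counts = Counter(answers)
    return {opt: (c, round(100 * c / n_total, 1)) for opt, c in counts.items()}

# Reconstructed from the reported attitude question (remaining answers
# lumped as "other" for the sketch).
answers = ["positive"] * 130 + ["very positive"] * 82 + ["other"] * 91
summary = summarize_likert(answers, n_total=303)
print(summary["positive"])       # -> (130, 42.9)
print(summary["very positive"])  # -> (82, 27.1)
```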


2021 ◽  
Author(s):  
Anita Bandrowski ◽  
Jeffrey S. Grethe ◽  
Anna Pilko ◽  
Tom Gillespie ◽  
Gabi Pine ◽  
...  

Abstract The NIH Common Fund's Stimulating Peripheral Activity to Relieve Conditions (SPARC) initiative is a large-scale program that seeks to accelerate the development of therapeutic devices that modulate electrical activity in nerves to improve organ function. Integral to the SPARC program are the rich anatomical and functional datasets produced by investigators across the SPARC consortium that provide key details about organ-specific circuitry, including structural and functional connectivity, mapping of cell types and molecular profiling. These datasets are provided to the research community through an open data platform, the SPARC Portal. To ensure SPARC datasets are Findable, Accessible, Interoperable and Reusable (FAIR), they are all submitted to the SPARC Portal following a standard scheme established by the SPARC Curation Team, called the SPARC Data Structure (SDS). Inspired by the Brain Imaging Data Structure (BIDS), the SDS has been designed to capture the large variety of data generated by SPARC investigators, who come from all fields of biomedical research. Here we present the rationale and design of the SDS, along with a description of the SPARC curation process and the automated tools for complying with the SDS, including the SDS validator and Software to Organize Data Automatically (SODA) for SPARC. The objective is to provide detailed guidelines for anyone desiring to comply with the SDS. Since the SDS is suitable for any type of biomedical research data, it can be adopted by any group desiring to follow the FAIR data principles for managing their data, even outside of the SPARC consortium. Finally, this manuscript provides a foundational framework that can be used by any organization desiring either to adapt the SDS to the specific needs of their data or to design their own FAIR data sharing scheme from scratch.
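A BIDS-inspired structure like the SDS is, at its core, a prescribed folder layout with required metadata files, which is what makes automated validation possible. Below is a toy structure check in that spirit; it is not the official SDS validator, and the required entry names are assumptions used only to illustrate the pattern.

```python
# Toy sketch of a BIDS/SDS-style structure check (NOT the official SDS
# validator): verify that a dataset folder contains an assumed set of
# required top-level entries before submission.
import tempfile
from pathlib import Path

REQUIRED = {"dataset_description.xlsx", "subjects.xlsx", "primary"}  # assumed names

def missing_entries(dataset_root):
    """Return the sorted list of required entries absent from dataset_root."""
    present = {p.name for p in Path(dataset_root).iterdir()}
    return sorted(REQUIRED - present)

# Example: a dataset folder that has only one of the required entries.
root = tempfile.mkdtemp()
(Path(root) / "dataset_description.xlsx").touch()
print(missing_entries(root))  # -> ['primary', 'subjects.xlsx']
```

A real validator layers content checks (spreadsheet columns, identifier formats, file-folder cross-references) on top of this kind of layout check, which is the role the SDS validator and SODA play for SPARC submitters.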


2020 ◽  
Author(s):  
Carsten Schmidt ◽  
Stephan Struckmann ◽  
Cornelia Enzenbach ◽  
Achim Reineke ◽  
Jürgen Stausberg ◽  
...  

Abstract Background No standards exist for the handling and reporting of data quality in health research. This work introduces a data quality framework for observational health research data collections, with supporting software implementations to facilitate harmonized data quality assessments. Methods Developments were guided by the evaluation of an existing data quality framework and by literature reviews. Functions for the computation of data quality indicators were written in R. The concept and implementations are illustrated based on data from the population-based Study of Health in Pomerania (SHIP). Results The data quality framework comprises 34 data quality indicators. These target three aspects of data quality: compliance with pre-specified structural and technical requirements (integrity), presence of data values (completeness), and errors in the data values (correctness). R functions calculate data quality metrics based on the provided study data and metadata, and R Markdown reports are generated. Guidance on the concept and tools is available through a dedicated website. Conclusions The presented data quality framework is the first of its kind for observational health research data collections that links a formal concept to implementations in R. The framework and tools facilitate harmonized data quality assessments in pursuit of transparent and reproducible research. Application scenarios comprise data quality monitoring while a study is carried out, as well as an initial data analysis before starting substantive scientific analyses.
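The three dimensions named above (integrity, completeness, correctness) can each be expressed as a simple proportion over a variable's values. The published framework is implemented in R; the sketch below is a hedged Python rendering of three toy indicators in the same spirit, with the variable name and plausibility limits chosen purely for illustration.

```python
# Hedged Python sketch (the published framework is implemented in R):
# three toy indicators mirroring the integrity / completeness /
# correctness dimensions for a single numeric study variable.

def completeness(values):
    """Share of non-missing entries (None marks a missing value)."""
    return sum(v is not None for v in values) / len(values)

def integrity(values, expected_type=float):
    """Share of non-missing entries matching the expected data type."""
    observed = [v for v in values if v is not None]
    return sum(isinstance(v, expected_type) for v in observed) / len(observed)

def correctness(values, lo, hi):
    """Share of non-missing entries inside the plausible limits [lo, hi]."""
    observed = [v for v in values if v is not None]
    return sum(lo <= v <= hi for v in observed) / len(observed)

# Illustrative body-height data: one missing value, one implausible value.
heights_cm = [172.0, 165.5, None, 401.0, 180.2]
print(completeness(heights_cm))           # -> 0.8
print(correctness(heights_cm, 120, 220))  # -> 0.75
```

In the framework proper, limits and expected types come from study metadata rather than being hard-coded, so the same indicator functions can be reapplied across variables and studies, which is what makes the assessments harmonized.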


2021 ◽  
Vol 27 (3) ◽  
pp. 8-34
Author(s):  
Tatyana Cherkashina

The article presents the experience of converting non-targeted administrative data into research data, using as an example data on the income and property of deputies from local legislative bodies of the Russian Federation for 2019, collected as part of anti-corruption operations. This particular empirical fragment was selected for a pilot study of administrative data, which includes assessing the possibility of integrating scattered fragments of information into a single database, assessing the quality of the data, and assessing their relevance for solving research problems, particularly the analysis of high-income strata and the apparent trend towards individualization of private property. The system of indicators for assessing data quality includes timeliness, availability, interpretability, reliability, comparability, coherence, errors of representation and measurement, and relevance. In the data set in question, measurement errors are more common than representation errors. Overall, the article emphasizes that introducing new non-target data into circulation requires preliminary testing, while data quality assessment becomes distributed both in time and between different subjects. The transition from created data to «obtained» data shifts the function of evaluating quality from the researcher-creator to the researcher-user. And though in this case data quality is partly ensured by the legal support for their production, the transformation of administrative data into research data involves assessing a variety of quality dimensions, from availability to uniformity and accuracy.


2010 ◽  
Vol 01 (01) ◽  
pp. 50-67 ◽  
Author(s):  
K. Rahbar ◽  
L. Stegger ◽  
M. Schäfers ◽  
M. Dugas ◽  
S. Herzberg

Summary Objective: Data for clinical documentation and medical research are usually managed in separate systems. We developed, implemented and assessed a documentation system for myocardial scintigraphy (SPECT/CT data) in order to integrate clinical and research documentation. This paper presents the concept, implementation and evaluation of this single-source system, including methods to improve data quality through plausibility checks. Methods: We analyzed the documentation process for myocardial scintigraphy, especially the collection of medical history, symptoms and medication as well as stress and rest injection protocols. Corresponding electronic forms were implemented in our hospital information system (HIS), including plausibility checks to support correctness and completeness of data entry. Research data can be extracted from routine data by dedicated HIS reports. Results: A single-source system based on electronic HIS documentation merges clinical and scientific documentation and thus avoids multiple documentation. Within nine months, 495 patients were documented with our system by 8 physicians and 6 radiographers (466 medical history protocols, 466 stress and 414 rest injection protocols). The documentation consists of 295 attributes, three quarters of which are conditional items. Data quality improved substantially compared to the previous paper-based documentation. Conclusion: A single-source system to collect routine and research data for myocardial scintigraphy is feasible in a real-world setting and can generate high-quality data through online plausibility checks.
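The online plausibility checks described above amount to validating each form field against range and category constraints at entry time, before the record is stored. A minimal sketch of such a check follows; the field names, limits, and category values are assumptions for illustration, not the actual HIS forms.

```python
# Illustrative sketch of an online plausibility check at form entry.
# Field names, limits, and categories are assumed, not the actual HIS forms.

def check_stress_protocol(entry):
    """Return a list of plausibility problems for a stress-injection record."""
    problems = []
    if not 30 <= entry.get("heart_rate_bpm", 0) <= 220:
        problems.append("heart rate outside plausible range")
    if entry.get("injected_dose_mbq", 0) <= 0:
        problems.append("injected dose must be positive")
    if entry.get("stress_type") not in {"exercise", "pharmacological"}:
        problems.append("unknown stress type")
    return problems

record = {"heart_rate_bpm": 142, "injected_dose_mbq": 350, "stress_type": "exercise"}
print(check_stress_protocol(record))  # -> []
```

Running such checks while the clinician is still at the form, rather than during later data cleaning, is what lets a single-source system feed research-grade data directly from routine documentation.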


Information ◽  
2020 ◽  
Vol 11 (4) ◽  
pp. 175 ◽  
Author(s):  
Tibor Koltay

This paper focuses on the characteristics of research data quality and aims to cover the most important issues related to it, giving particular attention to its attributes and to data governance. The corporate world's considerable interest in the quality of data is evident in several thoughts and issues reported in business-related publications, even if there are apparent differences between the values and approaches to data in corporate and in academic (research) environments. The paper also takes into consideration that addressing data quality would be unimaginable without considering big data.


2020 ◽  
Vol 108 ◽  
pp. 103491
Author(s):  
Lauren Houston ◽  
Ping Yu ◽  
Allison Martin ◽  
Yasmine Probst
