Implementation of data access and use procedures in clinical data warehouses. A systematic review of literature and publicly available policies

Author(s):  
Elena Pavlenko ◽  
Daniel Strech ◽  
Holger Langhof

Abstract Background The promises of improved health care and health research through data-intensive applications rely on a growing amount of health data. At the core of large-scale data integration efforts, clinical data warehouses (CDW) are also responsible for data governance, managing data access and (re)use. As the complexity of the data flow increases, greater transparency and standardization of criteria and procedures are required to maintain objective oversight and control. This study assessed the spectrum of data access and use criteria and procedures in clinical data warehouse governance internationally. Methods We performed a systematic review of (a) the published scientific literature on CDW and (b) publicly available information on CDW data access, e.g., data access policies. A qualitative thematic analysis was applied to all included literature and policies. Results Twenty-three scientific publications and one policy document were included in the final analysis. The qualitative analysis led to a final set of three main thematic categories: (1) requirements, including recipient requirements, reuse requirements, and formal requirements; (2) structures and processes, including review bodies and review values; and (3) access, including access limitations. Conclusions The description of data access and use governance in the scientific literature is characterized by a high level of heterogeneity and ambiguity. In practice, this might limit the effective data sharing needed to fulfil the high expectations of data-intensive approaches in medical research and health care. The lack of publicly available information on access policies conflicts with ethical requirements linked to principles of transparency and accountability. CDWs should publicly disclose by whom and under which conditions data can be accessed, and should provide designated governance structures and policies to increase transparency on data access.
The results of this review may contribute to the development of practice-oriented minimal standards for the governance of data access, which could also lead to greater harmonization, efficiency, and effectiveness of CDWs.

10.2196/22280 ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. e22280
Author(s):  
Davide Golinelli ◽  
Erik Boetto ◽  
Gherardo Carullo ◽  
Andrea Giovanni Nuzzolese ◽  
Maria Paola Landini ◽  
...  

Background The COVID-19 pandemic is favoring digital transitions in many industries and in society as a whole. Health care organizations have responded to the first phase of the pandemic by rapidly adopting digital solutions and advanced technology tools. Objective The aim of this review is to describe the digital solutions that have been reported in the early scientific literature to mitigate the impact of COVID-19 on individuals and health systems. Methods We conducted a systematic review of early COVID-19–related literature (from January 1 to April 30, 2020) by searching MEDLINE and medRxiv with appropriate terms to find relevant literature on the use of digital technologies in response to the pandemic. We extracted study characteristics such as the paper title, journal, and publication date, and we categorized the retrieved papers by the type of technology and patient needs addressed. We built a scoring rubric by cross-classifying the patient needs with the type of technology. We also extracted information and classified each technology reported by the selected articles according to health care system target, grade of innovation, and scalability to other geographical areas. Results The search identified 269 articles, of which 124 full-text articles were assessed and included in the review after screening. Most of the selected articles addressed the use of digital technologies for diagnosis, surveillance, and prevention. We report that most of these digital solutions and innovative technologies have been proposed for the diagnosis of COVID-19. In particular, within the reviewed articles, we identified numerous suggestions on the use of artificial intelligence (AI)–powered tools for the diagnosis and screening of COVID-19. Digital technologies are also useful for prevention and surveillance measures, such as contact-tracing apps and monitoring of internet searches and social media usage. 
Fewer scientific contributions address the use of digital technologies for lifestyle empowerment or patient engagement. Conclusions In the field of diagnosis, digital solutions that integrate with traditional methods, such as AI-based diagnostic algorithms drawing on both imaging and clinical data, appear to be promising. For surveillance, digital apps have already proven their effectiveness; however, problems related to privacy and usability remain. For other patient needs, several solutions have been proposed, such as telemedicine or telehealth tools. These tools have long been available, but this historical moment may actually be favoring their definitive large-scale adoption. It is worth taking advantage of the impetus provided by the crisis. It is also important to keep track of the digital solutions currently being proposed, to implement best practices and models of care in the future, and to adopt at least some of the solutions proposed in the scientific literature, especially in national health systems, which have proved particularly resistant to the digital transition in recent years.


2020 ◽  
Vol 11 (01) ◽  
pp. 059-069 ◽  
Author(s):  
Prashila Dullabh ◽  
Lauren Hovey ◽  
Krysta Heaney-Huls ◽  
Nithya Rajendran ◽  
Adam Wright ◽  
...  

Abstract Objective Interest in application programming interfaces (APIs) is increasing as key stakeholders look for technical solutions to interoperability challenges. We explored three thematic areas to assess the current state of API use for data access and exchange in health care: (1) API use cases and standards; (2) challenges and facilitators for read and write capabilities; and (3) outlook for development of write capabilities. Methods We employed four methods: (1) literature review; (2) expert interviews with 13 API stakeholders; (3) review of electronic health record (EHR) app galleries; and (4) a technical expert panel. We used an eight-dimension sociotechnical model to organize our findings. Results The API ecosystem is complicated and cuts across five of the eight sociotechnical model dimensions: (1) app marketplaces support a range of use cases, the majority of which target providers' needs, with far fewer supporting patient access to data; (2) current focus on read APIs with limited use of write APIs; (3) where standards are used, they are largely Fast Healthcare Interoperability Resources (FHIR); (4) FHIR-based APIs support exchange of electronic health information within the common clinical data set; and (5) validating external data and data sources for clinical decision making creates challenges to provider workflows. Conclusion While the use of APIs in health care is increasing rapidly, it is still in the pilot stages. We identified five key issues with implications for the continued advancement of API use: (1) a robust normative FHIR standard; (2) expansion of the common clinical data set to other data elements; (3) enhanced support for write implementation; (4) data provenance rules; and (5) data governance rules. Thus, while APIs are being touted as a solution to interoperability challenges, they remain an emerging technology that is only one piece of a multipronged approach to data access and use.
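The read APIs discussed above typically expose FHIR resources as JSON over REST. A minimal sketch of handling such a response, using an illustrative Patient resource rather than a real server call (the sample data and `display_name` helper are hypothetical; field names follow the FHIR R4 Patient schema):

```python
import json

# Illustrative FHIR R4 Patient resource, shaped like the JSON a read API
# would return; the values here are example data, not from a real system.
patient_json = """
{
  "resourceType": "Patient",
  "id": "example",
  "name": [{"family": "Chalmers", "given": ["Peter", "James"]}],
  "birthDate": "1974-12-25"
}
"""

def display_name(resource: dict) -> str:
    """Build a human-readable name from the first entry of Patient.name."""
    name = resource["name"][0]
    return " ".join(name.get("given", []) + [name.get("family", "")])

patient = json.loads(patient_json)
assert patient["resourceType"] == "Patient"
print(display_name(patient))  # Peter James Chalmers
```

A write API would send a similar JSON body back to the server (e.g., via HTTP POST/PUT), which is where the validation and provenance challenges noted above arise.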


2021 ◽  
Vol 19 (1) ◽  
Author(s):  
Rebecca Asiimwe ◽  
Stephanie Lam ◽  
Samuel Leung ◽  
Shanzhao Wang ◽  
Rachel Wan ◽  
...  

Abstract Background To drive translational medicine, modern day biobanks need to integrate with other sources of data (clinical, genomics) to support novel data-intensive research. Currently, vast amounts of research and clinical data remain in silos, held and managed by individual researchers operating under different standards and governance structures; a framework that impedes sharing and effective use of data. In this article, we describe the journey of British Columbia’s Gynecological Cancer Research Program (OVCARE) in moving a traditional tumour biobank, outcomes unit, and a collection of data silos into an integrated data commons to support data standardization and resource sharing under collaborative governance, as a means of providing the gynecologic cancer research community in British Columbia access to tissue samples and associated clinical and molecular data from thousands of patients. Results Through several engagements with stakeholders from various research institutions within our research community, we identified priorities and assessed the infrastructure needed to optimize and support data collection, storage and sharing under three main research domains: (1) biospecimen collections, (2) molecular and genomics data, and (3) clinical data. We further built a governance model and a resource portal to implement protocols and standard operating procedures for the seamless collection, management and governance of interoperable data, making genomic and clinical data available to the broader research community. Conclusions Proper infrastructure for data collection, sharing and governance is a translational research imperative. We have consolidated our data holdings into a data commons, along with standardized operating procedures, to meet the research and ethics requirements of the gynecologic cancer community in British Columbia.
The developed infrastructure brings together diverse data, computing frameworks, and tools and applications for managing, analyzing, and sharing data. Our data commons bridges data access gaps and barriers to precision medicine and to approaches for the diagnosis, treatment and prevention of gynecological cancers by providing access to the large datasets required for data-intensive science.


2016 ◽  
Vol 29 (1) ◽  
pp. 72-90 ◽  
Author(s):  
Rocco Palumbo

Purpose – The purpose of this paper is to contextualize the concepts of “service co-production” and “value co-creation” to health care services, challenging the traditional bio-medical model, which focusses on illness treatment and neglects the role played by patients in the provision of care. Design/methodology/approach – For this purpose, the author conducted a systematic review, which paved the way for the identification of the concept of “health care co-production” and allowed the author to discuss its effects and implications. Starting from a database of 254 records, 65 papers were included in the systematic review and informed the development of this paper. Findings – Co-production of health care services implies the establishment of co-creating partnerships between health care professionals and patients, which are aimed at mobilizing the dormant resources of the latter. However, several barriers prevent the full implementation of health care co-production, nurturing the application of the traditional bio-medical model. Practical implications – Co-production of health care is difficult to realize, due to both health care professionals’ hostility and patients’ unwillingness to be involved in the provision of care. Nonetheless, the scientific literature is consistent in claiming that co-production of care paves the way for increased health outcomes, enhanced patient satisfaction, better service innovation, and cost savings. The establishment of multi-disciplinary health care teams, the improvement of patient-provider communication, and the enhancement of the use of ICTs for the purpose of value co-creation are crucial ingredients in the recipe for increased patient engagement. Originality/value – To the knowledge of the author, this is the first paper aimed at systematizing the scientific literature in the field of health care co-production.
The originality of this paper stems from its twofold relevance: on the one hand, it emphasizes the pros and cons of health care co-production and, on the other hand, it provides insightful directions for dealing with the engagement of patients in value co-creation.


2021 ◽  
Vol 12 ◽  
Author(s):  
Marina Aline de Brito Sena ◽  
Rodolfo Furlan Damiano ◽  
Giancarlo Lucchetti ◽  
Mario Fernando Prieto Peres

Objective: To investigate the definitions of spirituality in the healthcare field, identifying its main dimensions and proposing a framework that operationalizes the understanding of this concept. Methods: This is a systematic review following the PRISMA guideline (PROSPERO: CRD42021262091), searching for spirituality definitions published in scientific journals. Searches were carried out in PubMed (all articles listed up to October 2020) and in the reference lists of the articles found in the database, followed by selection under specific eligibility criteria. Results: From a total of 493 articles, 166 were included in the final analysis, showing that there is a large body of scientific literature proposing and analyzing spirituality definitions. In these articles, 24 spirituality dimensions were found, most commonly related to connectedness and the meaning of life. Spirituality was presented as a human and individual aspect. These findings led us to construct a framework that represents spirituality as a quantifiable construct. Conclusions: Understanding spirituality is an important aspect for healthcare research and clinical practice. This proposed framework may help to better understand the complexity of this topic, where advances are desirable, given the relevance it has acquired for integral health care.


2020 ◽  
Author(s):  
Donatello Elia ◽  
Fabrizio Antonio ◽  
Cosimo Palazzo ◽  
Paola Nassisi ◽  
Sofiane Bendoukha ◽  
...  

Scientific data analysis experiments and applications require software capable of handling domain-specific and data-intensive workflows. The increasing volume of scientific data is further exacerbating these data management and analytics challenges, pushing the community towards the definition of novel programming environments for dealing efficiently with complex experiments, while abstracting from the underlying computing infrastructure.

ECASLab provides a user-friendly data analytics environment to support scientists in their daily research activities, in particular in the climate change domain, by integrating analysis tools with scientific datasets (e.g., from the ESGF data archive) and computing resources (i.e., Cloud and HPC-based). It combines the features of the ENES Climate Analytics Service (ECAS) and the JupyterHub service with a wide set of scientific libraries from the Python landscape for data manipulation, analysis and visualization. ECASLab is being set up in the frame of the European Open Science Cloud (EOSC) platform - in the EU H2020 EOSC-Hub project - by CMCC (https://ecaslab.cmcc.it/) and DKRZ (https://ecaslab.dkrz.de/), which host two major instances of the environment.

ECAS, which lies at the heart of ECASLab, enables scientists to perform data analysis experiments on large volumes of multi-dimensional data by providing a workflow-oriented, PID-supported, server-side and distributed computing approach. ECAS consists of multiple components, centered around the Ophidia High Performance Data Analytics framework, which has been integrated with data access and sharing services (e.g., EUDAT B2DROP/B2SHARE, Onedata), along with the EGI federated cloud infrastructure. The integration with JupyterHub provides a convenient interface for scientists to access the ECAS features for the development and execution of experiments, as well as for sharing results (and the experiment/workflow definition itself). ECAS parallel data analytics capabilities can be easily exploited in Jupyter Notebooks (by means of PyOphidia, the Ophidia Python bindings) together with well-known Python modules for processing and for plotting the results on charts and maps (e.g., Dask, Xarray, NumPy, Matplotlib). ECAS is also one of the compute services made available to climate scientists by the EU H2020 IS-ENES3 project.

Hence, this integrated environment represents a complete software stack for the design and execution of interactive experiments as well as complex and data-intensive workflows. One class of such large-scale workflows, efficiently implemented through the environment resources, is multi-model data analysis in the context of both CMIP5 and CMIP6 (e.g., precipitation trend analysis orchestrated in parallel over multiple CMIP-based datasets).
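The precipitation trend analysis mentioned above can be sketched in its simplest single-series form with plain NumPy; this is a minimal illustration on synthetic data (values and units are invented), not the server-side Ophidia/PyOphidia workflow itself, which would run the equivalent computation in parallel over many CMIP datasets:

```python
import numpy as np

# Synthetic annual mean precipitation series (mm/year) for one model run,
# standing in for a CMIP-style dataset; values are illustrative only.
years = np.arange(1980, 2020)
rng = np.random.default_rng(42)
precip = 800.0 + 0.5 * (years - years[0]) + rng.normal(0.0, 5.0, years.size)

# Least-squares linear trend: slope is the precipitation change per year.
slope, intercept = np.polyfit(years, precip, deg=1)
print(f"trend: {slope:.2f} mm/year per year")
```

In a multi-model setting the same fit would be repeated per dataset and the resulting trends compared across models.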


Author(s):  
Christopher Orton ◽  
John Gallacher ◽  
Ronan Lyons ◽  
David Ford ◽  
Simon Thompson ◽  
...  

Introduction Modern team science requires effective sharing of data and skills. The DPUK Data Portal is a collection of tools, datasets and networks that allows epidemiologists and specialist researchers alike to access, analyse and investigate cohort data and different modalities of routine data across UK and international sources. Objectives and Approach The Portal is housed on an instance of UKSeRP (UK Secure eResearch Platform), which allows customisable infrastructure to be used for multi-modal research (thus far live for genetics, imaging and clinical data) by researchers across the world using remote access technology, whilst allowing governance to remain with the data provider. A central team at Swansea University is responsible for data curation and processing, and runs an access procedure through which researchers apply to use data from multiple sources, analysed in a central analysis environment. Other modalities are similarly hosted, with input from partner sites in Cardiff and Oxford. Results DPUK facilitates data access and research on 49 cohorts, 40 UK-based and 9 international. The centralised repository model, including remote access and the ability to store and make available different modalities of data, from phenotypic data to genetic and imaging data, has allowed DPUK to begin to support research on varying topics, from studies of cognitive decline and dementia as a disease to the maturing of analytical models. By providing access to data platforms specialising in genetics, imaging and routine clinical data, as well as to specialists in disease and biology to aid understanding, DPUK has realised a large-scale research exercise combining major data modalities on a central platform, allowing access to such rich data across the world under an umbrella of robust governance.
Conclusion/Implications Globally, cohorts are pooling data, expertise and the desire to enrich their own aims in partnership with a federated research community, enabling in-depth scrutiny of the biological origins of dementia and the development and evaluation of novel approaches to disease prevention and cure.


