Configuring a federated network of real-world patient health data for multimodal deep learning prediction of health outcomes.

2021 ◽  
Author(s):  
Christian Haudenschild ◽  
Louis Vaickus ◽  
Joshua Levy

Vast quantities of electronic patient medical data are currently being collated and processed in large federated data repositories. For instance, TriNetX, Inc., a global health research network, has access to more than 300 million patients, sourced from healthcare organizations, biopharmaceutical companies, and contract research organizations. Pipelines that can algorithmically extract large quantities of patient data across multiple modalities therefore present opportunities to leverage machine learning and deep learning approaches to generate actionable insight. In this work, we present a modular, semi-automated, end-to-end machine and deep learning pipeline designed to interface with a federated network of structured patient data. This proof-of-concept pipeline is disease-agnostic, scalable, and requires little domain expertise or manual feature engineering to quickly produce results for a user-defined binary outcome event. We demonstrate the pipeline's efficacy with three different disease workflows, achieving high discriminatory power in all cases.
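The "user-defined binary outcome event" can be made concrete with a small labeling routine. The sketch below is hypothetical (the abstract does not describe the pipeline's actual interface, and the patient IDs, ICD codes, and dates are invented): it labels each patient by whether the outcome code occurs within a follow-up window after an index event.

```python
from datetime import date, timedelta

def label_binary_outcome(patients, outcome_code, window_days=365):
    """Assign a 1/0 label per patient: does the user-defined outcome code
    occur within `window_days` after the patient's index event?
    `patients` maps patient id -> {"index_date": date, "events": [(code, date), ...]}.
    """
    labels = {}
    for pid, rec in patients.items():
        horizon = rec["index_date"] + timedelta(days=window_days)
        labels[pid] = int(any(
            code == outcome_code and rec["index_date"] < when <= horizon
            for code, when in rec["events"]
        ))
    return labels

# Toy cohort (hypothetical codes and dates)
cohort = {
    "p1": {"index_date": date(2020, 1, 1),
           "events": [("I10", date(2020, 3, 1)), ("I21", date(2020, 6, 1))]},
    "p2": {"index_date": date(2020, 1, 1),
           "events": [("I10", date(2020, 2, 1))]},
}
print(label_binary_outcome(cohort, "I21"))  # {'p1': 1, 'p2': 0}
```

Given such labels and a structured feature matrix per patient, any downstream classifier can be trained without disease-specific feature engineering, which is the disease-agnostic property the abstract emphasizes.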

2021 ◽  
Vol 42 (Supplement_1) ◽  
pp. S62-S63
Author(s):  
Deepak K Ozhathil ◽  
Garret Gutierrez ◽  
Sagar R Mulay ◽  
Adil Ahmed ◽  
Juquan Song ◽  
...  

Abstract Introduction The National Burn Repository (NBR) is the recognized standard for data collection models within the academic burn community. Despite its robust capabilities, the NBR poses challenges to access and exploration that can be intimidating to some researchers. As a result, user-friendly alternative databases have grown in popularity. We compared the NBR to two commercially available large datasets, using the well-recognized relationship of age and total body surface area (TBSA) to mortality as our metric to assess correlation. Methods We accessed the TriNetX Global Health Research Network and queried the Diamond network (medical clearinghouses) and the Research network (41 healthcare organizations (HCOs)) for all-cause mortality within 3 years following burn injuries that occurred between 2000 and 2020, using ICD-10 codes (T31-32). We explored the distribution of TBSA and age across these cohorts and compared the variance in the distribution to the 2008–2017 NBR report using one-proportion z-tests for each age-TBSA matched subgroup. We also compared demographics and the lethal area for 50% mortality (LA50). Results The Diamond network identified 336,965 entries with an all-cause mortality rate of 3.21% and an LA50 of < 10%. Demographics showed 50% male, 81% unknown race, and a mean age of 39. In 2016–2017, 56,430 entries were reported. The Research network identified 114,778 entries with a mortality rate of 2.54% and an LA50 of < 10%. Demographics showed 61% male, 58% white, 24% unknown, and 16% black, with a mean age of 37. In 2016–2017, 14,164 entries were reported. In comparison, the NBR database reported 185,239 entries with a mortality rate of 2.96% and an LA50 of > 70%. Demographics showed 67% male, 59% white, and 3.6% unknown, with a mean age between 20 and 29. In 2016–2017, 42,402 entries were reported.
Comparison of mean mortality between age-TBSA matched subgroups in the Diamond and Research networks relative to the NBR showed correlation among pediatric populations but lacked statistical significance. Conclusions The Diamond and Research networks are large datasets that appear to be statistically different from the NBR dataset and are derived from different populations (insured patients, academic healthcare organizations, and accredited academic burn centers, respectively). The exact overlap between the datasets is unknown, but the demographics suggest that they represent very different populations. The Figure depicts the relationship between age and TBSA on mortality for each database. Each database is large enough to yield statistically significant conclusions, but caution should be used when contrasting conclusions between datasets due to the significant degree of divergence.
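The one-proportion z-test used to compare each age-TBSA matched subgroup against the NBR reference rate can be sketched as follows. The counts and reference proportion below are illustrative, not values from the study:

```python
from math import sqrt, erf

def one_proportion_ztest(deaths, n, p0):
    """Two-sided one-proportion z-test: does the observed mortality rate
    deaths/n differ from a reference proportion p0 (e.g. the NBR rate
    for the same age-TBSA subgroup)?"""
    p_hat = deaths / n
    se = sqrt(p0 * (1 - p0) / n)            # standard error under H0
    z = (p_hat - p0) / se
    phi = lambda x: 0.5 * (1 + erf(x / sqrt(2)))  # standard normal CDF
    p_value = 2 * (1 - phi(abs(z)))
    return z, p_value

# Hypothetical subgroup: 320 deaths in 10,000 entries vs. a 2.96% reference rate
z, p = one_proportion_ztest(320, 10_000, 0.0296)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 1.42, p ≈ 0.157: no difference at α = 0.05
```

Running one such test per age-TBSA cell, as the authors did, calls for a multiple-comparison correction before declaring subgroups divergent.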


2019 ◽  
Vol 2019 (1) ◽  
pp. 360-368
Author(s):  
Mekides Assefa Abebe ◽  
Jon Yngve Hardeberg

Various whiteboard image degradations severely reduce the legibility of pen-stroke content as well as the overall quality of the images. Consequently, researchers have addressed the problem with a range of image enhancement techniques. Most state-of-the-art approaches applied common image processing operations such as background-foreground segmentation, text extraction, contrast and color enhancement, and white balancing. However, such conventional enhancement methods are incapable of recovering severely degraded pen-stroke content and produce artifacts in the presence of complex pen-stroke illustrations. To surmount these problems, the authors have proposed a deep learning based solution. They have contributed a new whiteboard image dataset and adopted two deep convolutional neural network architectures for whiteboard image quality enhancement. Their evaluations of the trained models demonstrated superior performance over the conventional methods.
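As a concrete illustration of the conventional enhancements mentioned above, a minimal gray-world white balance (one classic baseline of the kind the paper compares against, not the authors' deep learning method) can be written as:

```python
import numpy as np

def gray_world_white_balance(img):
    """Gray-world white balance: scale each channel so its mean matches
    the overall mean, neutralizing a global color cast."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    return np.clip(img * gains, 0, 255).astype(np.uint8)

# Synthetic "yellow-cast whiteboard": white background tinted toward yellow
tinted = np.full((4, 4, 3), (250, 245, 180), dtype=np.uint8)
balanced = gray_world_white_balance(tinted)
print(balanced[0, 0])  # → [225 225 225]: channels pulled to a common gray
```

Such global corrections cannot reconstruct strokes whose pixels are gone, which is exactly the failure mode that motivates the learned approach.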


2019 ◽  
Author(s):  
Qian Wu ◽  
Weiling Zhao ◽  
Xiaobo Yang ◽  
Hua Tan ◽  
Lei You ◽  
...  

2020 ◽  
Author(s):  
Priyanka Meel ◽  
Farhin Bano ◽  
Dr. Dinesh K. Vishwakarma

2020 ◽  
Vol 21 ◽  
Author(s):  
Sukanya Panja ◽  
Sarra Rahem ◽  
Cassandra J. Chu ◽  
Antonina Mitrofanova

Background: In recent years, the availability of high throughput technologies, establishment of large molecular patient data repositories, and advancement in computing power and storage have allowed elucidation of complex mechanisms implicated in therapeutic response in cancer patients. The breadth and depth of such data, alongside experimental noise and missing values, requires a sophisticated human-machine interaction that would allow effective learning from complex data and accurate forecasting of future outcomes, ideally embedded in the core of machine learning design. Objective: In this review, we will discuss machine learning techniques utilized for modeling of treatment response in cancer, including random forests, support vector machines, neural networks, and linear and logistic regression. We will overview their mathematical foundations and discuss their limitations and alternative approaches, all in light of their application to therapeutic response modeling in cancer. Conclusion: We hypothesize that the increase in the number of patient profiles and potential temporal monitoring of patient data will define even more complex techniques, such as deep learning and causal analysis, as central players in therapeutic response modeling.
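The simplest of the reviewed model families, logistic regression, can be fit with plain batch gradient descent. The sketch below uses synthetic "responder vs. non-responder" data, not real patient profiles:

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, epochs=2000):
    """Minimal logistic regression trained by batch gradient descent;
    a bias term is added via an appended column of ones."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-Xb @ w))       # sigmoid
        w -= lr * Xb.T @ (p - y) / len(y)   # gradient of the log-loss
    return w

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                       # synthetic response labels
X = rng.normal(size=(200, 2)) + 2.0 * y[:, None]  # features shifted for responders
w = fit_logistic(X, y)
probs = 1 / (1 + np.exp(-(np.hstack([X, np.ones((200, 1))]) @ w)))
acc = ((probs > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The review's point about noise and missing values bites here: this model assumes a complete, clean feature matrix, whereas the more elaborate methods it surveys are partly motivated by relaxing that assumption.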


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular ultrasonic images. First, a fully convolutional neural network (FCN) named U-Net is used to segment the original intravascular optical coherence tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six-scale anchors (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis, then demonstrate how our method solves these problems. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (intersection-over-union), and total score are 0.94, 0.885, 0.913, and 0.913 respectively, higher than the first-place team of the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of our proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
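The six-scale anchor scheme can be sketched as follows. The three aspect ratios are a common region-proposal-network default and are an assumption here, since the abstract specifies only the six anchor areas:

```python
import numpy as np

def make_anchors(scales=(12, 16, 32, 64, 128, 256), ratios=(0.5, 1.0, 2.0)):
    """Generate (w, h) anchor shapes covering the six areas 12^2 ... 256^2;
    each area is realized at every aspect ratio h/w in `ratios`."""
    anchors = []
    for s in scales:
        area = s * s
        for r in ratios:
            w = np.sqrt(area / r)   # solve w*h = area with h = r*w
            anchors.append((w, r * w))
    return np.array(anchors)

anchors = make_anchors()
print(anchors.shape)  # (18, 2): 6 scales x 3 aspect ratios
```

Adding the small 12² and 16² scales is what lets the detector cover plaques far smaller than the objects the standard three-scale configuration was tuned for.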


2018 ◽  
Vol 11 (4) ◽  
pp. 87-98
Author(s):  
Abdullah Alamri

Healthcare systems have evolved to become more patient-centric. Many efforts have been made to transform paper-based patient data to automated medical information by developing electronic healthcare records (EHRs). Several international EHRs standards have been enabling healthcare interoperability and communication among a wide variety of medical centres. It is a dual-model methodology which comprises a reference information model and an archetype model. The archetype is responsible for the definition of clinical concepts which has limitations in terms of supporting complex reasoning and knowledge discovery requirements. The objective of this article is to propose a semantic-mediation architecture to support semantic interoperability among healthcare organizations. It provides an intermediate semantic layer to exploit clinical information based on richer ontological representations to create a “model of meaning” for enabling semantic mediation. The proposed model also provides secure mechanisms to allow interoperable sharing of patient data between healthcare organizations.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Shan Guleria ◽  
Tilak U. Shah ◽  
J. Vincent Pulido ◽  
Matthew Fasullo ◽  
Lubaina Ehsan ◽  
...  

Abstract Probe-based confocal laser endomicroscopy (pCLE) allows for real-time diagnosis of dysplasia and cancer in Barrett’s esophagus (BE) but is limited by low sensitivity. Even the gold standard of histopathology is hindered by poor agreement between pathologists. We deployed deep-learning-based image and video analysis in order to improve diagnostic accuracy of pCLE videos and biopsy images. Blinded experts categorized biopsies and pCLE videos as squamous, non-dysplastic BE, or dysplasia/cancer, and deep learning models were trained to classify the data into these three categories. Biopsy classification was conducted using two distinct approaches: a patch-level model and a whole-slide-image-level model. Gradient-weighted class activation maps (Grad-CAMs) were extracted from the pCLE and biopsy models in order to determine tissue structures deemed relevant by the models. 1970 pCLE videos, 897,931 biopsy patches, and 387 whole-slide images were used to train, test, and validate the models. In pCLE analysis, models achieved a high sensitivity for dysplasia (71%) and an overall accuracy of 90% for all classes. For biopsies at the patch level, the model achieved a sensitivity of 72% for dysplasia and an overall accuracy of 90%. The whole-slide-image-level model achieved a sensitivity of 90% for dysplasia and 94% overall accuracy. Grad-CAMs for all models showed activation in medically relevant tissue regions. Our deep learning models achieved high diagnostic accuracy for both pCLE-based and histopathologic diagnosis of esophageal dysplasia and its precursors, similar to human accuracy in prior studies. These machine learning approaches may improve accuracy and efficiency of current screening protocols.
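The Grad-CAM computation used to localize relevant tissue reduces to a few lines: pool the class-score gradients into per-channel weights, take the weighted sum of the last conv layer's feature maps, and apply a ReLU. The toy activations below are illustrative, not model outputs:

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heatmap from last-conv-layer activations A_k (K, H, W) and
    the gradients of the class score w.r.t. those activations (K, H, W)."""
    weights = gradients.mean(axis=(1, 2))              # alpha_k: pooled gradients
    cam = np.tensordot(weights, feature_maps, axes=1)  # sum_k alpha_k * A_k
    cam = np.maximum(cam, 0)                           # ReLU keeps positive evidence
    if cam.max() > 0:
        cam /= cam.max()                               # normalize to [0, 1]
    return cam

# Toy check: two 3x3 feature maps, gradients favoring only the first
A = np.stack([np.eye(3), np.ones((3, 3))])
G = np.stack([np.ones((3, 3)), np.zeros((3, 3))])
cam = grad_cam(A, G)
print(cam)  # highlights the diagonal pattern of the first map
```

In the study, upsampling such heatmaps onto the biopsy or pCLE frame is what let the authors verify that the models attend to medically relevant tissue regions rather than artifacts.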


2021 ◽  
Author(s):  
Isidro Lloret ◽  
José A. Troyano ◽  
Fernando Enríquez ◽  
Juan-José González-de-la-Rosa

2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence currently proposes that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we narrate various investigations that use deep learning algorithms to establish AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging-genomics investigations that leverage deep learning methods to forecast AD by incorporating both neuroimaging and genomics data. Moreover, we outline the limitations of recent deep learning AD investigations with neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.

