Pure and Hybrid Deep Learning Models can Predict Pathologic Tumor Response to Neoadjuvant Therapy in Pancreatic Adenocarcinoma: A Pilot Study

2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model can predict tumor response to neoadjuvant therapy. Methods Patients with pancreatic cancer who received neoadjuvant therapy prior to pancreatoduodenectomy between November 2009 and January 2018 were identified. College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to create a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a decrease in carbohydrate antigen 19-9 (CA19-9) of 10%. Accuracy was assessed by area under the curve. Results A total of 81 patients were included in the study, divided between PR (333 images) and NR (443 images). The pure imaging model had an area under the curve (AUC) of .738 (P < .001), whereas the hybrid model had an AUC of .785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of .564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy in patients with pancreatic adenocarcinoma, and the model improves with the incorporation of decreases in serum CA19-9. Further model development is needed before clinical application.
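The discrimination metric reported throughout these studies is the AUC. As a minimal illustration (not the authors' code), the AUC of a binary classifier can be computed directly from its scores via the Mann-Whitney statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one, with ties counting half.

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U statistic: fraction of
    positive/negative score pairs where the positive wins
    (ties count 0.5). Equivalent to the area under the ROC curve."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Perfect separation gives 1.0; fully overlapping scores give 0.5.
print(auc([0.9, 0.8, 0.7], [0.2, 0.1]))  # 1.0
```

Values such as the .738 (pure) and .785 (hybrid) AUCs above would come from applying this kind of calculation to held-out model scores.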

HPB ◽  
2020 ◽  
Vol 22 ◽  
pp. S38
Author(s):  
M. Watson ◽  
M. Baimas-George ◽  
K. Murphy ◽  
R. Pickens ◽  
D. Iannitti ◽  
...  

BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives The microscopic evaluation of slides has gradually been moving toward fully digital workflows in recent years, opening the door to computer-aided diagnosis. It is worthwhile to know how similar deep learning models are to pathologists before putting them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed, iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. Model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an additional 168 slides (111 with adenoma) collected from two other hospitals. Results The deep learning model achieved an area under the curve of 0.92 and a slide-level accuracy of over 90% on the slides from the two other hospitals. Its performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance in the diagnostic thinking process between the deep learning model and pathologists. Conclusions The deep learning model for colorectal adenoma diagnosis behaves quite similarly to pathologists: it is on par with their performance, makes similar mistakes and follows similar reasoning logic. Meanwhile, it achieves high accuracy on slides collected from different hospitals with significant staining variations.
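The study reports slide-level accuracy on top of a pixel-wise segmentation model. One common way to turn pixel predictions into a slide-level call (the paper does not specify its rule, so the thresholds here are illustrative assumptions) is to flag the slide when the predicted adenoma area exceeds a small fraction of the tissue:

```python
def slide_is_adenoma(pixel_probs, pixel_thresh=0.5, area_thresh=0.01):
    """Toy slide-level decision rule on top of a segmentation model:
    call the slide positive when the fraction of pixels predicted as
    adenoma (prob >= pixel_thresh) exceeds area_thresh.
    Both thresholds are hypothetical, not taken from the paper."""
    positive = sum(1 for p in pixel_probs if p >= pixel_thresh)
    return positive / len(pixel_probs) >= area_thresh
```

For example, a slide where 5% of pixels are confidently predicted as adenoma would be flagged, while a slide with only scattered noise below the pixel threshold would not.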


2020 ◽  
Author(s):  
Chih-Min Liu ◽  
Chien-Liang Liu ◽  
Kai-Wen Hu ◽  
Vincent S. Tseng ◽  
Shih-Lin Chang ◽  
...  

BACKGROUND Brugada syndrome, a major cause of sudden cardiac death in young people, is a rare inherited arrhythmia with a unique electrocardiogram (ECG) signature (the type 1 Brugada ECG pattern). Automatic screening for the Brugada ECG pattern by a deep learning model offers the chance to identify these patients early, allowing them to receive life-saving therapy. OBJECTIVE To develop a deep learning-enabled ECG model for diagnosing Brugada syndrome. METHODS A total of 552 ECGs (276 with a type 1 Brugada ECG pattern and 276 randomly retrieved non-Brugada ECGs, for one-to-one allocation) were extracted from the hospital-based ECG database for a two-stage analysis with a deep learning model. We first trained the network to identify the right bundle branch block (RBBB) pattern and then transferred this first-stage learning to the second task of diagnosing the type 1 Brugada ECG pattern. The diagnostic performance of the deep learning model was compared with that of board-certified practicing cardiologists, and the model was also validated on an independent international ECG dataset. RESULTS The area under the curve (AUC) of the deep learning model in diagnosing the type 1 Brugada ECG pattern was 0.96 (sensitivity: 88.4%, specificity: 89.1%). The sensitivity and specificity of the cardiologists were 62.7±17.8% and 98.5±3.0%, respectively. The diagnoses by the deep learning model were highly consistent with the standard diagnoses (kappa coefficient: 0.78; McNemar test, P = 0.86), whereas the diagnoses by the cardiologists differed significantly from the standard diagnoses, with only moderate consistency (kappa coefficient: 0.60; McNemar test, P = 2.35×10^-22). In the international validation, the AUC of the deep learning model for diagnosing the type 1 Brugada ECG pattern was 0.99 (sensitivity: 85.7%, specificity: 100.0%).
CONCLUSIONS The deep learning-enabled ECG model for diagnosing Brugada syndrome is a robust screening tool with better diagnostic sensitivity than that of cardiologists.
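The agreement statistics above are kappa coefficients. As a minimal, self-contained sketch (not the authors' code), Cohen's kappa for two binary raters measures agreement beyond what chance alone would produce:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two binary (0/1) raters of equal length:
    (observed agreement - chance agreement) / (1 - chance agreement).
    Chance agreement comes from each rater's marginal positive rate."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa1 = sum(rater_a) / n
    pb1 = sum(rater_b) / n
    expected = pa1 * pb1 + (1 - pa1) * (1 - pb1)
    return (observed - expected) / (1 - expected)
```

Identical ratings give kappa = 1.0; agreement no better than chance gives kappa = 0, which is why the model's 0.78 against the standard diagnoses indicates substantially better consistency than the cardiologists' 0.60.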


2018 ◽  
Vol 36 (4_suppl) ◽  
pp. 266-266
Author(s):  
Sunyoung S. Lee ◽  
Jin Cheon Kim ◽  
Jillian Dolan ◽  
Andrew Baird

266 Background: The characteristic histological feature of pancreatic adenocarcinoma (PAD) is extensive desmoplasia alongside leukocytes and cancer-associated fibroblasts. Desmoplasia is a known barrier to the absorption and penetration of therapeutic drugs. Stromal cells are key elements of the clinical response to chemotherapy and immunotherapy, but few models exist to analyze the spatial and architectural elements that compose the complex tumor microenvironment in PAD. Methods: We created a deep learning algorithm to analyze images and quantify cells and fibrotic tissue. Histopathology slides of PAD patients (pts) were then used to automate the recognition and mapping of adenocarcinoma cells, leukocytes, and fibroblasts, and to grade desmoplasia, defined as the ratio of the area of fibrosis to that of the tumor gland. This information was correlated with mutational burden, defined as mutations (mts) per megabase (mb) for each pt. Results: Histopathology slides (H&E stain) of 126 pts were obtained from The Cancer Genome Atlas (TCGA) and analyzed with the deep learning model. The pt with the largest mutational burden (733 mts/mb, n = 1 pt) showed the largest number of leukocytes (585/mm2). Those with the smallest mutational burden (0 mts/mb, n = 16 pts) showed the fewest leukocytes (median, 14/mm2). Mutational burden was linearly proportional to the number of leukocytes (R2 of 0.7772); the pt with a mutational burden of 733 was excluded as an outlier. No statistically significant difference in the number of fibroblasts, degree of desmoplasia, or thickness of the first fibrotic layer (the smooth muscle actin-rich layer outside the tumor gland) was found among pts of varying mutational burden. The median distance from a tumor gland to a leukocyte was inversely proportional to the number of leukocytes in a 1 mm2 box with a tumor gland at the center.
Conclusions: A deep learning model enabled automated quantification and mapping of desmoplasia and of stromal and malignant cells, revealing the spatial and architectural relationships of these cells in PAD pts with varying mutational burdens. Further biomarker-driven studies in the context of immunotherapy and anti-fibrosis therapy are warranted.
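The linear relationship between mutational burden and leukocyte count is summarized by an R² of 0.7772. A minimal sketch of the coefficient of determination for a simple least-squares line (the data below are made up for illustration, not from the study):

```python
def r_squared(xs, ys):
    """R^2 for a simple linear least-squares fit of ys on xs,
    computed as the squared Pearson correlation:
    (sum of co-deviations)^2 / (sum sq dev x * sum sq dev y)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# A perfectly linear relationship gives R^2 = 1.0.
print(r_squared([0, 1, 2, 3], [5, 7, 9, 11]))  # 1.0
```

An R² near 0.78, as reported, means roughly three quarters of the variance in leukocyte counts tracks mutational burden linearly.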


2020 ◽  
Author(s):  
Rui Cao ◽  
Fan Yang ◽  
Si-Cong Ma ◽  
Li Liu ◽  
Yan Li ◽  
...  

ABSTRACT Background Microsatellite instability (MSI) is a negative prognostic factor for colorectal cancer (CRC) and can be used as a predictor of immunotherapy success across cancers. However, current MSI identification methods are not available for all patients. We propose an ensemble multiple instance learning (MIL)-based deep learning model to predict MSI status directly from histopathology images. Design Two cohorts of patients were collected: 429 from The Cancer Genome Atlas (TCGA-COAD) and 785 from a self-collected Asian dataset (Asian-CRC). The initial model was developed and validated on TCGA-COAD, then generalised to Asian-CRC through transfer learning. The pathological signatures extracted by the model were associated with genotypes for model interpretation. Results A model called Ensembled Patch Likelihood Aggregation (EPLA) was developed on the TCGA-COAD training set with two consecutive stages: patch-level prediction and WSI-level prediction. The EPLA model achieved an area under the curve (AUC) of 0.8848 on the TCGA-COAD test set, outperforming the state-of-the-art approach, and an AUC of 0.8504 on Asian-CRC after transfer learning. Furthermore, the five pathological imaging signatures identified by the model are associated with genomic and transcriptomic profiles, which makes the MIL model interpretable; the model recognises pathological signatures related to mutation burden, DNA repair pathways, and immunity. Conclusion Our MIL-based deep learning model effectively predicts MSI from histopathology images and is transferable to a new patient cohort. The interpretability of our model through association with genomic and transcriptomic biomarkers lays the foundation for prospective clinical research.
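The two-stage structure described here (patch-level prediction, then WSI-level aggregation) can be sketched in a few lines. The real EPLA model ensembles learned aggregators; the plain pooling below is only a stand-in assumption to show how per-patch MSI likelihoods become one slide-level score:

```python
def wsi_msi_probability(patch_probs, top_k=None):
    """Second-stage aggregation sketch for a MIL pipeline: pool
    patch-level MSI likelihoods into a single whole-slide score.
    With top_k set, only the k highest-scoring patches are averaged
    (a common MIL pooling choice); otherwise all patches are averaged.
    This is illustrative, not the paper's actual aggregator."""
    probs = sorted(patch_probs, reverse=True)
    if top_k is not None:
        probs = probs[:top_k]
    return sum(probs) / len(probs)
```

For instance, max pooling (`top_k=1`) makes a single strongly MSI-like patch dominate the slide score, whereas full averaging dilutes it across the tissue.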


2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e20601-e20601 ◽  
Author(s):  
Yi Yang ◽  
Jiancheng Yang ◽  
Yuxiang Ye ◽  
Tian Xia ◽  
Shun Lu

e20601 Background: Manual application of length-based tumor response criteria is the standard of care for assessing metastatic tumor response. It is technically challenging, time-consuming, and associated with low reproducibility. In this study, we present a novel automatic Deep Neural Network (DNN)-based segmentation method for assessing tumor progression under immunotherapy; in a next stage, AI will assist physicians in assessing pseudo-progression. Methods: A dataset of 39 lung cancer patients with 156 computed tomography (CT) scans was used for model training and validation. A 3D segmentation DNN, DenseSharp, was trained on CT scans of tumors with manually delineated volumes of interest (VOIs) as ground truth. The trained model was subsequently used to estimate the volumes of target lesions via 16 sliding windows. We refer to progression-free survival (PFS) that considers only tumor size as PFS-T. PFS-T assessed by longest tumor diameter (PFS-Tdiam), by tumor volume (PFS-Tvol), and by predicted tumor volume (PFS-Tpred-vol) was compared with standard PFS (as assessed by one junior and one senior clinician). Tumor progression was defined as a > 20% increase in the longest tumor diameter or a > 50% increase in tumor volume. Effective treatment was defined as a PFS of > 60 days after immunotherapy. Results: In a 4-fold cross-validation test, the DenseSharp segmentation network achieved a mean per-class intersection over union (mIoU) of 80.1%. The effectiveness rates of immunotherapy assessed using PFS-Tdiam (32/39, 82.1%), PFS-Tvol (33/39, 84.6%), and PFS-Tpred-vol (32/39, 82.1%) were the same as with standard PFS. The agreement between PFS-Tvol and PFS-Tpred-vol was 97.4% (38/39). Evaluation time with the deep learning model, implemented in PyTorch 0.4.1 on a GTX 1080 GPU, was roughly a hundred-fold faster than manual evaluation (1.42 s vs. 5-10 min per patient).
Conclusions: In this study, the DNN-based model demonstrated fast and stable performance for tumor progression evaluation. Automatic volumetric measurement of tumor lesions enabled by deep learning offers the potential for more efficient, objective, and sensitive measurement than linear measurement by clinicians.
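The progression and effectiveness definitions stated in the abstract translate directly into code. This sketch encodes only those stated thresholds; everything else about the pipeline is out of scope:

```python
def progressed(diam0, diam1, vol0, vol1):
    """Progression rule from the abstract: > 20% increase in the
    longest tumor diameter OR > 50% increase in tumor volume,
    comparing follow-up (diam1, vol1) against baseline (diam0, vol0)."""
    return diam1 > 1.2 * diam0 or vol1 > 1.5 * vol0

def treatment_effective(pfs_days):
    """Effective treatment per the abstract: PFS > 60 days
    after immunotherapy."""
    return pfs_days > 60
```

For example, a lesion growing from 10 mm to 13 mm in longest diameter counts as progression (30% > 20%), while 10 mm to 11 mm with a 20% volume increase does not cross either threshold.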


Cancers ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 2284
Author(s):  
Han Gyul Yoon ◽  
Wonjoong Cheon ◽  
Sang Woon Jeong ◽  
Hye Seung Kim ◽  
Kyunga Kim ◽  
...  

This study aimed to investigate the performance of a deep learning-based survival-prediction model, which predicts the overall survival (OS) time of glioblastoma patients who have received surgery followed by concurrent chemoradiotherapy (CCRT). The medical records of glioblastoma patients who had received surgery and CCRT between January 2011 and December 2017 were retrospectively reviewed. Based on our inclusion criteria, 118 patients were selected and semi-randomly allocated to training and test datasets (3:1 ratio, respectively). A convolutional neural network-based deep learning model was trained with magnetic resonance imaging (MRI) data and clinical profiles to predict OS. The MRI data comprised four pulse sequences (22 slices each); for each pulse sequence, a physician selected nine images based on the slice showing the largest extent of glioblastoma. The clinical profiles consist of personal, genetic, and treatment factors. The concordance index (C-index) and the integrated area under the curve (iAUC) of each model's time-dependent AUC curve were calculated to evaluate the performance of the survival-prediction models. The model that incorporated clinical and radiomic features showed a higher C-index (0.768 (95% confidence interval (CI): 0.759, 0.776)) and iAUC (0.790 (95% CI: 0.783, 0.797)) than the model using clinical features alone (C-index = 0.693 (95% CI: 0.685, 0.701); iAUC = 0.723 (95% CI: 0.716, 0.731)) and the model using radiomic features alone (C-index = 0.590 (95% CI: 0.579, 0.600); iAUC = 0.614 (95% CI: 0.607, 0.621)). These improvements in the C-indexes and iAUCs were validated with 1000-fold bootstrapping; all were statistically significant (p < 0.001). This study suggests the synergistic benefits of using both clinical and radiomic parameters. Furthermore, it indicates the potential of multi-parametric deep learning models for the survival prediction of glioblastoma patients.
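The headline metric here is the concordance index. A minimal sketch of Harrell's C-index (not the authors' code): over all comparable patient pairs, it is the fraction where the patient predicted to be at higher risk actually experienced the event earlier, with tied risk scores counting half.

```python
def c_index(times, events, risks):
    """Harrell's concordance index for right-censored survival data.
    A pair (i, j) is comparable when i had the event (events[i] == 1)
    and died earlier (times[i] < times[j]); it is concordant when the
    model assigned i the higher risk. Ties in risk count 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect ordering, which frames the reported 0.768 for the combined clinical-plus-radiomic model.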


Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1147
Author(s):  
Hyun-Il Kim ◽  
Yuna Kim ◽  
Bomin Kim ◽  
Dae Youp Shin ◽  
Seong Jae Lee ◽  
...  

Kinematic analysis of the hyoid bone in a videofluoroscopic swallowing study (VFSS) is important for assessing dysphagia. However, calibrating hyoid bone movement is time-consuming, and its reliability shows wide variation. Computer-assisted analysis has been studied to improve the efficiency and accuracy of hyoid bone identification and tracking, but its performance has been limited. In this study, we aimed to design a robust network that can track hyoid bone movement automatically without human intervention. Using 69,389 frames from 197 VFSS files as the dataset, a deep learning model for detection and trajectory prediction was constructed and trained with the BiFPN-U-Net(T) network. The present model showed improved performance compared with previous models: an area under the curve (AUC) of 0.998 for pixelwise accuracy, an object detection accuracy of 99.5%, and a Dice similarity of 90.9%. The bounding box detection performance for the hyoid bone and reference objects was superior to that of other models, with a mean average precision of 95.9%. The estimation of the distance of hyoid bone movement also showed higher accuracy. The deep learning model proposed in this study could be used to detect and track the hyoid bone more efficiently and accurately in VFSS analysis.
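The Dice similarity reported above is a standard segmentation overlap metric. As a self-contained sketch (masks flattened to 0/1 lists for simplicity):

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|). 1.0 means perfect overlap,
    0.0 means no overlap."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total
```

Note that Dice weights overlap more generously than IoU (Dice = 2·IoU / (1 + IoU)), so a Dice of 90.9% corresponds to an IoU of roughly 83%.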


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fahdi Kanavati ◽  
Gouji Toyokawa ◽  
Seiya Momosaki ◽  
Hiroaki Takeoka ◽  
Masaki Okamoto ◽  
...  

Abstract The differentiation between major histological types of lung cancer, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC), and small-cell lung cancer (SCLC), is of crucial importance for determining optimum cancer treatment. Hematoxylin and eosin (H&E)-stained slides of small transbronchial lung biopsies (TBLB) are one of the primary sources for making a diagnosis; however, a subset of cases present a challenge for pathologists to diagnose from H&E-stained slides alone, and these either require further immunohistochemistry or are deferred to surgical resection for definitive diagnosis. We trained a deep learning model to classify H&E-stained whole slide images (WSIs) of TBLB specimens into ADC, SCC, SCLC, and non-neoplastic using a training set of 579 WSIs. The trained model was capable of classifying an independent test set of 83 challenging indeterminate cases with a receiver operating characteristic area under the curve (AUC) of 0.99. We further evaluated the model on four independent test sets, one TBLB and three surgical, with a combined total of 2407 WSIs, demonstrating highly promising results with AUCs ranging from 0.94 to 0.99.


2021 ◽  
Author(s):  
Ritika Nandi ◽  
Manjunath Mulimani

Abstract In this paper, a hybrid deep learning model is proposed for the detection of coronavirus from chest X-ray images. The hybrid deep learning model is a combination of ResNet50 and MobileNet. Both ResNet50 and MobileNet are lightweight Deep Neural Networks (DNNs) and can be used on low-resource Personal Digital Assistants (PDAs) for quick detection of COVID-19 infection. The performance of the proposed hybrid model is evaluated on two publicly available COVID-19 chest X-ray datasets. Both datasets include normal, pneumonia, and coronavirus-infected chest X-rays. Results show that the proposed hybrid model is more suitable for COVID-19 detection and achieves the highest recognition accuracy on both datasets.
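The abstract does not specify how the two backbones are fused, so the weighted averaging of per-class probabilities below is purely an assumed fusion scheme (one common alternative is concatenating penultimate-layer features before a shared classifier):

```python
def hybrid_predict(resnet_probs, mobilenet_probs, weight=0.5):
    """Hypothetical late-fusion rule for a two-backbone hybrid:
    blend the class-probability vectors from ResNet50 and MobileNet
    with a mixing weight, then take the argmax. The class order and
    the averaging scheme are assumptions, not from the paper."""
    classes = ["normal", "pneumonia", "covid19"]
    blended = [weight * r + (1 - weight) * m
               for r, m in zip(resnet_probs, mobilenet_probs)]
    return classes[blended.index(max(blended))]
```

With equal weighting, a case both backbones score highest for COVID-19 is labelled `"covid19"`; the weight could be tuned on a validation split to favour the stronger backbone.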

