Deep Learning Based HPV Status Prediction for Oropharyngeal Cancer Patients

Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 786
Author(s):  
Daniel M. Lang ◽  
Jan C. Peeken ◽  
Stephanie E. Combs ◽  
Jan J. Wilkens ◽  
Stefan Bartzsch

Infection with the human papillomavirus (HPV) has been identified as a major risk factor for oropharyngeal cancer (OPC). HPV-related OPCs have been shown to be more radiosensitive and to carry a reduced risk of cancer-related death. Hence, the histological determination of a patient's HPV status is an essential diagnostic factor. We investigated the ability of deep learning models to detect HPV status from imaging. To overcome the problem of small medical datasets, we used a transfer learning approach. A 3D convolutional network pre-trained on sports video clips was fine-tuned so that the full 3D information in the CT images could be exploited. The video pre-trained model was able to differentiate HPV-positive from HPV-negative cases, with an area under the receiver operating characteristic curve (AUC) of 0.81 on an external test set. Compared with a 3D convolutional neural network (CNN) trained from scratch and a 2D architecture pre-trained on ImageNet, the video pre-trained model performed best. Deep learning models are capable of CT image-based HPV status determination. Video-based pre-training can improve training on 3D medical data, but further studies are needed for verification.
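
The AUC reported above can be read as the probability that a randomly chosen HPV-positive case is scored higher than a randomly chosen HPV-negative case. A minimal sketch of that rank-based (Mann-Whitney) computation, with illustrative labels and scores rather than the study's data:

```python
def roc_auc(labels, scores):
    """AUC as the probability a positive outranks a negative (ties count half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    pairs = len(pos) * len(neg)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / pairs

# Illustrative model scores for 2 HPV-negative (0) and 2 HPV-positive (1) cases
auc = roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```

An AUC of 0.5 corresponds to random ranking and 1.0 to perfect separation of the two classes.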

2021 ◽  
Vol 11 ◽  
Author(s):  
Yubizhuo Wang ◽  
Jiayuan Shao ◽  
Pan Wang ◽  
Lintao Chen ◽  
Mingliang Ying ◽  
...  

Background Our aim was to establish a deep learning radiomics method to preoperatively evaluate regional lymph node (LN) staging for hilar cholangiocarcinoma (HC) patients. Methods and Materials Of the 179 enrolled HC patients, 90 were pathologically diagnosed with lymph node metastasis. Quantitative radiomic features and deep learning features were extracted. An LN metastasis status classifier was developed by integrating a support vector machine, a high-performance deep learning radiomics signature, and three clinical characteristics. An LN metastasis stratification classifier (N1 vs. N2) was also proposed with subgroup analysis. Results The average areas under the receiver operating characteristic curve (AUCs) of the LN metastasis status classifier reached 0.866 in the training cohort and 0.870 in the external test cohorts. Meanwhile, the LN metastasis stratification classifier performed well in predicting the risk of LN metastasis, with an average AUC of 0.946. Conclusions Two classifiers derived from computed tomography images performed well in predicting LN staging in HC and will be reliable evaluation tools to improve decision-making.
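
The status classifier integrates a support vector machine with radiomic, deep learning, and clinical features. As an illustrative sketch only, toy data and a hand-rolled hinge-loss linear SVM stand in for the authors' pipeline; the feature-fusion step reduces to column-wise concatenation of the blocks:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimal linear SVM: sub-gradient descent on the hinge loss; y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # samples violating the margin
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy stand-ins for the three feature blocks (values invented for illustration)
radiomic = np.array([[0.2], [0.1], [0.9], [0.8]])
deep     = np.array([[0.3], [0.2], [0.7], [0.9]])
clinical = np.array([[0.0], [0.0], [1.0], [1.0]])

X = np.hstack([radiomic, deep, clinical])        # column-wise feature fusion
X = X - X.mean(axis=0)                           # center features
y = np.array([-1, -1, 1, 1])                     # LN metastasis: -1 absent, +1 present

w, b = train_linear_svm(X, y)
preds = np.sign(X @ w + b)
```

In practice one would use an established SVM implementation with kernel and regularization selection; the point here is only that heterogeneous feature blocks enter a single classifier side by side.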


2020 ◽  
Vol 34 (7) ◽  
pp. 717-730 ◽  
Author(s):  
Matthew C. Robinson ◽  
Robert C. Glen ◽  
Alpha A. Lee

Abstract Machine learning methods may have the potential to significantly accelerate drug discovery. However, the increasing rate of new methodological approaches being published in the literature raises the fundamental question of how models should be benchmarked and validated. We reanalyze the data generated by a recently published large-scale comparison of machine learning models for bioactivity prediction and arrive at a somewhat different conclusion. We show that the performance of support vector machines is competitive with that of deep learning methods. Additionally, using a series of numerical experiments, we question the relevance of area under the receiver operating characteristic curve as a metric in virtual screening. We further suggest that area under the precision–recall curve should be used in conjunction with the receiver operating characteristic curve. Our numerical experiments also highlight challenges in estimating the uncertainty in model performance via scaffold-split nested cross validation.
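
Since the authors argue for pairing the precision-recall curve with ROC analysis, a minimal sketch of average precision, the usual scalar summary of the PR curve, may help; this simple version ignores score ties, and the toy labels are illustrative:

```python
def average_precision(labels, scores):
    """Average precision: mean of the precision values at each true-positive rank."""
    order = sorted(range(len(labels)), key=lambda i: -scores[i])
    tp, ap = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if labels[i] == 1:
            tp += 1
            ap += tp / rank          # precision at this recall step
    return ap / sum(labels)

ap = average_precision([1, 0, 1], [0.9, 0.8, 0.7])  # -> (1/1 + 2/3) / 2
```

Unlike ROC AUC, this metric degrades sharply as false positives accumulate among the top-ranked compounds, which is why it is informative in heavily imbalanced virtual-screening settings.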


BDJ ◽  
2020 ◽  
Author(s):  
Vinod Patel ◽  
Dipesh Patel ◽  
Timothy Browning ◽  
Sheelen Patel ◽  
Mark McGurk ◽  
...  

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Jiarui Feng ◽  
Heming Zhang ◽  
Fuhai Li

Abstract Background Survival analysis is an important part of cancer studies. In addition to the existing Cox proportional hazards model, deep learning models have recently been proposed for survival prediction; these directly integrate multi-omics data from a large number of genes using fully connected dense neural network layers, which are hard to interpret. On the other hand, cancer signaling pathways are important and interpretable concepts that define the signaling cascades regulating cancer development and drug resistance. Thus, it is important to investigate potential associations between patient survival and individual signaling pathways, which can help domain experts understand how deep learning models make specific predictions. Results In this exploratory study, we investigated the relevance and influence of a set of core cancer signaling pathways in the survival analysis of cancer patients. Specifically, we built a simplified and partially biologically meaningful deep neural network, DeepSigSurvNet, for survival prediction. In the model, gene expression and copy number data for 1967 genes from 46 major signaling pathways were integrated. We applied the model to four types of cancer and investigated the influence of the 46 signaling pathways in each. Interestingly, the interpretable analysis identified distinct patterns of these signaling pathways, which are helpful in understanding their relevance to the prediction of cancer patients' survival time. These highly relevant signaling pathways, combined with inhibitors of other essential signaling pathways, can be novel targets for drug and drug-combination prediction to improve cancer patients' survival time. Conclusion The proposed DeepSigSurvNet model can facilitate the understanding of the implications of signaling pathways for cancer patients' survival by integrating multi-omics data and clinical factors.


Author(s):  
V. R. S. Mani

In this chapter, the author paints a comprehensive picture of different deep learning models used in different multi-modal image segmentation tasks. This chapter is an introduction for those new to the field, an overview for those working in the field, and a reference for those searching for literature on a specific application. Methods are classified according to the different types of multi-modal images and the corresponding types of convolution neural networks used in the segmentation task. The chapter starts with an introduction to CNN topology and describes various models like Hyper Dense Net, Organ Attention Net, UNet, VNet, Dilated Fully Convolutional Network, Transfer Learning, etc.


Sensors ◽  
2021 ◽  
Vol 21 (23) ◽  
pp. 7784
Author(s):  
Johan Wasselius ◽  
Eric Lyckegård Finn ◽  
Emma Persson ◽  
Petter Ericson ◽  
Christina Brogårdh ◽  
...  

Recent advances in stroke treatment have provided effective tools to successfully treat ischemic stroke, but a majority of patients still go untreated because they arrive at hospital too late. With modern stroke treatment, earlier arrival would greatly improve overall treatment results. This prospective study was performed to assess the capability of bilateral accelerometers, worn in bracelets 24/7, to detect unilateral arm paralysis, a hallmark symptom of stroke, early enough for patients to receive treatment. Classical machine learning algorithms as well as state-of-the-art deep neural networks were evaluated on detection times between 15 min and 120 min. Motion data were collected using triaxial accelerometer bracelets worn on both arms for 24 h. Eighty-four stroke patients with unilateral arm motor impairment and 101 healthy subjects participated in the study. Accelerometer data were divided into data windows of different lengths and analyzed using multiple machine learning algorithms. The results show that, based on wrist-worn accelerometers, all algorithms performed well in separating the two groups early enough to be clinically relevant. The two evaluated deep learning models, a fully convolutional network and InceptionTime, performed better than the classical machine learning models, with AUC scores of 0.947–0.957 on 15 min data windows and up to 0.993–0.994 on 120 min data windows. Window lengths longer than 90 min only marginally improved performance. The difference in performance between the deep learning models and the classical models was statistically significant according to a non-parametric Friedman test followed by a post-hoc Nemenyi test. Introduction of wearable stroke detection devices may dramatically increase the proportion of stroke patients eligible for revascularization and shorten the time to treatment. Since the treatment effect is highly time-dependent, early stroke detection may dramatically improve stroke outcomes.
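
The windowing step ("data windows of different lengths") can be sketched as a plain sliding-window split over the accelerometer samples; the window length and step below are illustrative, not the study's settings:

```python
def sliding_windows(samples, window_len, step):
    """Fixed-length, possibly overlapping windows; a trailing partial window is dropped."""
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]

# 10 samples, windows of 4 with 50% overlap -> windows starting at 0, 2, 4, 6
windows = sliding_windows(list(range(10)), window_len=4, step=2)
```

Each window then becomes one classification example, so longer windows trade detection latency for more signal per decision, which matches the reported gain from 15 min to 120 min windows.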


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2074
Author(s):  
Masayuki Tsuneki ◽  
Fahdi Kanavati

Colorectal poorly differentiated adenocarcinoma (ADC) is known to have a poor prognosis compared with well to moderately differentiated ADC. The frequency of poorly differentiated ADC is relatively low (usually less than 5% of colorectal carcinomas). Histopathological diagnosis based on endoscopic biopsy specimens is currently the most cost-effective method to perform as part of colonoscopic screening in average-risk patients, and it is an area that could benefit from AI-based tools to aid pathologists in their clinical workflows. In this study, we trained deep learning models to classify poorly differentiated colorectal ADC from Whole Slide Images (WSIs) using a simple transfer learning method. We evaluated the models on a combination of test sets obtained from five distinct sources, achieving areas under the receiver operating characteristic curve (ROC AUCs) of up to 0.95 on 1799 test cases.


2020 ◽  
pp. 666-679 ◽  
Author(s):  
Xuhong Zhang ◽  
Toby C. Cornish ◽  
Lin Yang ◽  
Tellen D. Bennett ◽  
Debashis Ghosh ◽  
...  

PURPOSE We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)–stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning–based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS We considered 2 different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by 1 pathologist. We developed a novel deep learning–based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, and nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, such as fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression networks (FCRN) A and B, and fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. Our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRN A (63.4% and 55.8%), and FCRN B (68.2% and 60.6%) in terms of F1 score and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost performance. CONCLUSION This study demonstrates that deep learning–based domain adaptation is helpful for nucleus recognition in Ki-67 IHC–stained images when target data annotations are not available, and it would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.
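
The F1 scores quoted above combine precision and recall into one number; a minimal sketch from raw detection counts (the counts below are invented for illustration, not taken from the study):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision (tp/(tp+fp)) and recall (tp/(tp+fn))."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# e.g. 8 correctly detected nuclei, 2 false detections, 2 missed -> F1 = 0.8
score = f1_score(8, 2, 2)
```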


2021 ◽  
Author(s):  
Leonid Joffe

Deep learning models for tabular data are restricted to a specific table format. Computer vision models, on the other hand, have broader applicability: they work on all images and can learn universal features. This allows them to be trained on enormous corpora and to have very wide transferability and applicability. Inspired by these properties, this work presents an architecture that aims to capture useful patterns across arbitrary tables. The model is trained on randomly sampled subsets of features from a table, processed by a convolutional network. This internal representation captures feature interactions that appear in the table. Experimental results show that the embeddings produced by this model are useful and transferable across many commonly used machine learning benchmark datasets. Specifically, using the embeddings produced by the network as additional features improves the performance of a number of classifiers.
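
The training scheme described ("randomly sampled subsets of features from a table") can be sketched as drawing random column-index subsets; the function below is an illustrative reconstruction, not the paper's code:

```python
import random

def sample_feature_subsets(n_features, subset_size, n_subsets, seed=0):
    """Draw sorted random feature-index subsets from a table with n_features columns."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subset_size))
            for _ in range(n_subsets)]

# Three random 5-column views of a 20-column table
subsets = sample_feature_subsets(20, 5, 3, seed=42)
```

Each sampled view would then be fed through the shared convolutional encoder, forcing the learned representation to be independent of any fixed column layout.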

