Prognostic analysis of histopathological images using pre-trained convolutional neural networks: application to hepatocellular carcinoma

PeerJ ◽  
2020 ◽  
Vol 8 ◽  
pp. e8668 ◽  
Author(s):  
Liangqun Lu ◽  
Bernie J. Daigle

Histopathological images contain rich phenotypic descriptions of the molecular processes underlying disease progression. Convolutional neural networks (CNNs), state-of-the-art image analysis techniques in computer vision, automatically learn representative features from such images which can be useful for disease diagnosis, prognosis, and subtyping. Hepatocellular carcinoma (HCC) is the sixth most common type of primary liver malignancy. Despite the high mortality rate of HCC, little previous work has made use of CNN models to explore the use of histopathological images for prognosis and clinical survival prediction of HCC. We applied three pre-trained CNN models—VGG 16, Inception V3 and ResNet 50—to extract features from HCC histopathological images. Sample visualization and classification analyses based on these features showed a very clear separation between cancer and normal samples. In a univariate Cox regression analysis, 21.4% and 16% of image features on average were significantly associated with overall survival (OS) and disease-free survival (DFS), respectively. We also observed significant correlations between these features and integrated biological pathways derived from gene expression and copy number variation. Using an elastic net regularized Cox proportional hazards model of OS constructed from Inception image features, we obtained a concordance index (C-index) of 0.789 and a significant log-rank test (p = 7.6E−18). We also performed unsupervised classification to identify HCC subgroups from image features. The optimal two subgroups discovered using Inception model image features showed significant differences in both OS (C-index = 0.628, p = 7.39E−07) and DFS (C-index = 0.558, p = 0.012).
Our work demonstrates the utility of extracting image features using pre-trained models, both for building accurate prognostic models of HCC and for highlighting significant correlations between these features, clinical survival, and relevant biological pathways. Image features extracted from HCC histopathological images using the pre-trained CNN models VGG 16, Inception V3 and ResNet 50 can accurately distinguish normal and cancer samples. Furthermore, these image features are significantly correlated with survival and relevant biological pathways.
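The concordance index reported throughout this abstract (e.g. C-index = 0.789 for the OS model) is Harrell's C-index: the fraction of comparable patient pairs whose predicted risk ordering agrees with their observed survival ordering. A minimal numpy sketch of the statistic follows; the function name and toy data are illustrative, not taken from the authors' code.

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: fraction of comparable patient pairs whose
    predicted risk ordering matches their observed survival ordering.
    time  : observed follow-up times
    event : 1 if the event (death/recurrence) was observed, 0 if censored
    risk  : model risk score (higher = worse predicted prognosis)"""
    time, event, risk = map(np.asarray, (time, event, risk))
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            # a pair is comparable only if i had the event and failed first
            if event[i] == 1 and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0      # correctly ordered pair
                elif risk[i] == risk[j]:
                    concordant += 0.5      # tied risks count half
    return concordant / comparable
```

A value of 1.0 means perfect ranking, 0.5 means chance-level ranking, which is why the reported 0.789 indicates substantial prognostic signal in the Inception features.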

2019 ◽  
Author(s):  
Liangqun Lu ◽  
Bernie Daigle

Abstract Background: Histopathological images contain rich phenotypic descriptions of the molecular processes underlying disease progression. Convolutional neural networks (CNNs), a state-of-the-art image analysis technique in computer vision, automatically learn representative features from such images which can be useful for disease diagnosis, prognosis, and subtyping. Despite hepatocellular carcinoma (HCC) being the sixth most common type of primary liver malignancy, with a high mortality rate, little previous work has made use of CNN models to delineate the importance of histopathological images in the diagnosis and clinical survival of HCC.
Results: We applied three pre-trained CNN models – VGG 16, Inception V3, and ResNet 50 – to extract features from HCC histopathological images. Visualization and classification based on these features showed clear separation between cancer and normal samples. In a univariate Cox regression analysis, 21.4% and 16% of image features on average were significantly associated with overall survival and disease-free survival, respectively. We also observed significant correlations between these features and integrated biological pathways derived from gene expression and copy number variation. Using an elastic net regularized Cox proportional hazards (CoxPH) model of overall survival built from Inception image features, we obtained a concordance index (C-index) of 0.789 and a significant log-rank test (p = 7.6E−18). We also performed unsupervised classification to identify HCC subgroups from image features. The optimal two subgroups discovered using Inception image features were significantly associated with both overall (C-index = 0.628 and p = 7.39E−07) and disease-free survival (C-index = 0.558 and p = 0.012).
Our results suggest the feasibility of feature extraction using pre-trained models, as well as the utility of the resulting features for building an accurate prognostic model of HCC and highlighting significant correlations with clinical survival and biological pathways.
Conclusions: The image features extracted from HCC histopathological images using the pre-trained CNN models VGG 16, Inception V3 and ResNet 50 can accurately distinguish normal and cancer samples. Furthermore, these image features are significantly correlated with relevant biological outcomes.
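The "elastic net regularized CoxPH model" named above minimizes the negative Cox partial log-likelihood plus a mixed L1/L2 penalty. A toy numpy re-implementation of that objective (Breslow form, no tie correction) is sketched below; the parameter names `lam` and `alpha` are our notation for the penalty strength and L1/L2 mix, not the authors' code, and a glmnet-style solver would minimize this function over `beta`.

```python
import numpy as np

def penalized_cox_loss(beta, X, time, event, lam=0.1, alpha=0.5):
    """Negative Cox partial log-likelihood (Breslow) plus an
    elastic-net penalty: lam * (alpha*|beta|_1 + (1-alpha)/2*|beta|_2^2).
    X     : (n_samples, n_features) covariate matrix (e.g. image features)
    time  : observed follow-up times
    event : 1 if the event was observed, 0 if censored"""
    X, time, event = map(np.asarray, (X, time, event))
    eta = X @ beta                           # linear predictor per sample
    loss = 0.0
    for i in np.flatnonzero(event):
        at_risk = time >= time[i]            # risk set at this event time
        loss -= eta[i] - np.log(np.sum(np.exp(eta[at_risk])))
    penalty = lam * (alpha * np.abs(beta).sum()
                     + 0.5 * (1.0 - alpha) * np.sum(beta ** 2))
    return loss + penalty
```

With beta = 0 the penalty vanishes and each event contributes log(size of its risk set), which gives a quick sanity check on the implementation.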


2021 ◽  
Author(s):  
Xinxin Chen ◽  
Wenxia Qiu ◽  
Xuekun Xie ◽  
Zefeng Chen ◽  
Zhiwei Han ◽  
...  

Abstract Background: This work was designed to establish and verify nomograms integrating clinicopathological characteristics with hematological biomarkers to predict both disease-free survival (DFS) and overall survival (OS) in patients with solitary hepatocellular carcinoma (HCC) following hepatectomy.
Methods: We retrospectively scrutinized data from 414 patients with a clinicopathological diagnosis of solitary HCC from Guangxi Medical University Cancer Hospital (Nanning, China) between January 2004 and December 2012. After randomly separating the samples in a 7:3 ratio into a training set and a validation set, Cox regression analysis was applied to the training set to develop two nomograms predicting 1- and 3-year DFS and 3- and 5-year OS. Discrimination and calibration were then estimated using Harrell's concordance index (C-index) and calibration curves, and internal validation was also assessed.
Results: In the training cohort, tumor diameter, tumor capsule, macrovascular invasion, and alpha-fetoprotein (AFP) were included in the DFS nomogram. Age, tumor diameter, tumor capsule, macrovascular invasion, microvascular invasion, and aspartate aminotransferase (AST) were included in the OS nomogram. The C-index was 0.691 (95% CI: 0.644−0.738) for the DFS nomogram and 0.713 (95% CI: 0.670−0.756) for the OS nomogram. The survival probability calibration curves displayed fine agreement between the predicted and observed ranges in both data sets.
Conclusion: Our nomograms, which combine clinicopathological features with hematological biomarkers, proved effective in predicting DFS and OS in solitary HCC patients following curative liver resection, and may therefore be useful for guiding individualized treatment and recurrence monitoring in these patients.
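A nomogram like the ones described above is built by rescaling each fitted Cox coefficient's contribution to the linear predictor onto a common 0–100 point axis, with the most influential covariate spanning the full axis. The sketch below shows that standard rescaling step in numpy; the covariate names and coefficient values are hypothetical, not the fitted values from this study.

```python
import numpy as np

def nomogram_points(betas, ranges):
    """Convert fitted Cox coefficients into nomogram point scales.
    betas  : dict {covariate: fitted log-hazard coefficient}
    ranges : dict {covariate: (min, max) observed in the cohort}
    Returns {covariate: total points spanned by that covariate's range},
    scaled so the most influential covariate spans 0-100 points."""
    # span of each covariate's contribution to the linear predictor
    spans = {k: abs(betas[k]) * (ranges[k][1] - ranges[k][0]) for k in betas}
    widest = max(spans.values())
    return {k: 100.0 * spans[k] / widest for k in betas}

# hypothetical example: AFP coefficient 0.5 over range 0-10,
# tumor diameter coefficient 0.2 over range 0-50 cm
points = nomogram_points({"afp": 0.5, "diameter": 0.2},
                         {"afp": (0, 10), "diameter": (0, 50)})
```

Here diameter's contribution spans twice that of AFP, so it gets the full 100-point axis and AFP gets a 50-point axis; a patient's total points are then mapped back to predicted survival probabilities.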


2022 ◽  
Vol 12 ◽  
Author(s):  
Shaodi Wen ◽  
Yuzhong Chen ◽  
Chupeng Hu ◽  
Xiaoyue Du ◽  
Jingwei Xia ◽  
...  

Background: Hepatocellular carcinoma (HCC) is the most common pathological type of primary liver cancer. The lack of prognostic indicators is one of the challenges in HCC. In this study, we investigated the combination of tertiary lymphoid structure (TLS) density and several systemic inflammation parameters as a prognostic indicator for HCC.
Materials and Methods: We retrospectively recruited 126 postoperative patients with primary HCC. Paraffin sections were collected for TLS density assessment. In addition, we collected systemic inflammation parameters from peripheral blood samples. We evaluated the prognostic value of these parameters for overall survival (OS) using Kaplan-Meier curves and univariate and multivariate Cox regression. Finally, we plotted a nomogram to predict the survival of HCC patients.
Results: We first found that TLS density was positively correlated with HCC patients' survival (HR = 0.16, 95% CI: 0.06−0.39, p < 0.0001), but the power of TLS density alone for survival prediction was limited (AUC = 0.776, 95% CI: 0.772−0.806). We therefore introduced several systemic inflammation parameters into the survival analysis and found that the neutrophil-to-lymphocyte ratio (NLR) was positively associated with OS in univariate Cox regression analysis. The combination of TLS density and NLR predicted patients' survival better (AUC = 0.800, 95% CI: 0.698−0.902, p < 0.001) than either indicator alone. Finally, we incorporated TLS density, NLR, and other parameters into the nomogram to provide a reproducible approach to survival prediction in HCC clinical practice.
Conclusion: The combination of TLS density and NLR was shown to be a good predictor of HCC patient survival. It also provides a novel direction for the evaluation of immunotherapies in HCC.
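The Kaplan-Meier curves used above estimate survival as a running product over event times: at each time with d events among n patients still at risk, the curve is multiplied by (1 − d/n). A minimal numpy sketch of that estimator follows; the toy data are ours, not the study cohort.

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier survival estimate from right-censored data.
    time  : observed follow-up times
    event : 1 if the event was observed, 0 if censored
    Returns (event_times, S) where S[k] is the estimated probability
    of surviving beyond event_times[k]."""
    time, event = np.asarray(time, float), np.asarray(event, int)
    event_times = np.unique(time[event == 1])    # distinct event times
    surv, s = [], 1.0
    for t in event_times:
        at_risk = np.sum(time >= t)              # still under follow-up at t
        deaths = np.sum((time == t) & (event == 1))
        s *= 1.0 - deaths / at_risk              # multiply in this step's factor
        surv.append(s)
    return event_times, np.array(surv)
```

The log-rank tests and univariate Cox regressions in the study then compare such curves between groups (e.g. high- vs. low-TLS-density patients).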


Author(s):  
N Seijdel ◽  
N Tsakmakidis ◽  
EHF De Haan ◽  
SM Bohte ◽  
HS Scholte

Abstract: Feedforward deep convolutional neural networks (DCNNs) are, under specific conditions, matching and even surpassing human performance in object recognition in natural scenes. This performance suggests that the analysis of a loose collection of image features could support the recognition of natural object categories, without dedicated systems to solve specific visual subtasks. Research in humans, however, suggests that while feedforward activity may suffice for sparse scenes with isolated objects, additional visual operations (‘routines’) that aid the recognition process (e.g. segmentation or grouping) are needed for more complex scenes. Linking human visual processing to the performance of DCNNs of increasing depth, we explored if, how, and when object information is differentiated from the backgrounds it appears on. To this end, we controlled the information in both objects and backgrounds, as well as the relationship between them, by adding noise, manipulating background congruence, and systematically occluding parts of the image. Results indicate that with an increase in network depth, there is an increase in the distinction between object and background information. For shallower networks, results indicated a benefit of training on segmented objects. Overall, these results indicate that, de facto, scene segmentation can be performed by a network of sufficient depth. We conclude that the human brain could perform scene segmentation in the context of object identification without an explicit mechanism, by selecting or “binding” features that belong to the object and ignoring other features, in a manner similar to a very deep convolutional neural network.
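The systematic occlusion manipulation mentioned above is commonly implemented by sliding an occluding patch across the image and measuring how much the model's class score drops at each position. A minimal numpy sketch of that procedure follows; `score_fn` is a placeholder for a DCNN forward pass (the study used deep networks, not the toy scorer shown here).

```python
import numpy as np

def occlusion_map(image, score_fn, patch=4, fill=0.0):
    """Slide a patch-sized occluder over a 2D image and record the drop
    in the model's score at each position (larger drop = the occluded
    region carried more information for the decision).
    score_fn : any callable image -> scalar class score."""
    h, w = image.shape
    base = score_fn(image)                       # score on the intact image
    heat = np.zeros((h - patch + 1, w - patch + 1))
    for i in range(heat.shape[0]):
        for j in range(heat.shape[1]):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i, j] = base - score_fn(occluded)
    return heat
```

Comparing such maps across networks of different depths is one way to quantify how strongly object regions, rather than background regions, drive the decision.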

