A Precision Diagnostic Framework of Renal Cell Carcinoma on Whole-Slide Images using Deep Learning

Author(s):  
Jialun Wu ◽  
Ruonan Zhang ◽  
Tieliang Gong ◽  
Xinrui Bao ◽  
Zeyu Gao ◽  
...  
2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Hisham Abdeltawab ◽  
Fahmi Khalifa ◽  
Mohammed Mohammed ◽  
Liang Cheng ◽  
Dibson Gondim ◽  
...  

Abstract
Renal cell carcinoma is the most common type of kidney cancer. There are several subtypes of renal cell carcinoma with distinct clinicopathologic features. Among these subtypes, clear cell renal cell carcinoma is the most common and tends to portend a poor prognosis. In contrast, clear cell papillary renal cell carcinoma has an excellent prognosis. These two subtypes are primarily classified based on histopathologic features. However, a subset of cases can have a significant degree of histopathologic overlap. In cases with ambiguous histologic features, the correct diagnosis depends on the pathologist’s experience and the use of immunohistochemistry. We propose a new method to address this diagnostic task based on a deep learning pipeline for automated classification. The model can detect tumor and non-tumor portions of the kidney and classify the tumor as either clear cell renal cell carcinoma or clear cell papillary renal cell carcinoma. Our framework consists of three convolutional neural networks, and the kidney whole-slide images were divided into patches of three different sizes for input into the networks. Our approach provides both patchwise and pixelwise classification. The dataset consists of 64 kidney whole-slide images. Our framework produces an image map that classifies the slide at the pixel level. Furthermore, we applied generalized Gauss-Markov random field smoothing to maintain consistency in the map. Our approach classified the four classes accurately and surpassed other state-of-the-art methods such as ResNet (pixel accuracy: 0.89 for ResNet-18 vs. 0.92 for the proposed method). We conclude that deep learning has the potential to augment the pathologist’s capabilities by providing automated classification of histopathological images.
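The abstract above outlines a multi-scale design: three CNNs, each operating on a different patch size, whose patch-level outputs are fused into a pixel-level class map and then smoothed. The sketch below illustrates only that fusion step; the patch sizes, the placeholder CNNs, and the use of simple probability accumulation in place of the paper's generalized Gauss-Markov random field smoothing are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): fusing per-patch CNN predictions
# from three patch scales into a pixel-level class map for a whole-slide image.
import numpy as np
import torch
import torch.nn as nn

NUM_CLASSES = 4                      # hypothetical: ccRCC, ccpRCC, non-tumor kidney, background
PATCH_SIZES = [128, 256, 512]        # hypothetical multi-scale patch sizes

def make_cnn() -> nn.Module:
    """Tiny placeholder CNN; one instance per patch scale."""
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, NUM_CLASSES),
    )

models = {size: make_cnn().eval() for size in PATCH_SIZES}

def pixelwise_map(slide: np.ndarray, stride: int = 64) -> np.ndarray:
    """Accumulate patch-level class probabilities from all three scales into a per-pixel label map."""
    h, w, _ = slide.shape
    votes = np.zeros((NUM_CLASSES, h, w), dtype=np.float32)
    for size, model in models.items():
        for y in range(0, h - size + 1, stride):
            for x in range(0, w - size + 1, stride):
                patch = slide[y:y + size, x:x + size]
                t = torch.from_numpy(patch).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                with torch.no_grad():
                    probs = torch.softmax(model(t), dim=1)[0].numpy()
                # spread the patch-level probabilities over every pixel the patch covers
                votes[:, y:y + size, x:x + size] += probs[:, None, None]
    # the paper applies generalized Gauss-Markov random field smoothing here;
    # this sketch simply takes the argmax of the accumulated votes
    return votes.argmax(axis=0)
```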


Urology ◽  
2020 ◽  
Vol 144 ◽  
pp. 152-157
Author(s):  
Michael Fenstermaker ◽  
Scott A. Tomlins ◽  
Karandeep Singh ◽  
Jenna Wiens ◽  
Todd M. Morgan

Author(s):  
Mohammed Lamine Benomar ◽  
Nesma Settouti ◽  
Eric Debreuve ◽  
Xavier Descombes ◽  
Damien Ambrosetti

2019 ◽  
Vol 44 (6) ◽  
pp. 2009-2020 ◽  
Author(s):  
Heidi Coy ◽  
Kevin Hsieh ◽  
Willie Wu ◽  
Mahesh B. Nagarajan ◽  
Jonathan R. Young ◽  
...  

2021 ◽  
Vol 11 ◽  
Author(s):  
Teng Zuo ◽  
Yanhua Zheng ◽  
Lingfeng He ◽  
Tao Chen ◽  
Bin Zheng ◽  
...  

Objectives
This study was conducted to design and develop a framework utilizing deep learning (DL) to differentiate papillary renal cell carcinoma (PRCC) from chromophobe renal cell carcinoma (ChRCC) using convolutional neural networks (CNNs) on a small set of computed tomography (CT) images, and to provide a feasible method that can run on lightweight devices.
Methods
Training and validation datasets were established from radiological, clinical, and pathological data exported from the radiology, urology, and pathology departments. Pathology reports were reviewed to determine the pathological subtype, which served as the gold standard. Six CNN-based models were trained and validated to differentiate the two subtypes. A separate test dataset of six new cases and four cases from The Cancer Imaging Archive (TCIA) was used to validate the best model and to benchmark manual classification by abdominal radiologists. Objective evaluation indexes [accuracy, sensitivity, specificity, receiver operating characteristic (ROC) curve, and area under the curve (AUC)] were calculated to assess model performance.
Results
The CT image sequences of 70 patients were segmented and validated by two experienced abdominal radiologists. The best model achieved 96.8640% accuracy (99.3794% sensitivity and 94.0271% specificity) on the validation set, and 100% case accuracy and 93.3333% image accuracy on the test set. Manual classification achieved 85% accuracy (100% sensitivity and 70% specificity) on the test set.
Conclusions
This framework demonstrates that DL models could help reliably predict the subtypes of PRCC and ChRCC.
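The evaluation indexes listed in the Methods (accuracy, sensitivity, specificity, AUC) and the separate case-level vs. image-level accuracies in the Results can be computed with standard tooling. The sketch below is a hedged illustration of that scoring step only; the function names, the 0.5 decision threshold, and the majority-vote aggregation per case are assumptions, not the study's code.

```python
# Minimal sketch (assumptions, not the authors' pipeline): scoring a binary
# PRCC (1) vs. ChRCC (0) classifier and aggregating image-level predictions
# into one case-level prediction per patient.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def evaluate(y_true, y_prob, threshold=0.5):
    """Image-level accuracy, sensitivity, specificity, and AUC."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall for PRCC
        "specificity": tn / (tn + fp),   # recall for ChRCC
        "auc": roc_auc_score(y_true, y_prob),
    }

def case_accuracy(image_labels, image_probs, case_ids, threshold=0.5):
    """Majority vote over each patient's images (all images of a case share one label)."""
    correct, total = 0, 0
    for case in set(case_ids):
        idx = [i for i, c in enumerate(case_ids) if c == case]
        vote = np.mean([image_probs[i] >= threshold for i in idx]) >= 0.5
        correct += int(vote == image_labels[idx[0]])
        total += 1
    return correct / total

# toy usage: three PRCC images from case A, two ChRCC images from case B
print(evaluate([1, 1, 1, 0, 0], [0.9, 0.7, 0.4, 0.2, 0.1]))
print(case_accuracy([1, 1, 1, 0, 0], [0.9, 0.7, 0.4, 0.2, 0.1], ["A", "A", "A", "B", "B"]))
```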


2020 ◽  
Vol 125 (4) ◽  
pp. 553-560 ◽  
Author(s):  
Amir Baghdadi ◽  
Naif A. Aldhaam ◽  
Ahmed S. Elsayed ◽  
Ahmed A. Hussein ◽  
Lora A. Cavuoto ◽  
...  

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Seok-Soo Byun ◽  
Tak Sung Heo ◽  
Jeong Myeong Choi ◽  
Yeong Seok Jeong ◽  
Yu Seop Kim ◽  
...  

Abstract
Survival analyses for malignancies, including renal cell carcinoma (RCC), have primarily been conducted using the Cox proportional hazards (CPH) model. We compared the random survival forest (RSF) and DeepSurv models with the CPH model for predicting recurrence-free survival (RFS) and cancer-specific survival (CSS) in non-metastatic clear cell RCC (nm-cRCC) patients. Our cohort included 2139 nm-cRCC patients who underwent curative-intent surgery at six Korean institutions between 2000 and 2014. Data from the two largest hospitals were assigned to the training and validation datasets, and data from the remaining hospitals were assigned to the external validation dataset. The performance of the RSF and DeepSurv models was compared with that of CPH using Harrell’s C-index. During follow-up, recurrence and cancer-specific death were recorded in 190 (12.7%) and 108 (7.0%) patients, respectively, in the training dataset. Harrell’s C-indices for RFS in the test dataset were 0.794, 0.789, and 0.802 for CPH, RSF, and DeepSurv, respectively. Harrell’s C-indices for CSS in the test dataset were 0.831, 0.790, and 0.834 for CPH, RSF, and DeepSurv, respectively. In predicting RFS and CSS in nm-cRCC patients, DeepSurv outperformed both CPH and RSF. In the near future, deep learning-based survival predictions may become useful for RCC patients.
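All three models in this comparison are ranked by Harrell’s C-index. As a hedged illustration of that metric, the sketch below fits a Cox proportional hazards baseline with lifelines on toy data and scores it with the concordance index; the column names and values are hypothetical, and the RSF and DeepSurv counterparts (available in scikit-survival and pycox, respectively) are not reproduced here.

```python
# Minimal sketch (illustrative assumptions, not the study code): CPH baseline
# plus Harrell's C-index, the comparison metric used for CPH, RSF, and DeepSurv.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# hypothetical toy data: two covariates, follow-up time in months, event flag
train = pd.DataFrame({
    "age": [62, 55, 71, 48, 66],
    "tumor_size_cm": [4.2, 2.8, 6.1, 3.0, 5.5],
    "rfs_months": [60, 84, 12, 96, 24],
    "recurrence": [0, 1, 1, 0, 0],
})

# small penalizer keeps the fit stable on tiny toy data
cph = CoxPHFitter(penalizer=0.1)
cph.fit(train, duration_col="rfs_months", event_col="recurrence")

# Harrell's C-index: higher predicted risk should correspond to shorter survival,
# so negate the partial hazard before passing it as the survival score
risk_score = -cph.predict_partial_hazard(train)
c_index = concordance_index(train["rfs_months"], risk_score, train["recurrence"])
print(f"Harrell's C-index: {c_index:.3f}")
```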


2020 ◽  
Vol 52 (5) ◽  
pp. 1542-1549 ◽  
Author(s):  
Yijun Zhao ◽  
Marcello Chang ◽  
Robin Wang ◽  
Ianto Lin Xi ◽  
Ken Chang ◽  
...  
