Breast Tumor Cellularity Assessment using Deep Neural Networks

2019
Author(s): Alexander Rakhlin, Aleksei Tiulpin, Alexey A. Shvets, Alexandr A. Kalinin, Vladimir I. Iglovikov, et al.

Abstract
Breast cancer is one of the main causes of death worldwide. Histopathological cellularity assessment of residual tumors in post-surgical tissues is used to analyze a tumor's response to therapy. Accurate cellularity assessment increases the chances of receiving an appropriate treatment and improves the patient's odds of survival. In current clinical practice, tumor cellularity is estimated manually by pathologists; this process is tedious and prone to errors and low agreement rates between assessors. In this work, we evaluated three strong, novel Deep Learning-based approaches for automatic assessment of tumor cellularity from post-treatment breast surgical specimens stained with hematoxylin and eosin. We validated the proposed methods on the BreastPathQ SPIE challenge dataset, which consisted of 2395 image patches selected from whole slide images acquired from 64 patients. Compared to expert pathologist scoring, our best-performing method yielded a Cohen's kappa coefficient of 0.69 (vs. 0.42 previously reported in the literature) and an intra-class correlation coefficient of 0.89 (vs. 0.83). Our results suggest that Deep Learning-based methods have significant potential to alleviate the burden on pathologists, enhance the diagnostic workflow, and thereby facilitate better clinical outcomes in breast cancer treatment.
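The two agreement metrics reported above, Cohen's kappa and the intra-class correlation coefficient (ICC), can be reproduced with standard tools. A minimal sketch in Python, assuming integer cellularity scores from the model and a pathologist; the arrays are illustrative, not the paper's data, and sklearn's default unweighted kappa is used here (the paper may have used a weighted variant):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_consistency(x):
    """ICC(3,1): two-way mixed effects, consistency, single rater.
    x: (n_subjects, k_raters) array of scores."""
    n, k = x.shape
    grand = x.mean()
    ssr = k * ((x.mean(axis=1) - grand) ** 2).sum()   # between-subject SS
    ssc = n * ((x.mean(axis=0) - grand) ** 2).sum()   # between-rater SS
    sse = ((x - grand) ** 2).sum() - ssr - ssc        # residual SS
    msr = ssr / (n - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

model_scores = np.array([0, 1, 2, 1, 0])        # illustrative ordinal scores
pathologist_scores = np.array([0, 2, 2, 1, 0])

kappa = cohen_kappa_score(model_scores, pathologist_scores)
icc = icc_consistency(np.column_stack([model_scores, pathologist_scores]))
```

With perfectly consistent raters (e.g., one rater always scoring one point higher), ICC(3,1) is exactly 1, which is why a consistency-type ICC can exceed kappa when raters differ only by a systematic shift.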

2020
Vol 6 (10), pp. 101
Author(s): Mauricio Alberto Ortega-Ruiz, Cefa Karabağ, Victor García Garduño, Constantino Carlos Reyes-Aldasoro

This paper describes a methodology that extracts key morphological features from histological breast cancer images in order to automatically assess Tumour Cellularity (TC) in Neo-Adjuvant Treatment (NAT) patients. The response to NAT indicates therapy efficacy and is measured by the residual cancer burden index, which is composed of two metrics: TC and the assessment of lymph nodes. The data consist of whole slide images (WSIs) of breast tissue stained with Hematoxylin and Eosin (H&E) released in the 2019 SPIE Breast Challenge. The proposed methodology is based on traditional computer vision methods (K-means, watershed segmentation, Otsu's binarisation, and morphological operations), implementing colour separation, segmentation, and feature extraction. The correlation between morphological features and residual TC after NAT was examined: using linear regression and statistical methods, twenty-two key morphological parameters were extracted from the nuclei, the epithelial region, and the full image. Subsequently, an automated TC assessment based on Machine Learning (ML) algorithms was implemented and trained on only the selected key parameters. The methodology was validated against the scores assigned by two pathologists via the intra-class correlation coefficient (ICC). The selection of key morphological parameters improved on the results reported for other ML methodologies and came very close to deep learning methodologies. These results are encouraging, as a traditionally trained ML algorithm can be useful when the limited availability of training data prevents the use of deep learning approaches.
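The classical pipeline described here (Otsu's binarisation plus morphological cleanup and connected-component analysis) fits in a few lines. A minimal NumPy/SciPy illustration on a synthetic patch, assuming dark hematoxylin-stained nuclei on a light background; this is a sketch of the general technique, not the authors' implementation:

```python
import numpy as np
from scipy import ndimage

def otsu_threshold(gray, nbins=256):
    """Threshold maximising between-class variance (Otsu's method)."""
    hist, edges = np.histogram(gray, bins=nbins)
    hist = hist.astype(float)
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(hist)                          # class-0 (below) weight
    w1 = w0[-1] - w0                              # class-1 (above) weight
    cum_mean = np.cumsum(hist * centers)
    m0 = cum_mean / np.where(w0 == 0, 1, w0)      # class means
    m1 = (cum_mean[-1] - cum_mean) / np.where(w1 == 0, 1, w1)
    between = w0 * w1 * (m0 - m1) ** 2            # between-class variance
    return centers[np.argmax(between)]

# Synthetic patch: two dark "nuclei" on a light background
img = np.full((40, 40), 200.0)
img[5:10, 5:10] = 50.0
img[25:30, 25:30] = 50.0

mask = img < otsu_threshold(img)                  # nuclei are dark
labels, n_nuclei = ndimage.label(mask)            # connected components
```

From the labelled components, per-nucleus morphological parameters (area, eccentricity, etc.) can then be measured and fed to the downstream regression, as the paper does.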


2021
Vol 5 (1)
Author(s): Hui Qu, Mu Zhou, Zhennan Yan, He Wang, Vinod K. Rustgi, et al.

Abstract
Breast carcinoma, the most common cancer among women worldwide, comprises a heterogeneous group of subtype diseases. Whole-slide images (WSIs) can capture cell-level heterogeneity and are routinely used by pathologists for cancer diagnosis. However, key driver genetic mutations relevant to targeted therapies are identified by genomic analyses such as high-throughput molecular profiling. In this study, we develop a deep-learning model to predict genetic mutations and biological pathway activities directly from WSIs. Our study offers unique insights into the visual interactions in WSIs between a mutation and its related pathway, enabling a head-to-head comparison that reinforces our major findings. Using histopathology images from the Genomic Data Commons Database, our model can predict point mutations of six important genes (AUC 0.68–0.85) and copy number alterations of another six genes (AUC 0.69–0.79). Additionally, the trained models can predict the activities of three out of ten canonical pathways (AUC 0.65–0.79). Next, we visualized the weight maps of tumor tiles in WSIs to understand the decision-making process of the deep-learning models via a self-attention mechanism. We further validated our models on liver and lung cancers that are related to metastatic breast cancer. Our results provide insights into the association between pathological image features, molecular outcomes, and targeted therapies for breast cancer patients.
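The tile-weight maps mentioned above come from attention-style pooling: many tile features are combined into one slide-level feature, and the learned per-tile weights double as an explanation map. A toy NumPy sketch of the idea; shapes and parameters are illustrative, not the paper's architecture:

```python
import numpy as np

def attention_pool(tiles, V, w):
    """Pool (n_tiles, d) tile features into one slide-level feature.
    score_i = w . tanh(V h_i); a softmax over tiles yields the weight
    map that can be visualised over the WSI."""
    scores = np.tanh(tiles @ V.T) @ w             # (n_tiles,)
    a = np.exp(scores - scores.max())
    a /= a.sum()                                  # attention weights sum to 1
    return a, a @ tiles                           # weights and pooled (d,) feature

rng = np.random.default_rng(0)
tiles = rng.normal(size=(8, 4))                   # 8 tiles, 4-d features
V = rng.normal(size=(3, 4))                       # hidden projection
w = rng.normal(size=3)
weights, slide_feature = attention_pool(tiles, V, w)
```

The pooled slide feature would then feed a classifier head predicting mutation status or pathway activity; the weights themselves are what gets rendered as the tumor-tile weight map.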


2019
Vol 35 (18), pp. 3461-3467
Author(s): Mohamed Amgad, Habiba Elfandy, Hagar Hussein, Lamees A Atteya, Mai A T Elsebaie, et al.

Abstract
Motivation: While deep-learning algorithms have demonstrated outstanding performance in semantic image segmentation tasks, large annotation datasets are needed to create accurate models. Annotation of histology images is challenging due to the effort and experience required to carefully delineate tissue structures, and due to difficulties related to sharing and markup of whole-slide images.
Results: We recruited 25 participants, ranging in experience from senior pathologists to medical students, to delineate tissue regions in 151 breast cancer slides using the Digital Slide Archive. Inter-participant discordance was systematically evaluated, revealing low discordance for tumor and stroma, and higher discordance for more subjectively defined or rare tissue classes. Feedback provided by senior participants enabled the generation and curation of 20 000+ annotated tissue regions. Fully convolutional networks trained using these annotations were highly accurate (mean AUC = 0.945), and the scale of annotation data provided notable improvements in image classification accuracy.
Availability and implementation: The dataset is freely available at: https://goo.gl/cNM4EL.
Supplementary information: Supplementary data are available at Bioinformatics online.
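Inter-participant discordance on a delineated region can be quantified with a pairwise overlap measure. A minimal sketch using the Dice coefficient between two annotators' binary masks; Dice is an assumed, commonly used choice here, not necessarily the study's exact metric:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary annotation masks (0..1, 1 = identical)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two annotators outline roughly the same region, offset by two rows
annotator_1 = np.zeros((20, 20), bool); annotator_1[5:15, 5:15] = True
annotator_2 = np.zeros((20, 20), bool); annotator_2[7:17, 5:15] = True
overlap = dice(annotator_1, annotator_2)          # high but imperfect agreement
```

Averaging such pairwise overlaps per tissue class is one way to surface the pattern the study reports: high concordance for well-defined classes like tumor and stroma, lower for subjective or rare ones.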


2021
Author(s): Wen-Yu Chuang, Chi-Chung Chen, Wei-Hsiang Yu, Chi-Ju Yeh, Shang-Hung Chang, et al.

Abstract
Detection of nodal micrometastasis (tumor size: 0.2–2.0 mm) is challenging for pathologists due to the small size of metastatic foci. Since lymph nodes with micrometastasis are counted as positive nodes, detecting micrometastasis is crucial for accurate pathologic staging of colorectal cancer. Previously, deep learning algorithms developed with manually annotated images performed well in identifying micrometastasis of breast cancer in sentinel lymph nodes. However, the process of manual annotation is labor intensive and time consuming. Multiple instance learning was later used to identify metastatic breast cancer without manual annotation, but its performance appears worse in detecting micrometastasis. Here, we developed a deep learning model using whole-slide images of regional lymph nodes of colorectal cancer with only a slide-level label (either a positive or negative slide). The training, validation, and testing sets included 1963, 219, and 1000 slides, respectively. The TAIWANIA 2 supercomputer was used to train a deep learning model to identify metastasis. At the slide level, our algorithm performed well in identifying both macrometastasis (tumor size > 2.0 mm) and micrometastasis, with areas under the receiver operating characteristic curve (AUC) of 0.9993 and 0.9956, respectively. Since most of our slides had more than one lymph node, we then tested the performance of our algorithm on 538 single-lymph-node images randomly cropped from the testing set. At the single-lymph-node level, our algorithm maintained good performance in identifying macrometastasis and micrometastasis, with AUCs of 0.9944 and 0.9476, respectively. Visualization using class activation mapping confirmed that our model identified nodal metastasis based on areas of tumor cells. Our results demonstrate for the first time that micrometastasis can be detected by deep learning on whole-slide images without manual annotation.
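Training from slide-level labels alone means tile predictions must be aggregated into one slide score before computing the slide-level AUC. A toy sketch of that evaluation step; max-pooling over tile probabilities is an assumed, common weakly supervised aggregation, not necessarily the paper's exact scheme, and the numbers are illustrative:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def slide_score(tile_probs):
    """Slide-level tumor score from per-tile probabilities.
    A slide is as suspicious as its most suspicious tile."""
    return float(np.max(tile_probs))

# Illustrative tile probabilities for four slides, with slide-level labels only
tile_probs_per_slide = [[0.1, 0.2, 0.9], [0.1, 0.1], [0.3, 0.95, 0.2], [0.05, 0.2]]
slide_labels = [1, 0, 1, 0]

scores = [slide_score(p) for p in tile_probs_per_slide]
auc = roc_auc_score(slide_labels, scores)         # slide-level AUC
```

The same scoring applies unchanged to the cropped single-lymph-node images: each crop is just a smaller bag of tiles.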


Author(s): Ke Zhao, Lin Wu, Yanqi Huang, Su Yao, Zeyan Xu, et al.

Abstract
Background: In colorectal cancer (CRC), mucinous adenocarcinoma differs from other adenocarcinomas in gene phenotype, morphology, and prognosis. However, a mucinous component is present in a large proportion of adenocarcinomas, and the prognostic value of the mucus proportion has not been investigated. Artificial intelligence provides a way to quantify the mucus proportion on whole-slide images (WSIs) accurately. We aimed to quantify the mucus proportion by deep learning and to further investigate its prognostic value in two cohorts of CRC patients.
Methods: Deep learning was used to segment WSIs stained with hematoxylin and eosin. The mucus-tumor ratio (MTR) was defined as the proportion of the mucinous component in the tumor area. A training cohort (N = 419) and a validation cohort (N = 315) were used to evaluate the prognostic value of MTR. Survival analysis was performed with the Cox proportional hazards model.
Results: Patients were stratified into mucus-low and mucus-high groups using 24.1% as the threshold. In the training cohort, mucus-high patients had unfavorable outcomes (hazard ratio for high vs. low 1.88, 95% confidence interval 1.18-2.99, P = 0.008), with 5-year overall survival rates of 54.8% and 73.7% in the mucus-high and mucus-low groups, respectively. The results were confirmed in the validation cohort (2.09, 1.21-3.60, 0.008; 79.8% vs. 62.8%). The prognostic value of MTR was maintained in multivariate analysis for both cohorts.
Conclusion: The deep-learning-quantified MTR was an independent prognostic factor in CRC. With the advantages of efficiency and high consistency, our method is suitable for clinical application and promotes the development of precision medicine.
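Once the segmentation model has labelled each pixel, the MTR and the 24.1% stratification reduce to simple pixel counting. A minimal sketch, assuming a hypothetical class coding for the segmentation output (0 = other, 1 = tumor, 2 = mucus); the coding and the toy mask are illustrative, only the 24.1% cut-off is from the study:

```python
import numpy as np

def mucus_tumor_ratio(seg):
    """MTR = mucus pixels / (mucus + tumor) pixels within the tumor area.
    seg: array of per-pixel class codes (assumed: 0=other, 1=tumor, 2=mucus)."""
    mucus = np.count_nonzero(seg == 2)
    tumor = np.count_nonzero(seg == 1)
    return mucus / (mucus + tumor)

def stratify(mtr, threshold=0.241):
    """Dichotomise at the study's 24.1% cut-off."""
    return "mucus-high" if mtr > threshold else "mucus-low"

# Toy segmentation: 70 tumor pixels, 30 mucus pixels
seg = np.zeros((10, 10), int)
seg.flat[:70] = 1
seg.flat[70:] = 2
mtr = mucus_tumor_ratio(seg)
group = stratify(mtr)
```

The per-patient MTR (or the resulting group label) is then the covariate entered into the Cox proportional hazards model.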


2021
Author(s): Asmaa Ibrahim, Ayat G. Lashen, Ayaka Katayama, Raluca Mihai, Graham Ball, et al.

Abstract
Although counting mitoses is part of breast cancer grading, concordance studies have shown low agreement. Refining the criteria for mitotic counting can improve concordance, particularly when using whole slide images (WSIs). This study aims to refine the methodology for optimal mitosis counting on WSIs. Digital images of 595 hematoxylin and eosin stained sections were evaluated. Several morphological criteria were investigated and applied to define mitotic hotspots. Reproducibility, representativeness, time, and association with outcome were the criteria used to evaluate the best area size for mitosis counting. Three approaches for scoring mitoses on WSIs (single and multiple annotated rectangles, and multiple digital high-power (×40) screen fields (HPSFs)) were evaluated. The relative increase in tumor cell density was the most significant and easiest parameter for identifying hotspots. Counting mitoses in a 3 mm² area was the most representative in terms of saturation and concordance levels. Counting in an area <2 mm² resulted in a significant reduction in mitotic count (P = 0.02), whereas counting in an area ≥4 mm² was time-consuming and did not add a significant rise in overall mitotic count (P = 0.08). Using multiple HPSFs, following calibration, provided the most reliable, time-saving, and practical method for mitosis counting on WSIs. This study provides an evidence-based methodology for defining the area and method of visual mitosis counting using WSIs. Visual mitosis scoring on WSIs can be performed reliably by adjusting the number of monitor screens.
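Calibrating multiple screen fields to a fixed hotspot area is a small geometric calculation: how many circular fields are needed to cover the recommended 3 mm². A sketch, where the 0.55 mm field diameter is an assumed example calibration value (actual screen-field size depends on the monitor setup and is not given here); only the 3 mm² target comes from the study:

```python
import math

def fields_for_area(target_mm2=3.0, field_diameter_mm=0.55):
    """Number of circular high-power screen fields needed to cover the
    target hotspot area (rounded up to whole fields)."""
    field_area = math.pi * (field_diameter_mm / 2.0) ** 2
    return math.ceil(target_mm2 / field_area)
```

With the assumed 0.55 mm diameter, one field covers about 0.24 mm², so roughly a dozen fields reach the 3 mm² target; a larger calibrated field would proportionally reduce the count.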


2019
Author(s): Geoffrey F. Schau, Erik A. Burlingame, Guillaume Thibault, Tauangtham Anekpuritanang, Ying Wang, et al.

Abstract
Pathologists rely on clinical information, tissue morphology, and sophisticated molecular diagnostics to accurately infer the metastatic origin of secondary liver cancer. In this paper, we introduce a deep learning approach to identify spatially localized regions of cancerous tumor within hematoxylin and eosin stained tissue sections of liver cancer and to generate predictions of the cancer's metastatic origin. Our approach achieves an accuracy of 90.2% when classifying the metastatic origin of whole slide images into three distinct classes, which compares favorably to an established clinical benchmark set by three board-certified pathologists, whose accuracies ranged from 90.2% to 94.1% on the same prediction task. This approach illustrates the potential of deep learning systems to leverage morphological and structural features of H&E stained tissue sections to guide pathological and clinical determination of the metastatic origin of secondary liver cancers.
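Turning localized region-level predictions into a single origin call for the whole slide requires an aggregation step. A toy sketch using a majority vote over per-tile origin predictions; this is an assumed aggregation scheme for illustration, not necessarily the paper's, and the class names are hypothetical:

```python
from collections import Counter

def slide_origin(tile_predictions):
    """Aggregate per-tile metastatic-origin calls into one slide-level
    prediction by majority vote over the three candidate classes."""
    return Counter(tile_predictions).most_common(1)[0][0]

# Hypothetical per-tile calls for one slide
origin = slide_origin(["colorectal", "breast", "colorectal", "colorectal"])
```

Slide-level accuracy is then the fraction of slides whose aggregated call matches the clinically confirmed origin, which is the 90.2% figure reported above.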

