Defining the area of mitoses counting in invasive breast cancer using whole slide image

2021
Author(s): Asmaa Ibrahim, Ayat G. Lashen, Ayaka Katayama, Raluca Mihai, Graham Ball, ...

Abstract: Although counting mitoses is part of breast cancer grading, concordance studies have shown low agreement. Refining the criteria for mitotic counting can improve concordance, particularly when using whole slide images (WSIs). This study aims to refine the methodology for optimal mitoses counting on WSIs. Digital images of 595 hematoxylin and eosin stained sections were evaluated. Several morphological criteria were investigated and applied to define mitotic hotspots. Reproducibility, representativeness, time, and association with outcome were the criteria used to evaluate the best area size for mitoses counting. Three approaches for scoring mitoses on WSIs (single annotated rectangles, multiple annotated rectangles, and multiple digital high-power (×40) screen fields (HPSFs)) were evaluated. The relative increase in tumor cell density was the most significant and easiest parameter for identifying hotspots. Counting mitoses in a 3 mm² area was the most representative regarding saturation and concordance levels. Counting in an area <2 mm² resulted in a significant reduction in mitotic count (P = 0.02), whereas counting in an area ≥4 mm² was time-consuming and did not significantly increase the overall mitotic count (P = 0.08). Using multiple HPSFs, following calibration, provided the most reliable, time-saving, and practical method for mitoses counting on WSIs. This study provides an evidence-based methodology for defining the area and method of visual mitoses counting using WSIs. Visual mitoses scoring on WSIs can be performed reliably by adjusting the number of monitor screens.
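As a rough illustration of the screen-field calibration step, the sketch below (Python) estimates how many full-screen ×40 fields are needed to cover a 3 mm² hotspot; the monitor resolution and scan resolution used here are assumptions, not values from the paper.

```python
# Minimal sketch, not from the paper: estimating how many digital high-power
# screen fields (HPSFs) are needed to cover a 3 mm^2 mitotic hotspot.
# The monitor resolution and microns-per-pixel values below are assumptions.
import math

def fields_needed(target_area_mm2: float = 3.0,
                  screen_px: tuple = (1920, 1080),  # assumed monitor resolution
                  mpp: float = 0.25) -> int:        # assumed um/pixel at x40
    """Number of full-screen fields required to reach the target area."""
    width_mm = screen_px[0] * mpp / 1000.0          # field width in mm
    height_mm = screen_px[1] * mpp / 1000.0         # field height in mm
    field_area_mm2 = width_mm * height_mm           # ~0.13 mm^2 per field here
    return math.ceil(target_area_mm2 / field_area_mm2)

print(fields_needed())  # 24 fields under these assumptions
```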

2019
Author(s): Alexander Rakhlin, Aleksei Tiulpin, Alexey A. Shvets, Alexandr A. Kalinin, Vladimir I. Iglovikov, ...

Abstract: Breast cancer is one of the main causes of death worldwide. Histopathological cellularity assessment of residual tumors in post-surgical tissues is used to analyze a tumor's response to therapy. Correct cellularity assessment increases the chances of receiving appropriate treatment and facilitates the patient's survival. In current clinical practice, tumor cellularity is estimated manually by pathologists; this process is tedious and prone to errors or low agreement rates between assessors. In this work, we evaluated three strong novel deep learning-based approaches for automatic assessment of tumor cellularity from post-treated breast surgical specimens stained with hematoxylin and eosin. We validated the proposed methods on the BreastPathQ SPIE challenge dataset, which consisted of 2395 image patches selected from whole slide images acquired from 64 patients. Compared to expert pathologist scoring, our best performing method yielded a Cohen's kappa coefficient of 0.69 (vs. 0.42 previously reported in the literature) and an intraclass correlation coefficient of 0.89 (vs. 0.83). Our results suggest that deep learning-based methods have significant potential to alleviate the burden on pathologists, enhance the diagnostic workflow, and thereby facilitate better clinical outcomes in breast cancer treatment.
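For illustration, a minimal sketch of how such agreement statistics between model and pathologist cellularity scores can be computed is given below (Python with scikit-learn); the decile binning used for Cohen's kappa and the ICC(2,1) formulation are assumptions, not the paper's exact protocol.

```python
# Minimal sketch, assuming continuous cellularity scores in [0, 1] binned into
# deciles for Cohen's kappa; ICC(2,1) (two-way random effects, absolute
# agreement, single rater) is computed from a standard ANOVA layout.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def icc_2_1(y: np.ndarray) -> float:
    """y: (n_subjects, n_raters) matrix of scores."""
    n, k = y.shape
    grand = y.mean()
    ss_total = ((y - grand) ** 2).sum()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between-rater
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

def agreement(pred: np.ndarray, expert: np.ndarray) -> dict:
    bins = np.linspace(0.1, 0.9, 9)                        # assumed decile cut points
    kappa = cohen_kappa_score(np.digitize(pred, bins),
                              np.digitize(expert, bins),
                              weights="quadratic")
    return {"kappa": kappa, "icc": icc_2_1(np.column_stack([pred, expert]))}
```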


2020, Vol 7 (1)
Author(s): Marc Aubreville, Christof A. Bertram, Taryn A. Donovan, Christian Marzahl, Andreas Maier, ...

Abstract: Canine mammary carcinoma (CMC) has been used as a model to investigate the pathogenesis of human breast cancer, and the same grading scheme is commonly used to assess tumor malignancy in both. One key component of this grading scheme is the density of mitotic figures (MF). Current publicly available datasets on human breast cancer only provide annotations for small subsets of whole slide images (WSIs). We present a novel dataset of 21 WSIs of CMC completely annotated for MF. For this, a pathologist screened all WSIs for potential MF and structures with a similar appearance. A second expert blindly assigned labels, and for non-matching labels, a third expert assigned the final labels. Additionally, we used machine learning to identify previously undetected MF. Finally, we performed representation learning and two-dimensional projection to further increase the consistency of the annotations. Our dataset consists of 13,907 MF and 36,379 hard negatives. We achieved a mean F1-score of 0.791 on the test set and up to 0.696 on a human breast cancer dataset.
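Detection quality on such a dataset is typically scored by matching predicted mitotic-figure centroids to annotations within a fixed radius and computing the F1-score; the sketch below (Python) shows a simple greedy variant. The 25-pixel radius and the matching rule are assumptions, not the authors' evaluation code.

```python
# Minimal sketch, not the authors' evaluation code: greedy one-to-one matching
# of predicted mitotic-figure centroids to annotated centroids within an
# assumed 25-pixel radius, followed by the F1-score.
import numpy as np

def f1_detection(pred_xy: np.ndarray, gt_xy: np.ndarray, radius: float = 25.0) -> float:
    """pred_xy, gt_xy: (n, 2) arrays of centroid coordinates in pixels."""
    tp = 0
    if len(pred_xy) and len(gt_xy):
        d = np.linalg.norm(pred_xy[:, None, :] - gt_xy[None, :, :], axis=-1)
        taken = set()
        for i in np.argsort(d.min(axis=1)):        # handle closest predictions first
            j = int(np.argmin(d[i]))
            if d[i, j] <= radius and j not in taken:
                taken.add(j)
                tp += 1
    fp, fn = len(pred_xy) - tp, len(gt_xy) - tp
    return 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 0.0
```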


2021, jclinpath-2021-207742
Author(s): Asmaa Ibrahim, Ayat Lashen, Michael Toss, Raluca Mihai, Emad Rakha

The assessment of cell proliferation is a key morphological feature for diagnosing various pathological lesions and predicting their clinical behaviour. Visual assessment of mitotic figures in routine histological sections remains the gold-standard method for evaluating proliferative activity and grading cancer. Despite the apparent simplicity of such a well-established method, visual assessment of mitotic figures in breast cancer (BC) remains a challenging task with low concordance among pathologists, which can lead to under- or overestimation of tumour grade and hence affect management. Guideline recommendations for counting mitoses in BC have been published to standardise methodology and improve concordance; however, the results remain less than satisfactory. Alternative approaches such as the use of the proliferation marker Ki67 have been recommended, but these have not shown better performance in terms of concordance or prognostic stratification. The advent of whole slide image technology has brought the issue of mitotic counting in BC into the spotlight again, with further challenges in developing objective criteria for identifying and scoring mitotic figures in digitised images. Using reliable and reproducible morphological criteria can provide the highest degree of concordance among pathologists and could also benefit the further application of artificial intelligence (AI) in breast pathology; this relies mainly on an explicit description of these figures. In this review, we highlight the morphology of mitotic figures and their mimickers, address the current caveats in counting mitoses in breast pathology, and describe how to strictly apply the morphological criteria for accurate and reliable histological grading and AI models.


2019
Author(s): Jun Jiang, Nicholas B. Larson, Naresh Prodduturi, Thomas J. Flotte, Steven N. Hart

Abstract: For many disease conditions, tissue samples are colored with multiple dyes and stains to add contrast and location information for specific proteins in order to accurately identify and diagnose disease. This presents a computational challenge for digital pathology, as whole-slide images (WSIs) need to be properly overlaid (i.e., registered) to identify co-localized features. Traditional image registration methods sometimes fail due to the high variation of cell density and insufficient texture information in WSIs, particularly at high magnifications. In this paper, we propose a robust image registration strategy to align re-stained WSIs precisely and efficiently. The method is applied to 30 pairs of immunohistochemical (IHC) stains and their hematoxylin and eosin (H&E) counterparts. Our approach advances existing methods in three key ways. First, we introduce refinements to existing image registration methods. Second, we present an effective weighting strategy using kernel density estimation to mitigate registration errors. Third, we account for the linear relationship across WSI levels to improve accuracy. Our experiments show significant decreases in registration errors when matching IHC and H&E pairs, enabling subcellular-level analysis on stained and re-stained histological images. We also provide a tool that allows users to develop their own registration benchmarking experiments.
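The linear relationship across WSI pyramid levels can be exploited by estimating a transform on a coarse level and rescaling it to a finer one; the sketch below (Python/NumPy) shows one way to do this. The example matrix and the factor of 16 are illustrative assumptions, not the authors' code.

```python
# Minimal sketch, assuming a 3x3 homogeneous affine transform estimated on a
# low-magnification level that should be applied at a finer pyramid level.
import numpy as np

def scale_affine(A: np.ndarray, factor: float) -> np.ndarray:
    """Rescale an affine matrix estimated at a coarse level so it applies at a
    finer level whose coordinates are `factor` times larger."""
    S = np.diag([factor, factor, 1.0])      # coordinate scaling between levels
    return S @ A @ np.linalg.inv(S)         # rotation/shear unchanged, translation scaled

# Example: transform estimated at level 4 (1/16 resolution), applied at level 0.
A_coarse = np.array([[1.0, 0.0, 12.3],
                     [0.0, 1.0, -4.7],
                     [0.0, 0.0,  1.0]])
A_full = scale_affine(A_coarse, factor=16.0)   # translation becomes (196.8, -75.2)
```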


Doklady BGUIR, 2020, Vol 18 (8), pp. 21-28
Author(s): S. N. Rjabceva, V. A. Kovalev, V. D. Malyshev, I. A. Siamionik, M. A. Derevyanko, ...

Analysis of breast cancer whole-slide images is an extremely labor-intensive process. Histological whole slide images have the following features: a high degree of tissue diversity both within one image and between different images, a hierarchical multi-resolution structure, a large amount of graphic information, and various artifacts. In this work, pre-processing of breast cancer whole-slide tissue images was carried out, including normalization of the color distribution and selection of the image areas to analyze. By excluding background regions of the whole-slide tissue images from analysis, we reduced the running time of the subsequent algorithms. In addition, an algorithm for finding similar neoplastic regions for semi-automatic selection, using various image descriptors, was developed and implemented.
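A common way to implement the background-exclusion step is to threshold the saturation channel of a low-resolution thumbnail; the sketch below (Python with scikit-image) illustrates this idea. Otsu thresholding and the 500-pixel minimum object size are assumptions, not necessarily the authors' procedure.

```python
# Minimal sketch, assuming background removal via Otsu thresholding on the
# saturation channel of a low-resolution RGB thumbnail of the WSI.
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def tissue_mask(thumbnail_rgb: np.ndarray, min_size: int = 500) -> np.ndarray:
    """Return a boolean mask of tissue; white background has low saturation."""
    sat = rgb2hsv(thumbnail_rgb)[..., 1]
    mask = sat > threshold_otsu(sat)
    return remove_small_objects(mask, min_size=min_size)
```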


Author(s): Liron Pantanowitz, Pamela Michelow, Scott Hazelhurst, Shivam Kalra, Charles Choi, ...

Context.— Pathologists may encounter extraneous pieces of tissue (tissue floaters) on glass slides because of specimen cross-contamination. Troubleshooting this problem, including performing molecular tests for tissue identification if available, is time consuming and often does not satisfactorily resolve the problem. Objective.— To demonstrate the feasibility of using an image search tool to resolve the tissue floater conundrum. Design.— A glass slide was produced containing 2 separate hematoxylin and eosin (H&E)-stained tissue floaters. This fabricated slide was digitized along with the 2 slides containing the original tumors used to create these floaters. These slides were then embedded into a dataset of 2325 whole slide images comprising a wide variety of H&E-stained diagnostic entities. Digital slides were broken up into patches, and the patch features were converted into barcodes for indexing and easy retrieval. A deep learning-based image search tool was employed to extract features from patches and match them via barcodes, hence enabling image matching to each tissue floater. Results.— There was a very high likelihood of finding a correct tumor match for the queried tissue floater when searching the digital database. Search results repeatedly yielded a correct match within the top 3 retrieved images. The retrieval accuracy improved when greater proportions of the floater were selected. Each search completed within several milliseconds. Conclusions.— Using an image search tool offers pathologists an additional method to rapidly resolve the tissue floater conundrum, especially for laboratories that have transitioned to fully digital primary diagnosis.
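A minimal sketch of the patch-barcode idea (not the production search engine): binarize the sign of successive differences of a patch's deep-feature vector and retrieve the nearest patches by Hamming distance. The barcode scheme and function names below are assumptions.

```python
# Minimal sketch, assuming each patch already has a 1-D deep-feature vector;
# barcodes are built from the sign of successive feature differences and
# queried by Hamming distance.
import numpy as np

def to_barcode(features: np.ndarray) -> np.ndarray:
    """Convert a 1-D feature vector into a binary barcode."""
    return (np.diff(features) >= 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

def top_k(query_barcode: np.ndarray, index: list, k: int = 3) -> list:
    """Return indices of the k closest barcodes in the index."""
    distances = [hamming(query_barcode, bc) for bc in index]
    return list(np.argsort(distances)[:k])
```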


2016, Vol 69 (11), pp. 992-997
Author(s): Shaimaa Al-Janabi, Anja Horstman, Henk-Jan van Slooten, Chantal Kuijpers, Clifton Lai-A-Fat, ...

2019
Author(s): Seda Bilaloglu, Joyce Wu, Eduardo Fierro, Raul Delgado Sanchez, Paolo Santiago Ocampo, ...

Abstract: Visual analysis of solid tissue mounted on glass slides is currently the primary method used by pathologists for determining the stage, type, and subtypes of cancer. Although whole slide images are usually large (tens to hundreds of thousands of pixels wide), an exhaustive, albeit time-consuming, assessment is necessary to reduce the risk of misdiagnosis. In an effort to address the many diagnostic challenges faced by trained experts, recent research has focused on developing automatic prediction systems for this multi-class classification problem. Typically, complex convolutional neural network (CNN) architectures, such as Google's Inception, are used to tackle this problem. Here, we introduce a greatly simplified CNN architecture, PathCNN, which allows for more efficient use of computational resources and better classification performance. Using this improved architecture, we trained simultaneously on whole-slide images from multiple tumor sites and corresponding non-neoplastic tissue. Dimensionality reduction analysis of the weights of the last layer of the network captures groups of images that faithfully represent the different types of cancer, highlighting at the same time differences in staining and capturing outliers, artifacts, and misclassification errors. Our code is available online at: https://github.com/sedab/PathCNN.
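One reading of the dimensionality-reduction analysis is a 2-D projection of per-image feature vectors taken from the network's final layer; the sketch below (Python with scikit-learn) illustrates that idea. The function and variable names are hypothetical, and the actual analysis code is in the linked repository.

```python
# Minimal sketch, assuming a (n_images, n_dims) array of last-layer activations
# and integer class labels per image; PCA projects them to 2-D so groups of
# slides, staining differences, and outliers can be inspected.
import numpy as np
from sklearn.decomposition import PCA

def project_features(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    coords = PCA(n_components=2).fit_transform(features)
    for lab in np.unique(labels):
        pts = coords[labels == lab]
        print(f"class {lab}: centroid = {pts.mean(axis=0)}")  # crude group summary
    return coords
```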

