Convolutional autoencoder based model HistoCAE for segmentation of viable tumor regions in liver whole-slide images

2021, Vol 11 (1)
Author(s): Mousumi Roy, Jun Kong, Satyananda Kashyap, Vito Paolo Pastore, Fusheng Wang, ...

Abstract: Liver cancer is one of the leading causes of cancer deaths in Asia and Africa. Hepatocellular carcinoma (HCC) accounts for almost 90% of all cases; it is a malignant tumor and the most common histological type of primary liver cancer. The detection and evaluation of viable tumor regions in HCC are of important clinical significance, since they are a key step in assessing the response to chemoradiotherapy and the tumor cell proportion in genetic tests. Recent advances in computer vision, digital pathology and microscopy imaging enable automatic histopathology image analysis for cancer diagnosis. In this paper, we present HistoCAE, a multi-resolution deep learning model for viable tumor segmentation in whole-slide liver histopathology images. We propose a convolutional autoencoder (CAE)-based framework with a customized reconstruction loss function for image reconstruction, followed by a classification module that labels each image patch as tumor versus non-tumor. The resulting patch-based predictions are spatially combined to generate the final segmentation result for each whole-slide image (WSI). Additionally, the spatially organized encoded feature maps derived from small image patches are used to compress the gigapixel whole-slide images. In extensive experiments, our proposed model outperforms other benchmark models, suggesting its efficacy for viable tumor area segmentation in liver whole-slide images.
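To make the patch-level design concrete, the following is a minimal, illustrative PyTorch sketch of a convolutional autoencoder with a tumor/non-tumor classification head trained with a joint reconstruction-plus-classification objective. The layer sizes, the plain MSE reconstruction term, and the single-resolution input are assumptions for demonstration only, not the authors' HistoCAE architecture or customized loss.

```python
# Illustrative sketch only: a small CAE with a patch-level classification head.
# Layer sizes and the plain MSE loss are assumptions, not the published HistoCAE design.
import torch
import torch.nn as nn

class PatchCAEClassifier(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Encoder: compress a 3x256x256 patch into a compact feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: reconstruct the patch from the encoded feature map.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Classification head: tumor vs. non-tumor from the encoded features.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.classifier(z)

model = PatchCAEClassifier()
patch = torch.rand(4, 3, 256, 256)        # a mini-batch of image patches
labels = torch.randint(0, 2, (4,))        # tumor / non-tumor labels
recon, logits = model(patch)
# Joint objective: reconstruction term plus classification term.
loss = nn.functional.mse_loss(recon, patch) + nn.functional.cross_entropy(logits, labels)
loss.backward()
```

At inference time, the per-patch predictions would be mapped back to their slide coordinates and stitched together to form the WSI-level segmentation map, as the abstract describes.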

Author(s): Liron Pantanowitz, Pamela Michelow, Scott Hazelhurst, Shivam Kalra, Charles Choi, ...

Context.— Pathologists may encounter extraneous pieces of tissue (tissue floaters) on glass slides because of specimen cross-contamination. Troubleshooting this problem, including performing molecular tests for tissue identification if available, is time consuming and often does not satisfactorily resolve the problem. Objective.— To demonstrate the feasibility of using an image search tool to resolve the tissue floater conundrum. Design.— A glass slide was produced containing 2 separate hematoxylin and eosin (H&E)-stained tissue floaters. This fabricated slide was digitized along with the 2 slides containing the original tumors used to create these floaters. These slides were then embedded into a dataset of 2325 whole slide images comprising a wide variety of H&E-stained diagnostic entities. Digital slides were broken up into patches, and the patch features were converted into barcodes for indexing and easy retrieval. A deep learning-based image search tool was employed to extract features from patches via barcodes, thereby enabling image matching to each tissue floater. Results.— There was a very high likelihood of finding a correct tumor match for the queried tissue floater when searching the digital database. Search results repeatedly yielded a correct match within the top 3 retrieved images. Retrieval accuracy improved when greater proportions of the floater were selected. A search completed within several milliseconds. Conclusions.— An image search tool offers pathologists an additional method to rapidly resolve the tissue floater conundrum, especially in laboratories that have transitioned to fully digital primary diagnosis.
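As a rough illustration of the barcode-based retrieval idea described in the Design section, the sketch below binarizes feature vectors into barcodes and ranks a database by Hamming distance. The feature extractor, binarization rule, and database layout are simplified assumptions, not the exact indexing pipeline used in the study.

```python
# Illustrative sketch only: binarize deep features into barcodes and retrieve
# the closest slides by Hamming distance. Real pipelines use a trained feature
# extractor and per-patch indexing; random vectors stand in for features here.
import numpy as np

def to_barcode(features: np.ndarray) -> np.ndarray:
    """Binarize a feature vector by thresholding each element at its median."""
    return (features > np.median(features)).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(0)
# Stand-in features for 2325 indexed slides (one vector per slide for brevity).
index_features = rng.normal(size=(2325, 1024))
index_barcodes = np.stack([to_barcode(f) for f in index_features])

# Query: features extracted from the selected tissue-floater region.
query_barcode = to_barcode(rng.normal(size=1024))

# Rank the database by Hamming distance and report the top-3 matches.
distances = np.array([hamming(query_barcode, b) for b in index_barcodes])
top3 = np.argsort(distances)[:3]
print("Top-3 candidate slides:", top3, "distances:", distances[top3])
```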


2021, Vol 7 (3), pp. 51
Author(s): Emanuela Paladini, Edoardo Vantaggiato, Fares Bougourzi, Cosimo Distante, Abdenour Hadid, ...

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The development of Whole Slide Images (WSIs) has provided the data required for creating automatic tissue phenotyping systems. In this paper, we study hand-crafted feature-based and deep learning methods using two popular multi-class CRC tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and state-of-the-art methods on both databases.
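The mean-ensemble idea, averaging the class probabilities of several fine-tuned backbones, can be sketched as follows; the chosen backbones, the class count, and the omission of training are assumptions for illustration only.

```python
# Illustrative sketch only: average the softmax outputs of several CNN backbones
# (a mean ensemble) and take the argmax. Fine-tuning on CRC tissue patches is omitted.
import torch
import torchvision.models as models

num_classes = 8  # e.g. tissue classes in Kather-CRC-2016 (assumed here)

def make_backbone(name: str) -> torch.nn.Module:
    net = {
        "resnet101": models.resnet101,
        "resnext50": models.resnext50_32x4d,
        "densenet161": models.densenet161,
    }[name](weights=None)
    # Replace the final layer with one sized for the tissue classes.
    if hasattr(net, "fc"):
        net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    else:
        net.classifier = torch.nn.Linear(net.classifier.in_features, num_classes)
    return net.eval()

ensemble = [make_backbone(n) for n in ("resnet101", "resnext50", "densenet161")]

batch = torch.rand(2, 3, 224, 224)  # a mini-batch of tissue patches
with torch.no_grad():
    probs = torch.stack([torch.softmax(m(batch), dim=1) for m in ensemble])
mean_probs = probs.mean(dim=0)      # mean ensemble: average the class probabilities
prediction = mean_probs.argmax(dim=1)
print(prediction)
```

An NN-based ensemble variant would presumably learn the combination of the backbone outputs with a small classifier rather than averaging them.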


2020, Vol 12, pp. 175883592097141
Author(s): Fan Zhang, Lian-Zhen Zhong, Xun Zhao, Di Dong, Ji-Jin Yao, ...

Background: To explore the prognostic value of radiomics-based and digital pathology-based imaging biomarkers from macroscopic magnetic resonance imaging (MRI) and microscopic whole-slide images for patients with nasopharyngeal carcinoma (NPC). Methods: We recruited 220 NPC patients and divided them into training (n = 132), internal test (n = 44), and external test (n = 44) cohorts. The primary endpoint was failure-free survival (FFS). Radiomic features were extracted from pretreatment MRI, then selected and integrated into a radiomic signature. The histopathological signature was extracted from whole-slide images of biopsy specimens using an end-to-end deep-learning method. Incorporating the two signatures and independent clinical factors, a multi-scale nomogram was constructed. We also tested the correlation between the key imaging features and genetic alterations in an independent cohort of 16 patients (biological test cohort). Results: Both the radiomic and histopathologic signatures showed significant associations with treatment failure in all three cohorts (C-index: 0.689–0.779, all p < 0.050). The multi-scale nomogram showed a consistent, significant improvement in predicting treatment failure compared with the clinical model in the training (C-index: 0.817 versus 0.730, p < 0.050), internal test (C-index: 0.828 versus 0.602, p < 0.050), and external test (C-index: 0.834 versus 0.679, p < 0.050) cohorts. Furthermore, patients were successfully stratified into two groups with distinguishable prognoses (log-rank p < 0.0010) using our nomogram. We also found that two texture features were related to genetic alterations of chromatin remodeling pathways in another independent cohort. Conclusion: The multi-scale imaging features showed complementary value in prognostic prediction and may improve individualized treatment in NPC.
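A minimal sketch of the kind of multi-signature survival modeling described here, assuming synthetic data, hypothetical column names, and a plain Cox proportional hazards model scored by the concordance index; the published nomogram construction and feature selection are not reproduced.

```python
# Illustrative sketch only: combine a radiomic signature, a pathology signature,
# and a clinical factor in a Cox model for failure-free survival, then compute
# the apparent C-index. Data and column names are synthetic placeholders.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(1)
n = 132  # size of the training cohort in the study
df = pd.DataFrame({
    "radiomic_signature": rng.normal(size=n),
    "pathology_signature": rng.normal(size=n),
    "clinical_stage": rng.integers(1, 5, size=n),
    "ffs_months": rng.exponential(scale=36.0, size=n),  # time to failure or censoring
    "event": rng.integers(0, 2, size=n),                # 1 = treatment failure observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="ffs_months", event_col="event")
risk = cph.predict_partial_hazard(df)
# Higher risk should mean shorter survival, hence the negation for the C-index.
cindex = concordance_index(df["ffs_months"], -risk, df["event"])
print(f"Apparent C-index: {cindex:.3f}")
```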


Cancers, 2020, Vol 13 (1), pp. 11
Author(s): Rokshana Stephny Geread, Abishika Sivanandarajah, Emily Rita Brouwer, Geoffrey A. Wood, Dimitrios Androutsos, ...

In this work, a novel proliferation index (PI) calculator for Ki67 images, called piNET, is proposed. It is successfully tested on four datasets from three scanners, comprising patches, tissue microarrays (TMAs), and whole-slide images (WSIs), representing a diverse multi-centre dataset for evaluating Ki67 quantification. Compared to state-of-the-art methods, piNET consistently performs best across all datasets, with an average PI difference of 5.603%, a PI accuracy rate of 86%, and a correlation coefficient R = 0.927. The success of the system can be attributed to several innovations. Firstly, the tool is built on deep learning, which can adapt to the wide variability of medical images, and the task is posed as a detection problem to mimic pathologists’ workflow, which improves accuracy and efficiency. Secondly, the system is trained purely on tumor cells, which reduces false positives from non-tumor cells without requiring the usual prerequisite tumor segmentation step for Ki67 quantification. Thirdly, the concept of learning background regions through weak supervision is introduced by providing the system with ideal and non-ideal (artifact) patches, which further reduces false positives. Lastly, a novel hotspot analysis is proposed that allows automated methods to score patches from WSIs that contain “significant” activity.
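For readers unfamiliar with how a proliferation index is derived from detected cells, here is a minimal sketch that aggregates Ki67-positive and Ki67-negative tumor-cell counts per patch and applies a simple hotspot-style filter. The counts, the threshold, and the filter rule are placeholder assumptions, not the piNET detection pipeline.

```python
# Illustrative sketch only: compute a Ki67 proliferation index from per-patch
# cell counts and keep only patches with enough tumor cells to be scored.
from dataclasses import dataclass

@dataclass
class PatchCounts:
    ki67_positive: int   # detected Ki67-positive tumor cells in the patch
    ki67_negative: int   # detected Ki67-negative tumor cells in the patch

def proliferation_index(patches: list) -> float:
    """PI = positive tumor cells / all tumor cells, expressed as a percentage."""
    pos = sum(p.ki67_positive for p in patches)
    neg = sum(p.ki67_negative for p in patches)
    total = pos + neg
    return 100.0 * pos / total if total else 0.0

def hotspot_patches(patches: list, min_cells: int = 200) -> list:
    """Keep patches with enough detected tumor cells ('significant' activity)."""
    return [p for p in patches if p.ki67_positive + p.ki67_negative >= min_cells]

patches = [PatchCounts(120, 380), PatchCounts(15, 40), PatchCounts(300, 700)]
scored = hotspot_patches(patches)
print(f"PI over hotspot patches: {proliferation_index(scored):.1f}%")
```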


2015, Vol 2015, pp. 1-13
Author(s): Yazan M. Alomari, Siti Norul Huda Sheikh Abdullah, Reena Rahayu MdZin, Khairuddin Omar

Analysis of whole-slide tissue images in digital pathology has been clinically approved to provide a second opinion to pathologists. Localization of focus points from Ki-67-stained histopathology whole-slide tissue microscopic images is considered the first step in the process of proliferation rate estimation. Pathologists use eye-pooling or eagle-view techniques to localize the highly stained, cell-concentrated regions of the whole slide under the microscope; these are called focus-point regions. This manual procedure leads to high interobserver variability, is time consuming and tedious, and can cause inaccurate findings. The localization of focus-point regions can be addressed as a clustering problem. This paper aims to automate the localization of focus-point regions from whole-slide images using the random patch probabilistic density (RPPD) method. Unlike other clustering methods, RPPD can adaptively localize focus-point regions without predetermining the number of clusters. The proposed method was compared with the k-means and fuzzy c-means clustering methods and achieved good performance when the results were evaluated by three expert pathologists, with an average false-positive rate of 0.84% for the focus-point region localization error. Moreover, when RPPD was used to localize tissue from whole-slide images, 228 whole-slide images were tested and a localization accuracy of 97.3% was achieved.
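A rough sketch of the random-patch sampling idea follows, assuming a synthetic stain-intensity map, an arbitrary staining threshold, and a mean-plus-two-standard-deviations cutoff; it illustrates adaptive selection of high-density regions without fixing the number of clusters, but it is not the RPPD algorithm as published.

```python
# Illustrative sketch only: sample random patches from a downsampled slide,
# score each by the fraction of strongly stained pixels, and keep the
# high-density patches as candidate focus-point regions.
import numpy as np

rng = np.random.default_rng(42)
slide = rng.random((2000, 2000))          # stand-in stain-intensity map of a WSI thumbnail
patch_size, n_patches, stain_thr = 64, 500, 0.8

candidates = []
for _ in range(n_patches):
    y = rng.integers(0, slide.shape[0] - patch_size)
    x = rng.integers(0, slide.shape[1] - patch_size)
    patch = slide[y:y + patch_size, x:x + patch_size]
    density = np.mean(patch > stain_thr)  # fraction of highly stained pixels
    candidates.append((density, (y, x)))

# Adaptively keep patches whose density is well above the overall mean,
# without fixing the number of focus-point regions in advance.
densities = np.array([d for d, _ in candidates])
cutoff = densities.mean() + 2 * densities.std()
focus_points = [coord for d, coord in candidates if d >= cutoff]
print(f"{len(focus_points)} candidate focus-point regions found")
```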


2018, Vol 142 (5), pp. 638-644
Author(s): Matthew G. Hanna, Ishtiaque Ahmed, Jeffrey Nine, Shyam Prajapati, Liron Pantanowitz

Context.— Augmented reality (AR) devices such as the Microsoft HoloLens have not been well used in the medical field. Objective.— To test the HoloLens for clinical and nonclinical applications in pathology. Design.— A Microsoft HoloLens was tested for virtual annotation during autopsy, viewing 3D gross and microscopic pathology specimens, navigating whole slide images, telepathology, as well as real-time pathology-radiology correlation. Results.— Pathology residents performing an autopsy while wearing the HoloLens were remotely instructed with real-time diagrams, annotations, and voice instruction. 3D-scanned gross pathology specimens could be viewed as holograms and easily manipulated. Telepathology was supported during gross examination and at the time of intraoperative consultation, allowing users to remotely access a pathologist for guidance and to virtually annotate areas of interest on specimens in real time. The HoloLens permitted radiographs to be coregistered on gross specimens, thereby enhancing the localization of important pathologic findings. The HoloLens also allowed easy viewing and navigation of whole slide images using an AR workstation, including multiple coregistered tissue sections, facilitating volumetric pathology evaluation. Conclusions.— The HoloLens is a novel AR tool with multiple clinical and nonclinical applications in pathology. The device was comfortable to wear, easy to use, provided sufficient computing power, and supported high-resolution imaging. It was useful for autopsy, gross and microscopic examination, and is ideally suited for digital pathology. Unique applications include remote supervision and annotation, 3D image viewing and manipulation, telepathology in a mixed-reality environment, and real-time pathology-radiology correlation.


2014, Vol 9 (S1)
Author(s): David Ameisen, Christophe Deroulers, Valérie Perrier, Fatiha Bouhidel, Maxime Battistella, ...
