histopathology images
Recently Published Documents

TOTAL DOCUMENTS: 415 (five years: 278)
H-INDEX: 25 (five years: 10)

2022
Author(s): Deepa Darshini Gunashekar, Lars Bielak, Leonard Hägele, Arnie Berlin, Benedict Oerther, ...

Abstract Automatic prostate tumor segmentation often fails to identify the lesion even when multi-parametric MRI data are used as input, and the segmentation output is difficult to verify due to the lack of clinically established ground-truth images. In this work we use an explainable deep learning method to interpret the predictions of a convolutional neural network (CNN) for prostate tumor segmentation. The CNN uses a U-Net architecture that was trained on multi-parametric MRI data from 122 patients to automatically segment the prostate gland and prostate tumor lesions. In addition, co-registered ground-truth data from whole-mount histopathology images were available for 15 patients and were used as a test set during CNN evaluation. To interpret the segmentation results of the CNN, heat maps were generated using the Gradient-weighted Class Activation Mapping (Grad-CAM) method. The CNN achieved a mean Dice-Sørensen coefficient of 0.62 for the prostate gland and 0.31 for the tumor lesions against the radiologist-drawn ground truth, and 0.32 for tumor lesions against the whole-mount histology ground truth. Dice-Sørensen coefficients between CNN predictions and manual segmentations from MRI and histology data were not significantly different. Within the prostate, the Grad-CAM heat maps could differentiate between tumor and healthy prostate tissue, indicating that the image information inside the tumor was essential for the CNN segmentation.
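To illustrate how Grad-CAM heat maps of the kind described above are computed, here is a minimal NumPy sketch: given a convolutional layer's activations and the gradients of the target score with respect to them, it produces a normalized class activation map. The function name and array shapes are our own illustration, not the authors' code.

```python
import numpy as np

def grad_cam(feature_maps, gradients):
    """Grad-CAM heat map from one convolutional layer.

    feature_maps: (C, H, W) activations A^k of the chosen layer
    gradients:    (C, H, W) dY/dA^k for the target class score
    """
    # Channel weights: global-average-pool the gradients (one alpha_k per channel).
    weights = gradients.mean(axis=(1, 2))                                  # (C,)
    # Weighted sum of feature maps, then ReLU to keep only positive evidence.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In practice the resulting low-resolution map is upsampled to the input image size and overlaid on the MRI slice to inspect which regions drove the segmentation.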


2022
pp. 103400
Author(s): Pin Wang, Pufei Li, Yongming Li, Jin Xu, Fang Yan, ...

2022
Vol 71 (2), pp. 3407-3423
Author(s): Shakra Mehak, M. Usman Ashraf, Rabia Zafar, Ahmed M. Alghamdi, Ahmed S. Alfakeeh, ...

2021
Vol 2 (1), pp. 101-105
Author(s): Runyu Hong, Wenke Liu, David Fenyö

Studies have shown that STK11 mutation plays a critical role in shaping the lung adenocarcinoma (LUAD) tumor immune environment. By training an Inception-ResNet-v2 deep convolutional neural network model, we were able to classify STK11-mutated and wild-type LUAD tumor histopathology images with promising accuracy (per-slide AUROC = 0.795). Dimensionality reduction of the activation maps before the output layer for the test-set images revealed that fewer immune cells accumulated around cancer cells in STK11-mutated cases. Our study demonstrates that a deep convolutional neural network model can automatically identify STK11 mutations from histopathology slides, and confirms that immune cell density was the main feature the model used to distinguish STK11-mutated cases.
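The per-slide AUROC reported above can be computed without any ML framework via the rank-sum (Mann-Whitney U) formulation. A minimal NumPy sketch follows; the function name and interface are illustrative, not taken from the paper.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.

    scores: predicted scores, higher = more likely positive
    labels: binary ground truth (1 = positive, e.g. STK11-mutated)
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    # Rank the scores from 1..n, averaging ranks over ties.
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # U statistic of the positive class, normalized to [0, 1].
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUROC of 0.5 corresponds to chance-level ranking; 1.0 means every mutated slide scores above every wild-type slide.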


2021
Vol 12 (1), pp. 288
Author(s): Tasleem Kausar, Adeeba Kausar, Muhammad Adnan Ashraf, Muhammad Farhan Siddique, Mingjiang Wang, ...

Histopathological image analysis is the examination of tissue under a light microscope for cancer diagnosis. Computer-assisted diagnosis (CAD) systems perform well at diagnosing cancer from histopathology images. However, stain variability in histopathology images is inevitable due to differences in staining processes, operator ability, and scanner specifications, and these stain variations affect the accuracy of CAD systems. Various stain normalization techniques have been developed to cope with this inter-image variability by standardizing the appearance of images. However, these methods rely on a single reference image rather than incorporating the color distribution of the entire dataset. In this paper, we design a novel machine-learning model that takes advantage of the whole dataset's distribution as well as the color statistics of a single target image, instead of relying on a single target image alone. The proposed deep model, called the stain acclimation generative adversarial network (SA-GAN), consists of one generator and two discriminators. The generator maps input images from the source domain to the target domain. The first discriminator forces the generated images to maintain the color patterns of the target domain, while the second forces them to preserve the structural contents of the source domain. The model is trained using a color-attribute metric extracted from a selected template image, so it learns not only dataset-specific staining properties but also image-specific textural contents. Results on four different histopathology datasets show the efficacy of SA-GAN at acclimating stain contents and enhancing normalization quality, achieving the highest values of the performance metrics. Additionally, the proposed method is evaluated on a multiclass cancer-type classification task, showing a 6.9% improvement in accuracy on the ICIAR 2018 hidden test data.
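For context, a classical single-reference baseline of the kind this paragraph contrasts SA-GAN against is Reinhard-style statistics matching: shift each channel of the source image to the mean and standard deviation of a target image. The sketch below applies the matching directly in RGB for brevity (the original Reinhard method works in the LAB color space); the function name and shapes are our own illustration, not the authors' model.

```python
import numpy as np

def stain_normalize(source, target):
    """Reinhard-style stain normalization: match per-channel statistics.

    source, target: float arrays of shape (H, W, 3), values in [0, 1].
    Returns the source image re-colored to the target's channel statistics.
    """
    s_mean, s_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1)) + 1e-8
    t_mean, t_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1))
    # Standardize the source channels, then rescale to the target statistics.
    out = (source - s_mean) / s_std * t_std + t_mean
    return np.clip(out, 0.0, 1.0)
```

Because this uses only one target image's statistics, a poorly chosen template skews the whole dataset, which is exactly the limitation the dataset-level color distribution in SA-GAN is designed to address.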

