whole slide image
Recently Published Documents

TOTAL DOCUMENTS: 128 (five years: 79)
H-INDEX: 9 (five years: 5)

2021
Author(s): Yuxuan Wang, Xuechen Li, Jingxin Liu, Linlin Shen, Kunming Sun, ...

2021
Author(s): Yi Zheng, Rushin Gindra, Margrit Betke, Jennifer Beane, Vijaya B Kolachalama

Deep learning is a powerful tool for assessing pathology data obtained from digitized biopsy slides. In the context of supervised learning, most methods typically divide a whole slide image (WSI) into patches, aggregate convolutional neural network outcomes on them, and estimate the overall disease grade. However, patch-based methods introduce label noise into training by assuming that each patch is independent and carries the same label as the WSI, and they neglect contextual information that is significant in disease grading. Here we present a Graph-Transformer (GT) based framework for processing pathology data, called GTP, that interprets morphological and spatial information at the WSI level to predict disease grade. To demonstrate the applicability of our approach, we selected 3,024 hematoxylin-and-eosin-stained WSIs of lung tumors and normal histology from the Clinical Proteomic Tumor Analysis Consortium, the National Lung Screening Trial, and The Cancer Genome Atlas, and used GTP to distinguish lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LSCC) from slides with normal histology. Our model achieved consistently high performance on binary (tumor versus normal: mean overall accuracy = 0.975 ± 0.013) as well as three-label (normal versus LUAD versus LSCC: mean accuracy = 0.932 ± 0.019) classification on held-out test data, underscoring the power of GT-based deep learning for WSI-level classification. We also introduced a graph-based saliency mapping technique, called GraphCAM, that captures regional as well as contextual information and allows our model to highlight WSI regions that are highly associated with the class label. Taken together, our findings demonstrate GTP as a novel interpretable and effective deep learning framework for WSI-level classification.
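The contextual aggregation this abstract contrasts with patch independence can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the paper's GTP architecture: patch embeddings are placed on a graph built from their slide coordinates (8-neighbour grid adjacency), and one mean-aggregation message-passing step mixes each patch with its spatial neighbours.

```python
import numpy as np

def build_patch_graph(coords, patch_size):
    """Adjacency matrix with self-loops: two patches are connected
    if they are 8-neighbours on the slide grid (toy rule)."""
    n = len(coords)
    A = np.eye(n)
    for i in range(n):
        for j in range(i + 1, n):
            dy = abs(coords[i][0] - coords[j][0])
            dx = abs(coords[i][1] - coords[j][1])
            if max(dy, dx) <= patch_size:
                A[i, j] = A[j, i] = 1.0
    return A

def propagate(features, A):
    """One mean-aggregation step: row-normalise the adjacency and
    average each patch embedding with its neighbours'."""
    A_norm = A / A.sum(axis=1, keepdims=True)
    return A_norm @ features

coords = [(0, 0), (0, 224), (224, 0), (448, 448)]  # patch top-left corners
feats = np.array([[1.0], [0.0], [0.0], [1.0]])     # toy 1-d patch embeddings
A = build_patch_graph(coords, patch_size=224)
context_feats = propagate(feats, A)                # isolated patch (448, 448) keeps its own embedding
```

A purely patch-based classifier would score each row of `feats` in isolation; after propagation, each embedding already reflects its spatial neighbourhood, which is the kind of context a graph-transformer can then attend over.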


2021, Vol 12 (1)
Author(s): Shenghua Cheng, Sibo Liu, Jingya Yu, Gong Rao, Yuwei Xiao, ...

Abstract
Computer-assisted diagnosis is key for scaling up cervical cancer screening. However, current recognition algorithms perform poorly on whole slide image (WSI) analysis, fail to generalize across diverse staining and imaging conditions, and show sub-optimal clinical-level verification. Here, we develop a progressive lesion cell recognition method combining low- and high-resolution WSIs to recommend lesion cells, and a recurrent neural network-based WSI classification model to evaluate the lesion degree of WSIs. We train and validate our WSI analysis system on 3,545 patient-wise WSIs with 79,911 annotations from multiple hospitals and several imaging instruments. On multi-center independent test sets of 1,170 patient-wise WSIs, we achieve 93.5% specificity and 95.1% sensitivity for classifying slides, comparing favourably to the average performance of three independent cytopathologists, and obtain an 88.5% true positive rate for highlighting the top 10 lesion cells on 447 positive slides. After deployment, our system recognizes a one-gigapixel WSI in about 1.5 min.
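The two-stage pipeline this abstract describes, screening at low resolution and re-scoring only the most suspicious candidates at high resolution before a slide-level decision, can be sketched as follows. All function names here are hypothetical, and a simple mean over the top cell scores stands in for the paper's trained recurrent classifier.

```python
import numpy as np

def recommend_lesion_cells(low_res_scores, high_res_fn, top_k=10):
    """Progressive recognition sketch: shortlist candidates from cheap
    low-resolution scores, then refine only those with an expensive
    high-resolution scorer (high_res_fn is a hypothetical callback)."""
    shortlist = np.argsort(low_res_scores)[::-1][: top_k * 3]
    refined = [(int(idx), high_res_fn(idx)) for idx in shortlist]
    refined.sort(key=lambda pair: pair[1], reverse=True)
    return refined[:top_k]

def classify_slide(top_cell_scores, threshold=0.5):
    """Toy stand-in for the paper's RNN: aggregate the recommended
    lesion-cell scores into a slide-level positive/negative call."""
    return float(np.mean(top_cell_scores) > threshold)

# Toy usage: here the high-resolution scorer just echoes the low-res score.
low_res = np.array([0.10, 0.92, 0.31, 0.85, 0.05])
top_cells = recommend_lesion_cells(low_res, lambda i: float(low_res[i]), top_k=2)
slide_positive = classify_slide([score for _, score in top_cells])
```

The design point the abstract makes is cost: the low-resolution pass bounds how many regions ever need full-resolution analysis, which is what makes a gigapixel slide tractable in minutes.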

