TAMI-22. SEGMENTATION OF DISTINCT TUMOR HALLMARKS OF GLIOBLASTOMA ON DIGITAL HISTOPATHOLOGY USING A HIERARCHICAL DEEP LEARNING APPROACH

2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi202-vi203
Author(s):  
Alvaro Sandino ◽  
Ruchika Verma ◽  
Yijiang Chen ◽  
David Becerra ◽  
Eduardo Romero ◽  
...  

Abstract PURPOSE Glioblastoma is a highly heterogeneous brain tumor. Primary treatment for glioblastoma involves maximally safe surgical resection. After surgery, resected tissue slides are visually analyzed by neuropathologists to identify the distinct histological hallmarks that characterize glioblastoma, including high cellularity, necrosis, and vascular proliferation. In this work, we present a hierarchical deep learning-based strategy to automatically segment distinct glioblastoma niches, including necrosis, cellular tumor, and hyperplastic blood vessels, on digitized histopathology slides.
METHODS We employed the IvyGAP cohort, for which hematoxylin and eosin (H&E) slides (digitized at 20X magnification) from n=41 glioblastoma patients were available, along with expert-driven segmentations of cellular tumor, necrosis, and hyperplastic blood vessels (among other histological attributes). We randomly assigned n=120 slides from 29 patients for training, n=38 slides from 6 patients for validation, and n=30 slides from 6 patients for testing our deep learning model, which was based on the Residual Network architecture (ResNet-50). Approximately 2,000 patches of 224x224 pixels were sampled from every slide. Our hierarchical model first segments necrotic from non-necrotic (i.e., cellular tumor) regions, and then, within the regions segmented as non-necrotic, identifies hyperplastic blood vessels against the rest of the cellular tumor.
RESULTS Our model achieved a training accuracy of 94% and a testing accuracy of 88%, with an area under the curve (AUC) of 92%, in distinguishing necrotic from non-necrotic (i.e., cellular tumor) regions. Similarly, we obtained a training accuracy of 78% and a testing accuracy of 87% (with an AUC of 94%) in identifying hyperplastic blood vessels against the rest of the cellular tumor.
CONCLUSION We developed a reliable hierarchical model for automatic segmentation of necrosis, cellular tumor, and hyperplastic blood vessels on digitized H&E-stained glioblastoma tissue images. Future work will extend the model to segmentation of pseudopalisading patterns and microvascular proliferation.
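The two-stage cascade described in METHODS can be sketched in a few lines. This is not the authors' code: the two `stage*_prob` functions below are hypothetical stand-ins for the trained ResNet-50 classifiers, and the 0.5 thresholds are assumed defaults.

```python
import numpy as np

# Hypothetical stand-ins for the two trained ResNet-50 stages:
# stage 1 separates necrosis from non-necrotic tissue; stage 2 separates
# hyperplastic blood vessels from the remaining cellular tumor.
def stage1_necrosis_prob(patches):
    return patches.mean(axis=(1, 2, 3))  # mock "necrosis score"

def stage2_vessel_prob(patches):
    return patches.std(axis=(1, 2, 3))   # mock "vessel score"

NECROSIS, VESSEL, TUMOR = 0, 1, 2

def hierarchical_classify(patches, t1=0.5, t2=0.5):
    """Two-stage cascade: label necrotic patches first, then split the
    non-necrotic patches into vessel vs. cellular tumor."""
    labels = np.full(len(patches), TUMOR)
    necrotic = stage1_necrosis_prob(patches) >= t1
    labels[necrotic] = NECROSIS
    rest = ~necrotic
    vessel = np.zeros(len(patches), dtype=bool)
    vessel[rest] = stage2_vessel_prob(patches[rest]) >= t2
    labels[vessel] = VESSEL
    return labels

# 224x224 RGB patches, as sampled in the paper
demo = hierarchical_classify(np.random.rand(8, 224, 224, 3))
```

The key design point is that stage 2 only ever sees patches that stage 1 has already ruled non-necrotic, so each classifier solves a simpler binary problem.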

2021 ◽  
Vol 10 (16) ◽  
pp. 3591
Author(s):  
Hyejun Seo ◽  
JaeJoon Hwang ◽  
Taesung Jeong ◽  
Jonghyun Shin

The purpose of this study is to evaluate and compare the performance of six state-of-the-art convolutional neural network (CNN)-based deep learning models for cervical vertebral maturation (CVM) classification on lateral cephalometric radiographs, and to visualize the basis of each model's CVM classification using gradient-weighted class activation mapping (Grad-CAM). A total of 600 lateral cephalometric radiographs, obtained from patients aged 6–19 years between 2013 and 2020 at Pusan National University Dental Hospital, were used in this study. ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, Inception-v3, and Inception-ResNet-v2 were tested to determine the optimal pre-trained network architecture. Accuracy, recall, precision, F1-score, and area under the curve (AUC) values from the receiver operating characteristic (ROC) curve were used as multi-class classification metrics to evaluate the models. All deep learning models demonstrated more than 90% accuracy, with Inception-ResNet-v2 performing best. In addition, Grad-CAM visualization showed that each deep learning model focused primarily on the cervical vertebrae and surrounding structures. The use of these deep learning models in clinical practice should help dental practitioners make accurate diagnoses and treatment plans.
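The multi-class metrics listed above all derive from a single confusion matrix. A minimal numpy sketch (not the authors' evaluation code, which the abstract does not publish) of per-class precision, recall and F1 plus overall accuracy:

```python
import numpy as np

def multiclass_metrics(y_true, y_pred, n_classes):
    """Per-class precision/recall/F1 and overall accuracy,
    all read off one confusion matrix (rows = true class)."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    tp = np.diag(cm).astype(float)
    precision = tp / np.maximum(cm.sum(axis=0), 1)   # column sums = predicted counts
    recall = tp / np.maximum(cm.sum(axis=1), 1)      # row sums = true counts
    f1 = 2 * precision * recall / np.maximum(precision + recall, 1e-12)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1
```

For CVM staging, `n_classes` would be the number of maturation stages; AUC additionally requires per-class scores rather than hard labels, so it is omitted here.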


2021 ◽  
Author(s):  
Sanjeewani NA ◽  
Arun Kumar Yadav ◽  
Mohd Akbar ◽  
Mohit Kumar ◽  
Divakar Yadav

<div>Automatic retinal blood vessel segmentation is crucial to ophthalmology, playing a vital role in the early detection of several retinal diseases such as diabetic retinopathy and hypertension. In recent times, deep learning-based methods have attained great success in the automatic segmentation of retinal blood vessels. In this paper, a U-NET-based architecture is proposed to segment the retinal blood vessels from fundus images of the eye, and three pre-processing algorithms are proposed to enhance the performance of the system. The proposed architecture provides significant results: in experimental evaluation on the publicly available DRIVE data set, the average accuracy (Acc) is 0.9577, sensitivity (Se) is 0.7436, specificity (Sp) is 0.9838, and the F1-score is 0.7931. The proposed system outperforms all recent state-of-the-art approaches reported in the literature.</div>
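The Acc/Se/Sp/F1 figures quoted above are pixel-wise statistics over the binary vessel mask. As an illustrative sketch (standard definitions, not code from the paper):

```python
import numpy as np

def vessel_metrics(pred, gt):
    """Pixel-wise accuracy, sensitivity, specificity and F1 between a
    predicted binary vessel mask and the ground-truth mask, the four
    metrics conventionally reported on DRIVE."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = int(np.sum(pred & gt))
    tn = int(np.sum(~pred & ~gt))
    fp = int(np.sum(pred & ~gt))
    fn = int(np.sum(~pred & gt))
    acc = (tp + tn) / pred.size
    se = tp / max(tp + fn, 1)            # sensitivity: recall on vessel pixels
    sp = tn / max(tn + fp, 1)            # specificity: recall on background
    f1 = 2 * tp / max(2 * tp + fp + fn, 1)
    return acc, se, sp, f1
```

The gap between Se (0.7436) and Sp (0.9838) in the reported results reflects the heavy class imbalance: background pixels vastly outnumber vessel pixels.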


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Jason Kugelman ◽  
David Alonso-Caneiro ◽  
Scott A. Read ◽  
Jared Hamwood ◽  
Stephen J. Vincent ◽  
...  

Abstract The analysis of the choroid in the eye is crucial for our understanding of a range of ocular diseases and physiological processes. Optical coherence tomography (OCT) imaging provides the ability to capture highly detailed cross-sectional images of the choroid yet only a very limited number of commercial OCT instruments provide methods for automatic segmentation of choroidal tissue. Manual annotation of the choroidal boundaries is often performed but this is impractical due to the lengthy time taken to analyse large volumes of images. Therefore, there is a pressing need for reliable and accurate methods to automatically segment choroidal tissue boundaries in OCT images. In this work, a variety of patch-based and fully-convolutional deep learning methods are proposed to accurately determine the location of the choroidal boundaries of interest. The effect of network architecture, patch-size and contrast enhancement methods was tested to better understand the optimal architecture and approach to maximize performance. The results are compared with manual boundary segmentation used as a ground-truth, as well as with a standard image analysis technique. Results of total retinal layer segmentation are also presented for comparison purposes. The findings presented here demonstrate the benefit of deep learning methods for segmentation of the chorio-retinal boundary analysis in OCT images.
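A patch-based boundary method needs a sampling step that crops fixed-size windows around candidate boundary locations in each B-scan. A minimal sketch of that step (the patch size and padding strategy here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def extract_patches(bscan, centers, size=32):
    """Crop size x size patches from a 2-D OCT B-scan, each centered on a
    candidate boundary location (row, col). Zero-padding keeps crops near
    the image edge well-defined."""
    half = size // 2
    padded = np.pad(bscan, half, mode="constant")
    # after padding by `half`, original pixel (r, c) sits at (r+half, c+half),
    # so the window [r:r+size, c:c+size] is centered on it
    return np.stack([padded[r:r + size, c:c + size] for r, c in centers])
```

Each patch would then be classified as boundary/non-boundary, in contrast to the fully-convolutional variants that label a whole B-scan in one pass.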


Author(s):  
Frank Y. Shih ◽  
Himanshu Patel

This paper presents a novel deep learning classification technique applied to optical coherence tomography (OCT) retinal images. We propose deep neural networks based on the pre-trained VGG16 network model. The OCT retinal image dataset consists of four classes: the three most common retinal diseases and one normal retina scan. Because the training data are not sufficiently large, we use transfer learning. Since convolutional neural networks are sensitive to small changes in the data, we use data augmentation when analyzing the classification results on retinal images. The input grayscale OCT scans are converted to RGB images using colormaps. We evaluated different types of classifiers with varying parameters when training the network architecture. Experimental results show that a testing accuracy of 99.48% can be obtained across all four classes combined.
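The grayscale-to-RGB conversion mentioned above is a lookup-table operation: each of the 256 gray levels is mapped to a color so the single-channel scan can feed a 3-channel pre-trained network. The abstract does not name the colormap, so the LUT below is a toy stand-in:

```python
import numpy as np

def gray_to_rgb(img, lut):
    """Map a uint8 grayscale OCT scan to RGB through a 256x3 colormap
    lookup table; output shape is img.shape + (3,)."""
    return lut[img]

# Toy linear LUT as a stand-in for a real colormap (e.g. one of
# matplotlib's); red ramps up first, then green.
lut = np.zeros((256, 3), dtype=np.uint8)
lut[:, 0] = np.arange(256)
lut[:, 1] = np.clip(np.arange(256) * 2 - 255, 0, 255)
```

Because the mapping is a pure table lookup, it adds no learnable parameters; its only role is to match the 3-channel input contract of VGG16.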


2021 ◽  
Vol 12 ◽  
Author(s):  
Md. Parvez Islam ◽  
Yuka Nakano ◽  
Unseok Lee ◽  
Keinichi Tokuda ◽  
Nobuo Kochi

The real challenge in separating leaf pixels from background pixels in thermal images stems from factors such as the amount of thermal radiation emitted and reflected by the targeted plant, the absorption of reflected radiation by the humidity of the greenhouse, and the outside environment. We propose TheLNet270v1 (thermal leaf network with 270 layers, version 1) to recover the leaf canopy from its background in real time with higher accuracy than previous systems. The proposed network achieved an accuracy of 91% (mean boundary F1 score, or BF score) in distinguishing canopy pixels from background pixels, segmenting the image into two classes: leaf and background. We evaluated the segmentation performance on more than 13,766 images and obtained 95.75% training and 95.23% validation accuracies without overfitting. This research aimed to develop a deep learning technique for the automatic segmentation of thermal images, in order to continuously monitor the canopy surface temperature inside a greenhouse.


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yuliang Ma ◽  
Xue Li ◽  
Xiaopeng Duan ◽  
Yun Peng ◽  
Yingchun Zhang

Purpose. Retinal blood vessel image segmentation is an important step in ophthalmological analysis. However, small vessels are difficult to segment accurately because of the low contrast and complex feature information of blood vessels. The objective of this study is to develop an improved retinal blood vessel segmentation structure (WA-Net) to overcome these challenges. Methods. This work mainly focuses on network width. The channels of the ResNet block were broadened to propagate more low-level features, while the identity mapping pathway was slimmed to keep parameter complexity in check. A residual atrous spatial pyramid module was used to capture the retinal vessels at various scales, and weight normalization was applied to eliminate the impact of the mini-batch and improve segmentation accuracy. The experiments were performed on the DRIVE and STARE datasets, and to show the generalizability of WA-Net we performed cross-training between the datasets. Results. The global accuracy within datasets was 95.66% (DRIVE) and 96.45% (STARE), with specificity of 98.13% and 98.71%, respectively. The inter-dataset accuracy and area under the curve diverged by only 1–2% from the corresponding intra-dataset performance. Conclusion. These results show that WA-Net extracts more detailed blood vessels and achieves superior performance on retinal blood vessel segmentation tasks.
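The weight normalization used in WA-Net re-parameterizes each weight vector as a direction times a learned magnitude, w = g * v / ||v||, which removes the dependence on mini-batch statistics that batch normalization has. A minimal numpy sketch of the re-parameterization (standard formulation, not the authors' implementation):

```python
import numpy as np

def weight_norm(v, g):
    """Weight normalization: w = g * v / ||v||. The direction of w comes
    from v and its magnitude ||w|| = g is learned separately, so no
    mini-batch statistics are needed."""
    return g * v / np.linalg.norm(v)
```

In a real network `v` and the scalar `g` are both trainable per output channel; here they are plain arguments for illustration.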


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jared Hamwood ◽  
Beat Schmutz ◽  
Michael J. Collins ◽  
Mark C. Allenby ◽  
David Alonso-Caneiro

Abstract This paper proposes a fully automatic method to segment the inner boundary of the bony orbit in two different image modalities: magnetic resonance imaging (MRI) and computed tomography (CT). The method, based on a deep learning architecture, uses two fully convolutional neural networks in series, followed by a graph-search method, to generate a boundary for the orbit. When compared to human performance on segmentation of both CT and MRI data, the proposed method achieves high Dice coefficients on both orbit and background, with scores of 0.813 and 0.975 in CT images and 0.930 and 0.995 in MRI images, showing a high degree of agreement with manual segmentation by a human expert. Given the volumetric characteristics of these imaging modalities and the complex, time-consuming nature of segmenting the orbital region in the human skull, manual segmentation is often impractical. The proposed method thus provides a valid clinical and research tool that performs similarly to the human observer.
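The Dice coefficients quoted above measure overlap between the automatic and manual masks. The standard definition, as a short numpy sketch:

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2|A∩B| / (|A|+|B|), ranging from 0 (no overlap) to 1 (identical)."""
    pred, gt = np.asarray(pred).astype(bool), np.asarray(gt).astype(bool)
    denom = pred.sum() + gt.sum()
    # convention: two empty masks agree perfectly
    return 2.0 * np.sum(pred & gt) / denom if denom else 1.0
```

Reporting Dice separately for orbit and background, as the paper does, guards against the score being inflated by the much larger background class.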


Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Abstract Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image a human perceives is reconstructed from the sinogram, the original raw data acquired by the CT scanner. In this work, unlike conventional image-based CADe systems, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that utilizes the self-learning power of a convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available online Lung Image Database Consortium database, each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared to 0.89 based on the CT image alone. Moreover, combining the sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
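The AUC values compared above have a direct probabilistic reading: the chance that a randomly chosen nodule scores higher than a randomly chosen non-nodule. A small numpy sketch of that rank-based (Mann-Whitney) computation, which is a standard formulation rather than the authors' evaluation code:

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the fraction of
    (positive, negative) pairs where the positive scores higher,
    counting ties as one half."""
    pos = np.asarray(scores_pos, dtype=float)[:, None]
    neg = np.asarray(scores_neg, dtype=float)[None, :]
    wins = (pos > neg).sum() + 0.5 * (pos == neg).sum()
    return wins / (pos.size * neg.size)
```

Under this reading, moving from 0.89 (CT alone) to 0.92 (sinogram + CT) means the combined system ranks nodules above non-nodules in 3% more of all such pairs.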

