Integrating deep convolutional neural networks with marker-controlled watershed for overlapping nuclei segmentation in histopathology images

2020, Vol. 376, pp. 166-179
Author(s): Lipeng Xie, Jin Qi, Lili Pan, Samad Wali
2019, Vol. 1 (1)
Author(s): Hwejin Jung, Bilal Lodhi, Jaewoo Kang

Abstract
Background: Since nuclei segmentation in histopathology images can provide key information for identifying the presence or stage of a disease, the images need to be assessed carefully. However, color variation across histopathology images and the varied structures of nuclei are two major obstacles to accurate segmentation and analysis. Several machine learning methods rely heavily on hand-crafted features, which are limited by manual thresholding.
Results: To obtain robust results, deep learning based methods have been proposed. Deep convolutional neural networks (DCNNs), which automatically extract features from raw image data, have been shown to achieve strong performance. Inspired by such achievements, we propose a nuclei segmentation method based on DCNNs. To normalize the color of histopathology images, we use a deep convolutional Gaussian mixture color normalization model that clusters pixels while considering the structures of nuclei. To segment nuclei, we use Mask R-CNN, which achieves state-of-the-art object segmentation performance in computer vision. In addition, we perform multiple inference as a post-processing step to boost segmentation performance. We evaluate our segmentation method on two datasets: the first consists of histopathology images of various organs, while the second consists of histopathology images of a single organ. Performance is measured in various experimental setups at both the object level and the pixel level, and is compared with that of existing state-of-the-art methods. The experimental results show that our nuclei segmentation method outperforms the existing methods.
Conclusions: We propose a DCNN-based nuclei segmentation method for histopathology images. The proposed method, which uses Mask R-CNN with color normalization and multiple-inference post-processing, provides robust nuclei segmentation results. Our method can also facilitate downstream nuclei morphological analyses, as it provides high-quality features extracted from histopathology images.
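For readers who want to prototype this kind of pipeline, the sketch below shows a Mask R-CNN inference pass in PyTorch/torchvision, with a horizontal-flip second pass standing in for the paper's "multiple inference" post-processing. It is a minimal sketch under several assumptions: the color-normalization function is a hypothetical placeholder for the Gaussian mixture model, the weights are generic COCO weights rather than a model fine-tuned on nuclei annotations, and the fusion rule is illustrative, not the authors' exact procedure.

```python
# Minimal Mask R-CNN inference sketch with flip-based test-time augmentation.
# Assumptions: torchvision >= 0.13 ("DEFAULT" weights), COCO-pretrained model
# used only as a stand-in; a real pipeline would fine-tune on nuclei masks.
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn


def normalize_color(image):
    # Placeholder for the deep convolutional Gaussian mixture color
    # normalization model described in the abstract (identity here).
    return image


def foreground_map(masks, h, w):
    # Collapse per-instance soft masks of shape (N, 1, H, W) into one
    # soft foreground probability map of shape (H, W).
    if masks.numel() == 0:
        return torch.zeros(h, w)
    return masks[:, 0].max(dim=0).values


@torch.no_grad()
def segment_nuclei(image, score_thresh=0.5):
    """image: float tensor of shape (3, H, W) with values in [0, 1]."""
    model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
    image = normalize_color(image)
    flipped = torch.flip(image, dims=[2])           # horizontal flip for a second pass
    out, out_flip = model([image, flipped])         # two forward passes in one batch
    h, w = image.shape[1:]
    fg = foreground_map(out["masks"], h, w)
    fg_flip = torch.flip(foreground_map(out_flip["masks"], h, w), dims=[1])
    fused = 0.5 * (fg + fg_flip)                    # naive fusion of the two passes
    keep = out["scores"] > score_thresh             # per-instance masks above threshold
    return out["masks"][keep] > 0.5, fused
```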


2021, Vol. 11 (1)
Author(s): Andrew Lagree, Majidreza Mohebpour, Nicholas Meti, Khadijeh Saednia, Fang-I. Lu, ...

Abstract
Breast cancer is currently the second most common cause of cancer-related death in women. Presently, the clinical benchmark in cancer diagnosis is tissue biopsy examination. However, the manual process of histopathological analysis is laborious, time-consuming, and limited by the quality of the specimen and the experience of the pathologist. This study's objective was to determine whether deep convolutional neural networks can be trained, with transfer learning, on a set of histopathological images independent of breast tissue to segment tumor nuclei of the breast. Several deep convolutional neural networks were evaluated, including U-Net, Mask R-CNN, and a novel network (GB U-Net). The networks were trained on a set of hematoxylin and eosin (H&E)-stained images of eight diverse tissue types. GB U-Net demonstrated superior performance in segmenting sites of invasive disease (AJI = 0.53, mAP = 0.39 and AJI = 0.54, mAP = 0.38), validated on two hold-out datasets containing exclusively breast tissue images with approximately 7,582 annotated cells. The results demonstrate that networks trained on images independent of breast tissue can accurately segment tumor nuclei of the breast.
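The abstract does not specify GB U-Net's architecture, so the sketch below is only a minimal U-Net-style encoder-decoder in PyTorch that illustrates the skip-connection pattern and the transfer-learning workflow (pretrain on non-breast H&E tissue, then fine-tune on breast images). The channel counts, depth, and checkpoint name are assumptions, not the authors' design.

```python
# Minimal U-Net-style encoder-decoder (illustrative; not the paper's GB U-Net).
import torch
import torch.nn as nn


def conv_block(cin, cout):
    # Two 3x3 convolutions with batch norm and ReLU, as in typical U-Net stages.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
        nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    def __init__(self, in_ch=3, n_classes=1):
        super().__init__()
        self.enc1 = conv_block(in_ch, 32)
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = conv_block(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                              # skip connection 1
        e2 = self.enc2(self.pool(e1))                  # skip connection 2
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                           # per-pixel nuclei logits


# Transfer-learning flavor: initialize from weights trained on non-breast
# H&E tissue, then fine-tune on breast images (checkpoint name is hypothetical).
model = TinyUNet()
# model.load_state_dict(torch.load("pretrained_multi_tissue.pt"))
```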


2020, Vol. 2020 (10), pp. 28-1-28-7
Author(s): Kazuki Endo, Masayuki Tanaka, Masatoshi Okutomi

Classification of degraded images is important in practice because real images are usually degraded by compression, noise, blurring, and other factors. Nevertheless, most image classification research focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks composed of an image restoration network followed by a classification network to classify degraded images. This paper proposes an alternative approach that uses the degraded image together with an additional degradation parameter for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. A degradation-parameter estimation network is also incorporated for cases where the degradation parameters of input images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained on degraded images only.
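A minimal sketch of the two-input idea follows, assuming a scalar degradation parameter (e.g., a noise level) and a toy CNN. Fusing an embedded parameter with pooled image features by concatenation is illustrative only, not the paper's exact architecture.

```python
# Toy classifier with two inputs: a degraded image and a scalar degradation
# parameter (assumed here to be, e.g., a noise level in [0, 1]).
import torch
import torch.nn as nn


class DegradationAwareClassifier(nn.Module):
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1),
        )
        # Embed the scalar degradation parameter so it can be fused with image features.
        self.param_embed = nn.Sequential(nn.Linear(1, 16), nn.ReLU(inplace=True))
        self.classifier = nn.Linear(64 + 16, n_classes)

    def forward(self, image, degradation_param):
        f = self.features(image).flatten(1)                   # (B, 64) image features
        p = self.param_embed(degradation_param.view(-1, 1))   # (B, 16) parameter embedding
        return self.classifier(torch.cat([f, p], dim=1))      # fused prediction

# If the true degradation level is unknown, a separate estimator network would
# predict degradation_param from the image first, as the abstract describes.
logits = DegradationAwareClassifier()(torch.rand(4, 3, 32, 32), torch.rand(4))
```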


2019, Vol. 277, pp. 02024
Author(s): Lincan Li, Tong Jia, Tianqi Meng, Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in intravascular optical coherence tomography (IVOCT) images of the cardiovascular system. First, a fully convolutional network (FCN), U-Net, is used to segment the original IVOCT cardiovascular images; we experiment with different threshold values to find the best threshold for removing noise and background from the original images. Second, a modified Faster R-CNN is adopted to perform precise detection. The modified Faster R-CNN uses six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall, precision, IoU (Intersection-over-Union), and total score are 0.94, 0.885, 0.913, and 0.913, respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than that of conventional approaches using one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
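As an illustration of the anchor modification only, the hedged sketch below builds a torchvision Faster R-CNN whose region proposal network uses the six anchor scales reported above. The MobileNetV2 backbone and the two-class setup (background vs. vulnerable plaque) are assumptions, and the first-stage U-Net segmentation step is omitted.

```python
# Faster R-CNN with a six-scale anchor generator (12^2 ... 256^2) in torchvision.
# Assumptions: torchvision >= 0.13, MobileNetV2 backbone, 2 classes.
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

backbone = torchvision.models.mobilenet_v2(weights="DEFAULT").features
backbone.out_channels = 1280  # FasterRCNN needs the backbone's output channel count

anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),   # six anchor scales on a single feature map
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

model = FasterRCNN(
    backbone,
    num_classes=2,                         # background + vulnerable plaque (assumption)
    rpn_anchor_generator=anchor_generator,
    box_roi_pool=roi_pooler,
)
```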

