Deep Class-Specific Affinity-Guided Convolutional Network for Multimodal Unpaired Image Segmentation

Author(s):  
Jingkun Chen ◽  
Wenqi Li ◽  
Hongwei Li ◽  
Jianguo Zhang
2018 ◽  
Vol 176 ◽  
pp. 36-47 ◽  
Author(s):  
Aqing Yang ◽  
Huasheng Huang ◽  
Chan Zheng ◽  
Xunmu Zhu ◽  
Xiaofan Yang ◽  
...  

Author(s):  
Hao Zheng ◽  
Lin Yang ◽  
Jianxu Chen ◽  
Jun Han ◽  
Yizhe Zhang ◽  
...  

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially since image data can normally be annotated well only by biomedical experts. Human experts are often involved in a long and iterative annotation process, as in active-learning-based annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of the learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained on the annotated patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs a one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be extended to 3D images with simple modifications. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields segmentation results competitive with state-of-the-art methods.
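The abstract describes the selection step only at a high level; the following is a minimal sketch of one plausible realization of representative patch selection in a learned latent space, assuming a pretrained unsupervised encoder `encode` (a hypothetical name) and using k-means clustering as a stand-in for the paper's redundancy-minimizing selection strategy.

import numpy as np
from sklearn.cluster import KMeans

def select_representative_patches(patches, encode, k=50, seed=0):
    """Pick k patches whose latent descriptors cover the data with little redundancy."""
    feats = np.stack([encode(p) for p in patches])        # (N, D) latent descriptors
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(feats)
    chosen = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        # keep the member closest to its cluster centroid
        dists = np.linalg.norm(feats[members] - km.cluster_centers_[c], axis=1)
        chosen.append(int(members[np.argmin(dists)]))
    return chosen  # indices of patches to send to the expert annotator

The selected patches would then be annotated once and used to train the segmentation FCN, avoiding the repeated annotation rounds of active learning.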


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Xiaodong Huang ◽  
Hui Zhang ◽  
Li Zhuo ◽  
Xiaoguang Li ◽  
Jing Zhang

Extracting the tongue body accurately from a digital tongue image is challenging for automated tongue diagnosis, owing to the blurred edge of the tongue body, interference from pathological details, and the large variation in tongue size and shape. In this study, an automated tongue image segmentation method using an enhanced fully convolutional network with an encoder-decoder structure is presented. In the proposed network, a deep residual network is adopted as the encoder to obtain dense feature maps, and a Receptive Field Block is attached after the encoder. The Receptive Field Block can capture an adequate global contextual prior because of its multi-branch convolution layers with varying kernel sizes. Moreover, a Feature Pyramid Network is used as the decoder to fuse multi-scale feature maps, gathering sufficient positional information to recover a clear contour of the tongue body. Quantitative evaluation of the segmentation results on 300 tongue images from the SIPL-tongue dataset showed that the average Hausdorff Distance, average Symmetric Mean Absolute Surface Distance, average Dice Similarity Coefficient, average precision, average sensitivity, and average specificity were 11.2963, 3.4737, 97.26%, 95.66%, 98.97%, and 98.68%, respectively. The proposed method achieved the best performance compared with four other deep-learning-based segmentation methods (SegNet, FCN, PSPNet, and DeepLab v3+), and similar results were obtained on the HIT-tongue dataset. The experimental results demonstrate that the proposed method achieves accurate tongue image segmentation and meets the practical requirements of automated tongue diagnosis.
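The Receptive Field Block is described only in terms of its multi-branch structure; the PyTorch sketch below shows a simplified multi-branch module with varying kernel sizes and dilations in that spirit. The layer widths and dilation rates are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class MultiBranchBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        branch_ch = out_ch // 4
        def branch(kernel, dilation):
            pad = dilation * (kernel - 1) // 2
            return nn.Sequential(
                nn.Conv2d(in_ch, branch_ch, 1, bias=False),
                nn.Conv2d(branch_ch, branch_ch, kernel, padding=pad,
                          dilation=dilation, bias=False),
                nn.BatchNorm2d(branch_ch),
                nn.ReLU(inplace=True))
        # branches with growing kernels/dilations enlarge the effective receptive field
        self.branches = nn.ModuleList([branch(1, 1), branch(3, 1),
                                       branch(3, 3), branch(3, 5)])
        self.fuse = nn.Conv2d(branch_ch * 4, out_ch, 1)

    def forward(self, x):
        # concatenate branch outputs along channels and fuse with a 1x1 convolution
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))

# example: an encoder feature map of 64 channels expanded to 128 channels
features = torch.randn(1, 64, 32, 32)
out = MultiBranchBlock(64, 128)(features)   # shape (1, 128, 32, 32)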


Author(s):  
Hao Zheng ◽  
Yizhe Zhang ◽  
Lin Yang ◽  
Peixian Liang ◽  
Zhuo Zhao ◽  
...  

3D image segmentation plays an important role in biomedical image analysis. Many 2D and 3D deep learning models have achieved state-of-the-art segmentation performance on 3D biomedical image datasets. Yet, 2D and 3D models have their own strengths and weaknesses, and by unifying them one may be able to achieve more accurate results. In this paper, we propose a new ensemble learning framework for 3D biomedical image segmentation that combines the merits of 2D and 3D models. First, we develop a fully convolutional network (FCN) based meta-learner that learns how to improve the results of the 2D and 3D models (base-learners). Then, to minimize over-fitting of our sophisticated meta-learner, we devise a new training method that uses the results of the base-learners as multiple versions of "ground truths". Furthermore, since our new meta-learner training scheme does not depend on manual annotation, it can exploit abundant unlabeled 3D image data to further improve the model. Extensive experiments on two public datasets (the HVSMR 2016 Challenge dataset and the mouse piriform cortex dataset) show that our approach is effective under fully-supervised, semi-supervised, and transductive settings, and attains superior performance over state-of-the-art image segmentation methods.
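As a rough illustration of the meta-learner idea, the sketch below stacks the probability maps of two base-learners as input channels of a small 3D FCN and trains it against one base-learner's hard prediction used as a pseudo "ground truth". All tensor shapes, layer sizes, and the choice of pseudo-target are assumptions made for the example, not the paper's configuration.

import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    def __init__(self, n_base=2, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(n_base * n_classes, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, 1))

    def forward(self, base_probs):            # (B, n_base*n_classes, D, H, W)
        return self.net(base_probs)

meta = MetaLearner()
opt = torch.optim.Adam(meta.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# one schematic step on an unlabeled volume: stand-in outputs of the two base-learners
probs_2d = torch.rand(1, 2, 8, 64, 64).softmax(dim=1)
probs_3d = torch.rand(1, 2, 8, 64, 64).softmax(dim=1)
pseudo_gt = probs_3d.argmax(dim=1)            # one "version" of ground truth
opt.zero_grad()
logits = meta(torch.cat([probs_2d, probs_3d], dim=1))
loss = loss_fn(logits, pseudo_gt)
loss.backward()
opt.step()

Because the targets come from the base-learners rather than from manual labels, such a step can be repeated on unlabeled volumes, which is what enables the semi-supervised and transductive settings described in the abstract.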


PLoS ONE ◽  
2019 ◽  
Vol 14 (4) ◽  
pp. e0215676 ◽  
Author(s):  
Xu Ma ◽  
Xiangwu Deng ◽  
Long Qi ◽  
Yu Jiang ◽  
Hongwei Li ◽  
...  

Author(s):  
Yue Zhao ◽  
Lingming Zhang ◽  
Yang Liu ◽  
Deyu Meng ◽  
Zhiming Cui ◽  
...  

Author(s):  
V. R. S. Mani

In this chapter, the author paints a comprehensive picture of the different deep learning models used in multi-modal image segmentation tasks. The chapter serves as an introduction for those new to the field, an overview for those working in it, and a reference for those searching for literature on a specific application. Methods are classified according to the different types of multi-modal images and the corresponding types of convolutional neural networks used in the segmentation task. The chapter starts with an introduction to CNN topology and then describes various models such as Hyper Dense Net, Organ Attention Net, UNet, VNet, the Dilated Fully Convolutional Network, and transfer-learning-based approaches.

