A Multi-Scale Feature Fusion Method Based on U-Net for Retinal Vessel Segmentation

Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 811
Author(s):  
Dan Yang ◽  
Guoru Liu ◽  
Mengcheng Ren ◽  
Bin Xu ◽  
Jiao Wang

Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder, and the max-pooling index is applied during the upsampling process in the feature fusion decoder of the improved network. Skip connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotating, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied to image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared with other state-of-the-art methods, demonstrating that the performance of the proposed method is competitive and, in several metrics, superior.
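The abstract's cost-sensitive loss combines a soft Dice term with cross-entropy. A minimal NumPy sketch of such a combined loss follows; the weighting scheme and the parameters `w_dice` and `w_ce` are illustrative assumptions, not the paper's published formulation:

```python
import numpy as np

def dice_ce_loss(pred, target, w_dice=0.5, w_ce=0.5, eps=1e-7):
    """Weighted sum of soft-Dice loss and binary cross-entropy, a common
    way to counter the foreground/background imbalance of thin vessels.
    `pred` holds per-pixel probabilities, `target` the binary mask."""
    pred = np.clip(pred, eps, 1.0 - eps)
    # Soft Dice coefficient over all pixels.
    inter = np.sum(pred * target)
    dice = (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)
    # Pixel-wise binary cross-entropy.
    ce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    return w_dice * (1.0 - dice) + w_ce * ce
```

A near-perfect prediction drives both terms toward zero, while the Dice term keeps the gradient informative even when vessel pixels are rare.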

Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1820
Author(s):  
Yun Jiang ◽  
Huixia Yao ◽  
Zeqi Ma ◽  
Jingyao Zhang

The segmentation of retinal vessels is critical for the diagnosis of some fundus diseases. Retinal vessel segmentation requires abundant spatial information and receptive fields of different sizes, while existing methods usually sacrifice spatial resolution to achieve real-time inference speed, resulting in inadequate segmentation of low-contrast regions and weak resistance to noise. The asymmetry of capillaries in fundus images also increases the difficulty of segmentation. In this paper, we propose a two-branch network based on multi-scale attention to alleviate these problems. First, a coarse network with a multi-scale U-Net as the backbone is designed to capture more semantic information and to generate high-resolution features. A multi-scale attention module is used to obtain sufficiently large receptive fields. The other branch is a fine network, which uses residual blocks with small convolution kernels to make up for the deficiency of spatial information. Finally, we use a feature fusion module to aggregate the information of the coarse and fine networks. The experiments were performed on the DRIVE, CHASE, and STARE datasets, where the accuracy reached 96.93%, 97.58%, and 97.70%, the specificity reached 97.72%, 98.52%, and 98.94%, and the F-measure reached 83.82%, 81.39%, and 84.36%, respectively. Experimental results show that, compared with state-of-the-art methods such as Sine-Net and SA-Net, our proposed method performs better on all three datasets.
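The final fusion of the coarse and fine branches can be caricatured as a learned per-pixel gate between the two feature maps. The sketch below is a simplification under stated assumptions: the linear parameters `w_gate` and `b_gate` are hypothetical stand-ins for the paper's convolutional fusion module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def fuse_features(coarse, fine, w_gate, b_gate):
    """Gate-based fusion sketch for two (C, H, W) branch outputs.
    A 1x1-conv analogue (here a per-channel linear map over the stacked
    channels) decides how much each branch contributes at every pixel."""
    stacked = np.concatenate([coarse, fine], axis=0)          # (2C, H, W)
    gate = sigmoid(np.tensordot(w_gate, stacked, axes=([1], [0]))
                   + b_gate[:, None, None])                   # (C, H, W)
    return gate * coarse + (1.0 - gate) * fine
```

With zero-initialized gate parameters, the module degenerates to a plain average of the two branches, which is a sensible starting point before training.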


2021 ◽  
pp. 1-14
Author(s):  
Fengli Lu ◽  
Chengcai Fu ◽  
Guoying Zhang ◽  
Jie Shi

Accurate segmentation of fractures in coal rock CT images is important for the development of coalbed methane. However, due to the large variation in fracture scale and the similarity of gray values between weak fractures and the surrounding matrix, it remains a challenging task; the absence of any published coal rock dataset makes the task even harder. In this paper, a novel adaptive multi-scale feature fusion method based on U-net (AMSFF-U-net) is proposed for fracture segmentation in coal rock CT images. Specifically, both the encoder and decoder paths consist of residual blocks (ReBlock). The attention skip concatenation (ASC) module is proposed to capture more representative and distinguishing features by combining the high-level and low-level features of adjacent layers. The adaptive multi-scale feature fusion (AMSFF) module is presented to adaptively fuse feature maps of different scales from the encoder path; it can effectively capture rich multi-scale features. In response to the lack of coal rock fracture training data, we applied a set of comprehensive data augmentation operations to increase the diversity of training samples. Extensive experiments compare seven methods (FCEM, U-net, Res-Unet, Unet++, MSN-Net, WRAU-Net, and ours). The results demonstrate that the proposed AMSFF-U-net achieves better segmentation performance, particularly for weak and tiny-scale fractures.
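The core idea of adaptive multi-scale feature fusion, bringing encoder features at different strides to a common resolution and combining them with learned scale weights, can be sketched as follows. This is a minimal NumPy sketch: the softmax `logits` are hypothetical learned parameters, and nearest-neighbour upsampling stands in for the network's learned upsampling:

```python
import numpy as np

def upsample_nn(x, factor):
    """Nearest-neighbour upsampling of a (C, H, W) feature map."""
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def adaptive_fusion(feats, logits):
    """Fuse encoder features at strides 1, 2, 4, ... by upsampling each
    to the finest resolution and mixing them with softmax weights over
    scales (one scalar weight per scale, learned in the real model)."""
    target_h = feats[0].shape[1]
    ups = [upsample_nn(f, target_h // f.shape[1]) for f in feats]
    w = np.exp(logits) / np.sum(np.exp(logits))   # softmax over scales
    return sum(wi * u for wi, u in zip(w, ups))
```

Because the weights are normalized, the fused map stays on the same numeric scale as its inputs regardless of how many encoder levels are combined.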


2020 ◽  
Author(s):  
Fengli Lu ◽  
Chengcai Fu ◽  
Guoying Zhang ◽  
Jie Shi

Abstract Accurate segmentation of fractures in coal rock CT images is important for safe production and the development of coalbed methane. However, segmenting coal rock fractures accurately faces the following challenges: 1) coal rock CT images exhibit high background noise, sparse targets, weak boundary information, uneven gray levels, and low contrast; 2) there is no public dataset of coal rock CT images; 3) coal rock CT image samples are limited. In this paper, we propose an adaptive multi-scale feature fusion based residual U-net (AMSFFRU-net) for fracture segmentation in coal rock CT images to address these issues. To reduce the loss of tiny and weak fractures, dilated residual blocks (DResBlock) are embedded into the U-net structure, which expand the receptive field and extract fracture information at different scales. Furthermore, to reduce the loss of spatial information during down-sampling, feature maps of different sizes in the encoding branch are concatenated by an adaptive multi-scale feature fusion module, whose output serves as the input of the first up-sampling step in the decoding branch. We also applied a set of comprehensive data augmentation operations to increase the diversity of training samples. Our network, U-net, and ResU-net are tested on our dataset of coal rock CT images with 5 different textures. The experimental results show that, compared with U-net and ResU-net, our proposed approach improves the average Dice coefficient by 5.1% and 2.9% and the average accuracy by 4.5% and 2%, respectively. Therefore, AMSFFRU-net achieves better segmentation of coal rock fractures and has stronger generalization ability and robustness.
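A quick way to see why dilated residual blocks help recover tiny and weak fractures is receptive-field arithmetic: for a stack of stride-1 convolutions, each layer adds (k - 1) * d pixels to the receptive field, where k is the kernel size and d the dilation rate. A small generic helper (standard arithmetic, not code from the paper):

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 convolutions.
    Starting from a single pixel, each layer with kernel size k and
    dilation d widens the field by (k - 1) * d."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

Three 3x3 convolutions with dilations 1, 2, 4 cover a 15-pixel field, more than twice the 7 pixels of three plain 3x3 convolutions, at the same parameter cost.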


2021 ◽  
Vol 8 ◽  
Author(s):  
Jiawei Zhang ◽  
Yanchun Zhang ◽  
Hailong Qiu ◽  
Wen Xie ◽  
Zeyang Yao ◽  
...  

Retinal vessel segmentation plays an important role in the diagnosis of eye-related diseases and biomarker discovery. Existing works perform multi-scale feature aggregation in an inter-layer manner, namely inter-layer feature aggregation. However, such an approach only fuses features at either a lower scale or a higher scale, which may limit segmentation performance, especially on thin vessels. This observation motivates us to fuse multi-scale features within each layer (intra-layer feature aggregation) to mitigate the problem. Therefore, in this paper, we propose Pyramid-Net for accurate retinal vessel segmentation, which features intra-layer pyramid-scale aggregation blocks (IPABs). At each layer, an IPAB generates two associated branches, one at a higher scale and one at a lower scale, and these operate together with the main branch at the current scale in a pyramid-scale manner. Three further enhancements, pyramid input enhancement, deep pyramid supervision, and pyramid skip connections, are proposed to boost performance. We have evaluated Pyramid-Net on three public retinal fundus photography datasets (DRIVE, STARE, and CHASE-DB1). The experimental results show that Pyramid-Net effectively improves segmentation performance, especially on thin vessels, and outperforms the current state-of-the-art methods on all three adopted datasets. In addition, our method is more efficient than existing methods, with a large reduction in computational cost. We have released the source code at https://github.com/JerRuy/Pyramid-Net.
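An intra-layer pyramid-scale aggregation block can be caricatured as the current-scale feature exchanging information with a half-scale and a double-scale branch inside the same layer. The NumPy sketch below is a simplification: averaging and a fixed mean filter stand in for the paper's learned convolutions:

```python
import numpy as np

def avg_pool2(x):
    """2x2 average pooling of a (C, H, W) map (H and W even)."""
    c, h, w = x.shape
    return x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

def upsample2(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def mean_filter3(x):
    """3x3 mean filter with edge padding, standing in for a learned
    convolution applied at the doubled scale."""
    c, h, w = x.shape
    p = np.pad(x, ((0, 0), (1, 1), (1, 1)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += p[:, dy:dy + h, dx:dx + w]
    return out / 9.0

def ipab(x):
    """Intra-layer pyramid aggregation sketch: process the feature at a
    lower and a higher scale, bring both back to the current scale, and
    fuse the three branches (here by simple averaging)."""
    lower = upsample2(avg_pool2(x))                    # half-scale branch
    higher = avg_pool2(mean_filter3(upsample2(x)))     # double-scale branch
    return (x + lower + higher) / 3.0
```

Because every branch returns to the input resolution, the block is a drop-in replacement for a plain convolutional layer, which is what makes intra-layer aggregation cheap to add throughout the network.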


2021 ◽  
Vol 70 ◽  
pp. 102977
Author(s):  
Zhengjin Shi ◽  
Tianyu Wang ◽  
Zheng Huang ◽  
Feng Xie ◽  
Zihong Liu ◽  
...  

Entropy ◽  
2019 ◽  
Vol 21 (2) ◽  
pp. 168 ◽  
Author(s):  
Chang Wang ◽  
Zongya Zhao ◽  
Qiongqiong Ren ◽  
Yongtao Xu ◽  
Yi Yu

Various retinal vessel segmentation methods based on convolutional neural networks were proposed recently, and Dense U-net, a new semantic segmentation network, was successfully applied to scene segmentation. Retinal vessels are tiny, and their features can be learned effectively by a patch-based learning strategy. In this study, we propose a new retinal vessel segmentation framework based on Dense U-net and the patch-based learning strategy. During training, training patches were obtained by a random extraction strategy, Dense U-net was adopted as the training network, and random transformation was used as a data augmentation strategy. During testing, test images were divided into image patches, the patches were predicted by the trained model, and the segmentation result was reconstructed by an overlapping-patch sequential reconstruction strategy. The proposed method was applied to retinal vessel segmentation on the public DRIVE and STARE datasets. Sensitivity (Se), specificity (Sp), accuracy (Acc), and area under the curve (AUC) were adopted as evaluation metrics to verify the effectiveness of the proposed method. Compared with state-of-the-art methods, including unsupervised, supervised, and convolutional neural network (CNN) methods, the results demonstrate that our approach is competitive on these metrics. The method can obtain segmentation results comparable to those of specialists and has clinical application value.
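The patch-based pipeline, sliding-window patch extraction for inference and overlapping-patch reconstruction of the full prediction, can be sketched in NumPy. The patch size and stride below are illustrative, not the paper's settings:

```python
import numpy as np

def extract_patches(img, size, stride):
    """Slide a size x size window over a 2-D image with the given
    stride; returns the patches and their top-left coordinates."""
    patches, coords = [], []
    h, w = img.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            patches.append(img[y:y + size, x:x + size])
            coords.append((y, x))
    return patches, coords

def reconstruct(patches, coords, shape, size):
    """Overlapping-patch sequential reconstruction: accumulate the
    per-patch predictions and divide by how often each pixel was
    covered, averaging the overlap regions."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for p, (y, x) in zip(patches, coords):
        acc[y:y + size, x:x + size] += p
        cnt[y:y + size, x:x + size] += 1
    return acc / np.maximum(cnt, 1)
```

Averaging overlapping predictions smooths seam artifacts at patch borders; with a stride smaller than the patch size, every interior pixel is voted on several times.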


2020 ◽  
Vol 16 (3) ◽  
pp. 132-145
Author(s):  
Gang Liu ◽  
Chuyi Wang

Neural network models have been widely used in the field of object detection. Region proposal methods are widely used in current object detection networks and have achieved good performance. Common region proposal methods find objects by generating thousands of candidate boxes. Compared to other region proposal methods, the region proposal network (RPN) improves accuracy and detection speed with only several hundred candidate boxes. However, since its feature maps contain insufficient information, the ability of the RPN to detect and locate small-sized objects is poor. A novel multi-scale feature fusion method for the region proposal network is proposed in this article to solve these problems. The proposed method, called multi-scale region proposal network (MS-RPN), generates feature maps suitable for the region proposal network. In MS-RPN, the selected feature maps at multiple scales are fine-tuned respectively and compressed into a uniform space. The resulting fused feature maps are called refined fusion features (RFFs). RFFs incorporate abundant detail information and context information, and are sent to the RPN to generate better region proposals. The proposed approach is evaluated on the PASCAL VOC 2007 and MS COCO benchmark tasks. MS-RPN obtains significant improvements over comparable state-of-the-art detection models.
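RPN-style region proposal starts from a dense anchor grid over the feature map: every cell projects back to the input image and is assigned boxes of several scales and aspect ratios. The sketch below shows that layout; the stride, scales, and ratios are illustrative values, not those of MS-RPN:

```python
import numpy as np

def generate_anchors(feat_h, feat_w, stride, scales, ratios):
    """Generate an RPN-style anchor grid.  For each feature-map cell,
    place one box per (scale, ratio) pair centred on the cell's
    projection into the input image.  Returns (N, 4) [x1, y1, x2, y2]."""
    anchors = []
    for i in range(feat_h):
        for j in range(feat_w):
            cx = (j + 0.5) * stride
            cy = (i + 0.5) * stride
            for s in scales:
                for r in ratios:
                    w = s * np.sqrt(r)   # width grows with the ratio
                    h = s / np.sqrt(r)   # height shrinks, area stays s^2
                    anchors.append([cx - w / 2, cy - h / 2,
                                    cx + w / 2, cy + h / 2])
    return np.array(anchors)
```

The classifier and regressor heads then score and refine these fixed anchors, which is why a few hundred well-placed proposals can replace thousands of blind candidate boxes.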

