(M)SLAe-Net: Multi-Scale Multi-Level Attention embedded Network for Retinal Vessel Segmentation

Author(s): Shreshth Saini, Geetika Agrawal
2021, Vol. 70, pp. 102977
Author(s): Zhengjin Shi, Tianyu Wang, Zheng Huang, Feng Xie, Zihong Liu, ...

2019, Vol. 39 (2), pp. 0211002
Author(s): Zheng Tingyue, Tang Chen, Lei Zhenkun

Symmetry, 2021, Vol. 13 (10), pp. 1820
Author(s): Yun Jiang, Huixia Yao, Zeqi Ma, Jingyao Zhang

The segmentation of retinal vessels is critical for the diagnosis of some fundus diseases. Retinal vessel segmentation requires abundant spatial information and receptive fields of different sizes, while existing methods usually sacrifice spatial resolution to achieve real-time inference speed, resulting in inadequate segmentation of vessels in low-contrast regions and weak robustness to noise. The asymmetry of capillaries in fundus images further increases the difficulty of segmentation. In this paper, we propose a two-branch network based on multi-scale attention to alleviate these problems. First, a coarse network with a multi-scale U-Net as the backbone is designed to capture more semantic information and to generate high-resolution features; a multi-scale attention module is used to obtain sufficiently large receptive fields. The other branch is a fine network, which uses residual blocks with small convolution kernels to make up for the deficiency of spatial information. Finally, a feature fusion module aggregates the information from the coarse and fine networks. Experiments were performed on the DRIVE, CHASE, and STARE datasets, where the accuracy reached 96.93%, 97.58%, and 97.70%, the specificity reached 97.72%, 98.52%, and 98.94%, and the F-measure reached 83.82%, 81.39%, and 84.36%, respectively. The experimental results show that, compared with state-of-the-art methods such as Sine-Net and SA-Net, the proposed method achieves better performance on all three datasets.
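
To make the coarse/fine two-branch idea in the abstract more concrete, the following is a minimal PyTorch-style sketch. All module names (MultiScaleAttention, CoarseBranch, FineBranch, TwoBranchVesselNet), channel widths, depths, and the concatenation-based fusion are illustrative assumptions, not the authors' implementation; the paper's multi-scale U-Net backbone, attention module, and fusion module are more elaborate.

```python
# Illustrative sketch only: module names, channel widths, and fusion strategy
# are assumptions for demonstration, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions with BatchNorm and ReLU (assumed building block)."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class MultiScaleAttention(nn.Module):
    """Illustrative multi-scale attention: dilated branches combined into a sigmoid gate."""
    def __init__(self, ch):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 2, 4)]
        )
        self.gate = nn.Sequential(nn.Conv2d(3 * ch, ch, 1), nn.Sigmoid())

    def forward(self, x):
        ms = torch.cat([b(x) for b in self.branches], dim=1)
        return x * self.gate(ms)  # re-weight features with the attention map


class CoarseBranch(nn.Module):
    """Small U-Net-style encoder/decoder with multi-scale attention at the bottleneck."""
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.enc1 = conv_block(in_ch, ch)
        self.enc2 = conv_block(ch, 2 * ch)
        self.attn = MultiScaleAttention(2 * ch)
        self.dec1 = conv_block(3 * ch, ch)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.attn(self.enc2(F.max_pool2d(e1, 2)))
        up = F.interpolate(e2, size=e1.shape[-2:], mode="bilinear", align_corners=False)
        return self.dec1(torch.cat([up, e1], dim=1))


class FineBranch(nn.Module):
    """Full-resolution residual blocks with small kernels to preserve spatial detail."""
    def __init__(self, in_ch=3, ch=32, blocks=3):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, ch, 3, padding=1)
        self.blocks = nn.ModuleList([conv_block(ch, ch) for _ in range(blocks)])

    def forward(self, x):
        f = self.stem(x)
        for blk in self.blocks:
            f = f + blk(f)  # residual connection keeps fine spatial information
        return f


class TwoBranchVesselNet(nn.Module):
    """Fuse coarse (semantic) and fine (spatial) features, then predict a vessel map."""
    def __init__(self, in_ch=3, ch=32):
        super().__init__()
        self.coarse = CoarseBranch(in_ch, ch)
        self.fine = FineBranch(in_ch, ch)
        self.fuse = nn.Sequential(conv_block(2 * ch, ch), nn.Conv2d(ch, 1, 1))

    def forward(self, x):
        fused = torch.cat([self.coarse(x), self.fine(x)], dim=1)
        return torch.sigmoid(self.fuse(fused))  # per-pixel vessel probability


if __name__ == "__main__":
    net = TwoBranchVesselNet()
    probs = net(torch.randn(1, 3, 64, 64))  # e.g. an RGB fundus patch
    print(probs.shape)  # torch.Size([1, 1, 64, 64])
```

In this sketch the coarse branch downsamples once and applies attention to enlarge the effective receptive field, while the fine branch never downsamples, and the fusion head simply concatenates the two feature maps before a 1x1 prediction layer; the reported metrics (accuracy, specificity, F-measure) would be computed from the thresholded probability map against the ground-truth vessel mask.
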
