RFARN: Retinal vessel segmentation based on reverse fusion attention residual network

PLoS ONE ◽  
2021 ◽  
Vol 16 (12) ◽  
pp. e0257256
Author(s):  
Wenhuan Liu ◽  
Yun Jiang ◽  
Jingyao Zhang ◽  
Zeqi Ma

Accurate segmentation of retinal vessels is critical to understanding the mechanism, diagnosis, and treatment of many ocular pathologies. The poor contrast, inhomogeneous background, and complex structure of retinal fundus images make accurate segmentation of blood vessels challenging. In this paper, we propose an effective framework for retinal vessel segmentation whose innovations lie mainly in the pre-processing and segmentation stages. First, we enhance the images of three publicly available fundus datasets with the multiscale retinex with color restoration (MSRCR) method, which effectively suppresses noise and highlights the vessel structure, creating a good basis for the segmentation phase. The processed fundus images are then fed into an effective Reverse Fusion Attention Residual Network (RFARN) for training to achieve more accurate retinal vessel segmentation. In the RFARN, a Reverse Channel Attention Module (RCAM) and a Reverse Spatial Attention Module (RSAM) highlight shallow details along the channel and spatial dimensions, and fuse deep local features with shallow global features to ensure the continuity and integrity of the segmented vessels. On the DRIVE, STARE, and CHASE datasets, the evaluation metrics were 0.9712, 0.9822, and 0.9780 for accuracy (Acc); 0.8788, 0.8874, and 0.8352 for sensitivity (Se); 0.9803, 0.9891, and 0.9890 for specificity (Sp); 0.9910, 0.9952, and 0.9904 for area under the ROC curve (AUC); and 0.8453, 0.8707, and 0.8185 for F1-score. Compared with existing retinal image segmentation methods such as UNet, R2UNet, DUNet, HAnet, Sine-Net, and FANet, our method achieved better vessel segmentation performance on all three fundus datasets.
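The core idea of reverse attention, as described in the abstract, is to weight shallow features by the complement of what the deep layers have already captured, so the fine-detail branch concentrates on thin vessels the deep branch missed. A minimal numpy sketch of that spatial weighting (a hypothetical illustration of the idea; the function name and shapes are assumptions, not the authors' RSAM implementation):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_spatial_attention(shallow, deep_pred):
    """Weight shallow features by the complement of a deep prediction map.

    shallow:   (C, H, W) shallow feature maps with fine spatial detail
    deep_pred: (H, W) coarse logits from a deep layer
    Pixels the deep layers are confident about get low weight (1 - sigmoid),
    steering the shallow branch toward regions the deep branch missed.
    """
    att = 1.0 - sigmoid(deep_pred)        # reverse attention map, (H, W)
    return shallow * att[None, :, :]      # broadcast over channels
```

A reverse *channel* attention module would apply the same complement weighting along the channel axis (one scalar per channel, typically from pooled statistics) instead of per pixel.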

PLoS ONE ◽  
2017 ◽  
Vol 12 (12) ◽  
pp. e0188939 ◽  
Author(s):  
Nogol Memari ◽  
Abd Rahman Ramli ◽  
M. Iqbal Bin Saripan ◽  
Syamsiah Mashohor ◽  
Mehrdad Moghbel

Author(s):  
Shuang Xu ◽  
Zhiqiang Chen ◽  
Weiyi Cao ◽  
Feng Zhang ◽  
Bo Tao

Retinal vessels are the only deep microvessels that can be observed in the human body, and their accurate identification is of great significance for the diagnosis of hypertension, diabetes, and other diseases. To this end, a retinal vessel segmentation algorithm based on a residual convolutional neural network is proposed according to the characteristics of retinal vessels in fundus images. An improved residual attention module and a deep supervision module are utilized, in which low-level and high-level feature maps are joined to construct an encoder-decoder network structure, and atrous convolution is introduced into the pyramid pooling. Experimental results on the fundus image datasets DRIVE and STARE show that this algorithm obtains complete retinal vessel segmentation with connected vessel stems and terminals. The average accuracy on DRIVE and STARE reaches 95.90% and 96.88%, and the average specificity is 98.85% and 97.85%, showing superior performance compared to other methods. The algorithm is verified to be feasible and effective for retinal vessel segmentation of fundus images and is able to detect more capillaries.
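Atrous (dilated) convolution, mentioned above as the basis of the pyramid pooling, samples the input with gaps between kernel taps, enlarging the receptive field without adding parameters. A minimal single-channel numpy sketch of the operation itself (an illustrative assumption, not the paper's network code):

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2D correlation with a dilated kernel (single channel).

    A k x k kernel with dilation d covers an effective window of
    (k-1)*d + 1 pixels per side, so larger d sees wider context.
    """
    kh, kw = kernel.shape
    eff_h = (kh - 1) * dilation + 1   # effective window height
    eff_w = (kw - 1) * dilation + 1   # effective window width
    H, W = x.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the window with stride `dilation` between taps
            patch = x[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

An atrous pyramid applies several such convolutions with different dilation rates in parallel and concatenates the results, capturing vessels at multiple scales.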


2021 ◽  
Vol 38 (5) ◽  
pp. 1309-1317
Author(s):  
Jie Zhao ◽  
Qianjin Feng

Retinal vessel segmentation plays a significant role in the diagnosis and treatment of ophthalmological diseases. Recent studies have shown that deep learning can effectively segment the retinal vessel structure. However, existing methods have difficulty segmenting thin vessels, especially when the original image contains lesions. Based on the generative adversarial network (GAN), this paper proposes a deep network with residual and attention modules (Deep Att-ResGAN). The network consists of four identical subnetworks; the output of each subnetwork is passed to the next as contextual features that guide the segmentation. First, the problems of the original images, namely low contrast, uneven illumination, and data insufficiency, were addressed through image enhancement and preprocessing. Next, an improved U-Net that stacks residual and attention modules was adopted as the generator; these modules optimize the weights of the generator and enhance the generalizability of the network. Further, the segmentation was refined iteratively by the discriminator, which contributes to the performance of vessel segmentation. Finally, comparative experiments were carried out on two public datasets: Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). The results show that Deep Att-ResGAN outperformed comparable models such as U-Net and GAN on most metrics, achieving an accuracy of 0.9565 and F1 of 0.829 on DRIVE, and an accuracy of 0.9690 and F1 of 0.841 on STARE.
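The Acc, Se, Sp, and F1 figures reported across these abstracts are the standard confusion-matrix metrics for binary vessel masks, computed pixel-wise against the manual annotation. A minimal numpy sketch (the function name is an assumption; the formulas are the standard definitions):

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Pixel-wise Acc, Se (recall), Sp, and F1 for binary vessel masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)        # vessel pixels correctly found
    tn = np.sum(~pred & ~gt)      # background correctly rejected
    fp = np.sum(pred & ~gt)       # background marked as vessel
    fn = np.sum(~pred & gt)       # vessel pixels missed
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)           # sensitivity
    sp = tn / (tn + fp)           # specificity
    f1 = 2 * tp / (2 * tp + fp + fn)
    return acc, se, sp, f1
```

AUC, also quoted above, is computed instead from the continuous probability map by sweeping the binarization threshold, so it cannot be derived from a single thresholded mask.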


2016 ◽  
Vol 25 (3) ◽  
pp. 503-511 ◽  
Author(s):  
Chengzhang Zhu ◽  
Beiji Zou ◽  
Jinkai Cui ◽  
Yao Xiang ◽  
Hui Wu
