A novel retinal vessel extraction method based on dynamic scales allocation

2021 ◽  
Vol 38 (5) ◽  
pp. 1309-1317
Author(s):  
Jie Zhao ◽  
Qianjin Feng

Retinal vessel segmentation plays a significant role in the diagnosis and treatment of ophthalmological diseases. Recent studies have shown that deep learning can effectively segment the retinal vessel structure. However, existing methods have difficulty segmenting thin vessels, especially when the original image contains lesions. Based on the generative adversarial network (GAN), this paper proposes a deep network with residual and attention modules (Deep Att-ResGAN). The network consists of four identical subnetworks; the output of each subnetwork is passed to the next as contextual features that guide its segmentation. First, the problems of the original images, namely low contrast, uneven illumination, and data insufficiency, were addressed through image enhancement and preprocessing. Next, an improved U-Net that stacks residual and attention modules was adopted as the generator; these modules refine the generator's weights and improve the generalizability of the network. Further, the segmentation was refined iteratively by the discriminator, which contributes to the vessel segmentation performance. Finally, comparative experiments were carried out on two public datasets: Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). The experimental results show that Deep Att-ResGAN outperformed comparable models such as U-Net and GAN on most metrics. Our network achieved an accuracy of 0.9565 and an F1-score of 0.829 on DRIVE, and an accuracy of 0.9690 and an F1-score of 0.841 on STARE.
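The abstract does not give implementation details for the stacked residual and attention modules, but the core idea of an attention-gated residual connection can be illustrated with a minimal NumPy sketch. Everything here is an assumption for illustration: the 1x1-convolution simplification (a per-pixel matrix multiply), the weight shapes, and the function names are not taken from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_residual_block(x, w_feat, w_att):
    """Illustrative attention-gated residual block (hypothetical, not the paper's code).

    x      : input feature map, shape (H, W, C)
    w_feat : weights for the feature branch, shape (C, C), acting as a 1x1 conv
    w_att  : weights producing a per-pixel attention map, shape (C, 1)
    """
    feat = np.maximum(x @ w_feat, 0.0)   # feature transform followed by ReLU
    att = sigmoid(x @ w_att)             # per-pixel attention in (0, 1), shape (H, W, 1)
    return x + att * feat                # residual path scaled by the attention map

# Small smoke test with random inputs.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8, 4))
w_feat = rng.standard_normal((4, 4)) * 0.1
w_att = rng.standard_normal((4, 1)) * 0.1

y = attention_residual_block(x, w_feat, w_att)
print(y.shape)  # (8, 8, 4): the block preserves the spatial and channel dimensions
```

Because the attention map multiplies only the transformed features while the identity path is left untouched, the block can suppress uninformative regions without blocking gradient flow, which is the usual motivation for combining residual and attention modules in a segmentation generator.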

