Bi-SANet—Bilateral Network with Scale Attention for Retinal Vessel Segmentation

Symmetry ◽  
2021 ◽  
Vol 13 (10) ◽  
pp. 1820
Author(s):  
Yun Jiang ◽  
Huixia Yao ◽  
Zeqi Ma ◽  
Jingyao Zhang

The segmentation of retinal vessels is critical for the diagnosis of some fundus diseases. Retinal vessel segmentation requires abundant spatial information and receptive fields of different sizes, while existing methods usually sacrifice spatial resolution to achieve real-time inference speed, resulting in inadequate segmentation of low-contrast regions and weak robustness to noise. The asymmetry of capillaries in fundus images further increases the difficulty of segmentation. In this paper, we propose a two-branch network based on multi-scale attention to alleviate these problems. First, a coarse network with a multi-scale U-Net as the backbone is designed to capture more semantic information and to generate high-resolution features, and a multi-scale attention module is used to obtain sufficiently large receptive fields. The other branch is a fine network, which uses residual blocks with small convolution kernels to make up for the deficiency of spatial information. Finally, a feature fusion module aggregates the information of the coarse and fine networks. Experiments were performed on the DRIVE, CHASE, and STARE datasets: the accuracy reached 96.93%, 97.58%, and 97.70%; the specificity reached 97.72%, 98.52%, and 98.94%; and the F-measure reached 83.82%, 81.39%, and 84.36%, respectively. Experimental results show that, compared with state-of-the-art methods such as Sine-Net and SA-Net, the proposed method performs better on all three datasets.
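
The coarse/fine fusion idea can be illustrated with a short PyTorch sketch. The FeatureFusion module below is a hypothetical simplification, not the paper's exact Bi-SANet module: it upsamples the coarse (semantic) branch to the fine branch's resolution, concatenates the two, and reweights the fused channels with a simple squeeze-and-excitation-style gate.

```python
# Hypothetical sketch of coarse/fine feature fusion; channel sizes are assumptions.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    """Fuse coarse (semantic) and fine (spatial) feature maps."""
    def __init__(self, coarse_ch, fine_ch, out_ch):
        super().__init__()
        self.project = nn.Sequential(
            nn.Conv2d(coarse_ch + fine_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Squeeze-and-excitation style gate over the fused channels.
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_ch, out_ch, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, coarse, fine):
        # Upsample the low-resolution coarse branch to the fine branch's spatial size.
        coarse = nn.functional.interpolate(coarse, size=fine.shape[2:],
                                           mode="bilinear", align_corners=False)
        fused = self.project(torch.cat([coarse, fine], dim=1))
        return fused * self.gate(fused)

# Example: 64-channel coarse features at 1/4 resolution, 32-channel fine features at full resolution.
fusion = FeatureFusion(coarse_ch=64, fine_ch=32, out_ch=32)
out = fusion(torch.randn(1, 64, 64, 64), torch.randn(1, 32, 256, 256))
print(out.shape)  # torch.Size([1, 32, 256, 256])
```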

2020 ◽  
Vol 2020 ◽  
pp. 1-14 ◽  
Author(s):  
Yun Jiang ◽  
Falin Wang ◽  
Jing Gao ◽  
Wenhuan Liu

Retinal vessel segmentation is valuable for research on the diagnosis of diabetic retinopathy, hypertension, and cardiovascular and cerebrovascular diseases. Most methods based on deep convolutional neural networks (DCNNs) do not have large receptive fields or rich spatial information and cannot capture the global context of larger areas; as a result, it is difficult to identify lesion areas, and segmentation efficiency is poor. This paper presents a butterfly fully convolutional neural network (BFCN). First, in view of the low contrast between blood vessels and the background in retinal images, automatic color enhancement (ACE) is used to increase this contrast. Second, a multiscale information extraction (MSIE) module in the backbone network captures global contextual information over a larger area to reduce the loss of feature information. At the same time, a transfer layer (T_Layer) not only alleviates the vanishing gradient problem and repairs the information lost during downsampling but also provides rich spatial information. Finally, for the first time, the segmentation image is postprocessed, using the Laplacian sharpening method to improve the accuracy of vessel segmentation. The method was verified on the DRIVE, STARE, and CHASE datasets, achieving accuracies of 0.9627, 0.9735, and 0.9688, respectively.
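
As an illustration of the postprocessing idea, the following sketch applies Laplacian sharpening to a soft vessel probability map with OpenCV; the kernel size, thresholding, and the choice to sharpen a probability map rather than the final binary mask are assumptions, not the paper's exact settings.

```python
# Hedged sketch of Laplacian sharpening as a postprocessing step.
import cv2
import numpy as np

def laplacian_sharpen(prob_map: np.ndarray) -> np.ndarray:
    """Sharpen a soft vessel probability map (float32 in [0, 1]) by subtracting its Laplacian."""
    lap = cv2.Laplacian(prob_map, cv2.CV_32F, ksize=3)
    sharpened = prob_map - lap          # classic unsharp-style edge enhancement
    return np.clip(sharpened, 0.0, 1.0)

# Example usage on a dummy DRIVE-sized probability map.
prob = np.random.rand(584, 565).astype(np.float32)
vessel_mask = (laplacian_sharpen(prob) > 0.5).astype(np.uint8)
```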


Entropy ◽  
2020 ◽  
Vol 22 (8) ◽  
pp. 811
Author(s):  
Dan Yang ◽  
Guoru Liu ◽  
Mengcheng Ren ◽  
Bin Xu ◽  
Jiao Wang

Computer-aided automatic segmentation of retinal blood vessels plays an important role in the diagnosis of diseases such as diabetes, glaucoma, and macular degeneration. In this paper, we propose a multi-scale feature fusion retinal vessel segmentation model based on U-Net, named MSFFU-Net. The model introduces the inception structure into the multi-scale feature extraction encoder, and the max-pooling indices are applied during upsampling in the feature fusion decoder of the improved network. Skip connections transfer each set of feature maps generated on the encoder path to the corresponding feature maps on the decoder path. Moreover, a cost-sensitive loss function based on the Dice coefficient and cross-entropy is designed. Four transformations (rotating, mirroring, shifting, and cropping) are used as data augmentation strategies, and the CLAHE algorithm is applied for image preprocessing. The proposed framework is trained and tested on DRIVE and STARE, with sensitivity (Sen), specificity (Spe), accuracy (Acc), and area under the curve (AUC) adopted as evaluation metrics. Detailed comparisons with the U-Net model verify the effectiveness and robustness of the proposed model. Sen of 0.7762 and 0.7721, Spe of 0.9835 and 0.9885, Acc of 0.9694 and 0.9537, and AUC of 0.9790 and 0.9680 were achieved on the DRIVE and STARE databases, respectively. Results are also compared with other state-of-the-art methods, demonstrating that the proposed method achieves superior, competitive performance.
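
A combined Dice and cross-entropy loss of the kind described can be sketched as follows; the weighting factor alpha and the sigmoid-based binary formulation are assumptions, not necessarily the paper's exact cost-sensitive loss.

```python
# Minimal sketch of a Dice + cross-entropy loss for binary vessel masks (alpha is assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiceCELoss(nn.Module):
    def __init__(self, alpha: float = 0.5, eps: float = 1e-6):
        super().__init__()
        self.alpha = alpha   # balances the Dice and cross-entropy terms
        self.eps = eps

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        prob = torch.sigmoid(logits)
        # Soft Dice term: 1 - 2|P ∩ G| / (|P| + |G|), computed per sample.
        inter = (prob * target).sum(dim=(1, 2, 3))
        denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
        dice = 1.0 - (2.0 * inter + self.eps) / (denom + self.eps)
        # Pixel-wise binary cross-entropy term.
        ce = F.binary_cross_entropy_with_logits(logits, target, reduction="mean")
        return self.alpha * dice.mean() + (1.0 - self.alpha) * ce

# Example: batch of 2 single-channel 64x64 predictions against binary ground truth.
loss_fn = DiceCELoss()
loss = loss_fn(torch.randn(2, 1, 64, 64), torch.randint(0, 2, (2, 1, 64, 64)).float())
```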


2021 ◽  
Vol 70 ◽  
pp. 102977
Author(s):  
Zhengjin Shi ◽  
Tianyu Wang ◽  
Zheng Huang ◽  
Feng Xie ◽  
Zihong Liu ◽  
...  

2021 ◽  
Vol 38 (5) ◽  
pp. 1309-1317
Author(s):  
Jie Zhao ◽  
Qianjin Feng

Retinal vessel segmentation plays a significant role in the diagnosis and treatment of ophthalmological diseases. Recent studies have shown that deep learning can effectively segment the retinal vessel structure. However, existing methods have difficulty segmenting thin vessels, especially when the original image contains lesions. Based on the generative adversarial network (GAN), this paper proposes a deep network with residual and attention modules (Deep Att-ResGAN). The network consists of four identical subnetworks; the output of each subnetwork is passed to the next as contextual features that guide the segmentation. Firstly, the problems of the original images, namely low contrast, uneven illumination, and data insufficiency, were addressed through image enhancement and preprocessing. Next, an improved U-Net that stacks residual and attention modules was adopted as the generator; these modules optimize the weights of the generator and enhance the generalizability of the network. Further, the segmentation was refined iteratively by the discriminator, which contributes to the performance of vessel segmentation. Finally, comparative experiments were carried out on two public datasets: Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). The experimental results show that Deep Att-ResGAN outperformed comparable models such as U-Net and GAN on most metrics, achieving an accuracy of 0.9565 and an F1 of 0.829 on DRIVE, and an accuracy of 0.9690 and an F1 of 0.841 on STARE.
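
The idea of stacking residual and attention modules inside the generator can be illustrated with a minimal PyTorch block; the spatial-attention gate shown here is an assumption for illustration and not the authors' exact Deep Att-ResGAN layer.

```python
# Illustrative residual block with a simple spatial-attention gate (assumed design).
import torch
import torch.nn as nn

class ResAttBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # 1x1 conv producing a per-pixel attention map in [0, 1].
        self.attn = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = self.body(x)
        y = y * self.attn(y)          # spatial attention reweighting
        return self.act(x + y)        # residual connection

block = ResAttBlock(32)
print(block(torch.randn(1, 32, 128, 128)).shape)  # torch.Size([1, 32, 128, 128])
```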


2019 ◽  
Vol 39 (2) ◽  
pp. 0211002 ◽  
Author(s):  
郑婷月 Zheng Tingyue ◽  
唐晨 Tang Chen ◽  
雷振坤 Lei Zhenkun

Electronics ◽  
2021 ◽  
Vol 10 (18) ◽  
pp. 2297
Author(s):  
Toufique A. Soomro ◽  
Ahmed Ali ◽  
Nisar Ahmed Jandan ◽  
Ahmed J. Afifi ◽  
Muhammad Irfan ◽  
...  

Segmentation of retinal vessels plays a crucial role in detecting many eye diseases, and its reliable computerized implementation is becoming essential for automated retinal disease screening systems. A large number of retinal vessel segmentation algorithms are available, but although these methods improve accuracy, their sensitivity remains low because low-contrast vessels are not properly segmented, and this low contrast requires more attention in the segmentation process. In this paper, we propose new preprocessing steps for the precise extraction of retinal blood vessels. These preprocessing steps are also tested on other existing algorithms to observe their impact. Our suggested module for segmenting retinal blood vessels has two steps. The first step implements and validates the preprocessing module. The second step applies these preprocessing stages to our proposed binarization steps to extract the retinal blood vessels. The preprocessing phase uses traditional image-processing methods to provide a much-improved vessel image, and the binarization steps use an image coherence technique for the retinal blood vessels. The proposed method performs well on the publicly available DRIVE and STARE databases. The novelty of the proposed method is that it is unsupervised and offers an accuracy of around 96% and a sensitivity of 81% while outperforming existing approaches. Thanks to the new tactics at each step of the proposed process, this blood vessel segmentation application is suitable for computer analysis of retinal images, such as automated screening for the early diagnosis of eye disease.
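
A typical unsupervised preprocessing-plus-binarization pipeline of this kind can be sketched as below; the green-channel/CLAHE/top-hat steps and the Otsu threshold are common stand-ins chosen for illustration, and the authors' image-coherence-based binarization is not reproduced here.

```python
# Hedged sketch of a traditional preprocessing + binarization pipeline for fundus images.
import cv2
import numpy as np

def segment_vessels(bgr_image: np.ndarray) -> np.ndarray:
    green = bgr_image[:, :, 1]                         # vessels show best contrast in the green channel
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(green)
    # Black top-hat highlights dark, thin structures (vessels) against the brighter background.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    tophat = cv2.morphologyEx(enhanced, cv2.MORPH_BLACKHAT, kernel)
    # Global Otsu threshold as a simple stand-in for the coherence-based binarization step.
    _, mask = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask

# Example usage on a DRIVE-style fundus image loaded with OpenCV.
# mask = segment_vessels(cv2.imread("drive_image.tif"))
```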

