Retinal Vessel Segmentation
Recently Published Documents

TOTAL DOCUMENTS: 387 (FIVE YEARS: 227)
H-INDEX: 30 (FIVE YEARS: 9)

2022 ◽ Vol 98 ◽ pp. 107670
Author(s): Huadeng Wang ◽ Guang Xu ◽ Xipeng Pan ◽ Zhenbing Liu ◽ Ningning Tang ◽ ...

2022 ◽ Vol 22 (1)
Author(s): Jiacheng Li ◽ Ruirui Li ◽ Ruize Han ◽ Song Wang

Abstract. Background: Retinal vessel segmentation benefits significantly from deep learning, but its performance relies on sufficient training images with accurate ground-truth segmentation, usually provided as manually annotated binary pixel-wise label maps. Such manual annotations inevitably contain errors for some pixels, and because retinal vessels are thin structures, these errors are more frequent and more serious, which degrades deep learning performance. Methods: In this paper, we develop a new method that automatically and iteratively identifies and corrects such noisy segmentation labels during network training. We collect the historical label maps predicted by the network-in-training across different epochs and use them jointly to self-supervise the current predictions and to dynamically correct the noisy supervised labels. Results: We conducted experiments on the DRIVE, STARE and CHASE-DB1 datasets with synthetic noise, pseudo-labeled noise, and manually labeled noise. For synthetic noise, the proposed method corrects the original noisy label maps into more accurate label maps, improving them by 4.0–9.8% on F1 and 10.7–16.8% on PR across the three test datasets. For the other two types of noise, the method also improves label map quality. Conclusions: The experimental results verify that, by simultaneously correcting the noise in the initial label maps, the proposed method achieves better retinal vessel segmentation performance than many existing methods.
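The label-correction idea described above can be sketched roughly as follows. This is only an illustrative outline, not the authors' exact procedure: the exponential moving average over historical epoch predictions, the 0.9 decay, and the flip threshold are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of epoch-wise correction of noisy vessel-label maps.
# The EMA over historical predictions, the 0.9 decay, and the flip threshold
# are illustrative assumptions, not the authors' exact method.
import torch

class LabelCorrector:
    def __init__(self, label_map: torch.Tensor, decay: float = 0.9,
                 flip_threshold: float = 0.8):
        # label_map: (H, W) binary, possibly noisy ground truth for one image
        self.labels = label_map.float().clone()
        self.history = label_map.float().clone()   # running consensus of predictions
        self.decay = decay
        self.flip_threshold = flip_threshold

    @torch.no_grad()
    def update(self, prob_map: torch.Tensor) -> torch.Tensor:
        # prob_map: (H, W) sigmoid output of the network at the end of an epoch
        # 1) accumulate historical predictions with an exponential moving average
        self.history = self.decay * self.history + (1.0 - self.decay) * prob_map
        # 2) flip supervised labels where the historical consensus confidently
        #    disagrees with the original annotation
        confident_fg = (self.history > self.flip_threshold) & (self.labels < 0.5)
        confident_bg = (self.history < 1.0 - self.flip_threshold) & (self.labels > 0.5)
        self.labels[confident_fg] = 1.0
        self.labels[confident_bg] = 0.0
        return self.labels   # supervision target for the next epoch
```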


2022 ◽ Vol 2022 ◽ pp. 1-10
Author(s): Congjun Liu ◽ Penghui Gu ◽ Zhiyong Xiao

Retinal vessel segmentation is essential for the detection and diagnosis of eye diseases. However, it is difficult to accurately identify vessel boundaries because of the large variation in vessel scale and the low contrast between vessels and background. Deep learning works well for retinal vessel segmentation since it can capture representative and discriminative vessel features. This paper proposes an improved U-Net algorithm for retinal vessel segmentation. To better identify vessel boundaries, the standard convolutions in the encoder are replaced by a global convolutional network with boundary refinement. To better separate vessels from background, improved position attention and channel attention modules are introduced in the skip connections. Multiscale inputs and a multiscale dense feature pyramid cascade module are used to capture richer feature information. In the decoder, convolutional long short-term memory (ConvLSTM) networks and deep dilated convolutions are used to extract features. On the public DRIVE and CHASE_DB1 datasets, the accuracy reached 96.99% and 97.51%, respectively, and the average performance of the proposed algorithm exceeds that of existing algorithms.
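For readers unfamiliar with the global convolutional network (GCN) and boundary refinement (BR) blocks mentioned for the encoder, a minimal PyTorch-style sketch is given below. The kernel size k = 7 and the channel counts are illustrative assumptions; the authors' exact module definitions may differ.

```python
# Rough sketch of a global convolution (GCN) block followed by boundary
# refinement (BR). Kernel size and channel counts are illustrative assumptions.
import torch
import torch.nn as nn

class GCNBlock(nn.Module):
    """Approximates a large k x k kernel with two separable 1 x k / k x 1 paths."""
    def __init__(self, in_ch: int, out_ch: int, k: int = 7):
        super().__init__()
        p = k // 2
        self.path1 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (k, 1), padding=(p, 0)),
            nn.Conv2d(out_ch, out_ch, (1, k), padding=(0, p)))
        self.path2 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, (1, k), padding=(0, p)),
            nn.Conv2d(out_ch, out_ch, (k, 1), padding=(p, 0)))

    def forward(self, x):
        # sum of the two separable paths emulates a dense large kernel
        return self.path1(x) + self.path2(x)

class BoundaryRefine(nn.Module):
    """Residual block that sharpens vessel boundaries after the GCN."""
    def __init__(self, ch: int):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1))

    def forward(self, x):
        return x + self.refine(x)
```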


2022 ◽ Vol 71 ◽ pp. 103169
Author(s): Tariq M. Khan ◽ Mohammad A.U. Khan ◽ Naveed Ur Rehman ◽ Khuram Naveed ◽ Imran Uddin Afridi ◽ ...

2021 ◽ Vol 12 (1) ◽ pp. 403
Author(s): Lin Pan ◽ Zhen Zhang ◽ Shaohua Zheng ◽ Liqin Huang

Automatic segmentation and centerline extraction of blood vessels from retinal fundus images is an essential step in measuring the state of retinal blood vessels and supporting auxiliary diagnosis. Combining information from vessel segments and the centerline can improve both the continuity of the results and overall performance. However, previous studies have usually treated these two tasks as separate research topics. We therefore propose a novel multitask learning network (MSC-Net) for retinal vessel segmentation and centerline extraction. The network uses a multibranch design to share information between the two tasks. A channel and atrous spatial fusion block (CAS-FB) is designed to fuse and correct the features of different branches and different scales. The clDice loss function is also used to constrain the topological continuity of the vessel segments and centerline. Experimental results on different fundus vessel datasets (DRIVE, STARE, and CHASE) show that our method obtains better segmentation and centerline extraction results at different scales and has better topological continuity than state-of-the-art methods.
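A minimal sketch of the clDice term referenced above is shown below, assuming the commonly used soft-skeletonisation formulation; the iteration count and smoothing constant are illustrative choices, not taken from this paper.

```python
# Minimal sketch of a (soft) clDice loss for topology-aware supervision.
# Soft-skeleton iteration count and the smoothing constant are assumptions.
import torch
import torch.nn.functional as F

def soft_erode(img):
    return -F.max_pool2d(-img, kernel_size=3, stride=1, padding=1)

def soft_dilate(img):
    return F.max_pool2d(img, kernel_size=3, stride=1, padding=1)

def soft_skeleton(img, iterations: int = 10):
    # morphological soft skeletonisation via repeated erosion and opening
    skel = F.relu(img - soft_dilate(soft_erode(img)))
    for _ in range(iterations):
        img = soft_erode(img)
        skel = skel + F.relu(img - soft_dilate(soft_erode(img))) * (1.0 - skel)
    return skel

def soft_cldice_loss(pred, target, eps: float = 1e-6):
    # pred, target: (N, 1, H, W) vessel probabilities / binary masks
    skel_pred, skel_true = soft_skeleton(pred), soft_skeleton(target)
    tprec = (skel_pred * target).sum() / (skel_pred.sum() + eps)  # topology precision
    tsens = (skel_true * pred).sum() / (skel_true.sum() + eps)    # topology sensitivity
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens + eps)
```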


Author(s): Shuang Xu ◽ Zhiqiang Chen ◽ Weiyi Cao ◽ Feng Zhang ◽ Bo Tao

Retinal vessels are the only deep microvessels that can be directly observed in the human body, and their accurate identification is of great significance for the diagnosis of hypertension, diabetes, and other diseases. To this end, a retinal vessel segmentation algorithm based on a residual convolutional neural network is proposed, designed around the characteristics of retinal vessels in fundus images. An improved residual attention module and a deep supervision module are employed; low-level and high-level feature maps are joined to construct an encoder-decoder network structure, and atrous convolution is introduced into the pyramid pooling. Experimental results on the fundus image datasets DRIVE and STARE show that the algorithm produces complete retinal vessel segmentations with connected vessel stems and terminals. The average accuracy on DRIVE and STARE reaches 95.90% and 96.88%, and the average specificity is 98.85% and 97.85%, which is superior to other methods. The algorithm is shown to be feasible and effective for retinal vessel segmentation of fundus images and is able to detect more capillaries.
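The "atrous convolution introduced into the pyramid pooling" mentioned above can be illustrated with a small PyTorch-style module; the dilation rates (1, 2, 4, 8) and channel widths here are assumptions made for the sketch, not the authors' configuration.

```python
# Illustrative pyramid pooling head built from atrous (dilated) convolutions.
# Dilation rates and channel widths are assumptions for this sketch.
import torch
import torch.nn as nn

class AtrousPyramid(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, rates=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True))
            for r in rates])
        # fuse the multi-rate responses back into a single feature map
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))
```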


2021
Author(s): Zhuojie Wu ◽ Zijian Wang ◽ Wenxuan Zou ◽ Fan Ji ◽ Hao Dang ◽ ...

2021
Author(s): Weijin Xu ◽ Huihua Yang ◽ Mingying Zhang ◽ Xipeng Pan ◽ Wentao Liu ◽ ...
