Retinal Vessel Centerline Extraction Using Multiscale Matched Filter and Sparse Representation-Based Classifier

Author(s):  
Bob Zhang ◽  
Qin Li ◽  
Lei Zhang ◽  
Jane You ◽  
Fakhri Karray

2020 ◽  
Vol 12 (23) ◽  
pp. 3991
Author(s):  
Xiaobin Zhao ◽  
Wei Li ◽  
Mengmeng Zhang ◽  
Ran Tao ◽  
Pengge Ma

In recent years, with the development of compressed sensing theory, sparse representation methods have attracted considerable attention from researchers. Sparse representation can approximate the original image information with less storage space. It has been investigated for target detection in hyperspectral imagery (HSI), where an approximation of the test pixel is obtained by solving an l1-norm minimization problem. However, l1-norm minimization does not always yield a sufficiently sparse solution when the dictionary is not large enough or its atoms exhibit a certain level of coherence. By comparison, non-convex minimization problems, such as those with lp penalties, require much weaker incoherence conditions and may achieve a more accurate approximation. Hence, we propose a novel detection algorithm that employs sparse representation with the lp-norm and introduce an adaptive iterated shrinkage thresholding method (AISTM) for lp-norm non-convex sparse coding. Target detection is performed by representing every pixel with a homogeneous target dictionary (HTD), and the output is generated from the representation residual. Experimental results on four real hyperspectral datasets show that the detection performance of the proposed method is improved by about 10% to 30% over the compared methods, namely the matched filter (MF), sparse and low-rank matrix decomposition (SLMD), adaptive cosine estimation (ACE), constrained energy minimization (CEM), the one-class support vector machine (OC-SVM), the original sparse representation detector with the l1-norm, and combined sparse and collaborative representation (CSCR).
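For readers who want to experiment with the general idea, the sketch below shows lp-norm sparse coding by an iterative shrinkage-thresholding loop and residual-based detection in Python/NumPy. The shrinkage rule, step size, and parameter values are illustrative assumptions and not the authors' AISTM; the target dictionary `D_target` (one column per target spectrum) is assumed to be given.

```python
import numpy as np

def lp_shrink(z, lam, p=0.5, iters=5):
    """Approximate shrinkage (proximal) operator for the lp penalty (0 < p < 1).
    Solves min_a 0.5*(a - z)^2 + lam*|a|^p element-wise by fixed-point iteration.
    Illustrative scheme only; not the paper's exact AISTM update."""
    a = z.copy()
    for _ in range(iters):
        mag = np.maximum(np.abs(a), 1e-12) ** (p - 1.0)
        a = np.sign(z) * np.maximum(np.abs(z) - lam * p * mag, 0.0)
    return a

def lp_sparse_code(x, D, lam=0.01, p=0.5, n_iter=100):
    """Iterative shrinkage-thresholding for min_a 0.5*||x - D a||^2 + lam*||a||_p^p."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data-fit gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the data-fit term
        a = lp_shrink(a - grad / L, lam / L, p)
    return a

def detect(x, D_target, lam=0.01, p=0.5):
    """Detector output for pixel x: residual of the representation over the
    target-only dictionary. A small residual means x is well explained by
    target atoms, i.e. more likely a target pixel."""
    a = lp_sparse_code(x, D_target, lam, p)
    return np.linalg.norm(x - D_target @ a)
```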


2020 ◽  
Vol 37 (5) ◽  
pp. 855-864
Author(s):  
Nagendra Pratap Singh ◽  
Vibhav Prakash Singh

The registration of segmented retinal images is mainly used for the diagnosis of diseases such as glaucoma, diabetes, and hypertension. These retinal diseases are reflected in the retinal vessel structure, so fast and accurate registration of segmented retinal images helps to identify changes in the vessels and to diagnose the diseases. This paper presents a novel binary robust invariant scalable keypoint (BRISK) feature-based approach for segmented retinal image registration. The BRISK framework provides efficient keypoint detection, description, and matching. The proposed approach consists of three steps: pre-processing, segmentation using a matched filter based on the Gumbel probability density function, and application of the BRISK framework to register the segmented source and target retinal images. The effectiveness of the proposed approach is demonstrated by evaluating the normalized cross-correlation of image pairs. The experimental analysis shows that the proposed approach outperforms SURF-based and Harris partial intensity invariant feature descriptor (Harris-PIIFD)-based registration in both registration performance and computation time.
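A minimal sketch of the BRISK detect/describe/match stage using OpenCV is given below. It assumes the segmented vessel maps are already available as 8-bit single-channel images and uses a RANSAC-estimated homography as the transformation model; both choices are illustrative and need not match the paper's implementation.

```python
import cv2
import numpy as np

def register_segmented_pair(src_seg, dst_seg):
    """Register two segmented (binary vessel map) retinal images with BRISK.
    src_seg, dst_seg: 8-bit single-channel images, e.g. from a matched-filter
    segmentation. Returns the 3x3 transform mapping src onto dst."""
    brisk = cv2.BRISK_create()                        # keypoint detector + descriptor
    kp1, des1 = brisk.detectAndCompute(src_seg, None)
    kp2, des2 = brisk.detectAndCompute(dst_seg, None)

    # BRISK descriptors are binary, so Hamming distance is the natural metric.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    src_pts = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst_pts = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects mismatched keypoints; a homography is used here purely
    # for illustration, the paper may use a different transformation model.
    H, inliers = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
    return H

# Usage sketch: warped = cv2.warpPerspective(src_seg, H, dst_seg.shape[::-1])
```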


2021 ◽  
Vol 12 (1) ◽  
pp. 403
Author(s):  
Lin Pan ◽  
Zhen Zhang ◽  
Shaohua Zheng ◽  
Liqin Huang

Automatic segmentation and centerline extraction of blood vessels from retinal fundus images are essential steps for measuring the state of the retinal vasculature and supporting computer-aided diagnosis. Combining information from the vessel segmentation and the centerline can improve the continuity of the results and the overall performance; however, previous studies have usually treated the two tasks as separate research topics. We therefore propose a novel multitask learning network (MSC-Net) for retinal vessel segmentation and centerline extraction. The network uses a multibranch design to share information between the two tasks, and a channel and atrous spatial fusion block (CAS-FB) is designed to fuse and correct features from different branches and different scales. The clDice loss function is also used to constrain the topological continuity between the vessel segments and the centerline. Experimental results on several fundus vessel datasets (DRIVE, STARE, and CHASE) show that our method obtains better segmentation and centerline extraction results at different scales and better topological continuity than state-of-the-art methods.
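The sketch below illustrates a clDice-style loss via differentiable soft skeletonization in PyTorch, following the commonly used soft-erosion/soft-opening recipe; the kernel sizes, iteration count, and exact update rule are assumptions and may differ from the MSC-Net implementation.

```python
import torch
import torch.nn.functional as F

def soft_erode(x):
    # Soft (differentiable) erosion via min-pooling.
    return -F.max_pool2d(-x, 3, stride=1, padding=1)

def soft_dilate(x):
    # Soft dilation via max-pooling.
    return F.max_pool2d(x, 3, stride=1, padding=1)

def soft_skeletonize(x, n_iter=10):
    """Iteratively peel off the soft-opened image to accumulate a soft skeleton."""
    skel = F.relu(x - soft_dilate(soft_erode(x)))
    for _ in range(n_iter):
        x = soft_erode(x)
        delta = F.relu(x - soft_dilate(soft_erode(x)))
        skel = skel + F.relu(delta - skel * delta)
    return skel

def cl_dice_loss(pred, target, n_iter=10, eps=1e-6):
    """clDice loss: topology-aware overlap between mask/skeleton pairs.
    pred, target: probability maps of shape (B, 1, H, W) with values in [0, 1]."""
    skel_pred = soft_skeletonize(pred, n_iter)
    skel_true = soft_skeletonize(target, n_iter)
    tprec = ((skel_pred * target).sum() + eps) / (skel_pred.sum() + eps)  # topology precision
    tsens = ((skel_true * pred).sum() + eps) / (skel_true.sum() + eps)    # topology sensitivity
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)
```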

