Fundus Image Registration Technique Based on Local Feature of Retinal Vessels

2021 ◽  
Vol 11 (23) ◽  
pp. 11201
Author(s):  
Roziana Ramli ◽  
Khairunnisa Hasikin ◽  
Mohd Yamani Idna Idris ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab

Feature-based retinal fundus image registration (RIR) techniques align fundus images according to geometric transformations estimated between feature point correspondences. To ensure accurate registration, the extracted feature points must lie on the retinal vessels and be distributed throughout the image. However, noise in the fundus image may resemble retinal vessels in local patches. Therefore, this paper introduces a feature extraction method based on a local feature of retinal vessels (CURVE) that incorporates retinal vessel and noise characteristics to accurately extract feature points on retinal vessels and throughout the fundus image. CURVE's performance is tested on the CHASE, DRIVE, HRF and STARE datasets and compared with six feature extraction methods used in existing feature-based RIR techniques. In the experiment, the feature extraction accuracy of CURVE (86.021%) significantly outperformed the existing feature extraction methods (p ≤ 0.001*). Then, CURVE is paired with a scale-invariant feature transform (SIFT) descriptor to test its registration capability on the fundus image registration (FIRE) dataset. Overall, CURVE-SIFT successfully registered 44.030% of the image pairs, while the existing feature-based RIR techniques (GDB-ICP, Harris-PIIFD, Ghassabi’s-SIFT, H-M 16, H-M 17 and D-Saddle-HOG) registered at most 27.612% of the image pairs. The one-way ANOVA analysis showed that CURVE-SIFT significantly outperformed GDB-ICP (p = 0.007*), Harris-PIIFD, Ghassabi’s-SIFT, H-M 16, H-M 17 and D-Saddle-HOG (p ≤ 0.001*).
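The core of any feature-based RIR pipeline like the one above is estimating a geometric transformation from matched feature point correspondences. The following is a minimal sketch of that step, fitting an affine transform by least squares to synthetic correspondences; it illustrates the general technique only, not the CURVE detector itself.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares affine transform mapping src -> dst (both N x 2 arrays)."""
    n = src.shape[0]
    # Design matrix: one row pair per point:
    #   [x y 1 0 0 0] for the x-equation, [0 0 0 x y 1] for the y-equation
    A = np.zeros((2 * n, 6))
    A[0::2, 0:2] = src
    A[0::2, 2] = 1.0
    A[1::2, 3:5] = src
    A[1::2, 5] = 1.0
    b = dst.reshape(-1)
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params.reshape(2, 3)   # [[a b tx], [c d ty]]

# Synthetic check: points rotated 90 degrees and translated
src = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
M_true = np.array([[0., -1., 2.], [1., 0., 3.]])
dst = src @ M_true[:, :2].T + M_true[:, 2]
M_est = estimate_affine(src, dst)
print(np.allclose(M_est, M_true))  # True
```

In practice this fit would be wrapped in an outlier-rejection loop such as RANSAC, since real correspondences contain mismatches.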

2017 ◽  
Vol 2017 ◽  
pp. 1-15 ◽  
Author(s):  
Roziana Ramli ◽  
Mohd Yamani Idna Idris ◽  
Khairunnisa Hasikin ◽  
Noor Khairiah A. Karim ◽  
Ainuddin Wahid Abdul Wahab ◽  
...  

Retinal image registration is important to assist diagnosis and monitoring of retinal diseases, such as diabetic retinopathy and glaucoma. However, registering retinal images for various registration applications requires the detection and distribution of feature points in low-quality regions that contain vessels of varying contrast and sizes. A recent feature detector known as Saddle detects feature points on vessels, but these are poorly distributed and densely clustered on strong-contrast vessels. Therefore, we propose a multiresolution difference of Gaussian pyramid with the Saddle detector (D-Saddle) to detect feature points in low-quality regions containing vessels of varying contrast and sizes. D-Saddle is tested on the Fundus Image Registration (FIRE) dataset, which consists of 134 retinal image pairs. Experimental results show that D-Saddle successfully registered 43% of the retinal image pairs with an average registration accuracy of 2.329 pixels, while lower success rates were observed in four other state-of-the-art retinal image registration methods: GDB-ICP (28%), Harris-PIIFD (4%), H-M (16%), and Saddle (16%). Furthermore, among all methods, the registration accuracy of D-Saddle has the weakest Spearman correlation with the intensity uniformity metric. Finally, a paired t-test shows that D-Saddle significantly improved the overall registration accuracy of the original Saddle.


2021 ◽  
Vol 13 (17) ◽  
pp. 3425
Author(s):  
Xin Zhao ◽  
Hui Li ◽  
Ping Wang ◽  
Linhai Jing

Accurate registration of multisource high-resolution remote sensing images is an essential step for various remote sensing applications. Due to the complexity of the feature and texture information of high-resolution remote sensing images, especially for images covering earthquake disasters, feature-based image registration methods need a more helpful feature descriptor to improve accuracy. However, traditional image registration methods that only use low-level local features have difficulty representing the features of the matching points. To improve the accuracy of feature matching for multisource high-resolution remote sensing images, an image registration method based on a deep residual network (ResNet) and the scale-invariant feature transform (SIFT) was proposed. It fuses SIFT features with ResNet features on top of the traditional algorithm to achieve image registration. The proposed method consists of two parts: (1) model construction and training, and (2) image registration using a combination of SIFT and ResNet34 features. First, a registration sample set constructed from high-resolution satellite remote sensing images was used to fine-tune the network to obtain the ResNet model. Then, for the image to be registered, the Shi_Tomas algorithm and the combination of SIFT and ResNet features were used for feature extraction to complete the image registration. Considering differences in image sizes and scenes, five pairs of images were used to conduct experiments verifying the effectiveness of the method in different practical applications. The experimental results showed that the proposed method can achieve higher accuracies and more tie points than traditional feature-based methods.
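The fusion of a hand-crafted descriptor with a learned one can be sketched as L2-normalizing each vector before concatenation, so that neither descriptor dominates the matching distance. The dimensions and the normalize-then-concatenate rule here are illustrative assumptions; the abstract does not specify the paper's exact weighting scheme.

```python
import numpy as np

def fuse_descriptors(sift_vec, cnn_vec):
    """Concatenate two descriptors after L2-normalizing each part."""
    def l2(v):
        n = np.linalg.norm(v)
        return v / n if n > 0 else v
    return np.concatenate([l2(sift_vec), l2(cnn_vec)])

sift = np.random.rand(128)   # SIFT descriptors are 128-D
cnn = np.random.rand(512)    # ResNet34 pooled features are 512-D
fused = fuse_descriptors(sift, cnn)
print(fused.shape)           # (640,)
```

Matching then proceeds on the fused vectors with the usual nearest-neighbor distance-ratio test.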


2020 ◽  
Vol 86 (3) ◽  
pp. 177-186
Author(s):  
Matthew Plummer ◽  
Douglas Stow ◽  
Emanuel Storey ◽  
Lloyd Coulter ◽  
Nicholas Zamora ◽  
...  

Image registration is an important preprocessing step prior to detecting changes in multi-temporal image data, and is increasingly accomplished using automated methods. In high spatial resolution imagery, shadows represent a major source of illumination variation, which can reduce the performance of automated registration routines. This study evaluates the statistical relationship between shadow presence and image registration accuracy, and whether masking and normalizing shadows leads to improved automatic registration results. Eighty-eight bitemporal aerial image pairs were co-registered using software called Scale Invariant Feature Transform (SIFT) and Random Sample Consensus (RANSAC) Alignment (SARA). Co-registration accuracy was assessed at different levels of shadow coverage and shadow movement within the images. The primary outcomes of this study are (1) the amount of shadow in a multi-temporal image pair is correlated with the accuracy/success of automatic co-registration; (2) masking out shadows prior to match point selection does not improve the success of image-to-image co-registration; and (3) normalizing or brightening shadows can help match point routines find more match points and therefore improve the performance of automatic co-registration. Normalizing shadows via a standard linear correction provided the most reliable co-registration results in image pairs containing substantial amounts of relative shadow movement, but had minimal effect for pairs with stationary shadows.
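A standard linear shadow correction of the kind mentioned in outcome (3) can be sketched as rescaling shadow pixels so their mean and standard deviation match those of the lit pixels. This is a generic illustration under that assumption, not the study's exact normalization.

```python
import numpy as np

def normalize_shadows(img, shadow_mask):
    """Linearly rescale shadow pixels to match the mean/std of lit pixels."""
    out = img.astype(float).copy()
    shade, lit = out[shadow_mask], out[~shadow_mask]
    if shade.std() > 0:
        out[shadow_mask] = (shade - shade.mean()) / shade.std() * lit.std() + lit.mean()
    return out

# Toy image: dark "shadow" half on the left, lit half on the right
rng = np.random.default_rng(0)
img = np.hstack([30 + 5 * rng.standard_normal((8, 4)),
                 120 + 20 * rng.standard_normal((8, 4))])
mask = np.zeros((8, 8), dtype=bool)
mask[:, :4] = True
out = normalize_shadows(img, mask)
print(np.isclose(out[mask].mean(), img[~mask].mean()))  # True
```

After brightening, the match point routine sees comparable contrast in both regions, which is why more match points can be found.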


2021 ◽  
Vol 11 ◽  
Author(s):  
Hao Fu ◽  
Weiming Mi ◽  
Boju Pan ◽  
Yucheng Guo ◽  
Junjie Li ◽  
...  

Pancreatic ductal adenocarcinoma (PDAC) is one of the deadliest cancer types worldwide, with the lowest 5-year survival rate among all cancers. Histopathology image analysis is considered a gold standard for PDAC detection and diagnosis. However, the manual diagnosis used in current clinical practice is a tedious and time-consuming task, and diagnostic concordance can be low. With the development of digital imaging and machine learning, several scholars have proposed PDAC analysis approaches based on feature extraction methods that rely on field knowledge. However, feature-based classification methods are applicable only to a specific problem and lack versatility, so deep learning is becoming a vital alternative to feature extraction. This paper proposes the first deep convolutional neural network architecture for classifying and segmenting pancreatic histopathological images on a relatively large whole-slide image (WSI) dataset. Our automatic patch-level approach achieved 95.3% classification accuracy, and the WSI-level approach achieved 100%. Additionally, we visualized the classification and segmentation outcomes of histopathological images to determine which areas of an image are more important for PDAC identification. Experimental results demonstrate that our proposed model can effectively diagnose PDAC using histopathological images, illustrating its potential for practical application.
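Going from patch-level predictions to a WSI-level call requires an aggregation rule. The majority vote below is one common choice, shown purely as an assumption for illustration; the abstract does not state which rule the authors used.

```python
def wsi_label(patch_probs, threshold=0.5):
    """Slide-level call from patch-level tumor probabilities by majority vote
    (an illustrative aggregation rule, not necessarily the paper's)."""
    votes = [p >= threshold for p in patch_probs]
    return "PDAC" if sum(votes) > len(votes) / 2 else "benign"

print(wsi_label([0.9, 0.8, 0.2, 0.7]))  # PDAC
```

Alternatives include taking the maximum patch probability or training a second-stage classifier on the patch score histogram.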


Author(s):  
Fan Zhang

With the development of computer technology, the simulation authenticity of virtual reality is getting higher and higher, and accurate recognition of human–computer interaction gestures is a key technology for enhancing that authenticity. This article briefly introduces three gesture feature extraction methods: the scale-invariant feature transform, local binary patterns and the histogram of oriented gradients (HOG), together with a back-propagation (BP) neural network for classifying and recognizing different gestures. The gesture feature vectors obtained by the three feature extraction methods were used in turn as input data for the BP neural network and simulated in MATLAB. The results showed that the feature map extracted by HOG preserved the most information from the original gesture image; the BP neural network fed with HOG feature vectors converged to stability fastest and had the smallest error once stable; and, for gesture recognition, the BP neural network fed with HOG feature vectors had higher accuracy and precision and a lower false alarm rate.
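The essence of HOG is binning gradient magnitudes by gradient orientation. A bare-bones single-cell sketch (no block normalization, central-difference gradients) is shown below in Python rather than the article's MATLAB, as a generic illustration of the technique:

```python
import numpy as np

def hog_cell(patch, n_bins=9):
    """Orientation histogram of gradient magnitudes for one cell."""
    gx = np.zeros_like(patch, dtype=float)
    gy = np.zeros_like(patch, dtype=float)
    gx[:, 1:-1] = patch[:, 2:] - patch[:, :-2]   # central differences
    gy[1:-1, :] = patch[2:, :] - patch[:-2, :]
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180   # unsigned gradients
    hist = np.zeros(n_bins)
    bins = (ang / (180 / n_bins)).astype(int) % n_bins
    for b, m in zip(bins.ravel(), mag.ravel()):
        hist[b] += m                             # accumulate magnitude per bin
    return hist

patch = np.tile(np.arange(8.0), (8, 1))   # pure horizontal intensity ramp
h = hog_cell(patch)
print(np.argmax(h))  # 0: all energy in the 0-degree bin
```

A full HOG descriptor repeats this over a grid of cells and normalizes the histograms over overlapping blocks before feeding them to the classifier.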


2013 ◽  
Vol 411-414 ◽  
pp. 1598-1604
Author(s):  
Ye Teng An ◽  
Hong Cui Wang ◽  
Song Gun Hyon ◽  
Sai Chen ◽  
Jian Wu Dang

Bone-conducted life sounds are useful for monitoring human health status. Although a number of feature extraction methods have been proposed for air-conducted speech, they may not meet the requirements of the recognition task for bone-conducted life sounds, since there is a large difference between air-conducted speech and bone-conducted life sounds. To obtain features that can characterize bone-conducted signals, in this study we first analyze the properties of bone-conducted life sounds and compare each kind of life sound in the frequency domain. We then adopt the F-ratio and an improved F-ratio separately to measure the dependence between frequency components and the characteristics of life sounds. Based on this analysis, we design a new adaptive frequency filter to extract the desired discriminative features. The new feature is combined with a hidden Markov model and applied to classify different kinds of bone-conducted life sounds. The experimental results show that the error rate using the proposed feature based on the state-mean F-ratio is reduced by 7.2% compared with the MFCC feature.
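The F-ratio used here scores each frequency component by the variance of the class means over the average within-class variance, so high-scoring components separate the sound classes well. A minimal per-dimension sketch (the paper's improved and state-mean variants are not reproduced):

```python
import numpy as np

def f_ratio(class_samples):
    """Per-dimension F-ratio: between-class variance of the class means over
    the mean within-class variance. class_samples: list of (n_i, d) arrays."""
    means = np.array([c.mean(axis=0) for c in class_samples])
    between = means.var(axis=0)
    within = np.mean([c.var(axis=0) for c in class_samples], axis=0)
    return between / (within + 1e-12)

rng = np.random.default_rng(1)
# Two classes that differ only in the first of three "frequency bands"
a = rng.standard_normal((50, 3)); a[:, 0] += 5
b = rng.standard_normal((50, 3))
print(np.argmax(f_ratio([a, b])))  # 0
```

An adaptive frequency filter can then weight or select bands in proportion to these scores.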


2010 ◽  
Vol 97-101 ◽  
pp. 1273-1276 ◽  
Author(s):  
Gang Yu ◽  
Ying Zi Lin ◽  
Sagar Kamarthi

Texture classification is a necessary task in a wide variety of application areas such as manufacturing, textiles, and medicine. In this paper, we propose a novel wavelet-based feature extraction method for robust, scale-invariant and rotation-invariant texture classification. The method divides the 2-D wavelet coefficient matrices into 2-D clusters and then computes features from the energies inherent in these clusters. The features, which capture the information effective for classifying texture images, are computed from the energy content of the clusters, and these feature vectors are input to a neural network for texture classification. The results show that the discrimination performance obtained with the proposed cluster-based feature extraction method is superior to that obtained using conventional feature extraction methods, and is robust for rotation- and scale-invariant texture classification.
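The cluster-energy idea can be sketched with a one-level 2-D Haar decomposition followed by summing squared coefficients over small clusters in each subband. The Haar basis, cluster size, and normalization below are illustrative assumptions; the paper's specific wavelet and clustering scheme are not given in the abstract.

```python
import numpy as np

def haar2(img):
    """One-level 2-D Haar decomposition into LL, LH, HL, HH subbands."""
    a = (img[0::2] + img[1::2]) / 2          # row averages
    d = (img[0::2] - img[1::2]) / 2          # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def cluster_energies(band, k=2):
    """Energy of k x k clusters of wavelet coefficients, flattened to a vector."""
    h, w = band.shape
    blocks = band[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k)
    return (blocks ** 2).sum(axis=(1, 3)).ravel()

img = np.random.default_rng(2).random((8, 8))
feats = np.concatenate([cluster_energies(b) for b in haar2(img)])
print(feats.shape)  # (16,): four 4x4 subbands, each as four 2x2 cluster energies
```

The resulting energy vector would serve as the input to the neural network classifier.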


2019 ◽  
Vol 7 (6) ◽  
pp. 178
Author(s):  
Armagan Elibol ◽  
Nak Young Chong

Image registration is one of the most fundamental and widely used tools in optical mapping applications. It is mostly achieved by extracting and matching salient points (features) described by vectors (feature descriptors) from images. While matching the descriptors, mismatches (outliers) do appear. Probabilistic methods are then applied to remove outliers and to find the transformation (motion) between images. These methods work in an iterative manner. In this paper, an efficient way of integrating geometric invariants into feature-based image registration is presented aiming at improving the performance of image registration in terms of both computational time and accuracy. To do so, geometrical properties that are invariant to coordinate transforms are studied. This would be beneficial to all methods that use image registration as an intermediate step. Experimental results are presented using both semi-synthetically generated data and real image pairs from underwater environments.
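One concrete example of a geometric property that survives a coordinate transform is the ratio of distances among matched points, which is preserved by any similarity transform (rotation, translation, uniform scale). This is a generic illustration of the idea; the abstract does not list the specific invariants the authors integrate.

```python
import numpy as np

def distance_ratio(p, q, r):
    """|pq| / |pr|, an invariant under similarity transforms."""
    return np.linalg.norm(q - p) / np.linalg.norm(r - p)

def similarity(pts, scale, theta, t):
    """Apply a similarity transform: uniform scale, rotation, translation."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * pts @ R.T + t

pts = np.array([[0., 0.], [3., 0.], [0., 4.]])
moved = similarity(pts, scale=2.5, theta=0.7, t=np.array([5., -1.]))
print(np.isclose(distance_ratio(*pts), distance_ratio(*moved)))  # True
```

Candidate correspondences whose invariants disagree can be discarded before the iterative outlier-rejection stage, reducing both iterations and mismatches.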


Robotica ◽  
2014 ◽  
Vol 34 (9) ◽  
pp. 1923-1947 ◽  
Author(s):  
Salam Dhou ◽  
Yuichi Motai

SUMMARY: An efficient method for tracking a target using a single Pan-Tilt-Zoom (PTZ) camera is proposed. The proposed Scale-Invariant Optical Flow (SIOF) method estimates the motion of the target and rotates the camera accordingly to keep the target at the center of the image. SIOF also estimates the scale of the target and changes the focal length accordingly to adjust the Field of View (FoV) and keep the target the same size in all captured frames. SIOF is a feature-based tracking method. The feature points used are extracted and tracked using Optical Flow (OF) and the Scale-Invariant Feature Transform (SIFT). They are combined in groups and used to achieve robust tracking. The feature points in these groups are used within a twist model to recover the 3D free motion of the target. The merits of this method are (i) an efficient scale-invariant tracking scheme that keeps the target in the FoV of the camera at the same size, and (ii) tracking with prediction and correction to speed up the PTZ control and achieve smooth camera motion. Experiments were performed on online video streams and validated the efficiency of the proposed SIOF compared with OF, SIFT, and other tracking methods. The proposed SIOF has around 36% less average tracking error and around 70% less tracking overshoot than OF.
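The scale estimate that drives the zoom can be sketched as the ratio of how far the tracked feature points spread from their centroid between two frames. This is a simplified stand-in for SIOF's twist-model estimation, shown only to convey the idea.

```python
import numpy as np

def scale_change(pts_prev, pts_curr):
    """Apparent scale change of a tracked target: ratio of mean point
    distances from the centroid between two frames (illustrative only)."""
    def spread(p):
        return np.linalg.norm(p - p.mean(axis=0), axis=1).mean()
    return spread(pts_curr) / spread(pts_prev)

prev = np.array([[0., 0.], [2., 0.], [0., 2.], [2., 2.]])
curr = prev * 1.5 + 10.0    # target translated and grew by 1.5x on screen
s = scale_change(prev, curr)
print(round(s, 3))  # 1.5
```

A PTZ controller would then shorten the focal length by the same factor so the target keeps the same apparent size in the frame.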

