Rotation Moment Invariant Feature Extraction Techniques for Image Matching

2014 ◽  
Vol 721 ◽  
pp. 775-778 ◽  
Author(s):  
Yi Qiang Lai

In recent years, extracting invariant image features has gained increasing attention in the image matching field. Various methods have been used to match images successfully in a number of applications, but in most of the literature, the rotation moment invariant properties of these invariants have not been studied widely. In this paper, we present a novel method based on Polar Harmonic Transforms (PHTs), which consist of a set of orthogonal projection bases, to extract rotation moment invariant features. The experimental results show that the kernel computation of PHTs is simple and that image features are extracted accurately for image matching. Hence, polar harmonic transforms provide a powerful tool for image matching.
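As a concrete illustration of the idea, the sketch below computes moments of the Polar Complex Exponential Transform (one member of the PHT family) over the unit disk and keeps their magnitudes, which are invariant to rotation. The discrete approximation and all names are illustrative, not the paper's implementation.

```python
# Minimal sketch of PCET (Polar Complex Exponential Transform) moments.
# A rotation of the image by phi multiplies M_nl by exp(-i*l*phi), so the
# magnitude |M_nl| is a rotation-invariant feature.
import numpy as np

def pcet_moments(img, max_order=3):
    """Return |M_nl| for n, l in [-max_order, max_order], unit-disk support."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    # Map the pixel grid to the unit disk centered on the image.
    xs = (2.0 * x - w + 1) / w
    ys = (2.0 * y - h + 1) / h
    r = np.hypot(xs, ys)
    theta = np.arctan2(ys, xs)
    mask = r <= 1.0  # PHT bases are orthogonal over the unit disk only

    feats = {}
    for n in range(-max_order, max_order + 1):
        radial = np.exp(-2j * np.pi * n * r**2)  # radial kernel: cheap to evaluate
        for l in range(-max_order, max_order + 1):
            kernel = radial * np.exp(-1j * l * theta)
            # Discrete approximation of (1/pi) * integral of f * conj(H_nl)
            m_nl = (img * kernel * mask).sum() / mask.sum()
            feats[(n, l)] = abs(m_nl)  # magnitude: rotation invariant
    return feats
```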

2019 ◽  
pp. 1-3
Author(s):  
Anita Kaklotar

Breast cancer is the primary and most common disease found among women. Today, mammography is the most powerful screening technique used for early detection of cancer, which increases the chance of successful treatment. In order to correctly classify mammogram images as benign or malignant, a classifier is needed. With this objective, an attempt is made to analyze different feature extraction techniques and classifiers. In the proposed system we first preprocess the mammogram images, removing unwanted noise and disturbances. Features are then extracted from the mammogram images using the Gray Level Co-occurrence Matrix (GLCM) and the Scale Invariant Feature Transform (SIFT). Finally, the features are classified using classifiers such as HiCARe (a classifier based on High Confidence Association Rule Agreements), Support Vector Machine (SVM), the Naïve Bayes classifier and the K-NN classifier. We then test the images and classify them into the benign or malignant class.
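A minimal sketch of the GLCM-feature plus SVM stage of such a pipeline, assuming preprocessed grayscale mammogram patches. Dataset loading and the SIFT/HiCARe branches are omitted; function and variable names are illustrative.

```python
# GLCM texture features + SVM training on labeled mammogram patches.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def glcm_features(patch_u8):
    """Texture descriptors from a gray-level co-occurrence matrix (uint8 input)."""
    glcm = graycomatrix(patch_u8, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ('contrast', 'dissimilarity', 'homogeneity', 'energy', 'correlation')
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def train_classifier(patches, labels):
    """patches: list of uint8 arrays; labels: 0 = benign, 1 = malignant."""
    feats = np.array([glcm_features(p) for p in patches])
    return SVC(kernel='rbf', gamma='scale').fit(feats, labels)
```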


Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5778
Author(s):  
Baifan Chen ◽  
Hong Chen ◽  
Baojun Song ◽  
Grace Gong

Three-dimensional point cloud registration (PCReg) has a wide range of applications in computer vision, 3D reconstruction and medical fields. Although numerous advances have been achieved in point cloud registration in recent years, large-scale rigid transformation remains a problem that most algorithms cannot effectively handle. To solve this problem, we propose a point cloud registration method based on learning and transform-invariant features (TIF-Reg). Our algorithm includes four modules: the transform-invariant feature extraction module, the deep feature embedding module, the corresponding point generation module and the decoupled singular value decomposition (SVD) module. In the transform-invariant feature extraction module, we design a TIF in SE(3) (the space of 3D rigid transformations) which contains a triangular feature and a local density feature for each point. It fully exploits the transformation invariance of point clouds, making the algorithm highly robust to rigid transformation. The deep feature embedding module embeds the TIF into a high-dimensional space using a deep neural network, further improving the expressive ability of the features. The corresponding point cloud is generated using an attention mechanism in the corresponding point generation module, and the final transformation for registration is calculated in the decoupled SVD module. In our experiments, we first train and evaluate the TIF-Reg method on the ModelNet40 dataset. The results show that our method keeps the root mean squared error (RMSE) of rotation within 0.5° and the RMSE of translation close to 0 m, even when the rotation is up to [−180°, 180°] or the translation is up to [−20 m, 20 m]. We also test the generalization of our method on the TUM3D dataset using the model trained on ModelNet40. The results show that our method's errors are close to the experimental results on ModelNet40, which verifies the good generalization ability of our method. All experiments prove that the proposed method is superior to state-of-the-art PCReg algorithms in terms of accuracy and complexity.
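The decoupled SVD step has a well-known closed form once correspondences are available. The sketch below shows the standard Kabsch solution for the rigid transform, assuming hard one-to-one correspondences rather than the paper's attention-weighted soft correspondences.

```python
# Closed-form rigid transform from corresponding 3D point sets via SVD
# (Kabsch algorithm), as used in the final stage of SVD-based registration.
import numpy as np

def rigid_transform_svd(src, dst):
    """Estimate R, t minimizing sum ||R @ src_i + t - dst_i||^2.

    src, dst: (N, 3) arrays of corresponding points.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                    # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Correction term guarantees a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```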


Kursor ◽  
2018 ◽  
Vol 9 (2) ◽  
Author(s):  
Hendro Nugroho ◽  
Eka Prakarsa Mandyartha

The Ganesha statues found in the Trowulan area of Mojokerto are no longer intact, because they are recovered from the soil surface or from underground, which makes it very difficult for archaeologists to categorize the findings. To overcome this problem, this research proposes an image retrieval system that can provide information about such historic objects, with the Ganesha statue taken as the test case. Feature extraction values are computed from the statue images using the Moment Invariant method, after which image retrieval results are obtained using the Manhattan distance. The image retrieval system works as follows: the Ganesha statue image is pre-processed to a 200x260 pixel BMP, edges are detected using the Roberts method, and Moment Invariant features are extracted and inserted into a database as training data. Test data undergo the same process as the training data, and the closest match is then found using the Manhattan distance. In tests on 15 Ganesha statue images, 62% were retrieved correctly and 38% incorrectly. The research can be developed further using other methods to improve image retrieval accuracy.
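A minimal sketch of the described pipeline, assuming grayscale 200x260 inputs: Roberts edge detection, moment-invariant features, and nearest-neighbour retrieval by Manhattan (L1) distance. Hu's seven invariants are used here as the moment invariants, which is an assumption since the paper does not specify the exact set; all names are illustrative.

```python
# Roberts edges -> Hu moment invariants -> L1 nearest-neighbour retrieval.
import numpy as np
from skimage.filters import roberts
from skimage.measure import moments_central, moments_normalized, moments_hu

def statue_features(gray_img):
    """Seven Hu moment invariants of the Roberts edge map."""
    edges = roberts(gray_img)
    nu = moments_normalized(moments_central(edges))
    hu = moments_hu(nu)
    # Log-scale so the invariants have comparable magnitudes across orders.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def retrieve(query_feat, db_feats):
    """Index of the database entry closest in Manhattan (L1) distance."""
    dists = np.abs(db_feats - query_feat).sum(axis=1)
    return int(np.argmin(dists))
```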


Feature extraction is one of the most essential phases in biometric authentication. It helps in extracting and measuring the biometric image as faithfully as possible. These feature sets can be used further for image matching, recognition, or supervised learning algorithms. In the proposed work, a novel feature extraction method for finger knuckle prints is explored with a comparative analysis. The proposed scheme is based on different mechanical variables, and its efficiency is also demonstrated by plotting different curves in Matlab R2009a.


2020 ◽  
Vol 8 (5) ◽  
pp. 1325-1329

For organizations requiring high security clearance, multimodal sources of biometric scans are preferred. Computational models for unimodal biometric scans have so far been well recognized, but research into multimodal scans and their models has been gaining momentum recently. Separate feature extraction techniques are used for each biometric, and the resulting features are combined efficiently to obtain a robust combination. In this paper, a novel method for fusion of the scan images from the different modes is introduced. The method is based on representing the data in terms of its sparsity. Feature coupling and correlation information are obtained from the biometric images. The images from each mode are fused taking a quality measure into account. The algorithms are kernelised so as to handle nonlinear data efficiently. The results of the proposed system are compared to existing image fusion methods to show its advantage over them.
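As a sketch of the sparse-representation idea, the code below encodes each modality's feature vector over a pre-learned dictionary and fuses the sparse codes with quality weights. The dictionaries, the quality scores and all names are assumptions for illustration; the paper's kernelised formulation is not reproduced.

```python
# Quality-weighted fusion of sparse codes from two biometric modalities.
import numpy as np
from sklearn.decomposition import SparseCoder

def sparse_code(feat, dictionary, n_nonzero=10):
    """Sparse coefficients of `feat` over a dictionary (l2-normalized atoms in rows)."""
    coder = SparseCoder(dictionary=dictionary,
                        transform_algorithm='omp',
                        transform_n_nonzero_coefs=n_nonzero)
    return coder.transform(feat.reshape(1, -1)).ravel()

def fuse(feat_a, feat_b, dict_a, dict_b, quality_a, quality_b):
    """Concatenate the two sparse codes, weighted by per-modality quality."""
    wa = quality_a / (quality_a + quality_b)
    wb = quality_b / (quality_a + quality_b)
    return np.hstack([wa * sparse_code(feat_a, dict_a),
                      wb * sparse_code(feat_b, dict_b)])
```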


2011 ◽  
Vol 268-270 ◽  
pp. 2178-2184
Author(s):  
Shang Bo Zhou ◽  
Kai Kang

The SIFT (Scale Invariant Feature Transform) algorithm has been used successfully in the image matching field. In this paper, a simplified SIFT algorithm is designed. The number of layers in the Gaussian pyramid is reduced, and an outspreading method is used when comparing keypoints. The new method reduces both comparison time and matching time. Although the new algorithm (C-SIFT) has lower matching accuracy than SIFT, it adopts a distortion detection method to discard wrong matches. It then uses the coordinate displacement to determine the tracking position. Experimental results show that the C-SIFT algorithm performs stably and in a timely manner.
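A minimal sketch, using OpenCV, of SIFT matching with a reduced Gaussian pyramid: `nOctaveLayers` is lowered from its default of 3. The paper's outspreading comparison and distortion detection are its own and are not reproduced here; Lowe's ratio test stands in as a generic wrong-match filter.

```python
# SIFT matching with fewer Gaussian pyramid layers per octave.
import cv2

def match_reduced_sift(img1, img2, n_octave_layers=2, ratio=0.75):
    """Match two grayscale images with a smaller SIFT pyramid (OpenCV default: 3 layers)."""
    sift = cv2.SIFT_create(nOctaveLayers=n_octave_layers)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    # Keep only matches that are clearly better than the runner-up.
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```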

