Evolutionary-based generation of rotation and scale invariant texture descriptors from SIFT keypoints

2021 ◽  
Author(s):  
Mohamed Hazgui ◽  
Haythem Ghazouani ◽  
Walid Barhoumi


Author(s):  
Satyavratan Govindarajan ◽  
Ramakrishnan Swaminathan

In this work, automated abnormality detection using keypoint information from Speeded-Up Robust Features (SURF) and Scale Invariant Feature Transform (SIFT) descriptors in chest radiograph (CR) images is investigated and compared. Computerized image analysis using artificial intelligence is crucial for detecting the subtle and non-specific alterations of tuberculosis (TB). For this, healthy and TB CRs are subjected to lung field segmentation. SURF and SIFT keypoints are extracted from the segmented lung images, and statistical features are computed from each keypoint's scale and orientation. Discrimination of TB from healthy images is performed using a support vector machine (SVM). Results show that the SURF and SIFT methods are able to extract local keypoint information in CRs. A linear SVM is found to perform best, with a precision of 88.9% and an AUC of 91% in TB detection for the combined features. Hence, keypoint techniques are found to have clinical relevance for the automated screening of non-specific TB abnormalities in CRs.
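The step of summarising a variable-length keypoint set by statistics of scale and orientation can be sketched as follows; this is an illustrative numpy sketch of that idea, not the authors' exact feature set, and `keypoint_stats` is a hypothetical helper name:

```python
import numpy as np

def keypoint_stats(scales, orientations):
    """Summarise a variable-length SIFT/SURF keypoint set as a
    fixed-length vector: count, mean, std, min, max of the keypoint
    scales, then the same five statistics of the orientations."""
    feats = []
    for v in (np.asarray(scales, dtype=float),
              np.asarray(orientations, dtype=float)):
        feats.extend([v.size, v.mean(), v.std(), v.min(), v.max()])
    return np.array(feats)

# Toy keypoints extracted from one segmented lung image
scales = [1.2, 2.5, 1.8, 3.1]
orientations = [10.0, 250.0, 95.0, 180.0]
vec = keypoint_stats(scales, orientations)
print(vec.shape)  # (10,)
```

A fixed-length vector like this is what makes a standard SVM applicable despite each image yielding a different number of keypoints.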


Author(s):  
Zhe Zhang ◽  
Goldie Nejat

In this paper, a unique landmark identification method is proposed for identifying large distinguishable landmarks for 3D Visual Simultaneous Localization and Mapping (SLAM) in unknown cluttered urban search and rescue (USAR) environments. The novelty of the method is the utilization of both 3D depth images and 2D intensity images. By utilizing a Scale Invariant Feature Transform (SIFT) based approach and incorporating 3D depth imagery, we can achieve more reliable and robust recognition and matching of landmarks across multiple images for 3D mapping of the environment. Preliminary experiments utilizing the proposed methodology verify: (i) its ability to identify clusters of SIFT keypoints in both 3D and 2D images for representation of potential landmarks in the scene, and (ii) the use of the identified landmarks in constructing a 3D map of unknown cluttered USAR environments.
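The idea of grouping SIFT keypoints into clusters that represent candidate landmarks can be sketched with a simple greedy centroid clustering; this is a minimal illustration of the clustering step only (the function name and radius parameter are assumptions, not the paper's algorithm):

```python
import numpy as np

def cluster_keypoints(points, radius=20.0):
    """Greedy single-pass clustering: assign each keypoint to the
    first cluster whose centroid lies within `radius` pixels,
    otherwise start a new cluster. Dense clusters of keypoints are
    candidate landmarks."""
    centroids, members = [], []
    for p in np.asarray(points, dtype=float):
        for i, c in enumerate(centroids):
            if np.linalg.norm(p - c) <= radius:
                members[i].append(p)
                centroids[i] = np.mean(members[i], axis=0)
                break
        else:
            centroids.append(p.copy())
            members.append([p])
    return centroids, members

# Five keypoints forming two spatial groups -> two candidate landmarks
pts = [(10, 10), (12, 11), (300, 200), (305, 198), (11, 9)]
cents, mems = cluster_keypoints(pts, radius=20.0)
print(len(cents))  # 2
```

In the paper's setting, each surviving cluster would additionally carry the depth values of its keypoints, so a landmark can be placed in the 3D map rather than only in the image plane.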


Robotica ◽  
2015 ◽  
Vol 34 (10) ◽  
pp. 2400-2413
Author(s):  
Jaime Boal ◽  
Álvaro Sánchez-Miralles

SUMMARY: In the context of topological mapping, the automatic segmentation of an environment into meaningful and distinct locations is still regarded as an open problem. This paper presents an algorithm to extract places online from image sequences based on the algebraic connectivity of graphs, or Fiedler value, which provides an insight into how well connected several consecutive observations are. The main contribution of the proposed method is that it is a theoretically supported alternative to tuning thresholds on similarities, which is a difficult and environment-dependent task. It can accommodate any type of feature detector and matching procedure, as it only requires non-negative similarities as input, and is therefore able to deal with descriptors of variable length, to which statistical techniques are difficult to apply. The method has been validated in an office environment using exclusively visual information. Two different types of features, a bag-of-words model built from scale invariant feature transform (SIFT) keypoints, and a more complex fingerprint based on vertical lines, color histograms, and a few Star keypoints, are employed to demonstrate that the method can be applied to both fixed and variable length descriptors with similar results.
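The Fiedler value the abstract relies on is a standard quantity: the second-smallest eigenvalue of the graph Laplacian built from the non-negative similarity matrix. A minimal numpy sketch (the thresholds and windowing of the actual algorithm are omitted):

```python
import numpy as np

def fiedler_value(W):
    """Algebraic connectivity of a weighted graph: the second-smallest
    eigenvalue of the Laplacian L = D - W, where W is a symmetric
    non-negative similarity matrix. A value near zero indicates the
    recent observations split into two weakly connected groups,
    i.e. a candidate place boundary."""
    W = np.asarray(W, dtype=float)
    L = np.diag(W.sum(axis=1)) - W
    eigvals = np.linalg.eigvalsh(L)  # ascending order
    return eigvals[1]

# Four mutually similar observations: complete graph with unit weights
W = np.ones((4, 4)) - np.eye(4)
print(round(fiedler_value(W), 6))  # 4.0
```

For the complete graph on n vertices with unit weights the Fiedler value is exactly n, while for a graph that splits into two disconnected groups it drops to zero, which is what makes it usable as a principled segmentation signal.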


Author(s):  
Ayoub Karine ◽  
Abdelmalek Toumi ◽  
Ali Khenchaf ◽  
Mohammed El Hassouni

In this paper, we propose a novel approach to recognize radar targets in inverse synthetic aperture radar (ISAR) and synthetic aperture radar (SAR) images. This approach is based on multiple salient keypoint descriptors (MSKD) and multitask sparse representation based classification (MSRC). To characterize the targets in the radar images, we combine the scale-invariant feature transform (SIFT) and the saliency map. The goal of this combination is to reduce the number of SIFT keypoints and their computation time by retaining only those located in the target area (salient region). Then, we compute the feature vectors of the resulting salient SIFT keypoints (MSKD). This methodology is applied to both training and test images. The MSKD of the training images is used to construct the dictionary of a sparse convex optimization problem. To achieve recognition, we adopt the MSRC, taking each vector in the MSKD as a task. This classifier solves the sparse representation problem for each task over the dictionary and determines the class of the radar image according to all sparse reconstruction errors (residuals). The effectiveness of the proposed approach has been demonstrated by a set of extensive empirical results on ISAR and SAR image databases. The results show the ability of our method to adequately recognize both aircraft and ground targets.
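The residual-based decision rule described above can be sketched as follows. This is a deliberately simplified stand-in: it reconstructs a descriptor from each per-class dictionary with ordinary least squares and picks the class with the smallest residual, whereas the paper's MSRC adds a sparsity constraint and aggregates residuals over many keypoint tasks:

```python
import numpy as np

def classify_by_residual(y, class_dicts):
    """Reconstruct descriptor y from each class's dictionary (columns
    are training descriptors) and pick the class with the smallest
    reconstruction residual. A true sparse solver would add an L1
    penalty; plain least squares keeps the sketch short."""
    residuals = []
    for D in class_dicts:
        x, *_ = np.linalg.lstsq(D, y, rcond=None)
        residuals.append(np.linalg.norm(y - D @ x))
    return int(np.argmin(residuals))

rng = np.random.default_rng(0)
D0 = rng.normal(size=(16, 5))   # class-0 training descriptors
D1 = rng.normal(size=(16, 5))   # class-1 training descriptors
y = D1 @ rng.normal(size=5)     # a test descriptor drawn from class 1
print(classify_by_residual(y, [D0, D1]))  # 1
```

The key property, shared with the full MSRC, is that a descriptor lying in (or near) the span of its own class's training examples yields a near-zero residual for that class and a large one elsewhere.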


2012 ◽  
Vol 151 ◽  
pp. 458-462
Author(s):  
Ming Xin ◽  
Sheng Wei Li ◽  
Miao Hui Zhang

Few works employ SIFT (scale-invariant feature transform) for tracking because it is time-consuming. However, we found that SIFT can be adapted to real-time tracking by applying it to a subarea of the whole image. In this paper, a particle-filter-based method exploits SIFT features to handle challenging scenarios such as partial occlusions, scale variations, and moderate deformations. Rather than brute-force feature extraction over the whole image, we first extract SIFT keypoints in the object search region only once; by matching SIFT features between the object search region and the object template, the number of matched keypoints is obtained, which is used to compute the particle weights. Finally, an optimal estimate of the object location is obtained through the particle filter framework. Comparative experiments with quantitative evaluations are provided, indicating that the proposed method is both robust and fast.
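The weighting-and-estimation step can be sketched in a few lines of numpy; this is an illustrative reduction (the weighting function and names are assumptions, and a real particle filter would also include prediction and resampling):

```python
import numpy as np

def particle_estimate(particles, match_counts):
    """Weight each particle by the number of SIFT keypoints matched
    between its region and the object template, normalise the
    weights, and return the weighted-mean location estimate."""
    w = np.asarray(match_counts, dtype=float)
    w = w / w.sum()
    loc = w @ np.asarray(particles, dtype=float)
    return loc, w

# Three candidate object locations and their matched-keypoint counts
particles = [(100, 50), (110, 55), (300, 300)]
counts = [8, 12, 0]
loc, w = particle_estimate(particles, counts)
print(loc)  # dominated by the two well-matched particles
```

Because keypoints are extracted once over the search region and only matching is repeated per particle, the per-frame cost stays far below running SIFT on the full image for every particle.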


2020 ◽  
Vol 12 (1) ◽  
pp. 22-27
Author(s):  
Miljan Đorđević ◽  
Milan Milivojević ◽  
Ana Gavrovska

Nowadays, advances in face-based modification using DeepFake algorithms have made it possible to replace the face of one person with the face of another. Thus, it is possible not only to make copy-move modifications, but also to apply artificial intelligence and deep learning to transfer face movements from one person to another. Still images can be converted into video sequences; consequently, contemporaries, historical figures, or even animated characters can be brought to life. Deepfakes are becoming more and more convincing, and in some cases they are difficult to detect. In this paper we describe the video sequences we produced (e.g., using the X2Face method and the First Order Motion Model for Image Animation) and perform deepfake video analysis using a SIFT (Scale Invariant Feature Transform) based approach. The experiments show the simplicity of video forgery production, as well as the possible role of SIFT keypoint detection in differentiating deeply forged from original video content.
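A SIFT-based analysis of consecutive frames typically counts descriptor matches that survive Lowe's ratio test; a drop in stable matches around the face region is one signal of manipulation. A minimal numpy sketch of the ratio-test matching step (illustrative only; the paper's exact analysis pipeline is not specified here):

```python
import numpy as np

def ratio_match_count(desc_a, desc_b, ratio=0.75):
    """Count descriptors in desc_a whose nearest neighbour in desc_b
    is clearly better than the second nearest (Lowe's ratio test).
    Fewer stable matches between consecutive frames can indicate the
    unstable keypoints typical of synthesised face regions."""
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    d_sorted = np.sort(d, axis=1)
    return int(np.sum(d_sorted[:, 0] < ratio * d_sorted[:, 1]))

# Toy 2-D "descriptors": three keypoints that reappear almost unchanged
a = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
b = np.array([[0.1, 0.0], [10.0, 0.1], [0.0, 10.2]])
print(ratio_match_count(a, b))  # 3
```

Real SIFT descriptors are 128-dimensional, but the ratio-test logic is identical; in practice one would obtain `desc_a` and `desc_b` from a SIFT implementation such as OpenCV's.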


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Ayman El Mobacher ◽  
Nicholas Mitri ◽  
Mariette Awad

Using local invariant features has been proven in the published literature to be powerful for image processing and pattern recognition tasks. However, in energy-aware environments, these invariant features do not scale easily because of their computational requirements. Motivated to find an efficient building recognition algorithm based on scale invariant feature transform (SIFT) keypoints, we present in this paper uSee, a supervised learning framework which exploits the symmetrical and repetitive structural patterns in buildings to identify subsets of relevant clusters formed by these keypoints. Once an image is captured by a smartphone, uSee preprocesses it using variations in gradient-angle- and entropy-based measures before extracting the building signature and comparing its representative SIFT keypoints against a repository of building images. Experimental results on two different databases confirm the effectiveness of uSee in delivering, at a greatly reduced computational cost, the high matching scores for building recognition that local descriptors can achieve. With only 14.3% of the image SIFT keypoints, uSee exceeded prior results in the literature by achieving an accuracy of 99.1% on the Zurich Building Database with no manual rotation, thus saving significantly on the computational requirements of the task at hand.
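The entropy-based preprocessing mentioned above can be illustrated with the Shannon entropy of a gradient-angle histogram: structured, repetitive facades concentrate gradient angles in a few bins (low entropy), while clutter spreads them out. This sketch shows only that measure, under assumed names; uSee's actual thresholds and combination of measures are not reproduced:

```python
import numpy as np

def angle_entropy(grad_angles, bins=18):
    """Shannon entropy (in bits) of the gradient-angle histogram of an
    image block. Low entropy suggests the dominant, repeated edge
    directions of a building facade; high entropy suggests clutter."""
    hist, _ = np.histogram(np.asarray(grad_angles) % 360.0,
                           bins=bins, range=(0.0, 360.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# A block dominated by vertical/horizontal edges vs. uniform clutter
structured = [0.0] * 50 + [90.0] * 50
clutter = np.linspace(0.0, 359.0, 100)
print(angle_entropy(structured) < angle_entropy(clutter))  # True
```

Scoring blocks this way before running SIFT is what allows a framework like uSee to keep only a small, informative fraction of an image's keypoints.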

