PCA as Dimensionality Reduction for Large-Scale Image Retrieval Systems

2017 ◽  
Vol 8 (4) ◽  
pp. 45-58 ◽  
Author(s):  
Mohammed Amin Belarbi ◽  
Saïd Mahmoudi ◽  
Ghalem Belalem

Dimensionality reduction plays an important role in the performance of large-scale image retrieval systems across different applications. In this paper, we explore Principal Component Analysis (PCA) as a dimensionality reduction method. For this purpose, Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF) descriptors are first extracted as image features. Second, PCA is applied to reduce the dimensions of the SIFT and SURF feature descriptors. By comparing multiple sets of experimental results on different image databases, we conclude that PCA with an appropriately chosen reduced dimensionality can effectively lower the computational cost of image features while maintaining high retrieval performance.
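A minimal sketch (not the authors' code) of the descriptor-reduction step: SIFT descriptors are extracted with OpenCV and projected to a lower dimension with PCA. The file names and the target dimensionality of 32 are illustrative assumptions.

```python
import cv2
import numpy as np
from sklearn.decomposition import PCA

def extract_sift(path):
    """Detect keypoints and return the 128-D SIFT descriptor matrix."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(img, None)
    return desc  # shape: (num_keypoints, 128)

# Pool descriptors from a (hypothetical) database of images.
all_desc = np.vstack([extract_sift(p) for p in ["img1.jpg", "img2.jpg"]])

# Fit PCA on the pooled descriptors and project every descriptor to 32-D,
# trading a small loss of information for much cheaper distance computations.
pca = PCA(n_components=32)
reduced = pca.fit_transform(all_desc)
print(reduced.shape)  # (num_descriptors, 32)
```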

Data ◽  
2018 ◽  
Vol 3 (4) ◽  
pp. 52 ◽  
Author(s):  
Oleksii Gorokhovatskyi ◽  
Volodymyr Gorokhovatskyi ◽  
Olena Peredrii

In this paper, we propose an investigation of the properties of structural image recognition methods in the cluster space of characteristic features. Recognition based on key point descriptors such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded Up Robust Features), and ORB (Oriented FAST and Rotated BRIEF) typically relies on searching for corresponding descriptor values between an input image and all etalon images, which requires many operations and much time. We describe recognition over previously quantized (clustered) sets of descriptor features. Clustering is performed across the complete set of etalon image descriptors and followed by screening, which allows each etalon image to be represented in vector form as a distribution of clusters. Due to such representations, the number of computation and comparison procedures, which are the core of the recognition process, can be reduced by tens of times; correspondingly, the preprocessing stage takes additional time for clustering. The implementation of the proposed approach was tested on the Leeds Butterfly dataset, and the dependence of recognition performance and processing time on the number of clusters was investigated. It was shown that recognition may be performed up to nine times faster, with only a moderate decrease in recognition quality, compared to searching for correspondences between all existing descriptors of the etalon images and the input image without quantization.
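A hedged sketch of the cluster-space idea (not the authors' implementation): ORB descriptors from all etalon images are clustered with k-means, and each image is then represented by its normalized distribution of cluster assignments, so recognition compares short histograms instead of all raw descriptors. File names and the cluster count are placeholders.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = orb.detectAndCompute(img, None)
    return desc.astype(np.float32)

etalon_paths = ["etalon1.jpg", "etalon2.jpg"]          # hypothetical file names
etalon_desc = [descriptors(p) for p in etalon_paths]

# Cluster the complete set of etalon descriptors (k chosen here arbitrarily).
kmeans = KMeans(n_clusters=64, n_init=10).fit(np.vstack(etalon_desc))

def cluster_histogram(desc):
    """Normalized distribution of cluster assignments for one image."""
    labels = kmeans.predict(desc)
    hist = np.bincount(labels, minlength=kmeans.n_clusters).astype(float)
    return hist / hist.sum()

# Recognition: compare the query's cluster histogram against each etalon vector.
etalon_vectors = [cluster_histogram(d) for d in etalon_desc]
query_vector = cluster_histogram(descriptors("query.jpg"))
best = int(np.argmin([np.linalg.norm(query_vector - v) for v in etalon_vectors]))
```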


Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 612
Author(s):  
Khadija Kanwal ◽  
Khawaja Tehseen Ahmad ◽  
Rashid Khan ◽  
Aliya Tabassum Abbasi ◽  
Jing Li

This article presents symmetry of sampling, scoring, scaling, filtering and suppression over deep convolutional neural networks, in combination with a novel content-based image retrieval scheme, to retrieve highly accurate results. For this, ResNet-generated signatures are fused with the innovative image features. In the first step, symmetric sampling is performed on the images around the neighborhood key points. Thereafter, rotated sampling patterns and pairwise comparisons are computed, which yield image smoothing by applying the standard deviation; the smoothed intensity values are calculated from local gradients. Box filtering adjusts the results of the Gaussian approximation with standard deviation to the lowest scale, and the responses are suppressed by a non-maximal suppression technique. The resulting feature sets are scaled at various levels with parameterized smoothed images. The principal component analysis (PCA)-reduced feature vectors are combined with the ResNet-generated features. Spatial color coordinates are integrated with the convolutional neural network (CNN) extracted features to comprehensively represent the color channels. The proposed method is experimentally evaluated on challenging datasets including Cifar-100 (10), Cifar-10 (10), ALOT (250), Corel-10000 (10), Corel-1000 (10) and Fashion (15). The presented method shows remarkable results on the texture dataset ALOT, with 250 categories, and on Fashion (15). The proposed method also reports significant results on the Cifar-10 and Cifar-100 benchmarks. Moreover, outstanding results are obtained for the Corel-1000 dataset in comparison with state-of-the-art methods.
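A hedged illustration of the fusion step only: PCA-reduced handcrafted descriptors are concatenated with ResNet-generated signatures. The array shapes, random placeholders, and normalization choice are assumptions for illustration, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import normalize

# Assume handcrafted features and CNN signatures have already been extracted
# for N images (random placeholders stand in for the real feature matrices).
N = 1000
handcrafted = np.random.rand(N, 512)    # placeholder handcrafted descriptors
resnet_feats = np.random.rand(N, 2048)  # placeholder ResNet signatures

# Reduce the handcrafted part with PCA, L2-normalize both parts so neither
# dominates the fused representation, and concatenate them per image.
reduced = PCA(n_components=128).fit_transform(handcrafted)
fused = np.hstack([normalize(reduced), normalize(resnet_feats)])
print(fused.shape)  # (N, 128 + 2048)
```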


2020 ◽  
Vol 10 (24) ◽  
pp. 8994
Author(s):  
Dong-Hwa Jang ◽  
Kyeong-Seok Kwon ◽  
Jung-Kon Kim ◽  
Ka-Young Yang ◽  
Jong-Bok Kim

Currently, invasive and external radio frequency identification (RFID) devices and pet tags are widely used for dog identification. However, social problems such as abandoned and lost dogs are constantly increasing. A more effective alternative to the existing identification methods is required, and biometrics can be that alternative. This paper proposes an effective dog muzzle recognition method to identify individual dogs. The proposed method consists of preprocessing, feature extraction, matching, and postprocessing. For preprocessing, the proposed resizing and histogram equalization are used. For feature extraction, Scale Invariant Feature Transform (SIFT), Speeded Up Robust Features (SURF), Binary Robust Invariant Scalable Keypoints (BRISK), and Oriented FAST and Rotated BRIEF (ORB) are applied and compared. For matching, the Fast Library for Approximate Nearest Neighbors (FLANN) is used for SIFT and SURF, and the Hamming distance is used for BRISK and ORB. For postprocessing, two techniques to reduce incorrect matches are proposed. The proposed method was evaluated with 55 dog muzzle pattern images acquired from 11 dogs and 990 images augmented by image deformation (i.e., angle, illumination, noise, affine transform). The best Equal Error Rate (EER) of the proposed method was 0.35%, and ORB was the most appropriate for dog muzzle pattern recognition.
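A minimal sketch of the matching stage under stated assumptions: FLANN with a KD-tree index for float SIFT descriptors, and brute-force Hamming matching for binary ORB descriptors. File names, the ratio-test threshold, and FLANN parameters are placeholders, not the paper's exact settings.

```python
import cv2

img1 = cv2.imread("muzzle_a.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("muzzle_b.jpg", cv2.IMREAD_GRAYSCALE)

# SIFT + FLANN (KD-tree index) with Lowe's ratio test to discard ambiguous matches.
sift = cv2.SIFT_create()
_, d1 = sift.detectAndCompute(img1, None)
_, d2 = sift.detectAndCompute(img2, None)
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
good = [m for m, n in flann.knnMatch(d1, d2, k=2) if m.distance < 0.7 * n.distance]

# ORB + Hamming distance (binary descriptors) with brute-force cross-checking.
orb = cv2.ORB_create()
_, b1 = orb.detectAndCompute(img1, None)
_, b2 = orb.detectAndCompute(img2, None)
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
orb_matches = sorted(bf.match(b1, b2), key=lambda m: m.distance)
```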


2015 ◽  
Vol 4 (3) ◽  
pp. 70-89
Author(s):  
Ramesh Chand Pandey ◽  
Sanjay Kumar Singh ◽  
K K Shukla

Copy-Move is one of the most common techniques for digital image tampering or forgery. Copy-Move in an image might be done to duplicate something or to hide an undesirable region. In cases where such images are used for important purposes, such as evidence in a court of law, it is important to verify their authenticity. In this paper the authors propose a novel method for single-region Copy-Move Forgery Detection (CMFD) using Speeded Up Robust Features (SURF), Histogram of Oriented Gradients (HOG), Scale Invariant Feature Transform (SIFT), and hybrid features such as SURF-HOG and SIFT-HOG. SIFT and SURF image features are immune to various transformations such as rotation, scaling, and translation, so they help in detecting Copy-Move regions more accurately compared to other image features. Further, the authors detect multiple-region Copy-Move forgery using SURF and SIFT image features. Experimental results demonstrate commendable performance of the proposed methods.
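A simplified sketch (not the authors' method) of the keypoint-based copy-move idea: SIFT descriptors of an image are matched against themselves, and pairs of highly similar but spatially separated keypoints flag candidate duplicated regions. The file name, ratio threshold, and distance cutoff are assumptions.

```python
import cv2
import numpy as np

img = cv2.imread("suspect.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical file
sift = cv2.SIFT_create()
kps, desc = sift.detectAndCompute(img, None)

bf = cv2.BFMatcher(cv2.NORM_L2)
matches = bf.knnMatch(desc, desc, k=3)  # k=3: the best hit is the keypoint itself

suspicious = []
for m in matches:
    # Skip the trivial self-match, then apply a ratio test on the next two hits.
    cand, alt = m[1], m[2]
    if cand.distance < 0.6 * alt.distance:
        p1 = np.array(kps[cand.queryIdx].pt)
        p2 = np.array(kps[cand.trainIdx].pt)
        if np.linalg.norm(p1 - p2) > 40:  # ignore near-coincident keypoints
            suspicious.append((p1, p2))

print(f"{len(suspicious)} candidate copy-move keypoint pairs")
```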


2013 ◽  
Vol 461 ◽  
pp. 792-800
Author(s):  
Bo Zhao ◽  
Hong Wei Zhao ◽  
Ping Ping Liu ◽  
Gui He Qin

We describe a novel mobile visual search system based on the saliency mechanism and sparse coding principle of the human visual system (HVS). In the feature extraction step, we first divide an image into different regions using the saliency extraction algorithm. Then scale-invariant feature transform (SIFT) descriptors in all regions are extracted, while regional identities are preserved based on their various saliency levels. According to the sparse coding principle in the HVS, we adopt a local neighbor-preserving hash function to establish the binary sparse expression of the SIFT features. In the searching step, the nearest neighbors matched to the hashing codes are processed according to different saliency levels. Matching scores of images in the database are derived from the matching of hashing codes. Subsequently, the matching scores of all levels are weighted by degrees of saliency to obtain the initial set of results. In order to further ensure matching accuracy, we propose an optimized retrieval scheme based on global texture information. We conduct extensive experiments on an actual mobile platform and on large-scale datasets using Corel-1000. The results show that the proposed method outperforms state-of-the-art algorithms in accuracy rate, with no significant increase in the running time of feature extraction and retrieval.
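A hedged illustration of the binary-hashing idea only: the paper's neighbor-preserving hash function is not reproduced here, so a simple random-projection hash stands in for it, mapping 128-D SIFT descriptors to 64-bit codes compared by Hamming distance. All parameters and file names are assumptions.

```python
import cv2
import numpy as np

rng = np.random.default_rng(0)
planes = rng.normal(size=(128, 64))  # 128-D SIFT -> 64-bit binary codes

def binary_codes(desc):
    """Project descriptors onto random hyperplanes and threshold to bits."""
    return (desc @ planes > 0).astype(np.uint8)

def hamming(a, b):
    return np.count_nonzero(a != b, axis=-1)

sift = cv2.SIFT_create()
img_q = cv2.imread("query.jpg", cv2.IMREAD_GRAYSCALE)
img_db = cv2.imread("db_image.jpg", cv2.IMREAD_GRAYSCALE)
_, dq = sift.detectAndCompute(img_q, None)
_, ddb = sift.detectAndCompute(img_db, None)

cq, cdb = binary_codes(dq), binary_codes(ddb)
# Matching score: fraction of query codes whose nearest database code is close.
nearest = np.array([hamming(c, cdb).min() for c in cq])
score = np.mean(nearest < 10)
print(f"matching score: {score:.2f}")
```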


2011 ◽  
Vol 23 (6) ◽  
pp. 1080-1090 ◽  
Author(s):  
Seiji Aoyagi ◽  
Atsushi Kohama ◽  
Yuki Inaura ◽  
Masato Suzuki ◽  
...  

For an indoor mobile robot’s Simultaneous Localization And Mapping (SLAM), a method of processing only one monocular image (640×480 pixels) of the environment is proposed. This method imitates a human’s ability to grasp at a glance the overall situation of a room, i.e., its layout and any objects or obstacles in it. Specific object recognition of a desk, viewed from several camera angles, is dealt with as one example. The proposed method has the following steps. 1) The bag-of-keypoints method is applied to the image to detect the existence of the object in the input image. 2) If the existence of the object is verified, the angle of the object is further detected using the bag-of-keypoints method. 3) The candidates for the projection from the template image to the input image are obtained using the Scale Invariant Feature Transform (SIFT) or edge information, and whether or not the projected area correctly corresponds to the object is checked using an AdaBoost classifier based on various image features such as Haar-like features. Through these steps, the desk is eventually extracted with angle information if it exists in the image.
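A minimal sketch of step 3 only, under stated assumptions: SIFT correspondences between a desk template image and the input image give a candidate projection (a RANSAC-estimated homography), whose projected outline would then be handed to a separate classifier for verification. File names and thresholds are placeholders.

```python
import cv2
import numpy as np

template = cv2.imread("desk_template.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical
scene = cv2.imread("room.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kt, dt = sift.detectAndCompute(template, None)
ks, ds = sift.detectAndCompute(scene, None)

flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
good = [m for m, n in flann.knnMatch(dt, ds, k=2) if m.distance < 0.7 * n.distance]

if len(good) >= 4:
    src = np.float32([kt[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([ks[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # The projected template corners outline the candidate object region, which a
    # classifier (AdaBoost in the paper) would then accept or reject.
    h, w = template.shape
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    projected = cv2.perspectiveTransform(corners, H)
```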


2021 ◽  
Vol 24 (2) ◽  
pp. 78-86
Author(s):  
Zainab N. Sultani ◽  
Ban N. Dhannoon ◽  

Image classification is acknowledged as one of the most critical and challenging tasks in computer vision. The bag of visual words (BoVW) model has proven to be very efficient for image classification tasks since it can effectively represent distinctive image features in vector space. In this paper, BoVW using Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) descriptors is adapted for image classification. We propose a novel image classification system using image local feature information obtained from both the SIFT and ORB local feature descriptors. As a result, the constructed SO-BoVW model presents highly discriminative features, enhancing the classification performance. Experiments on the Caltech-101 and Flowers datasets prove the effectiveness of the proposed method.
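A hedged sketch of a combined SIFT+ORB bag-of-visual-words pipeline: separate vocabularies are learned for each descriptor type and the resulting histograms are concatenated before classification. Vocabulary sizes, the linear SVM, and the file handling are assumptions, not the authors' exact SO-BoVW configuration.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

sift, orb = cv2.SIFT_create(), cv2.ORB_create()

def local_features(path, detector):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = detector.detectAndCompute(img, None)
    return desc.astype(np.float32)

def bovw_histogram(desc, vocab):
    """Quantize descriptors against a vocabulary and return a normalized histogram."""
    labels = vocab.predict(desc)
    hist = np.bincount(labels, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

train_paths, train_labels = ["a.jpg", "b.jpg"], [0, 1]  # placeholder training set

# Separate visual vocabularies for the SIFT and ORB descriptor spaces.
sift_vocab = KMeans(n_clusters=100, n_init=10).fit(
    np.vstack([local_features(p, sift) for p in train_paths]))
orb_vocab = KMeans(n_clusters=100, n_init=10).fit(
    np.vstack([local_features(p, orb) for p in train_paths]))

def so_bovw(path):
    """Concatenated SIFT and ORB bag-of-visual-words representation of one image."""
    return np.hstack([bovw_histogram(local_features(path, sift), sift_vocab),
                      bovw_histogram(local_features(path, orb), orb_vocab)])

X = np.vstack([so_bovw(p) for p in train_paths])
clf = SVC(kernel="linear").fit(X, train_labels)
```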

