Deep Learning Using Symmetry, FAST Scores, Shape-Based Filtering and Spatial Mapping Integrated with CNN for Large Scale Image Retrieval

Symmetry ◽  
2020 ◽  
Vol 12 (4) ◽  
pp. 612
Author(s):  
Khadija Kanwal ◽  
Khawaja Tehseen Ahmad ◽  
Rashid Khan ◽  
Aliya Tabassum Abbasi ◽  
Jing Li

This article presents symmetry of sampling, scoring, scaling, filtering and suppression over deep convolutional neural networks, combined with a novel content-based image retrieval scheme, to retrieve highly accurate results. For this, ResNet-generated signatures are fused with the proposed image features. In the first step, symmetric sampling is performed on the images around neighborhood key points. Thereafter, rotated sampling patterns and pairwise comparisons are applied, which smooth the images using the standard deviation. The smoothed intensity values are computed from local gradients. Box filtering adjusts the Gaussian approximation with standard deviation at the lowest scale, and the responses are suppressed by a non-maximal technique. The resulting feature sets are scaled at various levels over the parameterized smoothed images. The principal component analysis (PCA)-reduced feature vectors are combined with the ResNet-generated features. Spatial color coordinates are integrated with the convolutional neural network (CNN)-extracted features to comprehensively represent the color channels. The proposed method is experimentally evaluated on challenging datasets including Cifar-100 (10), Cifar-10 (10), ALOT (250), Corel-10000 (10), Corel-1000 (10) and Fashion (15). The presented method shows remarkable results on the texture dataset ALOT, with 250 categories, and on Fashion (15), reports significant results on the Cifar-10 and Cifar-100 benchmarks, and obtains outstanding results on the Corel-1000 dataset in comparison with state-of-the-art methods.
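The fusion step described in this abstract (PCA-reduced handcrafted feature vectors concatenated with a ResNet-generated signature) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: `pca_reduce` and `fuse_signature` are hypothetical names, and random arrays stand in for real local descriptors and ResNet outputs.

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components (via SVD)."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered matrix are the principal axes.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fuse_signature(local_descriptors, cnn_feature, k=8):
    """Mean-pool the PCA-reduced local descriptors and append the CNN feature."""
    reduced = pca_reduce(local_descriptors, k)
    pooled = reduced.mean(axis=0)          # one k-dim vector per image
    return np.concatenate([pooled, cnn_feature])

rng = np.random.default_rng(0)
desc = rng.normal(size=(100, 64))          # stand-in local descriptors
cnn = rng.normal(size=512)                 # stand-in ResNet signature
sig = fuse_signature(desc, cnn, k=8)
print(sig.shape)                           # (520,)
```

The resulting fused vector can then be compared with, e.g., Euclidean distance for retrieval; the paper's pipeline adds several filtering and suppression stages not shown here.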

2017 ◽  
Vol 8 (4) ◽  
pp. 45-58 ◽  
Author(s):  
Mohammed Amin Belarbi ◽  
Saïd Mahmoudi ◽  
Ghalem Belalem

Dimensionality reduction plays an important role in the performance of large-scale image retrieval across different applications. In this paper, we explore Principal Component Analysis (PCA) as a dimensionality reduction method. For this purpose, first, Scale Invariant Feature Transform (SIFT) features and Speeded Up Robust Features (SURF) are extracted as image features. Second, PCA is applied to reduce the dimensions of the SIFT and SURF feature descriptors. By comparing multiple sets of experimental results across different image databases, we conclude that PCA with a suitable reduction range can effectively reduce the computational cost of image features while maintaining high retrieval performance.
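The reduction the abstract describes, applying PCA to a matrix of 128-dimensional SIFT (or 64-dimensional SURF) descriptors, can be sketched as below. This is an assumed minimal version: function names are illustrative and a random matrix stands in for real SIFT descriptors.

```python
import numpy as np

def pca_fit(X, k):
    """Learn a k-dimensional PCA projection from a descriptor matrix X."""
    mean = X.mean(axis=0)
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    var_kept = (S[:k] ** 2).sum() / (S ** 2).sum()  # retained variance ratio
    return mean, Vt[:k], var_kept

def pca_apply(X, mean, components):
    """Project descriptors onto the learned components."""
    return (X - mean) @ components.T

rng = np.random.default_rng(1)
sift = rng.normal(size=(500, 128))     # stand-in for 128-d SIFT descriptors
mean, comps, kept = pca_fit(sift, k=32)
reduced = pca_apply(sift, mean, comps)
print(reduced.shape)                   # (500, 32)
```

The retained-variance ratio is a common way to pick the reduction range: a `k` that keeps, say, 90% of the variance trades little retrieval accuracy for a 4x smaller descriptor.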


2018 ◽  
pp. 1307-1321
Author(s):  
Vinh-Tiep Nguyen ◽  
Thanh Duc Ngo ◽  
Minh-Triet Tran ◽  
Duy-Dinh Le ◽  
Duc Anh Duong

Large-scale image retrieval has shown remarkable potential in real-life applications. The standard approach is based on an Inverted Index, where images are represented using the Bag-of-Words model. However, one major limitation of both the Inverted Index and the Bag-of-Words representation is that they ignore the spatial information of visual words in image representation and comparison. As a result, retrieval accuracy decreases. In this paper, the authors investigate an approach to integrate spatial information into the Inverted Index to improve accuracy while maintaining short retrieval time. Experiments conducted on several benchmark datasets (Oxford Building 5K, Oxford Building 5K+100K and Paris 6K) demonstrate the effectiveness of the proposed approach.
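One simple way to realize the idea above, storing spatial information inside the inverted index, is to keep each visual word's position in its posting and require a coarse geometric agreement at query time. The sketch below is a toy illustration under assumed names (`build_index`, `query`), not the authors' actual indexing scheme.

```python
from collections import defaultdict

def build_index(images):
    """images: {image_id: [(visual_word, (x, y)), ...]}.
    Each posting keeps the word's normalized position, not just the image id."""
    index = defaultdict(list)
    for img_id, words in images.items():
        for word, pos in words:
            index[word].append((img_id, pos))
    return index

def query(index, query_words, cell=0.25):
    """Score images by words that match AND fall in the same coarse grid cell."""
    scores = defaultdict(int)
    for word, (qx, qy) in query_words:
        for img_id, (x, y) in index.get(word, []):
            # Spatial check: quantized cell must agree with the query's cell.
            if (int(qx / cell), int(qy / cell)) == (int(x / cell), int(y / cell)):
                scores[img_id] += 1
    return max(scores, key=scores.get) if scores else None

images = {
    "a": [(1, (0.1, 0.1)), (2, (0.8, 0.8))],
    "b": [(1, (0.9, 0.1)), (2, (0.1, 0.9))],
}
idx = build_index(images)
print(query(idx, [(1, (0.1, 0.15)), (2, (0.8, 0.85))]))  # a
```

The spatial check filters out images that share visual words but in inconsistent layouts, which is exactly the failure mode of plain Bag-of-Words scoring; the extra cost is one comparison per posting, so retrieval stays fast.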



2020 ◽  
Vol 12 (23) ◽  
pp. 3978
Author(s):  
Tianyou Chu ◽  
Yumin Chen ◽  
Liheng Huang ◽  
Zhiqiang Xu ◽  
Huangyuan Tan

Street view image retrieval aims to estimate image locations by querying the nearest-neighbor images with the same scene from a large-scale reference dataset. Query images usually have no location information and are represented by features used to search for similar results. The deep local features (DELF) method shows great performance in the landmark retrieval task, but it extracts so many features that the feature file is too large to load into memory when training the feature index. Memory is limited, and simply removing part of the features causes a great loss of retrieval precision. Therefore, this paper proposes a grid feature-point selection method (GFS) to reduce the number of feature points in each image while minimizing the precision loss. Convolutional Neural Networks (CNNs) are constructed to extract dense features, and an attention module is embedded into the network to score them. GFS divides the image into a grid and selects the features with the highest scores in each local region. Product quantization and an inverted index are used to index the image features and improve retrieval efficiency. The retrieval performance of the method is tested on a large-scale Hong Kong street view dataset, and the results show that GFS reduces feature points by 32.27–77.09% compared with the raw features. In addition, GFS achieves 5.27–23.59% higher precision than other methods.
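The grid selection idea in this abstract, partition the image into cells and keep only the top-scoring feature points per cell, can be sketched as follows. This is a hypothetical minimal version (`grid_feature_select` is an assumed name; random points and scores stand in for CNN-extracted features and attention scores).

```python
import numpy as np

def grid_feature_select(points, scores, img_size, grid=4, per_cell=1):
    """Keep the `per_cell` top-scoring feature points within each grid cell."""
    h, w = img_size
    cells = {}
    for i, (x, y) in enumerate(points):
        key = (int(y * grid / h), int(x * grid / w))   # which cell (row, col)
        cells.setdefault(key, []).append(i)
    keep = []
    for idx in cells.values():
        idx.sort(key=lambda i: scores[i], reverse=True)
        keep.extend(idx[:per_cell])                    # best points per cell
    return sorted(keep)

rng = np.random.default_rng(2)
pts = rng.uniform(0, 256, size=(200, 2))   # stand-in feature-point coordinates
sc = rng.uniform(size=200)                 # stand-in attention scores
kept = grid_feature_select(pts, sc, img_size=(256, 256), grid=4)
print(len(kept))                           # at most grid*grid = 16
```

Capping the kept points per cell bounds the index size regardless of how densely features cluster, while the grid guarantees spatial coverage of the whole image, which is why pruning this way loses less precision than dropping low-scoring features globally.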

