Combination of Local Feature Extraction for Image Retrieval

Author(s): S. Sankara Narayanan, D. Vinod, Suganya Athisayamani, A. Robert Singh

2018 ◽ Vol 11 (1) ◽ pp. 42
Author(s): Ahmad Wahyu Rosyadi, Renest Danardono, Siprianus Septian Manek, Agus Zainal Arifin

One technique in region-based image retrieval (RBIR) is to compare the global feature of the entire image with the local features of the image's sub-blocks in both the query and database images. The chosen sub-block must be able to capture objects of varying size and location, so a sub-block with flexible size and location is needed. We propose a new method for local feature extraction that determines the flexible size and location of the sub-block based on the transition region. Global features of both the query and database images are extracted using invariant moments. Local features of the database and query images are extracted using a hue, saturation, and value (HSV) histogram and local binary patterns (LBP). Extracting the local feature of the sub-block in the query image takes several steps: first, preprocessing is performed to obtain the transition region; then the flexible sub-block is determined from that region; finally, the local feature of the sub-block is extracted. The result is the set of retrieved images ordered from most to least similar to the query image. Local feature extraction with the proposed method is effective for image retrieval, with precision and recall values of 57%.
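The two local descriptors named in the abstract, an HSV colour histogram and an LBP texture histogram, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the bin counts, the uniform quantisation, and the basic 8-neighbour LBP variant are all assumptions for the example.

```python
import numpy as np

def lbp_histogram(gray, bins=256):
    """Basic 8-neighbour local binary pattern histogram (no interpolation,
    radius 1); `bins` covers the 256 possible 8-bit codes."""
    c = gray[1:-1, 1:-1]                      # centre pixels
    code = np.zeros(c.shape, dtype=np.int32)
    # clockwise neighbour offsets starting at the top-left pixel
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        nb = gray[1 + dy:gray.shape[0] - 1 + dy,
                  1 + dx:gray.shape[1] - 1 + dx]
        code |= (nb >= c).astype(np.int32) << bit   # set bit if neighbour >= centre
    hist, _ = np.histogram(code, bins=bins, range=(0, bins))
    return hist / hist.sum()

def hsv_histogram(rgb, bins=(8, 4, 4)):
    """Quantised HSV histogram of an RGB image with values in [0, 1].
    The (8, 4, 4) binning is illustrative, not from the paper."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(-1), rgb.min(-1)
    v = mx
    s = np.where(mx > 0, (mx - mn) / np.maximum(mx, 1e-12), 0.0)
    d = np.maximum(mx - mn, 1e-12)
    h = np.select([mx == r, mx == g],
                  [((g - b) / d) % 6, (b - r) / d + 2],
                  (r - g) / d + 4) / 6.0        # hue scaled to [0, 1)
    hist, _ = np.histogramdd(np.stack([h, s, v], -1).reshape(-1, 3),
                             bins=bins, range=[(0, 1), (0, 1), (0, 1)])
    hist = hist.ravel()
    return hist / hist.sum()

def subblock_feature(rgb):
    """Concatenate both descriptors into one local feature vector for a sub-block."""
    return np.concatenate([hsv_histogram(rgb), lbp_histogram(rgb.mean(-1))])
```

In a retrieval setting, `subblock_feature` would be evaluated on each flexible sub-block and compared against database features with a histogram distance such as L1 or chi-square.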


2021 ◽ Vol 13 (22) ◽ pp. 4518
Author(s): Xin Zhao, Jiayi Guo, Yueting Zhang, Yirong Wu

The semantic segmentation of remote sensing images requires distinguishing local regions of different classes while exploiting a uniform global representation of same-class instances. These requirements make it necessary for segmentation methods to extract discriminative local features between different classes and to learn representative features shared by all instances of a given class. While common deep convolutional neural networks (DCNNs) can effectively focus on local features, their limited receptive field prevents them from obtaining consistent global information. In this paper, we propose a memory-augmented transformer (MAT) to effectively model both local and global information. The feature extraction pipeline of the MAT is split into a memory-based global relationship guidance module and a local feature extraction module. The local feature extraction module consists mainly of a transformer, which extracts features from the input images. The global relationship guidance module maintains a memory bank for consistent encoding of global information, and global guidance is performed through memory interaction. Bidirectional information flow between the global and local branches is carried by a memory-query module and a memory-update module, respectively. Experimental results on the ISPRS Potsdam and ISPRS Vaihingen datasets demonstrate that our method performs competitively with state-of-the-art methods.
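The bidirectional memory interaction described above can be sketched with attention-style read and write operations. The abstract does not specify the modules' internals, so the function names (`memory_query`, `memory_update`), the residual read, and the moving-average write below are all hypothetical choices for illustration, assuming a memory bank of K global slots and N local feature vectors of equal dimension.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_query(features, memory):
    """Read: inject global class context from the memory bank (K, D)
    into the local features (N, D) via scaled dot-product attention."""
    attn = softmax(features @ memory.T / np.sqrt(memory.shape[1]))  # (N, K)
    return features + attn @ memory      # residual global guidance

def memory_update(features, memory, momentum=0.9):
    """Write: refresh each memory slot with an attention-pooled summary of
    the current local features, blended by a moving average (one possible
    update scheme; the paper's actual rule may differ)."""
    attn = softmax(memory @ features.T / np.sqrt(features.shape[1]))  # (K, N)
    pooled = attn @ features             # (K, D) summary per slot
    return momentum * memory + (1.0 - momentum) * pooled
```

One pass of the bidirectional flow would call `memory_query` to guide the local branch, then `memory_update` to keep the global bank consistent across images.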

