The image retrieval task: implications for the design and evaluation of image databases

1997 ◽  
Vol 3 (1) ◽  
pp. 181-199 ◽  
Author(s):  
Raya Fidel

2021 ◽
Vol 13 (23) ◽  
pp. 4786
Author(s):  
Zhen Wang ◽  
Nannan Wu ◽  
Xiaohan Yang ◽  
Bingqi Yan ◽  
Pingping Liu

As satellite observation technology rapidly develops, the number of remote sensing (RS) images increases dramatically, making RS image retrieval tasks more challenging in terms of both speed and accuracy. Recently, an increasing number of researchers have turned their attention to this issue, particularly to hashing algorithms, which map real-valued data onto a low-dimensional Hamming space and have been widely utilized to respond quickly to large-scale RS image search tasks. However, most existing hashing algorithms emphasize preserving only point-wise or pair-wise similarity, which may lead to inferior approximate nearest neighbor (ANN) search results. To address this problem, we propose a novel triplet ordinal cross entropy hashing (TOCEH) method. In TOCEH, to enhance the preservation of ranking orders across spaces, we establish a tensor graph representing the Euclidean triplet ordinal relationships among RS images and minimize the cross entropy between the probability distribution of this Euclidean similarity graph and that of the Hamming triplet ordinal relation induced by the binary codes. During training, to avoid the non-deterministic polynomial (NP) hard problem, we replace the discrete encoding process with a continuous function. Furthermore, we design a quantization objective function based on the principle of preserving the triplet ordinal relation, which minimizes the loss caused by the continuous relaxation. Comparative RS image retrieval experiments are conducted on three publicly available datasets: the UC Merced Land Use Dataset (UCMD), SAT-4, and SAT-6. The experimental results show that the proposed TOCEH algorithm outperforms many existing hashing algorithms on RS image retrieval tasks.
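The core idea — matching a triplet ordinal distribution in Euclidean space against one in (relaxed) Hamming space via cross entropy — can be sketched as follows. This is a minimal illustration under assumed formulations (softmax over negated distances, tanh-relaxed codes), not the paper's exact objective:

```python
import numpy as np

def triplet_ordinal_probs(anchor, pos, neg):
    """Probability that `pos` is closer to `anchor` than `neg`, derived from
    Euclidean distances via a softmax over negated distances (an assumption)."""
    d_pos = np.linalg.norm(anchor - pos)
    d_neg = np.linalg.norm(anchor - neg)
    p = np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-d_neg))
    return np.array([p, 1.0 - p])

def relaxed_hamming_probs(h_a, h_p, h_n):
    """Same ordinal probability computed in code space, using the standard
    relaxed Hamming distance for tanh-valued (continuous) codes."""
    d_pos = np.sum((h_a - h_p) ** 2) / 4.0
    d_neg = np.sum((h_a - h_n) ** 2) / 4.0
    p = np.exp(-d_pos) / (np.exp(-d_pos) + np.exp(-d_neg))
    return np.array([p, 1.0 - p])

def triplet_ordinal_cross_entropy(q, p, eps=1e-12):
    """Cross entropy between the Euclidean (q) and Hamming (p) distributions;
    minimizing it pushes the codes to preserve the Euclidean ranking order."""
    return float(-np.sum(q * np.log(p + eps)))
```

Training would minimize this loss over many sampled triplets, with the continuous codes binarized (e.g., by sign) after convergence.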


Author(s):  
Henning Müller ◽  
Jayashree Kalpathy-Cramer ◽  
Charles E. Kahn ◽  
William Hatt ◽  
Steven Bedrick ◽  
...  

2014 ◽  
Vol 573 ◽  
pp. 529-536
Author(s):  
T. Kanimozhi ◽  
K. Latha

Image retrieval systems are becoming increasingly popular across all disciplines of image search. To be interactive in real time, an image retrieval system must be accurate, fast, and scalable to large collections of image databases. This paper presents a unique image retrieval method based on the firefly algorithm, which improves both the accuracy and the computation time of the retrieval system. The firefly algorithm is utilized to optimize the retrieval process by searching for near-optimal combinations of the corresponding features and by finding approximately optimal weights for the feature-wise similarities. The proposed method can dynamically reflect the user's intention in the retrieval process by optimizing the objective function. The efficiency of the proposed method is compared with that of other existing image retrieval methods in terms of precision and recall. Its performance is evaluated on images from the Corel and Caltech databases.
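A minimal version of the firefly algorithm used to tune feature-similarity weights might look like the sketch below. The attractiveness update (beta0 * exp(-gamma * r^2)) is the standard firefly formulation; the parameter values and the objective are placeholders, not the paper's settings:

```python
import numpy as np

def firefly_optimize(objective, dim, n_fireflies=10, n_iter=50,
                     beta0=1.0, gamma=1.0, alpha=0.05, rng=None):
    """Minimal firefly algorithm: candidate weight vectors in [0, 1]^dim move
    toward brighter (higher-objective) fireflies, with small random jitter."""
    rng = np.random.default_rng(rng)
    pos = rng.random((n_fireflies, dim))
    light = np.array([objective(p) for p in pos])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)  # attractiveness decays with distance
                    pos[i] += beta * (pos[j] - pos[i]) + alpha * (rng.random(dim) - 0.5)
                    pos[i] = np.clip(pos[i], 0.0, 1.0)
                    light[i] = objective(pos[i])
    best = int(np.argmax(light))
    return pos[best], float(light[best])
```

In the retrieval setting, `objective` would score a candidate weight vector by the precision of the weighted-similarity ranking it induces on feedback examples.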


2018 ◽  
Vol 45 (1) ◽  
pp. 117-135 ◽  
Author(s):  
Amna Sarwar ◽  
Zahid Mehmood ◽  
Tanzila Saba ◽  
Khurram Ashfaq Qazi ◽  
Ahmed Adnan ◽  
...  

Advancements in multimedia technologies have resulted in the growth of image databases. Retrieving images from such databases using their visual attributes is a challenging task because of the close visual appearance among these attributes, which also introduces the issue of the semantic gap. In this article, we recommend a novel method based on the bag-of-words (BoW) model that performs visual word integration of the local intensity order pattern (LIOP) feature and the local binary pattern variance (LBPV) feature to reduce the semantic gap and enhance the performance of content-based image retrieval (CBIR). The recommended method uses the LIOP and LBPV features to build two smaller visual vocabularies (one from each feature), which are then integrated into a larger vocabulary containing the complementary features of both descriptors. This benefits CBIR because a smaller vocabulary improves recall, while a larger vocabulary improves precision. A comparative analysis of the recommended method is performed on three image databases, namely WANG-1K, WANG-1.5K, and Holidays. The experimental analysis on these databases demonstrates robust performance compared with recent CBIR methods.
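The vocabulary-integration step can be illustrated with a toy BoW pipeline: cluster each descriptor set into its own vocabulary, then concatenate the two histograms. This sketch uses a tiny k-means and random arrays as stand-ins for LIOP and LBPV descriptors; it is not the paper's implementation:

```python
import numpy as np

def kmeans(X, k, n_iter=20, rng=0):
    """Tiny k-means: the cluster centers act as the visual words of a vocabulary."""
    rng = np.random.default_rng(rng)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):  # skip empty clusters
                centers[c] = X[labels == c].mean(axis=0)
    return centers

def bow_histogram(descriptors, vocab):
    """Assign each descriptor to its nearest visual word; L1-normalized histogram."""
    labels = np.argmin(((descriptors[:, None, :] - vocab[None]) ** 2).sum(-1), axis=1)
    hist = np.bincount(labels, minlength=len(vocab)).astype(float)
    return hist / hist.sum()

def integrated_bow(liop_desc, lbpv_desc, vocab_liop, vocab_lbpv):
    """Integrated representation: concatenate the histograms from the two
    vocabularies, so the final vector carries both descriptors' words."""
    return np.concatenate([bow_histogram(liop_desc, vocab_liop),
                           bow_histogram(lbpv_desc, vocab_lbpv)])
```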


2021 ◽  
Vol 32 (4) ◽  
pp. 1-13
Author(s):  
Xia Feng ◽  
Zhiyi Hu ◽  
Caihua Liu ◽  
W. H. Ip ◽  
Huiying Chen

In recent years, deep learning has achieved remarkable results in the text-image retrieval task. However, most approaches consider only global image features and ignore vital local information, which leads to poor matching with the text. Considering that object-level image features can help the matching between text and image, this article proposes a text-image retrieval method that fuses salient image feature representations. Fusing salient features at the object level improves the understanding of image semantics and thus improves the performance of text-image retrieval. The experimental results show that the proposed method is comparable to the latest methods, and the recall of some retrieval results exceeds that of current work.
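One simple way to combine a global image embedding with object-level (salient) features is weighted concatenation after pooling. The mean pooling and the `alpha` weight below are illustrative assumptions, not the fusion scheme the article actually uses:

```python
import numpy as np

def fuse_features(global_feat, object_feats, alpha=0.5):
    """Fuse a global image embedding with pooled object-level features.
    `object_feats` is an (n_objects, d) array of salient-region embeddings."""
    pooled = object_feats.mean(axis=0)              # aggregate object features
    fused = np.concatenate([alpha * global_feat,    # weighted global part
                            (1.0 - alpha) * pooled])  # weighted local part
    return fused / np.linalg.norm(fused)            # unit-normalize for cosine matching
```

The fused vector can then be matched against a text embedding with cosine similarity, as is common in text-image retrieval.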


2020 ◽  
Vol 12 (23) ◽  
pp. 3978
Author(s):  
Tianyou Chu ◽  
Yumin Chen ◽  
Liheng Huang ◽  
Zhiqiang Xu ◽  
Huangyuan Tan

Street view image retrieval aims to estimate an image's location by querying the nearest-neighbor images of the same scene from a large-scale reference dataset. Query images usually carry no location information and are represented by features used to search for similar results. The deep local features (DELF) method shows great performance in the landmark retrieval task, but it extracts so many features that the feature file becomes too large to load into memory when training the feature index. Since memory is limited and simply discarding part of the features causes a great loss of retrieval precision, this paper proposes a grid feature-point selection method (GFS) to reduce the number of feature points in each image while minimizing the precision loss. Convolutional neural networks (CNNs) are constructed to extract dense features, and an attention module is embedded into the network to score them. GFS divides the image into a grid and selects the features with the highest scores in each local region. Product quantization and an inverted index are used to index the image features and improve retrieval efficiency. The retrieval performance of the method is tested on a large-scale Hong Kong street view dataset. The results show that GFS reduces feature points by 32.27–77.09% compared with the raw features, and achieves 5.27–23.59% higher precision than other methods.
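The grid selection step itself is straightforward to sketch: partition the image into cells and keep only the top-scoring feature points per cell, so retained features stay spatially spread out. The grid size and per-cell quota below are placeholders, not the paper's parameters:

```python
import numpy as np

def grid_feature_selection(points, scores, img_w, img_h, grid=4, top_per_cell=2):
    """Keep only the `top_per_cell` highest-scoring feature points in each cell
    of a grid x grid partition of the image.
    `points` is (n, 2) pixel coordinates; `scores` is (n,) attention scores."""
    cell_w, cell_h = img_w / grid, img_h / grid
    cx = np.minimum((points[:, 0] // cell_w).astype(int), grid - 1)
    cy = np.minimum((points[:, 1] // cell_h).astype(int), grid - 1)
    cell_id = cy * grid + cx
    keep = []
    for c in np.unique(cell_id):
        idx = np.where(cell_id == c)[0]
        best = idx[np.argsort(scores[idx])[::-1][:top_per_cell]]  # highest scores first
        keep.extend(int(i) for i in best)
    return sorted(keep)
```

Bounding the output at `grid * grid * top_per_cell` points caps the index size regardless of how many raw features the extractor produces.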


2008 ◽  
Vol 30 (6) ◽  
pp. 1003-1013 ◽  
Author(s):  
N. Alajlan ◽  
M.S. Kamel ◽  
G.H. Freeman

2015 ◽  
Vol 39 ◽  
pp. 55-61 ◽  
Author(s):  
Jayashree Kalpathy-Cramer ◽  
Alba García Seco de Herrera ◽  
Dina Demner-Fushman ◽  
Sameer Antani ◽  
Steven Bedrick ◽  
...  

2021 ◽  
Vol 12 (2) ◽  
Author(s):  
João V. O. Novaes ◽  
Lúcio F. D. Santos ◽  
Luiz Olmes Carvalho ◽  
Daniel De Oliveira ◽  
Marcos V. N. Bedo ◽  
...  

Similarity searches can be modeled by means of distances following the Metric Spaces Theory and constitute a fast and explainable query mechanism behind content-based image retrieval (CBIR) tasks. However, classical distance-based queries, e.g., range and k-nearest neighbor queries, may be unsuitable for exploring large datasets because the retrieved elements are often similar among themselves. Although similarity searching can be enriched by imposing rules that foster result diversification, fine-tuning the diversity query is still an open issue, usually carried out through a non-optimal, computationally expensive inspection. This paper introduces J-EDA, a practical workbench implemented in Java that supports the tuning of similarity and diversity search parameters by enabling the automatic, parallel exploration of multiple search settings for a user-posed content-based image retrieval task. J-EDA implements a wide variety of classical and diversity-driven search queries, as well as many CBIR settings such as image feature extractors, distance functions, and relevance feedback techniques. Accordingly, users can define multiple query settings and inspect their performance to spot the most suitable parameterization for the content-based image retrieval problem at hand. The workbench reports experimental performance with several internal and external evaluation metrics, such as precision × recall (P × R) and mean average precision (mAP), calculated over either incremental or batch procedures performed with or without human interaction.
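To see why diversity rules change a plain k-NN result, consider a common greedy diversification heuristic: scan candidates in order of distance to the query and keep one only if it is at least a separation threshold away from every result kept so far. This is a generic sketch of that family of queries, not J-EDA's specific algorithms:

```python
import numpy as np

def diversified_knn(query, data, k, min_sep):
    """Greedy diversified k-NN over a Euclidean metric space: nearest candidates
    are admitted only if they are >= `min_sep` from all previously kept results."""
    d = np.linalg.norm(data - query, axis=1)
    for i in np.argsort(d):                      # candidates by distance to query
        pass  # placeholder removed below
    result = []
    for i in np.argsort(d):
        if all(np.linalg.norm(data[i] - data[j]) >= min_sep for j in result):
            result.append(int(i))
        if len(result) == k:
            break
    return result
```

Tuning `min_sep` (and `k`) per dataset is exactly the kind of parameter sweep a workbench like J-EDA automates, scoring each setting with metrics such as P × R or mAP.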

