Improving Image Retrieval Using the Context-Aware Saliency Areas

2015 ◽  
Vol 734 ◽  
pp. 596-599 ◽  
Author(s):  
Deng Ping Fan ◽  
Juan Wang ◽  
Xue Mei Liang

The Context-Aware Saliency (CA) model, a recent model for saliency detection, has a strong limitation: it is very time consuming. This paper addresses that shortcoming with an improved model, Fast-CA, and proposes a novel framework for image representation and retrieval. The proposed framework builds on Fast-CA and the multi-texton histogram: the mechanisms of visual attention are simulated and used to detect the salient areas of an image, and a very simple threshold method is then adopted to identify the dominant saliency areas. Color, texture, and edge features are extracted to describe the image content within the dominant saliency areas and are integrated into a single representation, called the dominant saliency areas histogram (DSAH), which is used for image retrieval. Experimental results indicate that our algorithm outperforms the multi-texton histogram (MTH) and edge histogram descriptor (EHD) on the Corel dataset of 10,000 natural images.
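As a rough illustration of the pipeline the abstract describes (saliency thresholding followed by feature extraction inside the dominant salient areas), the following Python sketch is a minimal, hypothetical reconstruction; the saliency input, bin counts, thresholding rule, and edge feature are assumptions, not the authors' published DSAH implementation.

```python
import numpy as np

def dominant_saliency_histogram(image, saliency, threshold=0.5, bins=16):
    """Hypothetical sketch of a dominant-saliency-areas histogram (DSAH).

    image    : H x W x 3 uint8 RGB array
    saliency : H x W float array in [0, 1], e.g. from a Fast-CA-like detector
    """
    # Simple threshold to keep only the dominant salient areas.
    mask = saliency >= threshold

    # Colour feature: per-channel histograms restricted to the salient mask.
    color_hist = np.concatenate([
        np.histogram(image[..., c][mask], bins=bins, range=(0, 255))[0]
        for c in range(3)
    ])

    # Crude texture/edge feature: gradient-magnitude histogram inside the mask.
    gray = image.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_hist = np.histogram(np.hypot(gx, gy)[mask], bins=bins)[0]

    # Integrate the features into one entity and L1-normalise.
    hist = np.concatenate([color_hist, edge_hist]).astype(float)
    return hist / max(hist.sum(), 1e-12)
```

Retrieval would then rank database images by a histogram distance (e.g. L1) between their DSAH vectors and that of the query.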

Author(s):  
Annalisa Appice ◽  
Angelo Cannarile ◽  
Antonella Falini ◽  
Donato Malerba ◽  
Francesca Mazzia ◽  
...  

Saliency detection mimics the natural visual attention mechanism that identifies an image region as salient when it attracts visual attention more than the background. This image analysis task underpins many important applications in fields such as military science, ocean research, resource exploration, and disaster and land-use monitoring. Although hundreds of models have been proposed for saliency detection in colour images, there is still considerable room for improving saliency detection performance in hyperspectral imaging analysis. In the present study, an ensemble learning methodology for saliency detection in hyperspectral imagery datasets is presented. It enhances the saliency assignments yielded by a robust colour-based technique with new saliency information extracted by taking advantage of the abundance of spectral information in multiple hyperspectral images. The experiments performed with the proposed methodology provide encouraging results, also in comparison with several competitors.
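The abstract does not spell out the ensemble rule, so the sketch below is only one plausible reading: compute a saliency map per spectral band, average them, and blend the result with a colour-based saliency assignment. The function names, the averaging rule, and the blending weight are all assumptions.

```python
import numpy as np

def ensemble_band_saliency(cube, band_saliency_fn, colour_saliency=None, weight=0.5):
    """Hypothetical ensemble of per-band saliency maps for a hyperspectral cube.

    cube             : H x W x B array (B spectral bands)
    band_saliency_fn : callable mapping one H x W band to an H x W saliency map
    colour_saliency  : optional H x W map from a colour-based detector to enhance
    """
    B = cube.shape[2]
    maps = np.stack([band_saliency_fn(cube[..., b]) for b in range(B)], axis=0)

    # Simple ensemble rule: average the per-band saliency maps, then rescale.
    spectral_saliency = maps.mean(axis=0)
    spectral_saliency /= max(spectral_saliency.max(), 1e-12)

    if colour_saliency is None:
        return spectral_saliency
    # Blend the colour-based assignment with the spectral evidence.
    return weight * colour_saliency + (1.0 - weight) * spectral_saliency
```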


Author(s):  
Noureddine Abbadeni

This chapter describes a human-perception-based approach to content-based image representation and retrieval. We consider textured images and propose to model the textural content of images with a set of features having a perceptual meaning, and to apply them to content-based image retrieval. We present a new method to estimate a set of perceptual textural features, namely coarseness, directionality, contrast, and busyness. The proposed computational measures are based on two representations: the original image representation and the autocovariance function (associated with images) representation. The correspondence of the proposed computational measures to human judgments is shown using a psychometric method based on the Spearman rank-correlation coefficient. The set of computational measures is applied to content-based image retrieval on a large image data set, the well-known Brodatz database. Experimental results show a strong correlation between the proposed computational textural measures and human perceptual judgments. The benchmarking of retrieval performance, done using the recall measure, shows interesting results. Furthermore, merging/fusing the results returned by each of the two representations is shown to yield a significant improvement in retrieval effectiveness.
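The psychometric validation step named in the abstract (Spearman rank correlation between a computational measure and human judgments) can be sketched directly; the helper name and the numbers in the usage example below are illustrative, not data from the chapter.

```python
from scipy.stats import spearmanr

def perceptual_agreement(computational_scores, human_rankings):
    """Correlate a computational texture measure (e.g. coarseness scores
    per image) with the corresponding human judgments using the Spearman
    rank-correlation coefficient."""
    rho, p_value = spearmanr(computational_scores, human_rankings)
    return rho, p_value

# Example with made-up numbers: a strong positive rank correlation.
rho, p = perceptual_agreement([0.12, 0.45, 0.33, 0.80], [1, 3, 2, 4])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```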


Array ◽  
2020 ◽  
Vol 7 ◽  
pp. 100027
Author(s):  
S. Sathiamoorthy ◽  
A. Saravanan ◽  
R. Ponnusamy

2018 ◽  
Vol 70 (1) ◽  
pp. 47-65 ◽  
Author(s):  
Wei Lu ◽  
Heng Ding ◽  
Jiepu Jiang

Purpose: The purpose of this paper is to utilize document expansion techniques to improve image representation and retrieval. The paper proposes a concise framework for tag-based image retrieval (TBIR). Design/methodology/approach: The proposed approach includes three core components: a strategy for selecting expansion (similar) images from the whole corpus (e.g. cluster-based or nearest neighbor-based); a technique for assessing the image similarity used to select expansion images (text, image, or mixed); and a model for matching the expanded image representation against the search query (merged or separate). Findings: The results show that applying the proposed method yields significant improvements in effectiveness; the method performs better at the top of the ranking and greatly improves some topics that score zero in the baseline. Moreover, the nearest neighbor-based expansion strategy outperforms the cluster-based strategy, using image features to select expansion images is better than using text features in most cases, and the separate method for calculating the augmented probability P(q|RD) is able to erase the negative influence of erroneous images in RD. Research limitations/implications: Although these methods only improve the top of the ranking rather than the entire ranked list, TBIR on mobile platforms can still benefit from this approach. Originality/value: Unlike former studies that address sparsity, vocabulary mismatch, and tag relatedness in TBIR individually, the approach proposed in this paper addresses all of these issues within a single document expansion framework. It is a comprehensive investigation of document expansion techniques in TBIR.
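A minimal sketch of nearest-neighbour document expansion in this spirit: augment an image's sparse tag set with tags drawn from its k most similar images, then score the original and expanded representations separately and interpolate. The data structures, the scoring function, and the interpolation weight are assumptions standing in for the paper's P(q|RD) model, not its actual formulation.

```python
from collections import Counter

def expand_tags(image_id, tag_index, neighbours, k=5):
    """Hypothetical nearest-neighbour document expansion for TBIR.

    tag_index  : dict image_id -> list of tags
    neighbours : dict image_id -> image_ids ranked by visual (or textual)
                 similarity, most similar first
    """
    own_tags = Counter(tag_index.get(image_id, []))
    expansion = Counter()
    for nb in neighbours.get(image_id, [])[:k]:
        expansion.update(tag_index.get(nb, []))
    return own_tags, expansion

def score(query_terms, own_tags, expansion, alpha=0.7):
    """'Separate' matching: score original and expanded representations
    independently, then interpolate (a stand-in for the augmented P(q|RD))."""
    def match(counts):
        total = sum(counts.values()) or 1
        return sum(counts[t] for t in query_terms) / total
    return alpha * match(own_tags) + (1.0 - alpha) * match(expansion)
```

Keeping the two scores separate, as here, is what lets an erroneous expansion image only dilute the expansion component rather than corrupt the image's own tag evidence.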


2003 ◽  
Vol 03 (01) ◽  
pp. 119-143 ◽  
Author(s):  
ZHIYONG WANG ◽  
ZHERU CHI ◽  
DAGAN FENG ◽  
AH CHUNG TSOI

Content-based image retrieval has become an essential technique in multimedia data management. However, due to the difficulties and complications involved in the various image processing tasks, a robust semantic representation of image content is still very difficult (if not impossible) to achieve. In this paper, we propose a novel content-based image retrieval approach with relevance feedback using adaptive processing of a tree-structured image representation. In our approach, each image is first represented with a quad-tree, which is segmentation free. Then a neural network model with the Back-Propagation Through Structure (BPTS) learning algorithm is employed to learn the tree-structured representation of the image content. This approach, which integrates image representation and similarity measurement in a single framework, is applied to relevance feedback in content-based image retrieval. An initial ranking of the database images is first carried out based on the similarity between the query image and each database image according to global features. The user is then asked to categorize the top retrieved images into similar and dissimilar groups. Finally, the BPTS neural network model is used to learn the user's intention for a better retrieval result. This process continues until satisfactory retrieval results are achieved. In the refining process, a fine similarity grading scheme can also be adopted to improve the retrieval performance. Simulations on texture images and scenery pictures have demonstrated promising results that compare favorably with the other relevance feedback methods tested.
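The segmentation-free quad-tree representation can be sketched as a recursive split into four equal quadrants, with a simple feature vector stored at every node; the depth and the choice of mean colour as the node feature are assumptions for illustration, and the BPTS network that consumes the tree is only indicated in a comment.

```python
import numpy as np

def quad_tree(image, depth=2):
    """Hypothetical segmentation-free quad-tree representation: each node
    stores a feature vector (here, mean colour) of its region, and every
    internal node splits its region into four equal quadrants.

    image : H x W x 3 array; depth : number of split levels
    """
    node = {"feature": image.reshape(-1, image.shape[-1]).mean(axis=0),
            "children": []}
    if depth > 0:
        h, w = image.shape[0] // 2, image.shape[1] // 2
        quadrants = [image[:h, :w], image[:h, w:], image[h:, :w], image[h:, w:]]
        node["children"] = [quad_tree(q, depth - 1) for q in quadrants]
    return node

# A BPTS-style network would then process this tree bottom-up, combining
# the children's encodings with the node feature at every level to produce
# a fixed-length encoding used for similarity learning.
```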

