Improving Image Retrieval Using Context-Aware Saliency Areas
The Context-Aware Saliency (CA) model is a recent model for saliency detection, but it has a strong limitation: it is very time consuming. This paper improves on this shortcoming with a faster variant, named Fast-CA, and proposes a novel framework for image representation and retrieval. The proposed framework derives from Fast-CA and the multi-texton histogram: the mechanisms of visual attention are simulated and used to detect the saliency areas of an image, and a very simple threshold method is then adopted to detect the dominant saliency areas. Color, texture, and edge features are further extracted to describe the image content within the dominant saliency areas, and these features are integrated into a single representation, called the dominant saliency areas histogram (DSAH), which is used for image retrieval. Experimental results indicate that our algorithm outperforms the multi-texton histogram (MTH) and edge histogram descriptors (EHD) on the Corel dataset of 10,000 natural images.
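As a rough illustration of the pipeline described above, the sketch below thresholds a saliency map to obtain a dominant-saliency mask and then builds a concatenated per-channel histogram over the masked region. The function names, the fixed global threshold, and the bin counts are illustrative assumptions, not the authors' actual method or code.

```python
import numpy as np

def dominant_saliency_mask(saliency, threshold=0.5):
    # Keep pixels whose saliency exceeds a fraction of the maximum.
    # A simple global threshold stands in for the paper's rule, which
    # the abstract describes only as "a very simple threshold method".
    s = saliency / (saliency.max() + 1e-12)  # normalize to [0, 1]
    return s >= threshold

def masked_color_histogram(image, mask, bins=8):
    # Quantize each color channel inside the mask and concatenate the
    # normalized per-channel histograms into one feature vector
    # (a hypothetical stand-in for the DSAH descriptor).
    feats = []
    for c in range(image.shape[2]):
        vals = image[:, :, c][mask]
        hist, _ = np.histogram(vals, bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# Toy example with a random image and a random saliency map.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3))
sal = rng.random((64, 64))
mask = dominant_saliency_mask(sal, threshold=0.5)
feat = masked_color_histogram(img, mask)
print(feat.shape)  # 3 channels x 8 bins
```

In a retrieval setting, such feature vectors would be compared between a query image and each database image with a histogram distance (e.g. L1), ranking the database by similarity.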