Effective Image Representation using Double Colour Histograms for Content-Based Image Retrieval

Informatica ◽  
2021 ◽  
Vol 45 (7) ◽  
Author(s):  
Ezekiel Mensah Martey ◽  
Hang Lei ◽  
Xiaoyu Li ◽  
Obed Appiah


Author(s):  
Noureddine Abbadeni

This chapter describes a human-perception-based approach to content-based image representation and retrieval. We consider textured images and propose to model their textural content with a set of perceptually meaningful features, applying them to content-based image retrieval. We present a new method to estimate a set of perceptual textural features, namely coarseness, directionality, contrast and busyness. The proposed computational measures are based on two representations: the original image representation and the autocovariance function (associated with images) representation. The correspondence of the proposed computational measures to human judgments is shown using a psychometric method based on the Spearman rank-correlation coefficient. The set of computational measures is applied to content-based image retrieval on a large image data set, the well-known Brodatz database. Experimental results show a strong correlation between the proposed computational textural measures and human perceptual judgments. The benchmarking of retrieval performance, done using the recall measure, shows interesting results. Furthermore, merging/fusion of the results returned by each of the two representations is shown to yield a significant improvement in retrieval effectiveness.
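The psychometric validation above relies on the Spearman rank-correlation coefficient. As an illustration only (not the authors' code), a minimal pure-Python version, assuming untied ranks so the closed-form expression applies, might look like:

```python
def ranks(values):
    # Rank each value (1 = smallest); assumes no ties for simplicity.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    # Spearman rank correlation via the closed form for untied data:
    # rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    # where d_i is the difference between the ranks of x_i and y_i.
    n = len(x)
    rx, ry = ranks(x), ranks(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))
```

Applied to a computed texture measure and the corresponding human judgments, a value near 1 indicates strong agreement in ordering, as reported in the abstract.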


2003 ◽  
Vol 03 (01) ◽  
pp. 119-143 ◽  
Author(s):  
ZHIYONG WANG ◽  
ZHERU CHI ◽  
DAGAN FENG ◽  
AH CHUNG TSOI

Content-based image retrieval has become an essential technique in multimedia data management. However, due to the difficulties and complications involved in the various image processing tasks, a robust semantic representation of image content is still very difficult (if not impossible) to achieve. In this paper, we propose a novel content-based image retrieval approach with relevance feedback using adaptive processing of a tree-structured image representation. In our approach, each image is first represented with a quad-tree, which is segmentation free. Then a neural network model with the Back-Propagation Through Structure (BPTS) learning algorithm is employed to learn the tree-structured representation of the image content. This approach, which integrates image representation and similarity measurement in a single framework, is applied to relevance feedback in content-based image retrieval. An initial ranking of the database images is first carried out based on the similarity between the query image and each of the database images according to global features. The user is then asked to categorize the top retrieved images into similar and dissimilar groups. Finally, the BPTS neural network model is used to learn the user's intention for a better retrieval result. This process continues until satisfactory retrieval results are achieved. In the refining process, a fine similarity grading scheme can also be adopted to improve the retrieval performance. Simulations on texture images and scenery pictures have demonstrated promising results that compare favorably with the other relevance feedback methods tested.
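The segmentation-free quad-tree representation above can be sketched in a few lines: recursively split the image into four quadrants until a block is near-homogeneous or a depth limit is reached. The homogeneity threshold `tol` and `max_depth` below are illustrative parameters, not values from the paper:

```python
def build_quadtree(img, x=0, y=0, size=None, depth=0, max_depth=3, tol=10):
    # img: square 2D list of grayscale values with a power-of-two side.
    # A node becomes a leaf storing the mean intensity when the block is
    # near-homogeneous (max - min <= tol) or the depth limit is reached;
    # otherwise it splits into four equal quadrants (NW, NE, SW, SE).
    if size is None:
        size = len(img)
    block = [img[y + j][x + i] for j in range(size) for i in range(size)]
    if depth == max_depth or max(block) - min(block) <= tol:
        return {"leaf": True, "mean": sum(block) / len(block)}
    h = size // 2
    return {"leaf": False, "children": [
        build_quadtree(img, x,     y,     h, depth + 1, max_depth, tol),
        build_quadtree(img, x + h, y,     h, depth + 1, max_depth, tol),
        build_quadtree(img, x,     y + h, h, depth + 1, max_depth, tol),
        build_quadtree(img, x + h, y + h, h, depth + 1, max_depth, tol),
    ]}
```

In the paper, a structure like this (with richer per-node features) is what the BPTS network learns over; this sketch only shows how the tree itself is derived without any segmentation step.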


2019 ◽  
Vol 2019 ◽  
pp. 1-21 ◽  
Author(s):  
Afshan Latif ◽  
Aqsa Rasheed ◽  
Umer Sajid ◽  
Jameel Ahmed ◽  
Nouman Ali ◽  
...  

Multimedia content analysis is applied in different real-world computer vision applications, and digital images constitute a major part of multimedia data. In the last few years, the complexity of multimedia content, especially images, has grown exponentially, and on a daily basis millions of images are uploaded to different archives such as Twitter, Facebook, and Instagram. Searching for a relevant image in an archive is a challenging research problem for the computer vision research community. Most search engines retrieve images on the basis of traditional text-based approaches that rely on captions and metadata. In the last two decades, extensive research has been reported on content-based image retrieval (CBIR), image classification, and analysis. In CBIR and image classification-based models, high-level image visuals are represented in the form of feature vectors that consist of numerical values. The research shows that there is a significant gap between image feature representation and human visual understanding. For this reason, research in this area is focused on reducing the semantic gap between image feature representation and human visual understanding. In this paper, we aim to present a comprehensive review of recent developments in the area of CBIR and image representation. We analyze the main aspects of various image retrieval and image representation models, from low-level feature extraction to recent semantic deep-learning approaches. The important concepts and major research studies based on CBIR and image representation are discussed in detail, and future research directions are outlined to inspire further research in this area.
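As a concrete example of the low-level feature vectors this survey covers (and of the colour-histogram family named in the first title above), a joint RGB colour histogram can be computed by quantising each channel and counting joint occurrences. The bin count and L1 normalisation below are illustrative choices, not taken from any of the listed papers:

```python
def colour_histogram(pixels, bins=4):
    # Quantise each 8-bit RGB channel into `bins` levels and count joint
    # occurrences, yielding a bins**3-dimensional feature vector.
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + (b // step)
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]  # L1-normalised so images of different sizes compare
```

Feature vectors like this are exactly where the semantic gap appears: two images with similar histograms may depict entirely different scenes, which motivates the deep-learning representations the survey goes on to review.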


2020 ◽  
Vol 79 (37-38) ◽  
pp. 26995-27021
Author(s):  
Lorenzo Putzu ◽  
Luca Piras ◽  
Giorgio Giacinto

Abstract Given the great success of Convolutional Neural Networks (CNNs) for image representation and classification tasks, we argue that Content-Based Image Retrieval (CBIR) systems can also leverage CNN capabilities, particularly when Relevance Feedback (RF) mechanisms are employed: on the one hand, to improve the performance of CBIR systems, which is closely tied to the effectiveness of the descriptors used to represent an image, as they aim at providing the user with images similar to an initial query image; on the other hand, to reduce the semantic gap between the similarity perceived by the user and the similarity computed by the machine, by exploiting an RF mechanism in which the user labels the returned images as relevant or not to their interests. Consequently, in this work, we propose a CBIR system based on transfer learning from a CNN trained on a vast image database, thus exploiting the generic image representation that it has already learned. The pre-trained CNN is then also fine-tuned using the RF supplied by the user to reduce the semantic gap. In particular, after the user’s feedback, we propose to tune and then re-train the CNN according to the labelled set of relevant and non-relevant images. We then suggest different strategies to exploit the updated CNN for returning a novel set of images that are expected to be relevant to the user’s needs. Experimental results on different data sets show the effectiveness of the proposed mechanisms in improving the representation power of the CNN with respect to the user’s concept of image similarity. Moreover, the pros and cons of the different approaches can be clearly pointed out, thus providing clear guidelines for implementation in production environments.
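The paper's mechanism fine-tunes the CNN itself after each feedback round. As a much simpler stand-in that illustrates the same feedback loop over fixed descriptors, a classic Rocchio-style query update moves the query vector toward relevant examples and away from non-relevant ones. This is a sketch of the general RF idea, not the authors' method; the weights alpha/beta/gamma are conventional defaults:

```python
def rocchio_update(query, relevant, non_relevant,
                   alpha=1.0, beta=0.75, gamma=0.15):
    # Classic Rocchio relevance-feedback update: shift the query
    # descriptor toward the centroid of user-labelled relevant images
    # and away from the centroid of non-relevant ones.
    dim = len(query)
    def centroid(vecs):
        if not vecs:
            return [0.0] * dim
        return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]
    rc, nc = centroid(relevant), centroid(non_relevant)
    return [alpha * q + beta * r - gamma * n
            for q, r, n in zip(query, rc, nc)]
```

In the CNN setting described above, the analogous step is gradient-based re-training on the labelled relevant/non-relevant sets rather than a linear shift of the query vector.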


Author(s):  
Yonghong Tian ◽  
Shuqiang Jiang ◽  
Tiejun Huang ◽  
Wen Gao

With the rapid growth of image collections, content-based image retrieval (CBIR) has been an active area of research with notable recent progress. However, automatic image retrieval by semantics still remains a challenging problem. In this chapter, the authors describe two promising techniques towards semantic image retrieval: semantic image classification and automatic image annotation. For each technique, four aspects are presented: task definition, image representation, computational models, and evaluation. Finally, the authors briefly discuss the application of these techniques in image retrieval.
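As a toy illustration of automatic image annotation (not the computational models the chapter covers), labels can be propagated to an unlabelled image from its nearest labelled neighbours in feature space. The feature vectors, labels, and choice of k below are all hypothetical:

```python
def annotate(query_vec, labelled, k=3):
    # labelled: list of (feature_vector, label) pairs.
    # Propagate labels from the k nearest labelled images
    # (by squared Euclidean distance); majority vote wins.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(labelled, key=lambda item: dist(query_vec, item[0]))[:k]
    votes = {}
    for _, label in nearest:
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)
```

Real annotation systems of the kind surveyed in the chapter replace both the hand-picked features and the voting rule with learned models, but the task shape (features in, semantic labels out) is the same.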


Author(s):  
Ming Zhang ◽  
Reda Alhajj

Content-Based Image Retrieval (CBIR) aims to search for images that are perceptually similar to the query based on the visual content of the images, without the help of annotations. Current CBIR systems use global features (e.g., color, texture, and shape) as image descriptors, or use features extracted from segmented regions (called region-based descriptors). In the former case, the descriptors are not discriminative enough at the object level and are sensitive to object occlusion and background clutter, and thus fail to give satisfactory results. In the latter case, the features are sensitive to the image segmentation, which is a difficult task in its own right. In addition, region-based descriptors are still not invariant to varying imaging conditions. In this chapter, we look at CBIR from the object detection/recognition point of view and introduce the local feature-based image representation methods recently developed in the object detection/recognition area. These local descriptors are highly distinctive and robust to imaging condition changes. In addition to image representation, we also introduce the other two key issues of CBIR: similarity measurement for image descriptor comparison and the index structure for similarity search.
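Of the three issues named above, similarity measurement is the easiest to make concrete. A minimal sketch of exhaustive similarity search by cosine similarity follows; the descriptor contents and top_k value are illustrative, and real systems replace the linear scan with the index structures the chapter discusses:

```python
import math

def cosine(a, b):
    # Cosine similarity between two image descriptors; 0.0 for a zero vector.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_database(query, database, top_k=5):
    # Exhaustive similarity search: score every database descriptor
    # against the query and return the top_k most similar image IDs.
    scored = sorted(database.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [img_id for img_id, _ in scored[:top_k]]
```

With local feature-based representations, the per-image comparison is more involved (sets of descriptors are matched rather than single vectors), but the outer loop of scoring and ranking is the same.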

