A Survey on Emotional Semantic Mapping in Image Retrieval

2012 ◽  
Vol 532-533 ◽  
pp. 1297-1302
Author(s):  
Zeng Rong Liu ◽  
Zhi Li ◽  
Xue Li Yu

Emotion plays an important role in human perception and decision-making. Human comprehension and perception of images is subjective and does not rely merely on low-level visual features; the semantic gap is regarded as the most important challenge in image retrieval. In this paper, we analyze the emotional features and the emotional semantic description of images within an image emotional-semantics retrieval framework, and summarize the ways and means of mapping from image visual features to emotional semantics. Finally, we discuss the shortcomings of emotional semantic mapping and its development trends.

Author(s):  
Noha Saleeb

Previous research tests and experiments have provided evidence for the disparity between human perception of space in the physical environment and the 3D virtual environment. This could have dire effects on the decision-making process throughout the whole construction lifecycle of an asset due to non-precision of perceived spaces. Results have shown an infidelity in displaying the actual dimensions of the space in the 3D virtual environment, and previous research by the author has identified the magnitude of this disparity. However, there has been inconclusive reasoning behind the causes for this disparity. This chapter aims to investigate and highlight different psychophysical factors that might cause this difference in perception, and compare these factors with previously investigated research.


2012 ◽  
Vol 263-266 ◽  
pp. 2488-2492
Author(s):  
You Ping Zhong ◽  
Biao Peng ◽  
Jun Li ◽  
Chong Yang Zhang

To support content-based image retrieval, MPEG-7 defines content description interfaces for images. In MPEG-7, the Dominant Color Descriptor (DCD) is considered the most important feature and is widely used to describe the color features of an image. To support semantic queries from users, we propose a color-feature semantic mapping method that translates DCD values into semantic color names. The mapping is realized by constructing a table between DCD values and semantic color names. To validate the method, an image retrieval experiment is conducted; comparison with manually indexed descriptions shows the proposed mapping to be effective. Our work is important for automatically generating semantic descriptions of images and thus supporting users' semantic retrieval queries.
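The paper's mapping table itself is not reproduced here, but the idea of translating DCD values into semantic color names can be sketched as a nearest-neighbor lookup against a small table of named colors. The table entries and the use of RGB below are illustrative assumptions, not the authors' actual mapping:

```python
import math

# Hypothetical mapping table: representative RGB values -> semantic color names.
# The paper builds its table over MPEG-7 DCD values; these entries are illustrative.
COLOR_TABLE = {
    (255, 0, 0): "red",
    (0, 128, 0): "green",
    (0, 0, 255): "blue",
    (255, 255, 0): "yellow",
    (255, 255, 255): "white",
    (0, 0, 0): "black",
}

def name_for_color(rgb):
    """Map one dominant-color value to the name of the nearest table entry."""
    nearest = min(COLOR_TABLE, key=lambda c: math.dist(c, rgb))
    return COLOR_TABLE[nearest]

def describe(dcd):
    """dcd: list of (rgb, percentage) pairs, as a DCD extractor might emit.

    Returns the semantic description: (color name, percentage) pairs."""
    return [(name_for_color(rgb), pct) for rgb, pct in dcd]
```

A query such as "mostly blue images" can then be matched against the generated names instead of raw descriptor values.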


2018 ◽  
Vol 6 (9) ◽  
pp. 259-273
Author(s):  
Priyanka Saxena ◽  
Shefali

A Content-Based Image Retrieval system automatically retrieves the images most relevant to a query image by extracting visual features instead of keywords from images. Over the years, many studies have been conducted in this field, but such systems still face the challenges of the semantic gap and the subjectivity of human perception. This paper proposes the extraction of low-level visual features using color moments, the Local Binary Pattern, and Canny edge detection for color, texture, and edge features respectively. The combination of these features is used in conjunction with a Support Vector Machine to reduce retrieval time and improve overall precision. The semantic gap between low- and high-level features is further addressed by incorporating relevance feedback. An average precision of 0.782 was obtained by combining the color, texture, and edge features; 0.896 by using the combined features with the SVM; and 0.882 by using the combined features with relevance feedback. Experimental results show improved performance over other state-of-the-art techniques.
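As an illustration of the color-feature step, the first three color moments (mean, standard deviation, and skewness per channel) can be computed directly from pixel data. This is a generic sketch of the standard color-moment feature, not the paper's exact implementation; the signed-cube-root convention for skewness is one common choice:

```python
import numpy as np

def color_moments(img):
    """First three color moments per channel: mean, std, skewness.

    img: H x W x 3 array; returns a 9-dimensional feature vector.
    Skewness here is the signed cube root of the third central moment,
    a common convention for CBIR color-moment features.
    """
    feats = []
    for ch in range(img.shape[2]):
        x = img[..., ch].astype(np.float64).ravel()
        mean = x.mean()
        std = x.std()
        third = ((x - mean) ** 3).mean()        # third central moment
        skew = np.sign(third) * abs(third) ** (1.0 / 3.0)
        feats.extend([mean, std, skew])
    return np.array(feats)
```

The resulting 9-dimensional vector would be concatenated with the LBP and edge features before being fed to the classifier.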


Author(s):  
Iker Gondra

In content-based image retrieval (CBIR), a set of low-level features are extracted from an image to represent its visual content. Retrieval is performed by image example where a query image is given as input by the user and an appropriate similarity measure is used to find the best matches in the corresponding feature space. This approach suffers from the fact that there is a large discrepancy between the low-level visual features that one can extract from an image and the semantic interpretation of the image’s content that a particular user may have in a given situation. That is, users seek semantic similarity, but we can only provide similarity based on low-level visual features extracted from the raw pixel data, a situation known as the semantic gap. The selection of an appropriate similarity measure is thus an important problem. Since visual content can be represented by different attributes, the combination and importance of each set of features varies according to the user’s semantic intent. Thus, the retrieval strategy should be adaptive so that it can accommodate the preferences of different users. Relevance feedback (RF) learning has been proposed as a technique aimed at reducing the semantic gap. It works by gathering semantic information from user interaction. Based on the user’s feedback on the retrieval results, the retrieval scheme is adjusted. By providing an image similarity measure under human perception, RF learning can be seen as a form of supervised learning that finds relations between high-level semantic interpretations and low-level visual properties. That is, the feedback obtained within a single query session is used to personalize the retrieval strategy and thus enhance retrieval performance. In this chapter we present an overview of CBIR and related work on RF learning. 
We also present our own previous work on an RF-learning-based probabilistic region relevance learning algorithm for automatically estimating the importance of each region in an image based on the user's semantic intent.
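Query-point movement is one simple way relevance feedback adjusts the retrieval scheme: the query's feature vector is shifted toward images the user marks relevant and away from those marked non-relevant. The Rocchio-style update below is a generic illustration of this idea, not Gondra's region relevance algorithm; the weights alpha, beta, and gamma are conventional defaults:

```python
import numpy as np

def rocchio_update(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Rocchio-style query refinement for one feedback round.

    query: feature vector of the current query image.
    relevant / nonrelevant: arrays of feature vectors the user judged.
    Moves the query toward the centroid of relevant examples and away
    from the centroid of non-relevant ones.
    """
    q = alpha * np.asarray(query, dtype=float)
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return q
```

Each feedback round re-ranks the database by similarity to the updated query, so the session progressively reflects the user's semantic intent.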


2011 ◽  
Vol 268-270 ◽  
pp. 1427-1432
Author(s):  
Chang Yong Ri ◽  
Min Yao

This paper presents the key problems in shortening the "semantic gap" between low-level visual features and high-level semantic features in order to implement high-level semantic image retrieval. First, we introduce ontology-based semantic image description and semantic extraction methods based on machine learning. Then, we illustrate image grammar for high-level semantic image understanding and retrieval, including and-or graph and context-based methods. Finally, we discuss the development directions and research emphases in this field.


Author(s):  
Byeongtae Ahn

Image semantic retrieval has been crucial to bridging the "semantic gap" between simple visual features and the abundant semantics conveyed by an image, and effective retrieval using semantics is one of the major challenges in image retrieval. We suggest a semantic retrieval and clustering method for images using an image-annotation user interface. We also design and implement an image semantic search management system that facilitates image management and semantic retrieval, relying fully on the MPEG-7 standard as its information base and using a native XML database, Berkeley DB XML.



2010 ◽  
Vol 159 ◽  
pp. 638-643
Author(s):  
Ying Ma ◽  
Lao Mo Zhang ◽  
Jin Xing Ma

With the development of information and multimedia technology, more and more images appear and have become part of our daily life. Efficient image searching, storage, retrieval, and browsing tools are in high demand in various domains, including face and fingerprint recognition, publishing, medicine, architecture, remote sensing, and fashion. Thus, many image retrieval systems have been developed to meet this need. The aim of content-based retrieval systems is to provide maximum support in bridging the semantic gap between the simplicity of available visual features and the richness of user semantics. In this paper, we discuss the main technologies for reducing the semantic gap: object-ontology, machine learning, and relevance feedback.


Author(s):  
Ramy Ebeid ◽  
Ahmed Salem ◽  
M. B. Senousy

Big Data is increasingly used across almost the entire planet, both online and offline, and is not related only to computers. It sets a new trend in decision-making: analyzing this data predicts outcomes based on the knowledge explored from big data using clustering algorithms. Response time and processing speed present an important challenge when classifying such massive data; the K-means and big-K-means algorithms address this problem. In this paper, the researchers find the best K value using the elbow method, then apply the K-means and big-K-means algorithms on shared memory in two ways, sequential processing and parallel processing, and make a comparative study to find which is best at different data sizes. The analysis was performed in the RStudio environment.
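The elbow method chooses K by running K-means for a range of K values and watching where the within-cluster sum of squares (WCSS) stops dropping sharply. The paper works in R; the sketch below re-expresses the idea in Python with a plain Lloyd's-algorithm K-means, as a generic illustration rather than the authors' big-K-means implementation:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's K-means: returns centroids and the within-cluster
    sum of squares (WCSS) for the final assignment."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centroids[j] = pts.mean(axis=0)
    wcss = ((X - centroids[labels]) ** 2).sum()
    return centroids, wcss

def elbow_curve(X, ks):
    """WCSS for each candidate k; the 'elbow' where the curve flattens
    suggests the best K."""
    return [kmeans(X, k)[1] for k in ks]
```

Plotting the curve over candidate K values and picking the bend is the manual step the elbow method leaves to the analyst.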


2014 ◽  
Vol 23 (2) ◽  
pp. 104-111 ◽  
Author(s):  
Mary Ann Abbott ◽  
Debby McBride

The purpose of this article is to outline a decision-making process and highlight which portions of the augmentative and alternative communication (AAC) evaluation process deserve special attention when deciding which features are required for a communication system in order to provide optimal benefit for the user. The clinician then will be able to use a feature-match approach as part of the decision-making process to determine whether mobile technology or a dedicated device is the best choice for communication. The term mobile technology will be used to describe off-the-shelf, commercially available, tablet-style devices like an iPhone®, iPod Touch®, iPad®, and Android® or Windows® tablet.

