Image Searching System

2021 ◽  
Vol 6 (2) ◽  
pp. 161-167
Author(s):  
Eduard Yakubchykt ◽  
Iryna Yurchak

Finding images similar to a visual sample is a difficult AI task to which many works have been devoted. The problem is to determine the essential properties of images at the low and higher semantic levels. Based on them, a feature vector is built, which is then used to compare pairs of images. Each pair always includes an image from the collection and the sample image that the user is looking for. The result of the comparison is a quantity called the visual relativity of the images. Image properties are called features and are evaluated by computational algorithms. Image features can be divided into low-level and high-level. Low-level features include basic colors, textures, shapes, and significant elements of the whole image. These features are used as part of more complex recognition tasks. The main progress is in the definition of high-level features, which is associated with understanding the content of images. In this paper, modern algorithms for finding similar images in large multimedia databases are studied. The main problems of determining high-level image features, algorithms for overcoming them, and the application of effective algorithms are described. The algorithms used to quickly determine semantic content and improve the accuracy of similar-image search are presented. The aim of this work is to conduct a comparative analysis of modern image retrieval algorithms and identify their weaknesses and strengths.
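As a concrete illustration of the pipeline this abstract describes (not code from the paper), the sketch below builds a simple low-level feature vector for each image and scores each (sample image, collection image) pair with a similarity value; the joint RGB histogram and cosine measure are illustrative stand-ins for the algorithms the paper surveys.

```python
import numpy as np

def color_histogram(image: np.ndarray, bins: int = 8) -> np.ndarray:
    """Low-level feature vector: a joint RGB histogram, L1-normalized."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins, bins, bins),
                             range=[(0, 256)] * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)

def visual_similarity(sample: np.ndarray, candidate: np.ndarray) -> float:
    """Score one (sample, collection image) pair with cosine similarity."""
    fs, fc = color_histogram(sample), color_histogram(candidate)
    return float(fs @ fc / (np.linalg.norm(fs) * np.linalg.norm(fc) + 1e-9))

# Rank a small collection against a sample image (random arrays stand in for pixels).
rng = np.random.default_rng(0)
sample = rng.integers(0, 256, (64, 64, 3))
collection = [rng.integers(0, 256, (64, 64, 3)) for _ in range(5)]
ranking = sorted(range(len(collection)),
                 key=lambda i: visual_similarity(sample, collection[i]), reverse=True)
print(ranking)
```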

Author(s):  
Ranjan Parekh ◽  
Nalin Sharda

Semantic characterization is necessary for developing intelligent multimedia databases, because humans tend to search for media content based on its inherent semantics. However, automated inference of semantic concepts from media components stored in a database is still a challenge. The aim of this chapter is to demonstrate how layered architectures and “visual keywords” can be used to develop intelligent search systems for multimedia databases. The layered architecture is used to extract meta-data from multimedia components at various layers of abstraction. While the lower layers handle physical file attributes and low-level features, the upper layers handle high-level features and attempt to remove ambiguities inherent in them. To access the various abstracted features, a query schema is presented which provides a single point of access while establishing hierarchical pathways between feature classes. Minimization of the semantic gap is addressed using the concept of the “visual keyword” (VK). Visual keywords are segmented portions of images with associated low- and high-level features, implemented within a semantic layer on top of the standard low-level feature layer, for characterizing semantic content in media components. Semantic information is, however, predominantly expressed in textual form, and hence is susceptible to the limitations of textual descriptors, viz. ambiguities related to synonyms, homonyms, hypernyms, and hyponyms. To handle such ambiguities, this chapter proposes a domain-specific ontology-based layer on top of the semantic layer, to increase the effectiveness of the search process.
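A minimal sketch of the layered idea follows, under assumptions of my own (the field names, the tiny ontology, and the query function are hypothetical, not the chapter's schema): each media item carries a physical layer, a low-level feature layer, and a semantic layer of visual keywords, while a domain-specific ontology layer resolves ambiguous query terms.

```python
from dataclasses import dataclass, field

@dataclass
class MediaItem:
    filename: str                                      # physical layer
    color_hist: list = field(default_factory=list)     # low-level feature layer
    visual_keywords: set = field(default_factory=set)  # semantic layer (VKs)

# Domain-specific ontology layer: maps ambiguous query terms to canonical keywords.
ONTOLOGY = {"sea": "water", "ocean": "water", "lake": "water", "water": "water"}

def search(items, term):
    canonical = ONTOLOGY.get(term, term)  # resolve synonyms/hyponyms before matching
    return [item.filename for item in items if canonical in item.visual_keywords]

db = [MediaItem("beach.jpg", visual_keywords={"water", "sand"}),
      MediaItem("forest.jpg", visual_keywords={"tree"})]
print(search(db, "ocean"))  # -> ['beach.jpg'], found via the ontology layer
```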


Author(s):  
Anne H.H. Ngu ◽  
Jialie Shen ◽  
John Shepherd

The optimized distance-based access methods currently available for multimedia databases are based on two major assumptions: a suitable distance function is known a priori, and the dimensionality of image features is low. The standard approach to building image databases is to represent images via vectors based on low-level visual features and to perform retrieval based on these vectors. However, due to the large gap between semantic notions and low-level visual content, it is extremely difficult to define a distance function that accurately captures the similarity of images as perceived by humans. Furthermore, popular dimension-reduction methods suffer either from an inability to capture the nonlinear correlations among raw data or from very expensive training costs. To address these problems, in this chapter we introduce a new indexing technique called Combining Multiple Visual Features (CMVF) that integrates multiple visual features to achieve better query effectiveness. Our approach is able to produce low-dimensional image feature vectors that include not only low-level visual properties but also high-level semantic properties. The hybrid architecture can produce feature vectors that capture the salient properties of images yet are small enough to allow the use of existing high-dimensional indexing methods for efficient and effective retrieval.
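The following sketch, with toy features and plain PCA standing in for the chapter's hybrid dimension-reduction architecture, illustrates the CMVF idea of concatenating several visual features and mapping them to a short vector suitable for a standard high-dimensional index.

```python
import numpy as np

def color_feature(img):
    """Coarse color cue: mean value of each RGB channel."""
    return img.reshape(-1, 3).mean(axis=0) / 255.0

def texture_feature(img):
    """Crude texture cue: mean and spread of the gradient magnitude."""
    gray = img.mean(axis=2)
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std()]) / 255.0

def combined_vector(img):
    """CMVF-style step: concatenate several visual features into one vector."""
    return np.concatenate([color_feature(img), texture_feature(img)])

rng = np.random.default_rng(1)
X = np.stack([combined_vector(rng.integers(0, 256, (32, 32, 3))) for _ in range(50)])

# Map the combined vectors to a low-dimensional space usable by a standard index.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
low_dim = Xc @ Vt[:2].T
print(low_dim.shape)  # (50, 2)
```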


2013 ◽  
Vol 411-414 ◽  
pp. 1372-1376
Author(s):  
Wei Tin Lin ◽  
Shyi Chyi Cheng ◽  
Chih Lang Lin ◽  
Chen Kuei Yang

An approach to automatically improve the accuracy of region-of-interest (ROI) selection for medical images is proposed. The aim of the study is to select the image regions whose features are best suited to detecting or classifying diffuse objects. The AHP (Analytic Hierarchy Process) is used to obtain physicians' high-level diagnosis vectors, which are clustered using the well-known K-means algorithm. The system also automatically extracts low-level image features to improve the detection of liver diseases in ultrasound images. The weights of the low-level features are adaptively updated according to the feature variances within each class. Finally, the high-level diagnosis decision is made based on the high-level diagnosis vectors of the top-K nearest neighbors from the expert-classified database. Experimental results show the effectiveness of the system.
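A rough sketch of the decision stage described above, on synthetic data: low-level feature weights are derived from within-class variance, and a query case is diagnosed by a weighted top-K nearest-neighbor vote. The AHP and K-means steps are omitted, and the weighting rule is an assumption for illustration, not the authors' exact formula.

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.normal(size=(100, 6))   # low-level features of archived cases (synthetic)
labels = rng.integers(0, 2, size=100)  # expert diagnosis class of each case (synthetic)

# Adaptive weights: features that vary little within a class count more.
within_var = np.mean([features[labels == c].var(axis=0) for c in np.unique(labels)], axis=0)
weights = 1.0 / (within_var + 1e-9)
weights /= weights.sum()

def diagnose(query, k=5):
    """Weighted distance to every archived case, then a majority vote of the top K."""
    dist = np.sqrt(((features - query) ** 2 * weights).sum(axis=1))
    top_k = labels[np.argsort(dist)[:k]]
    return np.bincount(top_k).argmax()

print(diagnose(rng.normal(size=6)))
```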


2011 ◽  
Vol 271-273 ◽  
pp. 1090-1095
Author(s):  
Yu Tang Guo ◽  
Chang Gang Han

Due to the existence of the semantic gap, images with the same or similar low-level features may differ at the semantic level. How to find the underlying relationship between high-level semantics and low-level features is one of the difficult problems in image annotation. In this paper, a new image annotation method based on graph spectral clustering with semantic consistency is proposed, together with a detailed analysis of the advantages and disadvantages of existing image annotation methods. The proposed method first clusters images into several semantic classes using a semantic similarity measure in the semantic subspace. Within each semantic class, images are re-clustered using the visual features of their regions. Then, the joint probability distribution of blobs and words is modeled using the Multiple-Bernoulli Relevance Model, and an unannotated image can be annotated using this joint distribution. Experimental results show the effectiveness of the proposed approach in terms of annotation quality: the consistency of high-level semantics and low-level features is efficiently achieved.
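As an illustration of the first stage only (not the authors' implementation), the sketch below performs a simple two-way spectral split of images from a semantic-similarity graph; the similarity matrix is random stand-in data, and the Multiple-Bernoulli Relevance Model step is not shown.

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.random((20, 20))               # pairwise semantic similarity (stand-in data)
S = (S + S.T) / 2
np.fill_diagonal(S, 1.0)

D = np.diag(S.sum(axis=1))
L = D - S                              # graph Laplacian of the similarity graph
eigvals, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                # eigenvector of the second-smallest eigenvalue
semantic_class = (fiedler > np.median(fiedler)).astype(int)  # two-way spectral split
print(semantic_class)
```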


2019 ◽  
Author(s):  
Kathryn E Schertz ◽  
Omid Kardan ◽  
Marc Berman

It has recently been shown that the perception of visual features of the environment can influence thought content. Both low-level (e.g., fractalness) and high-level (e.g., presence of water) visual features of the environment can influence thought content in real-world and experimental settings, where these features can make people more reflective and contemplative in their thoughts. It remains to be seen, however, whether these visual features retain their influence on thoughts in the absence of overt semantic content, which would indicate a more fundamental mechanism for this effect. In this study, we addressed this question by creating scrambled-edge versions of images, which maintain the edge content of the original images but prevent scene identification. Non-straight edge density is one visual feature that has been shown to influence many judgments about objects and landscapes, and it has also been associated with thoughts of spirituality. We extend previous findings by showing that non-straight edges retain their influence on the selection of a “Spiritual & Life Journey” topic after scene identification is removed. These results strengthen the case for a causal role of the perception of low-level visual features in influencing higher-order cognitive function, by demonstrating that, in the absence of overt semantic content, low-level features such as edges influence cognitive processes.
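For readers unfamiliar with the feature, the following sketch computes a crude proxy for non-straight edge density: edges come from a gradient-magnitude threshold, and "non-straightness" is approximated by the local spread of edge orientation. Both choices are assumptions made for illustration, not the authors' pipeline.

```python
import numpy as np

def non_straight_edge_density(gray, win=8, mag_thresh=30.0, orient_spread=0.5):
    """Fraction of edge pixels lying in windows whose edge orientation varies a lot."""
    gy, gx = np.gradient(gray.astype(float))
    mag, theta = np.hypot(gx, gy), np.arctan2(gy, gx)
    edges = mag > mag_thresh
    curved = 0
    h, w = gray.shape
    for i in range(0, h - win, win):
        for j in range(0, w - win, win):
            block = edges[i:i + win, j:j + win]
            if block.sum() < 5:
                continue  # too few edge pixels in this window to judge straightness
            if theta[i:i + win, j:j + win][block].std() > orient_spread:
                curved += block.sum()
    return curved / max(edges.sum(), 1)

rng = np.random.default_rng(4)
print(non_straight_edge_density(rng.integers(0, 256, (128, 128))))
```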


2016 ◽  
Vol 6 (3) ◽  
pp. 137-154 ◽  
Author(s):  
Hui Wei

We have two motivations. First, the semantic gap is a tough problem puzzling almost all sub-fields of Artificial Intelligence. We regard the semantic gap as the conflict between the abstractness of high-level symbolic definitions and the details and diversity of low-level stimuli. Second, in object recognition, a pre-defined prototype of an object is crucial and indispensable for bi-directional perception processing. On the one hand, this prototype is learned from perceptual experience; on the other hand, it should be able to guide future downward processing. Humans do this very well, so the underlying physiological mechanism is simulated here. We utilize the mechanism of classical and non-classical receptive fields (nCRF) to design a hierarchical model and form a multi-layer prototype of an object. This is also a realistic definition of a concept and a representation of its semantics. We regard this model as the most fundamental infrastructure for grounding semantics. An AND-OR tree is constructed to record the prototypes of a concept, in which either raw data at the low level or symbols at the high level are feasible, and explicit production rules are also available. For pixel processing, knowledge should be represented in a data form; for scene reasoning, knowledge should be represented in a symbolic form. The physiological mechanism happens to be the bridge that can join the two seamlessly. This provides a possible solution to the semantic gap problem and prevents discontinuity in low-order structures.
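A minimal sketch of the AND-OR prototype tree described above, under illustrative assumptions: internal nodes combine child evidence with AND/OR, while leaves may hold either a symbolic predicate or a raw-feature test. The node contents and the example prototype are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Node:
    op: str = "LEAF"                               # "AND", "OR", or "LEAF"
    test: Optional[Callable[[dict], bool]] = None  # leaf: symbolic or raw-feature check
    children: List["Node"] = field(default_factory=list)

    def matches(self, observation: dict) -> bool:
        if self.op == "LEAF":
            return self.test(observation)
        results = [child.matches(observation) for child in self.children]
        return all(results) if self.op == "AND" else any(results)

# Hypothetical prototype of "cup": (has a handle OR is squat) AND has a concave top.
cup = Node("AND", children=[
    Node("OR", children=[
        Node(test=lambda o: o.get("has_handle", False)),     # symbolic leaf
        Node(test=lambda o: o.get("elongation", 9.9) < 1.5),  # raw-feature leaf
    ]),
    Node(test=lambda o: o.get("top_concavity", 0.0) > 0.3),
])
print(cup.matches({"has_handle": True, "top_concavity": 0.6}))  # True
```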


2015 ◽  
Vol 114 (2) ◽  
pp. 846-856 ◽  
Author(s):  
Ronen Sosnik ◽  
Eliyahu Chaim ◽  
Tamar Flash

Stopping performance is known to depend on low-level motion features, such as movement velocity. It is not known, however, whether it is also subject to high-level motion constraints. Here, we report results of 15 subjects instructed to connect four target points depicted on a digitizing tablet and stop “as rapidly as possible” upon hearing a “stop” cue (tone). Four subjects connected target points with straight paths, whereas 11 subjects generated movements corresponding to coarticulation between adjacent movement components. For the noncoarticulating and coarticulating subjects, stopping performance was not correlated or only weakly correlated with motion velocity, respectively. The generation of a straight, point-to-point movement or a smooth, curved trajectory was not disturbed by the occurrence of a stop cue. Overall, the results indicate that stopping performance is subject to high-level motion constraints, such as the completion of a geometrical plan, and that globally planned movements, once started, must run to completion, providing evidence for the definition of a motion primitive as an unstoppable motion element.


Author(s):  
Kalaivani Anbarasan ◽  
Chitrakala S.

A content-based image retrieval system retrieves relevant images based on image features. The limited performance of content-based image retrieval systems is due to the semantic gap. Image annotation is a way to bridge the semantic gap between low-level content features and high-level semantic concepts. Image annotation is defined as tagging images with one or more keywords based on low-level image features. The major issue in building an effective annotation framework is the integration of both low-level visual features and high-level textual information into a single annotation model. This chapter focuses on a new statistical image annotation model for semantic-based image retrieval. A multi-label image annotation with a multi-level tagging system is introduced to annotate image regions with class labels and to extract color, location, and topological tags of segmented image regions. The proposed method produced encouraging results, and the experimental results outperformed state-of-the-art methods.
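To make the multi-level tagging idea concrete, the sketch below assigns each segmented region a class label, a color tag from its mean color, and a coarse location tag from its centroid. The color vocabulary, the 2x2 location grid, and the region format are assumptions for illustration, not the chapter's model.

```python
import numpy as np

COLOR_NAMES = {(1.0, 0.0, 0.0): "red", (0.0, 1.0, 0.0): "green", (0.0, 0.0, 1.0): "blue"}

def color_tag(region_pixels):
    """Name the color whose reference value is nearest to the region's mean color."""
    mean = region_pixels.mean(axis=0) / 255.0
    nearest = min(COLOR_NAMES, key=lambda ref: np.linalg.norm(mean - np.array(ref)))
    return COLOR_NAMES[nearest]

def location_tag(mask, shape):
    """Coarse location of the region's centroid on a 2x2 grid."""
    ys, xs = np.nonzero(mask)
    row = "top" if ys.mean() < shape[0] / 2 else "bottom"
    col = "left" if xs.mean() < shape[1] / 2 else "right"
    return f"{row}-{col}"

def annotate_region(image, mask, label):
    """Multi-level tags for one segmented region: class label, color, location."""
    return {"label": label,
            "color": color_tag(image[mask]),
            "location": location_tag(mask, image.shape[:2])}

rng = np.random.default_rng(5)
img = rng.integers(0, 256, (64, 64, 3))
sky = np.zeros((64, 64), dtype=bool)
sky[:20, :] = True  # hypothetical segmentation mask for a "sky" region
print(annotate_region(img, sky, label="sky"))
```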


Author(s):  
Alan Wee-Chung Liew ◽  
Ngai-Fong Law

With the rapid growth of the Internet and multimedia systems, the use of visual information has increased enormously, such that indexing and retrieval techniques have become important. Historically, images are usually manually annotated with metadata such as captions or keywords (Chang & Hsu, 1992). Image retrieval is then performed by searching images with similar keywords. However, the keywords used may differ from one person to another. Also, many keywords can be used to describe the same image. Consequently, retrieval results are often inconsistent and unreliable. Due to these limitations, there is a growing interest in content-based image retrieval (CBIR). These techniques extract meaningful information or features from an image so that images can be classified and retrieved automatically based on their contents. Existing image retrieval systems such as QBIC and Virage extract so-called low-level features such as color, texture, and shape from an image in the spatial domain for indexing. Low-level features sometimes fail to represent high-level semantic image features as they are subjective and depend greatly upon user preferences. To bridge the gap, a top-down retrieval approach involving high-level knowledge can complement these low-level features. This article deals with various aspects of CBIR, including bottom-up feature-based image retrieval in both the spatial and compressed domains, as well as top-down task-based image retrieval using prior knowledge.

