Content-Based Image Retrieval

Author(s):  
Harshada Anand Khutwad ◽  
Ravindra Jinadatta Vaidya

Content-Based Image Retrieval (CBIR) is an active and emerging field within image search, concerned with finding images similar to a given query image in an image database. Current approaches use color, texture, and shape information. When these features are considered individually, retrieval results are often poor and non-relevant images may be returned for a query. This dissertation therefore proposes a method that combines the color and texture features of an image to improve retrieval accuracy: a color-correlogram technique based on the color histogram is used for color, and a wavelet decomposition technique is used for texture.
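
As a rough illustration of the kind of color-plus-texture descriptor described above, the sketch below combines a joint color histogram with wavelet detail-band energies, assuming NumPy, OpenCV, and PyWavelets; the bin counts, wavelet choice, and the use of a plain histogram in place of the color correlogram are assumptions, not the dissertation's exact method.

```python
# Hypothetical color + texture feature sketch (not the author's exact pipeline).
import cv2
import numpy as np
import pywt

def color_histogram(img_bgr, bins=8):
    """Joint BGR color histogram, normalized and flattened."""
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    return cv2.normalize(hist, hist).flatten()

def wavelet_texture(img_bgr, wavelet="haar"):
    """Mean absolute energy of the detail sub-bands from a single-level 2D DWT."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    _, (cH, cV, cD) = pywt.dwt2(gray, wavelet)
    return np.array([np.mean(np.abs(c)) for c in (cH, cV, cD)])

def combined_feature(img_bgr):
    """Concatenate the color and texture descriptors into one retrieval vector."""
    return np.concatenate([color_histogram(img_bgr), wavelet_texture(img_bgr)])
```

Retrieval would then rank database images by the distance between their combined vectors and that of the query image.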

2021 ◽  
Vol 5 (1) ◽  
pp. 28
Author(s):  
Fawzi Abdul Azeez Salih ◽  
Alan Anwer Abdulla

The rapid advancement and exponential growth of multimedia applications has increased research attention on content-based image retrieval (CBIR). The technique plays a significant role in searching for and finding images similar to a query image by extracting visual features. This paper develops a two-layer search approach, referred to as two-layer-based CBIR. The first layer compares the query image to all images in the dataset using local features extracted with the bag-of-features (BoF) mechanism, retrieving a set of the most similar images. In other words, the first step eliminates the images most dissimilar to the query in order to narrow the search range within the dataset. In the second layer, the query image is compared to the images obtained in the first layer using texture- and color-based features. The Discrete Wavelet Transform (DWT) and Local Binary Pattern (LBP) are used as texture features, while the color features are computed in three color spaces, namely RGB, HSV, and YCbCr, by calculating the mean and entropy of each channel separately. Corel-1K was used to evaluate the proposed approach. The experimental results show that the proposed two-layer concept outperforms current state-of-the-art techniques in terms of precision, achieving 82.15% and 77.27% for the top-10 and top-20 retrieved images, respectively.
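
The second-layer color features (per-channel mean and entropy in RGB, HSV, and YCbCr) can be sketched as below, assuming OpenCV and NumPy; the bin count and BGR input ordering are assumptions rather than settings taken from the paper.

```python
# Hypothetical sketch of the mean + entropy color features over three color spaces.
import cv2
import numpy as np

def channel_entropy(channel, bins=256):
    """Shannon entropy of a single channel's intensity histogram."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256), density=True)
    hist = hist[hist > 0]
    return float(-np.sum(hist * np.log2(hist)))

def mean_entropy_features(img_bgr):
    """Mean and entropy for each channel of RGB, HSV, and YCbCr (18 values)."""
    spaces = [img_bgr,
              cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV),
              cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)]
    feats = []
    for space in spaces:
        for c in cv2.split(space):
            feats.extend([float(np.mean(c)), channel_entropy(c)])
    return np.array(feats)
```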


In this paper, we propose a fused feature extraction method for content-based image retrieval. Features are extracted from the texture and shape of the visual image using the Local Binary Pattern (LBP, texture feature) and the Edge Histogram Descriptor (EHD, shape feature). SVD is used to reduce the dimensionality of the image feature vectors, and a Kd-tree is used to reduce retrieval time. The input to the system is a query image and a database of reference images, and the output is the top n images most similar to the query. The proposed system is evaluated using precision and recall to measure retrieval effectiveness. Recall values range from 43% to 93%, with an average of 64.3%; precision values range from 30% to 100%, with an average of 72.86% for the entire system across both databases.
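
A minimal sketch of the SVD reduction and Kd-tree lookup is given below, assuming NumPy and SciPy; the random feature matrix, the rank k, and the top-n value are placeholders standing in for the LBP + EHD vectors and the paper's actual settings.

```python
# Hypothetical SVD dimensionality reduction followed by Kd-tree retrieval.
import numpy as np
from scipy.spatial import cKDTree

def reduce_with_svd(features, k=32):
    """Project feature vectors (one row per image) onto the top-k right singular vectors."""
    _, _, vt = np.linalg.svd(features, full_matrices=False)
    basis = vt[:k].T                      # (dim, k) projection basis
    return features @ basis, basis

# Offline: reduce the database features and build the Kd-tree once.
db_features = np.random.rand(1000, 128)   # stand-in for LBP + EHD vectors
reduced_db, basis = reduce_with_svd(db_features)
tree = cKDTree(reduced_db)

# Online: project the query the same way and fetch the top-n nearest images.
query = np.random.rand(128)
_, top_n_idx = tree.query(query @ basis, k=10)
```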


2019 ◽  
Vol 8 (3) ◽  
pp. 3649-3653

We present a framework for classifying medical images in order to recognize possible diseases. This is done by retrieving images from a dataset given an input query image. Content-based image retrieval (CBIR) is the process of searching for similar images in an image database based on the visual content of a given query image. Although some studies present general methods for image feature extraction, accurate and efficient methods for medical image retrieval are still lacking. To address these shortcomings, the proposed CBIR method provides accurate and efficient feature extraction from medical images. The images used are greyscale, and the dataset holds a large number of medical images, particularly brain tumor images. To retrieve related images from the dataset along with their corresponding details, a query image is given as input. The query image is first analyzed by shape, texture, and histogram, and the result is compared with similar images in the dataset. Similarities between images are found using a matching score algorithm, which provides accurate matching and greatly helps at classification time. The computed values serve as the features of the given image, and the cost of processing an image is comparatively low. The technique has been examined on a standard image dataset and satisfactory results have been achieved.
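
The matching score algorithm itself is not spelled out in the abstract, so the snippet below is only a generic stand-in: it ranks database images by the correlation between greyscale histograms of the query and each candidate, assuming OpenCV and NumPy.

```python
# Generic histogram-based matching score (illustrative, not the paper's algorithm).
import cv2
import numpy as np

def grey_histogram(img_grey, bins=64):
    """Normalized greyscale intensity histogram."""
    hist = cv2.calcHist([img_grey], [0], None, [bins], [0, 256])
    return cv2.normalize(hist, hist).flatten()

def rank_by_matching_score(query_grey, database_greys, top_n=10):
    """Return indices of database images ordered by histogram correlation to the query."""
    q = grey_histogram(query_grey)
    scores = [cv2.compareHist(q, grey_histogram(img), cv2.HISTCMP_CORREL)
              for img in database_greys]
    return np.argsort(scores)[::-1][:top_n]
```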


Author(s):  
Dange B J ◽  
Yadav S K ◽  
Kshirsagar D B

A novel data fusion technique is proposed that supports text-based and content-based image retrieval by combining different heterogeneous features. The user needs to give just a single click on a query image, and the images recovered by content-based search are re-ranked according to their visual and textural similarity to the query image. Textual and visual expansions are integrated to capture user intention without additional human feedback. Expanded keywords help extend the set of positive example images and enlarge the image pool to include more relevant images. A set of visual features that are both efficient and effective for image search is chosen. The n-dimensional feature vectors for color and texture are each reduced to a single dimension, which is used to compare similarity with the query image using suitable distance metrics. Furthermore, only the images retrieved by the text-based search and image re-ranking process are compared at run time to find similar images, rather than the entire database. This considerably reduces the computational complexity and improves search efficiency. With improved feature extraction capturing textual and visual similarities, the proposed one-click image search framework provides efficient automated retrieval of similar images, giving promising results with improved retrieval efficiency.
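
A simplified sketch of the run-time re-ranking step is shown below, assuming NumPy; feature extraction is abstracted away and the Euclidean distance is an assumed choice of metric, so this only illustrates comparing the candidate pool rather than the whole database.

```python
# Hypothetical re-ranking of the candidate images by visual distance to the query.
import numpy as np

def rerank_by_visual_similarity(query_feat, candidate_feats, candidate_ids):
    """Re-rank the search candidates by Euclidean distance to the clicked query image."""
    dists = np.linalg.norm(candidate_feats - query_feat, axis=1)
    return [candidate_ids[i] for i in np.argsort(dists)]

# Only the candidates from the initial search are compared at run time,
# which keeps the per-query cost independent of the full database size.
```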


2021 ◽  
Vol 8 (7) ◽  
pp. 97-105
Author(s):  
Ali Ahmed ◽  
Sara Mohamed ◽  

Content-Based Image Retrieval (CBIR) systems retrieve images from an image repository or database that are visually similar to the query image. CBIR plays an important role in various fields such as medical diagnosis, crime prevention, web-based searching, and architecture. CBIR consists mainly of two stages: the first is feature extraction and the second is similarity matching. There are several ways to improve the efficiency and performance of CBIR, such as segmentation, relevance feedback, query expansion, and fusion-based methods. The literature has suggested several methods for combining and fusing various image descriptors. In general, fusion strategies are divided into two groups, namely early and late fusion. Early fusion combines image features from more than one descriptor into a single vector before the similarity computation, while late fusion refers either to combining the outputs produced by various retrieval systems or to combining different similarity rankings. In this study, a group of color and texture features is proposed for both fusion strategies. First, eighteen color features and twelve texture features are combined into a single vector representation for early fusion; second, three of the most common distance measures are combined in the late fusion stage. Our experimental results on two common image datasets show that the proposed method achieves good retrieval performance compared to the traditional use of a single feature descriptor, and acceptable retrieval performance compared to some state-of-the-art methods. The overall accuracy of the proposed method is 60.6% and 39.07% for the Corel-1K and GHIM-10K datasets, respectively.
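
The two fusion strategies can be sketched as follows, assuming NumPy and SciPy; the particular descriptors and the rank-summation rule used for late fusion are illustrative choices, not the authors' exact formulation.

```python
# Hypothetical early-fusion and late-fusion sketches.
import numpy as np
from scipy.spatial.distance import cdist

def early_fusion(color_feats, texture_feats):
    """Concatenate descriptors into a single vector per image before matching."""
    return np.hstack([color_feats, texture_feats])

def late_fusion_ranking(query, db, metrics=("euclidean", "cityblock", "cosine")):
    """Combine the rankings produced by several distance measures by summing rank positions."""
    total_rank = np.zeros(db.shape[0])
    for m in metrics:
        d = cdist(query[None, :], db, metric=m).ravel()
        total_rank += np.argsort(np.argsort(d))   # rank position under this metric
    return np.argsort(total_rank)                 # lowest summed rank first
```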


2019 ◽  
Vol 33 (19) ◽  
pp. 1950213 ◽  
Author(s):  
Vibhav Prakash Singh ◽  
Rajeev Srivastava ◽  
Yadunath Pathak ◽  
Shailendra Tiwari ◽  
Kuldeep Kaur

A content-based image retrieval (CBIR) system generally retrieves images by matching the query image against all images in the database. This exhaustive matching and searching slows down the image retrieval process. In this paper, a fast and effective CBIR system is proposed that uses supervised learning-based image management and retrieval techniques. It utilizes machine learning approaches as a prior step to speed up image retrieval in large databases. First, we extract computationally lightweight color and texture features based on statistical moments and the orthogonal combination of local binary patterns (OC-LBP). Then, using ground-truth annotations of some images, we train a multi-class support vector machine (SVM) classifier. This classifier acts as a manager and categorizes the remaining images into different libraries. At query time, the same features are extracted and fed to the SVM classifier, which detects the class of the query so that the search is narrowed down to the corresponding library. This supervised model, combined with a weighted Euclidean distance (ED), filters out most irrelevant images and speeds up the search. The work is evaluated and compared with a conventional CBIR model on two benchmark databases, and the proposed approach is found to be significantly better in terms of retrieval accuracy and response time for the same set of features.
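
A rough sketch of the classifier-guided search is given below, assuming scikit-learn and NumPy; the random features, labels, and weights are placeholders standing in for the statistical-moment and OC-LBP features and the learned categories.

```python
# Hypothetical SVM "manager" that narrows the search to one library per query.
import numpy as np
from sklearn.svm import SVC

# Offline: train the multi-class SVM on annotated feature vectors.
X_train = np.random.rand(500, 64)          # stand-in for color + OC-LBP features
y_train = np.random.randint(0, 5, 500)     # ground-truth category labels
manager = SVC(kernel="rbf").fit(X_train, y_train)

def weighted_euclidean(q, feats, w):
    """Weighted Euclidean distance between the query and each candidate vector."""
    return np.sqrt(np.sum(w * (feats - q) ** 2, axis=1))

def retrieve(query_feat, db_feats, db_labels, weights, top_n=10):
    """Search only the library matching the SVM-predicted class of the query."""
    cls = manager.predict(query_feat[None, :])[0]
    idx = np.where(db_labels == cls)[0]
    d = weighted_euclidean(query_feat, db_feats[idx], weights)
    return idx[np.argsort(d)[:top_n]]
```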


2014 ◽  
Vol 12 (7) ◽  
pp. 3742-3748 ◽  
Author(s):  
Sumathi Ganesan ◽  
T.S. Subashini

Of late, the number of digital X-ray images produced in hospitals has been increasing incredibly fast. Efficient storage, processing, and retrieval of X-ray images have thus become an important research topic. With the growing need to search for clinically relevant and visually similar medical images over vast databases, digital imaging techniques must provide methods that return the best match for a user's query image. CBIR helps doctors compare X-rays of their current patients with images from similar cases, and they can also use these images as queries to find similar entries in the X-ray database. This paper focuses on six different classes of X-ray images, namely chest, skull, foot, spine, pelvis, and palm, for efficient image retrieval. The X-rays are first automatically classified into the six classes using BPNN and SVM as classifiers and GLCM coefficients as features. Indexing is then performed to make retrieval fast, and similar images are retrieved based on the city-block distance.
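
A minimal sketch of the GLCM feature extraction and city-block retrieval is shown below, assuming scikit-image (0.19+, which exposes graycomatrix/graycoprops) and NumPy; the offsets, angles, and property set are assumptions, and the BPNN/SVM classification step is omitted.

```python
# Hypothetical GLCM texture features and L1 (city-block) ranking.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(img_grey_uint8):
    """Contrast, correlation, energy, and homogeneity from grey-level co-occurrence matrices."""
    glcm = graycomatrix(img_grey_uint8, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "energy", "homogeneity")
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def city_block_retrieval(query_feat, db_feats, top_n=10):
    """Rank database images by city-block distance to the query features."""
    d = np.sum(np.abs(db_feats - query_feat), axis=1)
    return np.argsort(d)[:top_n]
```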


Author(s):  
Priyesh Tiwari ◽  
Shivendra Nath Sharan ◽  
Kulwant Singh ◽  
Suraj Kamya

Content-based image retrieval (CBIR) is a real-world computer vision application in which images similar to a given query image are searched for in a database. The research presented in this paper aims to find the best features and classification model for optimum results in a CBIR system. Five different feature combinations in two color domains (RGB and HSV) are compared and evaluated using a neural network classifier, with the best result being 88.2% classifier accuracy. The color moment features used are mean, standard deviation, kurtosis, and skewness, and the histogram features are calculated over 10 probability bins. The Wang-1K dataset is used to evaluate the retrieval performance of the CBIR system. The research concludes that an integrated multi-level 3D color-texture feature yields the most accurate results and performs better than individually computed color and texture features.
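
The color moment and histogram features mentioned above can be sketched as follows, assuming NumPy and SciPy; applying them per channel to an RGB or HSV array is an assumption about how the two color domains were handled.

```python
# Hypothetical color-moment (mean, std, skewness, kurtosis) and 10-bin histogram features.
import numpy as np
from scipy.stats import skew, kurtosis

def color_moments(channel):
    """Mean, standard deviation, skewness, and kurtosis of one color channel."""
    c = channel.ravel().astype(np.float64)
    return [c.mean(), c.std(), skew(c), kurtosis(c)]

def histogram_feature(channel, bins=10):
    """10-bin probability histogram of one color channel."""
    hist, _ = np.histogram(channel, bins=bins, range=(0, 256))
    return hist / hist.sum()

def image_feature(img):                       # img: H x W x 3 array (RGB or HSV)
    """Concatenate moments and histogram bins over all three channels."""
    feats = []
    for ch in range(3):
        feats.extend(color_moments(img[..., ch]))
        feats.extend(histogram_feature(img[..., ch]))
    return np.array(feats)
```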


2011 ◽  
Vol 61 (5) ◽  
pp. 415 ◽  
Author(s):  
Madasu Hanmandlu ◽  
Anirban Das

Content-based image retrieval focuses on intuitive and efficient methods for retrieving images from databases based on the content of the images. A new entropy function that serves as a measure of the information content in an image, termed an 'information theoretic measure', is devised in this paper. Among the various query paradigms, 'query by example' (QBE) is adopted to set a query image for retrieval from a large image database. Colour and texture features are extracted using the new entropy function, and the dominant colour is considered as a visual feature for a particular set of images. Colour and texture features thus constitute a two-dimensional feature vector for indexing the images, and the low dimensionality of this vector speeds up the atomic query. Indices in a large database system help retrieve the images relevant to the query image without examining every image in the database. The entropy values of colour and texture and the dominant colour are used for measuring similarity. The utility of the proposed image retrieval system based on these information theoretic measures is demonstrated on a benchmark dataset.

Defence Science Journal, 2011, 61(5), pp. 415-430, DOI: http://dx.doi.org/10.14429/dsj.61.1177
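
The paper's own entropy function is not reproduced in the abstract, so the sketch below uses ordinary Shannon entropy and a coarse dominant-colour estimate, assuming NumPy, purely to illustrate the low-dimensional colour/texture index described above.

```python
# Illustrative stand-in: Shannon entropy index (not the paper's proposed entropy measure).
import numpy as np

def shannon_entropy(values, bins=256):
    """Shannon entropy of an intensity histogram."""
    hist, _ = np.histogram(values, bins=bins, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def index_entry(img_rgb, img_grey):
    """Two-dimensional index (colour entropy, texture entropy) plus a dominant-colour bin."""
    colour_entropy = np.mean([shannon_entropy(img_rgb[..., c]) for c in range(3)])
    texture_entropy = shannon_entropy(img_grey)
    dominant_colour_bin = int(np.argmax(
        np.histogramdd(img_rgb.reshape(-1, 3), bins=(8, 8, 8),
                       range=[(0, 256)] * 3)[0]))
    return np.array([colour_entropy, texture_entropy]), dominant_colour_bin
```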


10.29007/w4sr ◽  
2018 ◽  
Author(s):  
Yin-Fu Huang ◽  
Bo-Rong Chen

With the rapid progress of network technologies and multimedia data, information retrieval techniques are gradually becoming content-based rather than purely text-based. In this paper, we propose a content-based image retrieval system for querying similar images in a real image database. First, we employ segmentation and main object detection to separate the main object from an image. Then, we extract MPEG-7 features from the object and select relevant features using the SAHS algorithm. Next, two approaches, "one-against-all" and "one-against-one", are proposed to build SVM-based classifiers. To further reduce indexing complexity, K-means clustering is used to generate MPEG-7 signatures. We then combine the classes predicted by the classifiers with the results based on the MPEG-7 signatures to find the images similar to a query image. Finally, the experimental results show that our method is feasible for image searching in a real image database and more effective than the other methods.
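
A condensed sketch of the classification and signature stages is given below, assuming scikit-learn; random vectors stand in for the MPEG-7 descriptors, and the segmentation, main-object detection, and SAHS feature-selection steps are omitted.

```python
# Hypothetical SVM classifiers plus K-means signatures for coarse indexing.
import numpy as np
from sklearn.svm import SVC
from sklearn.cluster import KMeans

X = np.random.rand(800, 60)                  # stand-in for per-object MPEG-7 features
y = np.random.randint(0, 10, 800)            # image classes

# "One-against-one" and "one-against-all" SVM classifiers.
clf_ovo = SVC(decision_function_shape="ovo").fit(X, y)
clf_ovr = SVC(decision_function_shape="ovr").fit(X, y)

# K-means cluster centers act as compact signatures for coarse indexing.
signatures = KMeans(n_clusters=16, n_init=10, random_state=0).fit(X)

def query(q_feat):
    """Combine the predicted class with the nearest signature to shortlist similar images."""
    predicted_class = clf_ovo.predict(q_feat[None, :])[0]
    nearest_signature = signatures.predict(q_feat[None, :])[0]
    return predicted_class, nearest_signature
```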

