A novel method for content-based image retrieval to improve the effectiveness of the bag-of-words model using a support vector machine

2018
Vol 45 (1)
pp. 117-135
Author(s):  
Amna Sarwar ◽  
Zahid Mehmood ◽  
Tanzila Saba ◽  
Khurram Ashfaq Qazi ◽  
Ahmed Adnan ◽  
...  

Advancements in multimedia technologies have led to rapid growth of image databases. Retrieving images from such databases using their visual attributes is challenging because of the close visual appearance among these attributes, which also introduces the semantic gap. In this article, we propose a novel method based on the bag-of-words (BoW) model that integrates visual words from the local intensity order pattern (LIOP) feature and the local binary pattern variance (LBPV) feature to reduce the semantic gap and enhance the performance of content-based image retrieval (CBIR). The proposed method uses the LIOP and LBPV features to build two smaller visual vocabularies (one from each feature), which are then integrated into a larger visual vocabulary that carries the complementary strengths of both descriptors. This matters for efficient CBIR because a smaller visual vocabulary improves recall, while a larger visual vocabulary improves precision. The comparative analysis of the proposed method is performed on three image databases, namely WANG-1K, WANG-1.5K and Holidays. The experimental analysis on these databases demonstrates robust performance compared with recent CBIR methods.
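The vocabulary-integration step can be illustrated with a short sketch: two small vocabularies are clustered independently and each image is encoded as the concatenation of its two BoW histograms. This is a minimal illustration under stated assumptions, not the authors' code; the descriptor arrays are assumed to come from external LIOP and LBPV extractors, and the vocabulary sizes are arbitrary.

```python
# Minimal sketch of the vocabulary-integration step: two small vocabularies are
# clustered independently (one per descriptor type) and each image is encoded as
# the concatenation of its two BoW histograms. The descriptor arrays are assumed
# to come from external LIOP and LBPV extractors; vocabulary sizes are arbitrary.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, n_words=100):
    """Cluster local descriptors (n_samples x dim) into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(descriptors, vocabulary):
    """Quantize descriptors against a vocabulary; return an L1-normalized histogram."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

def integrated_bow(liop_desc, lbpv_desc, liop_vocab, lbpv_vocab):
    """Vocabulary integration: concatenate the BoW histograms of both descriptors."""
    return np.concatenate([bow_histogram(liop_desc, liop_vocab),
                           bow_histogram(lbpv_desc, lbpv_vocab)])
```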

2020
Vol 7 (2)
pp. 349
Author(s):  
Budiman Baso ◽  
Nanik Suciati

The variety of motifs in East Nusa Tenggara (NTT) tenun, such as flora, fauna and geometric patterns, is a distinctive characteristic that can distinguish the region of origin and the type of the woven cloth. In this study, a content-based image retrieval (CBIR) system is implemented for tenun images, so that users can search for tenun images in a database using a query image, based on the visual features contained in the image. The query image entered by the user often varies in scale, rotation and lighting, so a feature extraction method is needed that can accommodate these variations. The tenun image retrieval system in this study uses the Bag of Visual Words (BoVW) model built from keypoints extracted with the Speeded Up Robust Features (SURF) method. The BoVW model is constructed using K-Means clustering to produce a visual vocabulary from the keypoints of all training images. The BoVW representation is expected to handle scale and rotation variations in the images, while lighting variations are addressed by improving image quality with Contrast Limited Adaptive Histogram Equalization (CLAHE). Experiments compared the performance of BoVW representations built with SURF features and with Maximally Stable Extremal Regions (MSER) for tenun image retrieval. The results show that SURF achieves an average accuracy of 89.86% with a computation time of 9.94 seconds, whereas MSER achieves an average accuracy of 84.04% with a computation time of 1.95 seconds.
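A minimal sketch of the pipeline described above (CLAHE preprocessing, SURF keypoints, K-Means vocabulary) might look as follows. SURF requires an OpenCV build with the non-free contrib modules enabled, and the CLAHE parameters, Hessian threshold and vocabulary size here are assumptions rather than the study's settings.

```python
# Sketch of the described pipeline: CLAHE contrast enhancement, SURF keypoint
# description and a K-Means visual vocabulary. SURF needs an OpenCV build with
# the non-free contrib modules; the CLAHE settings, Hessian threshold and
# vocabulary size are assumed values, not the study's configuration.
import cv2
import numpy as np
from sklearn.cluster import KMeans

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)

def extract_descriptors(gray_image):
    """Equalize contrast with CLAHE, then compute SURF descriptors."""
    enhanced = clahe.apply(gray_image)
    _, descriptors = surf.detectAndCompute(enhanced, None)
    return descriptors

def build_vocabulary(descriptor_list, n_words=200):
    """Cluster all training descriptors into a visual vocabulary."""
    return KMeans(n_clusters=n_words, n_init=10).fit(np.vstack(descriptor_list))

def bovw_histogram(descriptors, vocabulary):
    """Represent an image as an L1-normalized histogram of visual words."""
    words = vocabulary.predict(descriptors)
    hist = np.bincount(words, minlength=vocabulary.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)
```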


Author(s):  
KEISUKE KAMEYAMA ◽  
SOO-NYOUN KIM ◽  
MICHITERU SUZUKI ◽  
KAZUO TORAICHI ◽  
TAKASHI YAMAMOTO

An improvement to the content-based image retrieval (CBIR) system for kaou images developed by the authors' group is introduced. Kaous are handwritten monograms found on old Japanese documents; they have Chinese-character-like shapes with artistic decorations. Kaous play an important role in the study of historical documents, which involves browsing and comparing numerous samples. In this work, a novel method of kaou image modeling for CBIR is introduced that incorporates the shade information of a closed kaou region in addition to the conventionally used contour characteristics. The dissimilarity between query and dictionary images is calculated as a weighted sum of elementary differences in the positions, contour shapes and colors of the component regions. These elementary differences are evaluated using relaxation matching and empirically defined distance functions. In the experiments, a set of 2455 kaou images was used. Apparently similar kaou images could be retrieved by the proposed method, improving retrieval quality.
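Purely as an illustration of the weighted-sum formulation above, the dissimilarity can be written as a small helper; the weights and the elementary position/contour/color differences are placeholders, not the authors' empirically defined distance functions.

```python
# Illustration only: combine precomputed elementary differences (position,
# contour shape, color) of a matched region pair into one dissimilarity score.
# The weights are placeholders, not the authors' empirical values.
def region_dissimilarity(d_position, d_contour, d_color,
                         w_position=1.0, w_contour=1.0, w_color=1.0):
    return w_position * d_position + w_contour * d_contour + w_color * d_color

def image_dissimilarity(matched_region_differences, **weights):
    """Sum the weighted per-region scores over all matched component regions."""
    return sum(region_dissimilarity(*diffs, **weights)
               for diffs in matched_region_differences)
```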


Selecting a feature extraction method is a difficult task in Content-Based Image Retrieval (CBIR). In this paper, CBIR is implemented using a combination of color, texture and shape attributes to improve the discriminating power of the features. The implementation is divided into three steps: preprocessing, feature extraction and classification. We propose color histogram features for color extraction, the Local Binary Pattern (LBP) for texture extraction, and the Histogram of Oriented Gradients (HOG) for shape extraction. For classification, a support vector machine classifier is applied. Experimental results show that the combination of all three features outperforms any individual feature or any combination of two feature extraction techniques.
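A hedged sketch of the described color/texture/shape combination is shown below, using scikit-image and scikit-learn; the histogram bin counts, LBP parameters and HOG cell sizes are illustrative assumptions, not the paper's settings.

```python
# Sketch of the described feature combination: an 8x8x8 RGB color histogram,
# a uniform-LBP texture histogram and a HOG shape descriptor are concatenated
# and fed to an SVM. Bin counts, LBP radius and HOG cell size are assumed values.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def combined_features(rgb_image):
    """Concatenate color (histogram), texture (LBP) and shape (HOG) descriptors."""
    color_hist, _ = np.histogramdd(rgb_image.reshape(-1, 3),
                                   bins=(8, 8, 8), range=[(0, 256)] * 3)
    gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    hog_vec = hog(gray, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    return np.concatenate([color_hist.ravel() / (color_hist.sum() + 1e-9),
                           lbp_hist, hog_vec])

# Classification step (X: stacked feature vectors, y: class labels):
# classifier = SVC(kernel="rbf").fit(X, y)
```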


2018
pp. 1307-1321
Author(s):  
Vinh-Tiep Nguyen ◽  
Thanh Duc Ngo ◽  
Minh-Triet Tran ◽  
Duy-Dinh Le ◽  
Duc Anh Duong

Large-scale image retrieval has shown remarkable potential in real-life applications. The standard approach is based on Inverted Indexing, with images represented using the Bag-of-Words model. However, one major limitation of both the Inverted Index and the Bag-of-Words representation is that they ignore the spatial information of visual words during image representation and comparison. As a result, retrieval accuracy decreases. In this paper, the authors investigate an approach that integrates spatial information into the Inverted Index to improve accuracy while maintaining short retrieval time. Experiments conducted on several benchmark datasets (Oxford Building 5K, Oxford Building 5K+100K and Paris 6K) demonstrate the effectiveness of the proposed approach.
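The idea of carrying spatial information inside the inverted index can be sketched as follows; the coarse grid quantization and the simple voting rule are illustrative assumptions and do not reproduce the authors' exact scheme.

```python
# Sketch of an inverted index whose postings carry a coarse spatial cell for each
# visual-word occurrence, so candidates can be scored with a rough spatial-
# consistency bonus. Grid size and the +2/+1 voting rule are assumptions.
from collections import defaultdict

class SpatialInvertedIndex:
    def __init__(self, grid=4):
        self.grid = grid                    # image split into grid x grid cells
        self.postings = defaultdict(list)   # visual word -> [(image_id, cell), ...]

    def _cell(self, x, y, width, height):
        cx = min(self.grid - 1, int(x * self.grid / width))
        cy = min(self.grid - 1, int(y * self.grid / height))
        return cy * self.grid + cx

    def add(self, image_id, words_with_xy, width, height):
        """Index each (word, x, y) occurrence together with its grid cell."""
        for word, x, y in words_with_xy:
            self.postings[word].append((image_id, self._cell(x, y, width, height)))

    def query(self, words_with_xy, width, height, top_k=10):
        """Vote for images; matches falling in the same grid cell score higher."""
        scores = defaultdict(int)
        for word, x, y in words_with_xy:
            q_cell = self._cell(x, y, width, height)
            for image_id, cell in self.postings.get(word, ()):
                scores[image_id] += 2 if cell == q_cell else 1
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```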


Author(s):  
Shang Liu ◽  
Xiao Bai

In this chapter, the authors present a new method to improve the performance of the current bag-of-words-based image classification process. After feature extraction, they introduce a pairwise image matching scheme to select discriminative features. Only the label information from the training sets is used to update the feature weights via an iterative matching process. The selected features correspond to the foreground content of the images and thus highlight high-level category knowledge. Visual words are constructed on these selected features. This method can be used as a refinement step for current image classification and retrieval processes. The authors demonstrate the efficiency of their method in three tasks: supervised image classification, semi-supervised image classification, and image retrieval.
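The iterative feature-weighting idea can be illustrated with a rough sketch: local features that find a close match in same-class training images gain weight, the rest lose weight. The update rule, learning rate and matching threshold below are assumptions, not the authors' exact procedure.

```python
# Rough sketch of discriminative feature selection via pairwise matching: local
# features with a close nearest-neighbour match in same-class training images
# gain weight, the rest lose weight. The threshold, learning rate and clipping
# are assumptions, not the authors' exact update rule.
import numpy as np

def update_feature_weights(descriptors, weights, same_class_descriptors,
                           match_thresh=0.7, lr=0.1):
    """One weighting iteration over the local descriptors of a single image."""
    for j, d in enumerate(descriptors):
        # nearest-neighbour distance of feature j across all same-class images
        nearest = min(np.linalg.norm(other - d, axis=1).min()
                      for other in same_class_descriptors)
        weights[j] += lr if nearest < match_thresh else -lr
    return np.clip(weights, 0.0, 1.0)
```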


2019
Vol 33 (19)
pp. 1950213
Author(s):  
Vibhav Prakash Singh ◽  
Rajeev Srivastava ◽  
Yadunath Pathak ◽  
Shailendra Tiwari ◽  
Kuldeep Kaur

A content-based image retrieval (CBIR) system generally retrieves images by matching the query image against all images in the database. This exhaustive matching and searching slows down the retrieval process. In this paper, a fast and effective CBIR system is proposed that uses supervised learning-based image management and retrieval techniques. It employs machine learning as a prior step to speed up image retrieval in large databases. For the implementation, we first extract statistical moments and orthogonal-combination of local binary patterns (OC-LBP)-based computationally lightweight color and texture features. Then, using ground-truth annotations of some images, we train a multi-class support vector machine (SVM) classifier. This classifier acts as a manager and categorizes the remaining images into different libraries. At query time, the same features are extracted and fed to the SVM classifier, which detects the class of the query so that searching is narrowed down to the corresponding library. This supervised model, combined with a weighted Euclidean distance (ED), filters out most irrelevant images and reduces the search time. The work is evaluated and compared with a conventional CBIR model on two benchmark databases, and the proposed approach shows significantly encouraging retrieval accuracy and response time for the same set of features.
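A minimal sketch of this two-stage scheme (SVM-based library selection followed by weighted Euclidean distance ranking within the predicted library) is given below; the kernel choice and the feature weights are assumptions, not the paper's configuration.

```python
# Sketch of the two-stage retrieval: a multi-class SVM predicts the query's
# library, then a weighted Euclidean distance ranks only that library's images.
# Kernel choice and the feature weights are assumptions, not the paper's setup.
import numpy as np
from sklearn.svm import SVC

def build_libraries(features, labels):
    """Train the classifier and group database feature vectors by class label."""
    features, labels = np.asarray(features), np.asarray(labels)
    clf = SVC(kernel="rbf").fit(features, labels)
    libraries = {c: features[labels == c] for c in np.unique(labels)}
    return clf, libraries

def retrieve(query_feat, clf, libraries, weights, top_k=10):
    """Narrow the search to the predicted library, then rank by weighted ED."""
    library = libraries[clf.predict(query_feat[None, :])[0]]
    dists = np.sqrt((((library - query_feat) ** 2) * weights).sum(axis=1))
    order = np.argsort(dists)[:top_k]        # indices within the selected library
    return order, dists[order]
```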

