An effective bag-of-visual-words framework for SAR image classification

2011
Author(s):
Jie Feng
L. C. Jiao
Xiangrong Zhang
Ruican Niu
2021
Vol 24 (2)
pp. 78-86
Author(s):
Zainab N. Sultani
Ban N. Dhannoon

Image classification is acknowledged as one of the most critical and challenging tasks in computer vision. The bag-of-visual-words (BoVW) model has proven highly efficient for image classification because it can effectively represent distinctive local image features in vector space. In this paper, the BoVW model is adapted for image classification using both Scale-Invariant Feature Transform (SIFT) and Oriented FAST and Rotated BRIEF (ORB) descriptors. We propose a novel image classification system that combines the local feature information obtained from the SIFT and ORB descriptors. The resulting SO-BoVW model produces highly discriminative features, enhancing classification performance. Experiments on the Caltech-101 and Flowers datasets demonstrate the effectiveness of the proposed method.
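The abstract does not detail how the SIFT and ORB features are fused, so the sketch below assumes the simplest variant: a separate k-means codebook for each descriptor type and concatenation of the two normalised histograms before a linear SVM. The OpenCV detectors, the codebook sizes and the LinearSVC classifier are illustrative assumptions, not the authors' exact SO-BoVW configuration.

import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def extract(images, detector):
    # Collect local descriptors for each grayscale image with the given detector.
    per_image = []
    for img in images:
        _, desc = detector.detectAndCompute(img, None)
        per_image.append(desc if desc is not None
                         else np.empty((0, detector.descriptorSize()), np.float32))
    return per_image

def build_codebook(per_image_desc, k):
    # Cluster all descriptors into k visual words. ORB's binary descriptors are
    # treated as floats here, a common simplification kept for illustration.
    stacked = np.vstack([d for d in per_image_desc if len(d)]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=4, random_state=0).fit(stacked)

def bovw_histogram(desc, codebook, k):
    # L1-normalised histogram of visual-word occurrences for one image.
    hist = np.zeros(k, np.float32)
    if len(desc):
        for w in codebook.predict(desc.astype(np.float32)):
            hist[w] += 1
        hist /= hist.sum()
    return hist

def so_bovw_features(images, k_sift=200, k_orb=200):
    # One codebook per descriptor type; the two histograms are concatenated.
    sift, orb = cv2.SIFT_create(), cv2.ORB_create()
    sift_desc, orb_desc = extract(images, sift), extract(images, orb)
    cb_sift = build_codebook(sift_desc, k_sift)
    cb_orb = build_codebook(orb_desc, k_orb)
    return np.array([np.concatenate([bovw_histogram(ds, cb_sift, k_sift),
                                     bovw_histogram(do, cb_orb, k_orb)])
                     for ds, do in zip(sift_desc, orb_desc)])

# Usage: clf = LinearSVC().fit(so_bovw_features(train_images), train_labels)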


Author(s):  
Jalila Filali
Hajer Baazaoui Zghal
Jean Martinet

With the rapid growth of image collections, image classification and annotation have been active areas of research with notable recent progress. The Bag-of-Visual-Words (BoVW) model, which relies on building a visual vocabulary, has been widely used in this area. Recently, attention has shifted to advanced architectures characterized by multi-level processing, and the Hierarchical Max-pooling (HMAX) model in particular has attracted a great deal of attention for image classification. Several ontology-based approaches have been proposed to improve image classification and annotation. However, both tasks remain challenging because of related issues such as ambiguity between classes, which can degrade the quality of classification and annotation results. In this paper, we propose an ontology-based image classification and annotation approach. Our contributions are: (1) exploiting ontological relationships between classes during both the classification and annotation processes; (2) combining the outputs of hypernym and hyponym classifiers to achieve better discrimination between classes; and (3) annotating images by combining hypernym and hyponym classification results in order to improve annotation and to reduce ambiguous and inconsistent annotations. The aim is to improve image classification and annotation by using ontologies. Several strategies have been evaluated experimentally, and the results show that our proposal improves both image classification and annotation.
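The abstract does not specify how the hypernym and hyponym classifier outputs are combined, so the sketch below assumes one simple rule for illustration: each leaf (hyponym) class probability is multiplied by the probability of its hypernym, and the leaf with the highest joint score wins. The toy ontology, class names and the product rule are hypothetical, not the authors' exact fusion strategy.

def combine_scores(hyponym_probs, hypernym_probs, parent_of):
    # hyponym_probs:  leaf class -> probability from the fine-grained classifier
    # hypernym_probs: parent class -> probability from the coarse classifier
    # parent_of:      ontology relation mapping each leaf class to its hypernym
    combined = {c: p * hypernym_probs[parent_of[c]] for c, p in hyponym_probs.items()}
    return max(combined, key=combined.get), combined

# Toy two-level ontology: {rose, tulip} are hyponyms of "flower",
# {car, truck} are hyponyms of "vehicle".
parent_of = {"rose": "flower", "tulip": "flower", "car": "vehicle", "truck": "vehicle"}
hyponym_probs = {"rose": 0.32, "tulip": 0.20, "car": 0.38, "truck": 0.10}
hypernym_probs = {"flower": 0.85, "vehicle": 0.15}
best, scores = combine_scores(hyponym_probs, hypernym_probs, parent_of)
print(best)  # "rose" beats "car" once the confident "flower" hypernym is factored in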


Author(s):  
Yuanyuan Zuo
Bo Zhang

The sparse-representation-based classification algorithm has been used for human face recognition, but in that setting the image database is restricted to frontal faces with only slight illumination and expression changes. This paper applies the sparse-representation-based algorithm to generic image classification, which involves a certain degree of intra-class variation and background clutter. Experiments are conducted with the sparse-representation-based algorithm and Support Vector Machine (SVM) classifiers on 25 object categories selected from the Caltech-101 dataset. The results show that, without time-consuming parameter optimization, the sparse-representation-based algorithm achieves performance comparable to SVM. The experiments also demonstrate that the algorithm is robust to a certain degree of background clutter and intra-class variation when bag-of-visual-words representations are used. The sparse-representation-based algorithm can therefore be applied to generic image classification tasks when an appropriate image feature is used.
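As a rough illustration of the sparse-representation-based classification (SRC) idea described above, the sketch below codes a query's bag-of-visual-words histogram over a dictionary of training histograms and assigns the class with the smallest reconstruction residual. The Lasso solver and its regularisation strength stand in for the l1-minimisation step and are assumptions, not the exact configuration used in the paper.

import numpy as np
from sklearn.linear_model import Lasso

def src_predict(X_train, y_train, x_test, alpha=0.01):
    # X_train: (n_samples, n_features) BoVW histograms of training images
    # y_train: (n_samples,) class labels
    # x_test:  (n_features,) BoVW histogram of the query image
    # Dictionary with one l2-normalised column per training sample.
    A = X_train.T / (np.linalg.norm(X_train, axis=1) + 1e-12)
    # Sparse coding: approximately minimise ||coefs||_1 subject to A @ coefs ≈ x_test.
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000).fit(A, x_test)
    coefs = coder.coef_
    # Class-wise reconstruction residuals, keeping only that class's coefficients.
    residuals = {}
    for c in np.unique(y_train):
        mask = (y_train == c)
        residuals[c] = np.linalg.norm(x_test - A[:, mask] @ coefs[mask])
    return min(residuals, key=residuals.get)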

