Large visual words for large scale image classification

Author(s):  
Sheng Tang ◽  
Hui Chen ◽  
Ke Lv ◽  
Yong-Dong Zhang
2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Zhihang Ji ◽  
Sining Wu ◽  
Fan Wang ◽  
Lijuan Xu ◽  
Yan Yang ◽  
...  

In the context of image classification, the bag-of-visual-words model is widely used for image representation. In recent years, several works have aimed at exploiting color or spatial information to improve the representation. In this paper, two kinds of representation vectors, namely the Global Color Co-occurrence Vector (GCCV) and the Local Color Co-occurrence Vector (LCCV), are proposed. Both make use of the color and co-occurrence information of the superpixels in an image. GCCV describes the global statistical distribution of the colored superpixels while embedding the spatial information between them; in this way, it captures color and structure information at a large scale. Unlike GCCV, LCCV, which is embedded in a Riemannian manifold space, reflects the color information within the superpixels in detail: it records the higher-order distribution of color between superpixels within a neighborhood by aggregating the co-occurrence information via second-order pooling. In the experiments, we combine the two proposed representation vectors with feature vectors such as LLC or CNN features using Multiple Kernel Learning (MKL). The resulting framework is evaluated on several challenging visual classification datasets, and the experimental results demonstrate the effectiveness of the proposed method.
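A minimal sketch of the second-order pooling step that the LCCV description above relies on: per-superpixel color descriptors in a neighborhood are aggregated by outer products into a symmetric positive-definite matrix, which is then mapped off the manifold via the matrix logarithm (a standard log-Euclidean embedding). The descriptor choice, neighborhood definition, and regularization below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from scipy.linalg import logm  # matrix logarithm for the log-Euclidean map

def second_order_pool(descriptors, eps=1e-6):
    """Aggregate per-superpixel color descriptors (N x d) into a d x d
    second-order representation, then flatten it via the log-Euclidean map."""
    X = np.asarray(descriptors, dtype=np.float64)
    # Outer-product pooling: average of x x^T over the neighborhood.
    G = X.T @ X / max(len(X), 1)
    # Regularize so the matrix is strictly positive definite.
    G += eps * np.eye(G.shape[1])
    L = logm(G)
    # The matrix is symmetric, so the upper triangle suffices as a vector.
    iu = np.triu_indices(L.shape[0])
    return L[iu].real

# Example: 5 superpixels in a neighborhood, each with a 3-D mean-color descriptor.
rng = np.random.default_rng(0)
vec = second_order_pool(rng.random((5, 3)))
print(vec.shape)  # (6,) for d = 3
```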


2014 ◽  
Vol 24 (07) ◽  
pp. 1450024 ◽  
Author(s):  
YU-BIN YANG ◽  
YA-NAN LI ◽  
YANG GAO ◽  
HUJUN YIN ◽  
YE TANG

In this paper, a structurally enhanced incremental neural learning technique is proposed to learn a discriminative codebook representation of images for effective image classification. In order to accommodate relationships such as structures and distributions among visual words in the codebook learning process, we develop an online codebook graph learning method based on a novel structurally enhanced incremental learning technique called the "visualization-induced self-organized incremental neural network (ViSOINN)". The hidden structural information in the images is embedded into a graph representation that evolves dynamically under an adaptive and competitive learning mechanism. Image features can then be coded using a sub-graph extraction process based on the learned codebook graph, and a classifier is subsequently applied to complete the image classification task. Compared with other codebook learning algorithms derived from the classical Bag-of-Features (BoF) model, ViSOINN holds the following advantages: (1) it learns the codebook efficiently and effectively from a small training set; (2) it models the relationships among visual words in a metric scaling fashion, thus preserving high discriminative power; (3) it learns the codebook automatically, without a fixed pre-defined size; and (4) it better enhances and preserves the structure of the data. These characteristics improve image classification performance and make the method more suitable for large-scale image classification tasks. Experimental results on the widely used Caltech-101 and Caltech-256 benchmark datasets demonstrate that ViSOINN achieves markedly improved performance while reducing the computational cost considerably.
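To make the "growing codebook graph" idea concrete, here is a minimal sketch of the generic SOINN-style update that this family of learners builds on: nearest/second-nearest matching, threshold-based node insertion (so the codebook size is not fixed in advance), and edge creation between co-activated words. The class name, thresholds, and learning rate are illustrative; ViSOINN's actual update rules are not specified in the abstract.

```python
import numpy as np

class GrowingCodebookGraph:
    def __init__(self, dim, insert_threshold=0.3, lr=0.05):
        self.nodes = np.empty((0, dim))  # codebook vectors (visual words)
        self.edges = set()               # undirected word-word structural links
        self.tau = insert_threshold
        self.lr = lr

    def update(self, x):
        x = np.asarray(x, dtype=np.float64)
        if len(self.nodes) < 2:
            self.nodes = np.vstack([self.nodes, x])
            return
        d = np.linalg.norm(self.nodes - x, axis=1)
        w1, w2 = np.argsort(d)[:2]       # winner and runner-up
        if d[w1] > self.tau:
            # Input is far from every existing word: grow the codebook.
            self.nodes = np.vstack([self.nodes, x])
        else:
            # Adapt the winner and record the structural relationship.
            self.nodes[w1] += self.lr * (x - self.nodes[w1])
            self.edges.add(tuple(sorted((int(w1), int(w2)))))

# Feed local features one by one; the graph grows and adapts online.
g = GrowingCodebookGraph(dim=2)
for feat in np.random.default_rng(1).random((500, 2)):
    g.update(feat)
print(len(g.nodes), len(g.edges))
```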


Diagnostics ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 1384
Author(s):  
Yin Dai ◽  
Yifan Gao ◽  
Fayu Liu

Over the past decade, convolutional neural networks (CNNs) have shown very competitive performance in medical image analysis tasks such as disease classification, tumor segmentation, and lesion detection. CNNs have great advantages in extracting local features of images; however, due to the locality of the convolution operation, they cannot model long-range relationships well. Recently, transformers have been applied to computer vision and have achieved remarkable success on large-scale datasets. Compared with natural images, multi-modal medical images have explicit and important long-range dependencies, and effective multi-modal fusion strategies can greatly improve the performance of deep models. This prompts us to study transformer-based structures and apply them to multi-modal medical images. Existing transformer-based network architectures require large-scale datasets to achieve good performance, but medical imaging datasets are relatively small, which makes it difficult to apply pure transformers to medical image analysis. Therefore, we propose TransMed for multi-modal medical image classification. TransMed combines the advantages of CNNs and transformers to efficiently extract low-level image features and establish long-range dependencies between modalities. We evaluated our model on two datasets: parotid gland tumor classification and knee injury classification. Combining our contributions, we achieve improvements of 10.1% and 1.9% in average accuracy, respectively, outperforming other state-of-the-art CNN-based models. The results of the proposed method are promising and have great potential to be applied to many medical image analysis tasks. To the best of our knowledge, this is the first work to apply transformers to multi-modal medical image classification.
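A minimal PyTorch sketch of the hybrid pattern TransMed describes: a convolutional stem turns each modality into a grid of low-level feature tokens, and a transformer encoder models long-range dependencies across all tokens from all modalities. The stem depth, token dimension, and shared-stem choice are assumptions for illustration, not the paper's exact architecture; positional embeddings are omitted for brevity.

```python
import torch
import torch.nn as nn

class HybridClassifier(nn.Module):
    def __init__(self, dim=256, num_classes=2):
        super().__init__()
        # Shared CNN stem: local feature extraction, where CNNs excel.
        self.stem = nn.Sequential(
            nn.Conv2d(1, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.cls = nn.Parameter(torch.zeros(1, 1, dim))
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=4)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, modalities):           # list of (B, 1, H, W) tensors
        tokens = []
        for m in modalities:
            f = self.stem(m)                 # (B, dim, H', W')
            tokens.append(f.flatten(2).transpose(1, 2))  # (B, H'*W', dim)
        # Concatenating tokens lets attention fuse information across modalities.
        x = torch.cat(tokens, dim=1)
        x = torch.cat([self.cls.expand(x.size(0), -1, -1), x], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])            # classify from the [CLS] token

model = HybridClassifier()
logits = model([torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64)])
print(logits.shape)  # torch.Size([2, 2])
```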


2021 ◽  
Vol 10 (9) ◽  
pp. 25394-25398
Author(s):  
Chitra Desai

Deep learning models have demonstrated improved efficacy in image classification since the ImageNet Large Scale Visual Recognition Challenge began in 2010. Image classification in computer vision has been further advanced by the advent of transfer learning. Training a model on a huge dataset demands substantial computational resources and adds considerable cost to learning; transfer learning reduces this cost and helps avoid reinventing the wheel. Several pretrained models, such as VGG16, VGG19, ResNet50, InceptionV3, and EfficientNet, are widely used. This paper demonstrates image classification using the pretrained deep neural network VGG16, trained on images from the ImageNet dataset. After obtaining the convolutional base model, a new fully connected deep neural network is built on top of it for image classification, using features extracted from the convolutional base.
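A minimal Keras sketch of the setup described above: the ImageNet-pretrained VGG16 convolutional base is loaded without its original classifier, frozen, and a new fully connected head is trained on top. The head sizes and the 10-class output are illustrative choices, not the paper's exact configuration.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Convolutional base: pretrained weights, no original ImageNet classifier head.
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3))
base.trainable = False  # reuse the learned features; train only the new head

model = keras.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(10, activation="softmax"),  # hypothetical 10-class task
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```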


2011 ◽  
Author(s):  
Jie Feng ◽  
L. C. Jiao ◽  
Xiangrong Zhang ◽  
Ruican Niu

2018 ◽  
pp. 1307-1321
Author(s):  
Vinh-Tiep Nguyen ◽  
Thanh Duc Ngo ◽  
Minh-Triet Tran ◽  
Duy-Dinh Le ◽  
Duc Anh Duong

Large-scale image retrieval has shown remarkable potential in real-life applications. The standard approach is based on an inverted index, with images represented using the Bag-of-Words model. However, one major limitation of both the inverted index and the Bag-of-Words representation is that they ignore the spatial information of visual words in image representation and comparison, which decreases retrieval accuracy. In this paper, the authors investigate an approach to integrate spatial information into the inverted index to improve accuracy while maintaining short retrieval times. Experiments conducted on several benchmark datasets (Oxford Building 5K, Oxford Building 5K+100K, and Paris 6K) demonstrate the effectiveness of the proposed approach.
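A toy sketch of the general idea: postings in the inverted index carry visual-word positions, so candidate images can be re-scored for spatial consistency at query time. The abstract does not detail the authors' integration scheme; this version uses a simple weak-geometric-consistency-style vote on a coarse translation, purely for illustration.

```python
from collections import defaultdict, Counter

index = defaultdict(list)  # word_id -> [(image_id, (x, y)), ...]

def add_image(image_id, words):
    """words: iterable of (word_id, (x, y)) pairs for one image."""
    for word_id, pos in words:
        index[word_id].append((image_id, pos))

def query(words, grid=32):
    """Score images by how many matched words agree on one coarse
    translation, instead of by raw word co-occurrence alone."""
    votes = Counter()
    for word_id, (qx, qy) in words:
        for image_id, (dx, dy) in index[word_id]:
            shift = ((dx - qx) // grid, (dy - qy) // grid)
            votes[(image_id, shift)] += 1
    best = Counter()
    for (image_id, _), v in votes.items():
        best[image_id] = max(best[image_id], v)
    return best.most_common()

add_image("a", [(1, (10, 10)), (2, (40, 10)), (3, (10, 40))])
add_image("b", [(1, (200, 5)), (2, (0, 0))])
print(query([(1, (12, 14)), (2, (42, 14)), (3, (12, 44))]))  # "a" wins 3-1
```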


Author(s):  
Shang Liu ◽  
Xiao Bai

In this chapter, the authors present a new method to improve the performance of the current bag-of-words-based image classification process. After feature extraction, they introduce a pairwise image matching scheme to select discriminative features. Only the label information from the training sets is used to update the feature weights via an iterative matching process. The selected features correspond to the foreground content of the images and thus highlight the high-level category knowledge of the images. Visual words are then constructed on these selected features. This method can be used as a refinement step for current image classification and retrieval pipelines. The authors demonstrate the efficiency of their method on three tasks: supervised image classification, semi-supervised image classification, and image retrieval.
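A minimal sketch of the pairwise-matching idea: features that consistently find close matches in same-label images are up-weighted, on the assumption that they belong to the shared foreground. The matching criterion (nearest-neighbor distance below the median) and the multiplicative update are illustrative stand-ins, not the chapter's exact procedure.

```python
import numpy as np

def reweight_features(images, labels, iters=3, boost=1.2, decay=0.9):
    """images: list of (n_i x d) local-feature arrays; returns one weight
    array per image, higher for features that match within the class."""
    weights = [np.ones(len(f)) for f in images]
    for _ in range(iters):
        for i, fi in enumerate(images):
            for j, fj in enumerate(images):
                if i == j or labels[i] != labels[j]:
                    continue  # only same-label pairs carry supervision
                # Distance from each feature in image i to its nearest
                # neighbor in image j.
                d = np.linalg.norm(fi[:, None, :] - fj[None, :, :], axis=2)
                nearest = d.min(axis=1)
                matched = nearest < np.median(nearest)
                weights[i][matched] *= boost    # likely shared foreground
                weights[i][~matched] *= decay   # likely background/clutter
    return [w / w.max() for w in weights]

rng = np.random.default_rng(2)
feats = [rng.random((20, 8)) for _ in range(4)]
w = reweight_features(feats, labels=[0, 0, 1, 1])
print(w[0].round(2))
```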

