High-level Image Classification by Synergizing Image Captioning with BERT

Author(s): Xiaohong Yu, Yoseop Ahn, Jaehoon Jeong


2020
Author(s): Ying Bi, Bing Xue, Mengjie Zhang

Feature extraction is an essential process for image data dimensionality reduction and classification. However, feature extraction is difficult and often requires human intervention. Genetic Programming (GP) can perform automatic feature extraction and image classification, but the majority of existing methods extract low-level features from raw images without any image-related operations, and work on combining image-related operators/descriptors in GP for feature extraction and image classification is limited. This paper proposes a multi-layer GP approach (MLGP) to perform automatic high-level feature extraction and classification. The approach introduces a new program structure, a new function set including a number of image operators/descriptors and two region detectors, and a new terminal set. The performance of the proposed method is examined on six data sets of varying difficulty and compared with five GP-based methods and 42 traditional image classification methods. Experimental results show that the proposed method achieves performance better than or comparable to these baseline methods. Further analysis of example programs evolved by MLGP reveals the good interpretability of the method and gives insight into how it effectively extracts high-level features for image classification.
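As a rough illustration of the kind of tree-based GP pipeline the abstract describes, the sketch below uses the DEAP library. The image operators (mean_filter, sobel_edges) and the flat program structure are simplified placeholders for the paper's function set and multi-layer design, not the authors' MLGP implementation.

```python
# Minimal sketch: strongly simplified GP for image feature extraction,
# in the spirit of (but not identical to) the MLGP approach above.
import numpy as np
from scipy import ndimage
from deap import base, creator, gp, tools

def mean_filter(img):
    # Smoothing operator: 3x3 mean filter (assumed member of the function set).
    return ndimage.uniform_filter(img, size=3)

def sobel_edges(img):
    # Edge descriptor: Sobel gradient magnitude.
    return np.hypot(ndimage.sobel(img, 0), ndimage.sobel(img, 1))

pset = gp.PrimitiveSet("MAIN", 1)          # one input: the raw image
pset.addPrimitive(mean_filter, 1)
pset.addPrimitive(sobel_edges, 1)
pset.renameArguments(ARG0="image")

creator.create("FitnessMax", base.Fitness, weights=(1.0,))
creator.create("Individual", gp.PrimitiveTree, fitness=creator.FitnessMax)

toolbox = base.Toolbox()
toolbox.register("expr", gp.genHalfAndHalf, pset=pset, min_=1, max_=3)
toolbox.register("individual", tools.initIterate, creator.Individual, toolbox.expr)

# An evolved tree compiles to a callable mapping an image to a feature map;
# in a full run, classification accuracy of the features drives the fitness.
tree = toolbox.individual()
features = gp.compile(tree, pset)(np.random.rand(32, 32))
```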


Author(s): Shang Liu, Xiao Bai

In this chapter, the authors present a new method to improve the performance of the current bag-of-words-based image classification process. After feature extraction, they introduce a pairwise image matching scheme to select discriminative features. Only the label information from the training set is used to update the feature weights via an iterative matching process. The selected features correspond to the foreground content of the images and thus highlight the high-level category knowledge of the images. Visual words are constructed on these selected features. This method can be used as a refinement step for existing image classification and retrieval pipelines. The authors demonstrate the effectiveness of their method on three tasks: supervised image classification, semi-supervised image classification, and image retrieval.
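The sketch below illustrates the general idea of reweighting local features through same-class pairwise matching and keeping the top-weighted (foreground) ones for visual-word construction. The cosine-similarity matching criterion and the additive update rule are simplified assumptions, not the chapter's exact scheme.

```python
# Minimal sketch: upweight local descriptors that match well across
# same-label image pairs, then keep high-weight features.
import numpy as np

def pairwise_reweight(descs_a, descs_b, weights_a, lr=0.1):
    # descs_*: (n, d) local descriptors from two same-label images.
    # Each descriptor in image A is upweighted by its similarity to its
    # nearest neighbour in image B (cosine similarity, an assumption).
    a = descs_a / np.linalg.norm(descs_a, axis=1, keepdims=True)
    b = descs_b / np.linalg.norm(descs_b, axis=1, keepdims=True)
    sim = (a @ b.T).max(axis=1)            # best match score per feature
    return weights_a + lr * sim            # simple additive update

def select_discriminative(descs, weights, keep_ratio=0.5):
    # Keep the top-weighted features; these should correspond to the
    # foreground content shared within a class.
    k = int(len(descs) * keep_ratio)
    idx = np.argsort(weights)[-k:]
    return descs[idx]
```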


2019
Vol 56 (11), pp. 111001
Author(s): 贺琪 (Qi He), 李瑶 (Yao Li), 宋巍 (Wei Song), 黄冬梅 (Dongmei Huang), 何盛琪 (Shengqi He), ...

2008
Vol 18 (03), pp. 195-205
Author(s): Weibao Zou, Zheru Chi, King Chuen Lo

Image classification is a challenging problem in organizing a large image database, and an effective method for this task is still under investigation. This paper presents a method based on wavelet analysis to extract features for image classification. After an image is decomposed by the wavelet transform, feature statistics are obtained from histograms of the wavelet coefficients projected onto two orthogonal axes, the x and y directions; the nodes of the tree representation of an image can then be represented by these distributions. The high-level features are described in a low-dimensional space of 16 attributes, which significantly decreases the computational complexity. In the experiments, 2800 images drawn from seven categories are used: half for training a neural network and the other half for testing. Both the features extracted by wavelet analysis and conventional features are evaluated to assess the efficacy of the proposed method. The classification rate with wavelet analysis reaches 91% on the training set and 89% on the testing set. Experimental results show that the proposed approach is more effective for image classification than the conventional features.
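A minimal sketch of such a wavelet feature pipeline, using PyWavelets, is shown below. The exact way the coefficients are projected onto the x and y axes and binned into the 16 attributes is a simplified assumption based on the abstract, not the paper's precise formulation.

```python
# Minimal sketch: 16-attribute wavelet features via axis projections.
import numpy as np
import pywt

def wavelet_features(image, wavelet="haar", level=2, bins=4):
    # Decompose the image; for each decomposition level, histogram the
    # detail-coefficient energy projected onto the x and y directions.
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    feats = []
    for (cH, cV, cD) in coeffs[1:]:            # detail sub-bands per level
        energy = cH**2 + cV**2 + cD**2
        proj_x = energy.sum(axis=0)            # projection onto the x axis
        proj_y = energy.sum(axis=1)            # projection onto the y axis
        hx, _ = np.histogram(proj_x, bins=bins, density=True)
        hy, _ = np.histogram(proj_y, bins=bins, density=True)
        feats.extend(hx)
        feats.extend(hy)
    return np.asarray(feats)                   # 2 levels * 2 axes * 4 bins = 16

features = wavelet_features(np.random.rand(64, 64))
assert features.shape == (16,)
```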


2020
Vol 34 (4), pp. 571-584
Author(s): Rajarshi Biswas, Michael Barz, Daniel Sonntag

Image captioning is a challenging multimodal task. Deep learning has brought significant improvements, yet captions generated by humans are still considered better, which makes image captioning an interesting application for interactive machine learning and explainable artificial intelligence methods. In this work, we aim at improving the performance and explainability of the state-of-the-art method Show, Attend and Tell by augmenting its attention mechanism with additional bottom-up features. We compute visual attention on the joint embedding space formed by the union of high-level features and the low-level features obtained from the object-specific salient regions of the input image, embedding the content of bounding boxes from a pre-trained Mask R-CNN model. This delivers state-of-the-art performance while providing explanatory features. Further, we discuss how interactive model improvement can be realized by re-ranking caption candidates using beam search decoders and explanatory features. We show that interactive re-ranking of beam search candidates has the potential to outperform the state of the art in image captioning.
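The sketch below illustrates the bottom-up part of such a pipeline: detecting salient object regions with a pre-trained Mask R-CNN from torchvision and embedding each box's content. The crop-and-pool embedding and the fixed output size are crude assumptions for brevity, not the authors' exact configuration.

```python
# Minimal sketch: bottom-up region features from a pre-trained Mask R-CNN.
import torch
import torchvision

detector = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

image = torch.rand(3, 480, 640)                 # dummy input image
with torch.no_grad():
    det = detector([image])[0]                  # boxes for salient objects

def box_embedding(img, box, out_dim=512):
    # Embed a box's content by cropping and average-pooling the raw image
    # (a stand-in for pooling backbone features inside the box).
    x1, y1, x2, y2 = [int(v) for v in box.tolist()]
    crop = img[:, y1:max(y2, y1 + 1), x1:max(x2, x1 + 1)]
    pooled = torch.nn.functional.adaptive_avg_pool2d(crop, (4, 4)).flatten()
    return torch.nn.functional.pad(pooled, (0, out_dim - pooled.numel()))

boxes = det["boxes"][:5]
if len(boxes) > 0:
    regions = torch.stack([box_embedding(image, b) for b in boxes])
    # `regions` would be unioned with the captioning encoder's high-level
    # grid features to form the joint space attended over at each step.
```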


Data
2019
Vol 4 (4), pp. 139
Author(s): Changhoon Jeong, Sung-Eun Jang, Sanghyuck Na, Juntae Kim

Recently, deep learning-based methods for solving multi-modal tasks such as image captioning, multi-modal classification, and cross-modal retrieval have attracted much attention. Applying deep learning to such tasks requires large amounts of training data. However, although there are several Korean single-modal datasets, there are not enough Korean multi-modal datasets. In this paper, we introduce the KTS (Korean tourist spot) dataset for Korean multi-modal deep-learning research. The KTS dataset has four modalities (image, text, hashtags, and likes) and consists of 10 classes related to Korean tourist spots. All data were extracted from Instagram and preprocessed. We performed two experiments with the dataset, image classification and image captioning, both of which produced reasonable results. We hope that many researchers will use this dataset for multi-modal deep-learning research.
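A minimal sketch of a PyTorch Dataset for a four-modality corpus like KTS is shown below. The on-disk layout (a JSON index with per-post fields such as image_path, caption, hashtags, likes, class_id) is a hypothetical assumption, not the dataset's actual distribution format.

```python
# Minimal sketch: loading (image, text, hashtags, likes) records.
import json
import torch
from PIL import Image
from torch.utils.data import Dataset

class KTSDataset(Dataset):
    def __init__(self, index_file, transform=None):
        with open(index_file, encoding="utf-8") as f:
            self.items = json.load(f)            # list of post records
        self.transform = transform

    def __len__(self):
        return len(self.items)

    def __getitem__(self, i):
        rec = self.items[i]
        img = Image.open(rec["image_path"]).convert("RGB")
        if self.transform:
            img = self.transform(img)
        return {
            "image": img,                        # visual modality
            "text": rec["caption"],              # Korean post text
            "hashtags": rec["hashtags"],         # list of hashtag strings
            "likes": torch.tensor(rec["likes"]), # engagement count
            "label": rec["class_id"],            # one of 10 tourist-spot classes
        }
```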

