Sentence Learning on Deep Convolutional Networks for Image Caption Generation

Author(s):  
Dong-Jin Kim ◽  
Donggeun Yoo ◽  
Bonggeun Sim ◽  
In So Kweon
Author(s):  
Teng Jiang ◽  
Liang Gong ◽  
Yupu Yang

The attention-based encoder–decoder framework has greatly improved image caption generation. The attention mechanism plays a transitional role, transforming static image features into sequential captions. To generate reasonable captions, it is important to detect the spatial characteristics of images. In this paper, we propose a spatial relational attention approach that considers both spatial positions and attributes. Image features are first weighted by the attention mechanism. They are then concatenated with contextual features to form a spatial–visual tensor, from which a fully convolutional network extracts visual concepts for the decoder network. The fully convolutional layers maintain the spatial topology of images. Experiments conducted on three benchmark datasets, namely Flickr8k, Flickr30k and MSCOCO, demonstrate the effectiveness of the proposed approach. Captions generated by the spatial relational attention method precisely capture the spatial relations of objects.
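The pipeline described above — attention-weight the image features, concatenate contextual features at every spatial location, then extract visual concepts with convolutional layers that preserve the spatial grid — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the tensor shapes, the dot-product attention score, and the 1×1 pointwise map standing in for the fully convolutional extractor are all assumptions.

```python
import numpy as np

# Assumed shapes (not specified in the abstract): a 7x7 grid of 512-d
# image features and a 512-d contextual (decoder) feature.
H, W, D = 7, 7, 512
C = 512

rng = np.random.default_rng(0)
img_feats = rng.standard_normal((H, W, D))  # static CNN image features
context = rng.standard_normal(C)            # decoder context at one time step

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# 1) Attention weights over spatial locations. A bilinear dot-product
#    score stands in for the paper's learned attention network.
W_att = rng.standard_normal((D, C)) * 0.01
scores = img_feats.reshape(-1, D) @ W_att @ context  # (H*W,)
alpha = softmax(scores).reshape(H, W, 1)             # spatial attention map

# 2) Weight the image features, then concatenate the contextual features
#    at every location to form the spatial-visual tensor (H, W, D + C).
weighted = alpha * img_feats
ctx_map = np.broadcast_to(context, (H, W, C))
spatial_visual = np.concatenate([weighted, ctx_map], axis=-1)

# 3) A 1x1 convolution (pointwise linear map over the channel axis)
#    stands in for the fully convolutional feature extractor; note it
#    leaves the HxW spatial topology intact.
W_conv = rng.standard_normal((D + C, D)) * 0.01
visual_concepts = spatial_visual @ W_conv            # (H, W, D)

print(visual_concepts.shape)  # (7, 7, 512)
```

The key property the sketch preserves is that every step operates per spatial location, so the decoder receives visual concepts that are still indexed by image position.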



