Image Caption Generator using Siamese Graph Convolutional Networks and LSTM

2022
Author(s): Athul Kumar, Aarchi Agrawal, K S Ashin Shanly, Sudip Das, Nidhin Harilal

Author(s): Teng Jiang, Liang Gong, Yupu Yang

The attention-based encoder–decoder framework has greatly improved image caption generation. The attention mechanism plays a transitional role, transforming static image features into sequential captions. To generate reasonable captions, it is important to detect the spatial characteristics of images. In this paper, we propose a spatial relational attention approach that considers both spatial positions and attributes. Image features are first weighted by the attention mechanism and then concatenated with contextual features to form a spatial–visual tensor. A fully convolutional network extracts features from this tensor to produce visual concepts for the decoder network; the fully convolutional layers preserve the spatial topology of the images. Experiments conducted on three benchmark datasets, namely Flickr8k, Flickr30k and MSCOCO, demonstrate the effectiveness of our proposed approach: captions generated by the spatial relational attention method precisely capture the spatial relations of objects.
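To make the spatial–visual tensor construction concrete, the following is a minimal PyTorch sketch of the idea described above: attention-weighted grid features are concatenated with a broadcast contextual feature and passed through fully convolutional layers before producing a visual-concept vector for the decoder. All module and variable names, layer sizes, and tensor shapes (e.g. SpatialRelationalAttention, feat_dim, ctx_dim, hidden_dim) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SpatialRelationalAttention(nn.Module):
    """Weights spatial image features by attention, concatenates them with a
    contextual feature, and applies fully convolutional layers so the spatial
    topology of the feature map is preserved (a sketch, not the paper's code)."""

    def __init__(self, feat_dim=512, ctx_dim=512, hidden_dim=512):
        super().__init__()
        # Additive attention over the H x W grid of image features.
        self.att_feat = nn.Conv2d(feat_dim, hidden_dim, kernel_size=1)
        self.att_ctx = nn.Linear(ctx_dim, hidden_dim)
        self.att_score = nn.Conv2d(hidden_dim, 1, kernel_size=1)
        # Fully convolutional layers over the concatenated spatial-visual tensor.
        self.fcn = nn.Sequential(
            nn.Conv2d(feat_dim + ctx_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden_dim, hidden_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, feats, ctx):
        # feats: (B, feat_dim, H, W) grid features from a CNN encoder
        # ctx:   (B, ctx_dim) contextual feature, e.g. the LSTM decoder's hidden state
        B, _, H, W = feats.shape
        # 1) Attention weights over spatial positions.
        score = self.att_score(torch.tanh(
            self.att_feat(feats) + self.att_ctx(ctx)[:, :, None, None]))
        alpha = F.softmax(score.view(B, 1, -1), dim=-1).view(B, 1, H, W)
        weighted = feats * alpha
        # 2) Concatenate with the broadcast contextual feature -> spatial-visual tensor.
        ctx_map = ctx[:, :, None, None].expand(-1, -1, H, W)
        spatial_visual = torch.cat([weighted, ctx_map], dim=1)
        # 3) Fully convolutional feature extraction keeps spatial structure;
        #    pool only at the end to get a visual-concept vector for the decoder.
        concepts = self.fcn(spatial_visual).mean(dim=(2, 3))
        return concepts, alpha
```

Deferring the pooling until after the 3x3 convolutions is what allows the spatial layout of the feature map to survive up to the point where the visual-concept vector is handed to the decoder, which is the property the abstract attributes to the fully convolutional layers.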


2018 · Vol 06 (10) · pp. 53-55
Author(s): Sailee P. Pawaskar, J. A. Laxminarayana

Author(s): Hao Chen, Yue Xu, Feiran Huang, Zengde Deng, Wenbing Huang, ...
