VAA: Visual Aligning Attention Model for Remote Sensing Image Captioning

IEEE Access, 2019, Vol. 7, pp. 137355-137364
Author(s): Zhengyuan Zhang, Wenkai Zhang, Wenhui Diao, Menglong Yan, Xin Gao, ...

2020, Vol. 12(6), pp. 939
Author(s): Yangyang Li, Shuangkang Fang, Licheng Jiao, Ruijiao Liu, Ronghua Shang

Image captioning is the task of generating a sentence that appropriately describes an image; it lies at the intersection of computer vision and natural language processing. Although research on remote sensing image captioning is still in its early stages, it is of great significance. The attention mechanism, inspired by the way humans allocate their focus, is widely used in remote sensing image captioning. However, the attention mechanisms currently used in this task attend only to the image, which is too simple to model such a complex task well. Therefore, in this paper, we propose a multi-level attention model, which imitates human attention more closely. The model contains three attention structures, representing attention to different areas of the image, attention to different words, and attention to vision versus semantics. Experiments show that our model achieves better results than previous methods and is currently state-of-the-art. In addition, the existing datasets for remote sensing image captioning contain a large number of errors; therefore, considerable work has been done in this paper to correct these datasets in order to promote research on remote sensing image captioning.
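As a rough illustration of the three attention structures described above, the following PyTorch sketch combines visual attention over image regions, word attention over previously generated words, and a gate that balances the visual and semantic contexts. It is a minimal sketch under assumed dimensions, layer names, and wiring, not the authors' exact architecture.

```python
# Minimal sketch of a three-level attention decoder step (assumed design,
# not the paper's exact model): region attention + word attention + a
# vision-vs-semantics gate.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftAttention(nn.Module):
    """Standard additive (Bahdanau-style) attention over a set of features."""
    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, feats, hidden):
        # feats: (batch, n_items, feat_dim); hidden: (batch, hidden_dim)
        e = self.score(torch.tanh(self.feat_proj(feats)
                                  + self.hidden_proj(hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)        # attention mask over items
        return (alpha * feats).sum(dim=1)  # weighted context vector

class MultiLevelAttention(nn.Module):
    """Visual attention + word attention + a gate balancing the two."""
    def __init__(self, feat_dim, word_dim, hidden_dim, attn_dim):
        super().__init__()
        self.visual_attn = SoftAttention(feat_dim, hidden_dim, attn_dim)
        self.word_attn = SoftAttention(word_dim, hidden_dim, attn_dim)
        self.gate = nn.Linear(hidden_dim + feat_dim + word_dim, 1)
        self.vis_out = nn.Linear(feat_dim, hidden_dim)
        self.sem_out = nn.Linear(word_dim, hidden_dim)

    def forward(self, img_feats, word_embs, hidden):
        v = self.visual_attn(img_feats, hidden)  # attended image regions
        s = self.word_attn(word_embs, hidden)    # attended previous words
        beta = torch.sigmoid(self.gate(torch.cat([hidden, v, s], dim=-1)))
        return beta * self.vis_out(v) + (1 - beta) * self.sem_out(s)
```

In this sketch the sigmoid gate plays the role of the vision-versus-semantics attention: when beta is close to 1 the decoder relies on the attended image regions, and when it is close to 0 it relies on the attended words.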


2019, Vol. 11(20), pp. 2349
Author(s): Zhengyuan Zhang, Wenhui Diao, Wenkai Zhang, Menglong Yan, Xin Gao, ...

Significant progress has been made in remote sensing image captioning with encoder-decoder frameworks. The conventional attention mechanism is prevalent in this task but still has a drawback: it uses only the visual information of the remote sensing images, without using label information to guide the calculation of attention masks. To this end, a novel attention mechanism, the Label-Attention Mechanism (LAM), is proposed in this paper. LAM additionally utilizes the label information of high-resolution remote sensing images to generate natural sentences describing the given images. Notably, instead of high-level image features, the word embedding vectors of the predicted categories are adopted to guide the calculation of attention masks. Representing the content of images as word embedding vectors filters out redundant image features while preserving pure, useful information for generating complete sentences. Experimental results on UCM-Captions, Sydney-Captions, and RSICD demonstrate that LAM improves the model's performance in describing high-resolution remote sensing images and obtains better S_m scores than other methods; the S_m score is a hybrid metric derived from the AI Challenge 2017 scoring method. In addition, the validity of LAM is verified by an experiment using ground-truth labels.
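The following is a minimal sketch of the label-guided attention idea, assuming PyTorch; the layer names and the choice to add the label embedding into an additive attention score are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of label-guided attention (assumed formulation): the word
# embedding of the predicted category helps score image regions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttention(nn.Module):
    """Attention masks guided by the predicted label's word embedding
    in addition to the decoder hidden state."""
    def __init__(self, feat_dim, label_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)
        self.label_proj = nn.Linear(label_dim, attn_dim)
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)
        self.score = nn.Linear(attn_dim, 1)

    def forward(self, img_feats, label_emb, hidden):
        # img_feats: (batch, n_regions, feat_dim)
        # label_emb: (batch, label_dim), embedding of the predicted class
        # hidden:    (batch, hidden_dim), current decoder state
        e = self.score(torch.tanh(self.feat_proj(img_feats)
                                  + self.label_proj(label_emb).unsqueeze(1)
                                  + self.hidden_proj(hidden).unsqueeze(1)))
        alpha = F.softmax(e, dim=1)            # label-guided attention mask
        return (alpha * img_feats).sum(dim=1)  # context vector for decoding
```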


2020, Vol. 12(11), pp. 1874
Author(s): Kun Fu, Yang Li, Wenkai Zhang, Hongfeng Yu, Xian Sun

The encoder–decoder framework has been widely used in the remote sensing image captioning task. When remote sensing images with specific characteristics need to be retrieved through their generated descriptions, richer sentences improve the retrieval results. However, the Long Short-Term Memory (LSTM) network used in decoders still loses some image information over time when the generated caption is long. In this paper, we present a new model component named the Persistent Memory Mechanism (PMM), which expands the information storage capacity of the LSTM with an external memory. The external memory is a memory matrix of predetermined size that stores all the LSTM hidden-state vectors produced before the current time step, so our method can effectively address the above problem. At each time step, the PMM searches the external memory for previous information related to the current input, processes the retrieved long-term information together with the current information to predict the next word, and then updates the memory with the current input. This method recovers long-term information that the LSTM has lost but that is useful for caption generation. By applying this method to image captioning, our CIDEr scores on the UCM-Captions, Sydney-Captions, and RSICD datasets increased by 3%, 5%, and 7%, respectively.
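The sketch below illustrates the external-memory idea under stated assumptions: past hidden states are kept in a matrix, read with dot-product attention, combined with the current state, and the matrix is appended to at every step. Names, sizes, and the dot-product read are hypothetical choices, not the paper's code.

```python
# Sketch of an external memory over past LSTM hidden states (assumed
# design): attention read over stored states, then a memory update.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PersistentMemory(nn.Module):
    """Stores past hidden states and retrieves those most related to
    the current step via dot-product attention."""
    def __init__(self, hidden_dim, max_steps):
        super().__init__()
        self.max_steps = max_steps  # predetermined memory size
        self.combine = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, hidden, memory):
        # hidden: (batch, hidden_dim); memory: (batch, t, hidden_dim) or None
        if memory is None or memory.size(1) == 0:
            read = torch.zeros_like(hidden)  # nothing stored yet
        else:
            scores = torch.bmm(memory, hidden.unsqueeze(2))  # (batch, t, 1)
            alpha = F.softmax(scores, dim=1)
            read = (alpha * memory).sum(dim=1)  # retrieved long-term info
        out = torch.tanh(self.combine(torch.cat([hidden, read], dim=-1)))
        # Update: append the current state, dropping the oldest if full.
        if memory is None:
            new_mem = hidden.unsqueeze(1)
        else:
            new_mem = torch.cat([memory, hidden.unsqueeze(1)], dim=1)
            new_mem = new_mem[:, -self.max_steps:]
        return out, new_mem
```

In a decoder loop, `memory` would start as None and be threaded through the time steps alongside the LSTM state, with `out` feeding the word predictor at each step.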


Author(s): Zhengyuan Zhang, Wenkai Zhang, Menglong Yan, Xin Gao, Kun Fu, ...

Author(s): Yun Meng, Yu Gu, Xiutiao Ye, Jingxian Tian, Shuang Wang, ...

2019, Vol. 165, pp. 32-40
Author(s): S Chandeesh Kumar, M Hemalatha, S Badri Narayan, P Nandhini
