Remote sensing image scene classification based on generative adversarial networks

2018 · Vol 9 (7) · pp. 617-626
Author(s): Suhui Xu, Xiaodong Mu, Dong Chai, Xiongmei Zhang

2021 · Vol 2021 · pp. 1-12
Author(s): Jian Huang, Shanhui Liu, Yutian Tang, Xiushan Zhang

With the continuous development of deep learning in computer vision, semantic segmentation is increasingly applied to remote sensing imagery. For instance, it is a key technology for automatically labeling important objects, such as ships or port land, in remote sensing images of port areas. However, existing supervised semantic segmentation models based on deep learning require large numbers of training samples; otherwise, they cannot correctly learn the characteristics of the target objects, resulting in poor performance or even failure of the segmentation task. Since target objects such as ships may move from time to time, collecting enough samples to achieve satisfactory segmentation performance is nontrivial, and this scarcity severely limits most existing augmentation methods. To tackle this problem, we propose an object-level remote sensing image augmentation approach that leverages U-Net-based generative adversarial networks. Specifically, the proposed approach consists of two components: a semantic tag image generator and a U-Net GAN-based translator. To evaluate its effectiveness, comprehensive experiments are conducted on the public HRSC2016 dataset, with the state-of-the-art generative models DCGAN, WGAN, and CycleGAN selected as baselines. The experimental results show that the proposed approach significantly outperforms the baselines, both in drawing the outlines of target objects and in capturing their meaningful details.
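The abstract does not include code, but the translator component can be pictured as an image-to-image U-Net generator that maps a semantic tag image to a synthetic remote sensing image. Below is a minimal sketch, assuming PyTorch; the layer counts, channel widths, 256x256 input size, 1-channel tag input, and 3-channel RGB output are all illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a U-Net-style generator that translates a semantic tag
# image (1 channel) into an RGB remote sensing image (3 channels).
# Assumptions: PyTorch; three encoder/decoder stages; 256x256 inputs.
# The paper does not publish code, so these specifics are hypothetical.
import torch
import torch.nn as nn

def down(cin, cout):
    # Encoder stage: stride-2 convolution halves spatial resolution.
    return nn.Sequential(
        nn.Conv2d(cin, cout, 4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.LeakyReLU(0.2, inplace=True),
    )

def up(cin, cout):
    # Decoder stage: stride-2 transposed convolution doubles resolution.
    return nn.Sequential(
        nn.ConvTranspose2d(cin, cout, 4, stride=2, padding=1),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class UNetTranslator(nn.Module):
    def __init__(self):
        super().__init__()
        self.d1 = down(1, 64)     # 256 -> 128
        self.d2 = down(64, 128)   # 128 -> 64
        self.d3 = down(128, 256)  # 64  -> 32
        self.u1 = up(256, 128)    # 32  -> 64
        self.u2 = up(256, 64)     # 64  -> 128 (input: u1 output + d2 skip)
        self.out = nn.Sequential(
            nn.ConvTranspose2d(128, 3, 4, stride=2, padding=1),  # 128 -> 256
            nn.Tanh(),  # images scaled to [-1, 1], as is common for GANs
        )

    def forward(self, tag):
        e1 = self.d1(tag)
        e2 = self.d2(e1)
        e3 = self.d3(e2)
        x = self.u1(e3)
        x = self.u2(torch.cat([x, e2], dim=1))  # skip connection from d2
        return self.out(torch.cat([x, e1], dim=1))  # skip connection from d1

if __name__ == "__main__":
    g = UNetTranslator()
    fake = g(torch.randn(1, 1, 256, 256))  # one synthetic tag image
    print(fake.shape)  # torch.Size([1, 3, 256, 256])
```

The skip connections are the defining U-Net choice here: they carry the sharp object boundaries of the tag image directly to the decoder, which matches the paper's emphasis on drawing accurate outlines of target objects.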

