Generative Adversarial Networks for Hard Negative Mining in CNN-Based SAR-Optical Image Matching

Author(s): Lloyd H. Hughes, Michael Schmitt, Xiao Xiang Zhu
IEEE Access, 2020, Vol 8, pp. 217554-217572

In this paper, we propose a generative framework to produce similar yet novel samples for a specified image. We then propose the use of these images as hard-negative samples, within the framework of hard-negative mining, in order to improve the performance of classification networks in applications that suffer from sparse labelled training data. Our approach makes use of a variational autoencoder (VAE) which is trained in an adversarial manner in order to learn a latent distribution of the training data, as well as to generate realistic, high-quality image patches. We evaluate our proposed generative approach to hard-negative mining on a synthetic aperture radar (SAR) and optical image matching task. Using an existing SAR-optical matching network as the basis for our investigation, we compare the performance of the matching network trained with our approach to the baseline method, as well as to two other hard-negative mining methods. Our proposed generative architecture is able to generate realistic, very high resolution (VHR) SAR image patches which are almost indistinguishable from real imagery. Furthermore, using these patches as hard-negative samples, we are able to improve the overall accuracy and significantly decrease the false positive rate of the SAR-optical matching task, thus validating our generative hard-negative mining approach's applicability to improving training in data-sparse applications.
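The core idea of hard-negative mining referred to above can be sketched in isolation: given the embedding of an anchor patch and a pool of candidate negative embeddings (whether real or VAE-generated), the "hard" negatives are those most similar to the anchor. The sketch below is illustrative only; the function name and toy vectors are assumptions, not drawn from the paper's implementation.

```python
import numpy as np

def mine_hard_negatives(anchor, negatives, k):
    """Return indices of the k candidate negatives most similar to the
    anchor under cosine similarity -- these are the 'hard' negatives."""
    a = anchor / np.linalg.norm(anchor)
    n = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    sims = n @ a                      # cosine similarity of each candidate
    return np.argsort(sims)[::-1][:k]  # hardest (most similar) first

# Toy example: candidate 0 nearly matches the anchor, candidate 1 opposes it.
anchor = np.array([1.0, 0.0])
negatives = np.array([[0.9, 0.1], [-1.0, 0.0], [0.0, 1.0]])
print(mine_hard_negatives(anchor, negatives, 2))  # -> [0 2]
```

Training a matching network against such near-miss negatives forces it to learn finer discriminative features than randomly sampled negatives would, which is the motivation for generating novel, realistic hard negatives when labelled data is sparse.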

