Deep Adversarial Domain Adaptation Method for Cross-Domain Classification in High-Resolution Remote Sensing Images

2019, Vol. 56 (11), pp. 112801
Author(s): Wenxiu Teng (滕文秀), Ni Wang (王妮), Taisheng Chen (陈泰生), Benlin Wang (王本林), Menglin Chen (陈梦琳), ...

Author(s): G. Gao, M. Zhang, Y. Gu

Multi-temporal remote sensing image classification is very useful for monitoring land cover changes. Traditional approaches in this field mainly face limited labelled samples and spectral drift of the image information. As spatial resolution improves, "salt and pepper" effects appear and classification results are degraded when pixel-wise classification algorithms, which ignore the spatial relationship among pixels, are applied to high-resolution satellite images. To classify multi-temporal high-resolution images under limited labelled samples, spectral drift, and the "salt and pepper" problem, an object-based manifold alignment method is proposed. Firstly, the multi-temporal multispectral images are segmented into superpixels by simple linear iterative clustering (SLIC). Secondly, features extracted from each superpixel are assembled into a feature vector. Thirdly, a majority-voting manifold alignment method designed for high-resolution imagery is proposed to map the vector data into an alignment space. Finally, all data in the alignment space are classified with the KNN method. Multi-temporal images from different areas and from the same area are both considered in this paper. In the experiments, two groups of multi-temporal HR images collected by the Chinese GF-1 and GF-2 satellites are used for performance evaluation. Experimental results indicate that the proposed method not only significantly outperforms traditional domain adaptation methods in classification accuracy, but also effectively alleviates the "salt and pepper" problem.
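Since the abstract walks through a concrete pipeline (SLIC superpixels, per-superpixel feature vectors, alignment, KNN), a minimal Python sketch is given below, assuming scikit-image and scikit-learn are available. The majority-voting manifold alignment itself is the paper's contribution and is not specified here; it is represented only by a pass-through placeholder, and the usage lines with img_t1, img_t2, and y_t1 are hypothetical.

# Minimal sketch of the object-based pipeline: SLIC superpixels -> per-superpixel
# feature vectors -> (alignment placeholder) -> KNN classification.
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import KNeighborsClassifier

def superpixel_features(image, n_segments=500):
    """Cut a multispectral image (H, W, B) into SLIC superpixels and return
    one mean-spectrum feature vector per superpixel."""
    segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    n = segments.max() + 1
    feats = np.zeros((n, image.shape[-1]), dtype=np.float64)
    for s in range(n):
        feats[s] = image[segments == s].mean(axis=0)  # mean spectrum of superpixel s
    return segments, feats

def align(source_feats, target_feats):
    # Placeholder for the paper's majority-voting manifold alignment;
    # here both domains are simply passed through unchanged.
    return source_feats, target_feats

# Hypothetical usage with already-loaded arrays img_t1, img_t2 and
# superpixel-level labels y_t1 for the first acquisition date:
# seg1, f1 = superpixel_features(img_t1)
# seg2, f2 = superpixel_features(img_t2)
# f1a, f2a = align(f1, f2)
# clf = KNeighborsClassifier(n_neighbors=5).fit(f1a, y_t1)
# pred_t2 = clf.predict(f2a)  # per-superpixel labels for the second date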


2021, Vol. 13 (21), pp. 4386
Author(s): Ying Chen, Qi Liu, Teng Wang, Bin Wang, Xiaoliang Meng

In recent years, object detection has achieved excellent results on large amounts of annotated data, but when there is a discrepancy between the annotated data and the real test data, the performance of a trained object detection model often degrades when it is transferred directly to the real test dataset. Compared with natural images, remote sensing images differ greatly in appearance and quality. Traditional methods need to re-label all image data before interpretation, which consumes a great deal of manpower and time. Therefore, it is of practical significance to study Cross-Domain Adaptation Object Detection (CDAOD) for remote sensing images. To solve the above problems, our paper proposes a Rotation-Invariant and Relation-Aware (RIRA) CDAOD network. We trained the network at the image level and at the prototype level based on a relation-aware graph to align the feature distributions, and added a rotation-invariant regularizer to handle rotation diversity. The Faster R-CNN network was adopted as the backbone framework. We conducted experiments on two typical remote sensing building detection datasets and set up three domain adaptation scenarios: WHU 2012 → WHU 2016, Inria (Chicago) → Inria (Austin), and WHU 2012 → Inria (Austin). The results show that our method effectively improves detection in the target domain and outperforms the competing methods, obtaining the best results in all three scenarios.
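Purely as an illustration (not the authors' implementation), the sketch below shows one common way to write a rotation-invariant regularizer in PyTorch: features of rotated copies of an image are pulled toward the features of the original view. The names backbone, lam, images, detection_loss, and domain_alignment_loss are assumptions of this sketch, standing in for the detector's feature extractor, a weighting coefficient, and the other loss terms mentioned in the abstract.

# Illustrative rotation-invariant regularizer (sketch, not the RIRA code).
import torch
import torch.nn.functional as F

def rotation_invariant_loss(backbone, images):
    """images: (N, C, H, W) tensor. Returns a scalar regularization term."""
    base = backbone(images)                        # features of the original view
    loss = 0.0
    for k in (1, 2, 3):                            # 90, 180, 270 degree rotations
        rotated = torch.rot90(images, k, dims=(2, 3))
        loss = loss + F.mse_loss(backbone(rotated), base)  # pull rotated features to the base features
    return loss / 3.0

# Hypothetical use inside a detection training loop:
# total_loss = detection_loss + domain_alignment_loss + lam * rotation_invariant_loss(backbone, images)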

