ADVERSARIAL DISCRIMINATIVE DOMAIN ADAPTATION FOR DEFORESTATION DETECTION

Author(s): J. Noa, P. J. Soto, G. A. O. P. Costa, D. Wittich, R. Q. Feitosa, et al.

Abstract. Although very efficient in a number of application fields, deep learning-based models are known to demand large amounts of labeled data for training. Particularly for remote sensing applications, responding to that demand is generally expensive and time-consuming. Moreover, supervised training methods tend to perform poorly when they are tested with a set of samples that does not match the general characteristics of the training set. Domain adaptation methods can be used to mitigate those problems, especially in applications where labeled data is only available for a particular region or epoch, i.e., for a source domain, but not for a target domain on which the model should be tested. In this work we introduce a domain adaptation approach based on representation matching for the deforestation detection task. The approach follows the Adversarial Discriminative Domain Adaptation (ADDA) framework, and we introduce a margin-based regularization constraint in the learning process that promotes a better convergence of the model parameters during training. The approach is evaluated using three different domains, which represent sites in different forest biomes. The experimental results show that the approach is successful in the adaptation of most of the domain combination scenarios, usually with considerable gains over the baselines.
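The abstract gives no implementation details; the sketch below illustrates an ADDA-style adaptation step in PyTorch with one possible margin-based regularization term on the discriminator logits. The network architectures, the hinge-style margin formulation and all hyper-parameters are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoders/discriminator; the real architectures are not given in the abstract.
def make_encoder():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 64))

source_encoder = make_encoder()          # assumed trained beforehand on labeled source patches
target_encoder = make_encoder()          # initialized from the source encoder, as in ADDA
target_encoder.load_state_dict(source_encoder.state_dict())
discriminator = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_t = torch.optim.Adam(target_encoder.parameters(), lr=1e-4)
margin = 1.0  # assumed margin hyper-parameter

def adaptation_step(x_src, x_tgt):
    # 1) Discriminator: separate source features (label 1) from target features (label 0),
    #    with a hinge-style margin term discouraging over-confident logits (our assumption).
    with torch.no_grad():
        f_src = source_encoder(x_src)
    f_tgt = target_encoder(x_tgt).detach()
    d_src, d_tgt = discriminator(f_src), discriminator(f_tgt)
    loss_d = F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src)) \
           + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt))
    loss_margin = (F.relu(d_src.abs() - margin) + F.relu(d_tgt.abs() - margin)).mean()
    opt_d.zero_grad(); (loss_d + loss_margin).backward(); opt_d.step()

    # 2) Target encoder: produce features the discriminator labels as "source".
    d_fool = discriminator(target_encoder(x_tgt))
    loss_t = F.binary_cross_entropy_with_logits(d_fool, torch.ones_like(d_fool))
    opt_t.zero_grad(); loss_t.backward(); opt_t.step()
    return loss_d.item(), loss_t.item()
```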

2019, Vol. 11 (12), pp. 1492
Author(s): Guangjiao Zhou, Ye Zhang

A primary problem in previous research has been the gap between generic computer vision classification tasks and targeted remote sensing applications, where prior samples are limited and unbalanced. This paper presents a fusion method to overcome this limitation. It offers a novel method based on knowledge transfer and feature association, a strong combination of transfer learning and data fusion. The former reuses layers trained on complete data sets to compute a mid-level representation of the specific target. The latter brings additional information from heterogeneous sources to enrich the features in the target domain. Firstly, a basic convolutional neural network (B_CNN) is pretrained on the CIFAR-10 dataset to produce a stable model responsible for general feature extraction from multiple inputs. Secondly, a transfer CNN (Trans_CNN) with fine-tuned and transferred parameters is constraint-trained to fit and switch between differing tasks. Meanwhile, the feature association (FA) constructs a new feature space to achieve integration between training and testing samples from different sensors. Finally, on-line detection is performed with Trans_CNN, providing a way to overcome the inadequate-sample problem in real remote sensing applications rather than an unrolled version of the training methods or a structural improvement of the CNN. Experimental results show that target detection rates without homogeneous prior samples can reach 85%; under the same conditions, a traditionally trained CNN model fails.
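As a rough illustration of the transfer step described above, the following PyTorch sketch freezes a backbone assumed to have been pretrained on CIFAR-10 (standing in for B_CNN) and fine-tunes only a new classification head on the small target set (standing in for Trans_CNN); the architecture, the class count and the feature association module are assumptions or omitted.

```python
import torch
import torch.nn as nn

# Hypothetical B_CNN backbone; the paper's exact architecture is not specified in the abstract.
b_cnn = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
# ... assume b_cnn was pretrained on CIFAR-10 ...

# Trans_CNN (illustrative): reuse the pretrained feature extractor, freeze it, and attach
# a new head fine-tuned on the small, possibly unbalanced remote sensing target set.
for p in b_cnn.parameters():
    p.requires_grad = False
head = nn.Linear(64, 2)  # e.g. target / background; the class count is an assumption
trans_cnn = nn.Sequential(b_cnn, head)
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)

def finetune_step(x, y):
    logits = trans_cnn(x)
    loss = nn.functional.cross_entropy(logits, y)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```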


Author(s): A. Paul, F. Rottensteiner, C. Heipke

In this paper we address the problem of classification of remote sensing images in the framework of transfer learning with a focus on domain adaptation. The main novel contribution is a method for transductive transfer learning in remote sensing on the basis of logistic regression. Logistic regression is a discriminative probabilistic classifier of low computational complexity, which can deal with multiclass problems. This research area deals with methods that solve problems in which labelled training data sets are assumed to be available only for a source domain, while classification is needed in a target domain with different, yet related characteristics. Classification takes place with a model of weight coefficients for hyperplanes that separate the features in the transformed feature space. In terms of logistic regression, our domain adaptation method adjusts the model parameters by iterative labelling of the target test data set. These labelled data features are iteratively added to the current training set which, at the beginning, only contains source features, while, simultaneously, a number of source features are deleted from the current training set. Experimental results based on a test series with synthetic and real data constitute a first proof of concept of the proposed method.
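A minimal scikit-learn sketch of the iterative labelling idea is given below; the confidence-based selection, the number of swapped samples and the random removal of source samples are simplifying assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def transductive_lr(X_src, y_src, X_tgt, n_iter=10, n_swap=20, seed=0):
    """Illustrative self-labelling loop: in each iteration the most confidently
    classified target samples are added to the training set and the same number
    of source samples is removed."""
    rng = np.random.default_rng(seed)
    X_train, y_train = X_src.copy(), y_src.copy()
    n_src_left = len(X_src)  # source samples stay at the front of X_train
    for _ in range(n_iter):
        clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
        conf = clf.predict_proba(X_tgt).max(axis=1)
        pseudo = clf.predict(X_tgt)
        picked = np.argsort(conf)[-n_swap:]               # most confident target samples
        X_train = np.vstack([X_train, X_tgt[picked]])
        y_train = np.concatenate([y_train, pseudo[picked]])
        drop = rng.choice(n_src_left, size=min(n_swap, n_src_left), replace=False)
        keep = np.setdiff1d(np.arange(len(X_train)), drop)
        X_train, y_train, n_src_left = X_train[keep], y_train[keep], n_src_left - len(drop)
    return clf
```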


Author(s): Renjun Xu, Pelen Liu, Yin Zhang, Fang Cai, Jindong Wang, et al.

Domain adaptation (DA) has achieved resounding success in learning a good classifier by leveraging labeled data from a source domain to adapt to an unlabeled target domain. However, in the general setting in which the target domain contains classes that are never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of the interference of the extra unknown classes. This is a much more challenging problem, since it can easily result in negative transfer due to the mismatch between the unknown and known classes. Existing approaches are susceptible to misclassification when unknown target-domain samples are distributed near the decision boundary learned from the labeled source domain. To overcome this, we propose Joint Partial Optimal Transport (JPOT), which fully utilizes information from not only the labeled source domain but also the discriminative representation of the unknown classes in the target domain. The proposed joint discriminative prototypical compactness loss can not only achieve intra-class compactness and inter-class separability, but also estimate the mean and variance of the unknown class through backpropagation, which remains intractable for previous methods due to their blindness to the structure of the unknown classes. To the best of our knowledge, this is the first optimal transport model for OSDA. Extensive experiments demonstrate that our proposed model can significantly boost the performance of open set domain adaptation on standard DA datasets.
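The sketch below illustrates only the compactness/separability component suggested by the abstract, written in PyTorch under our own assumptions (prototype definition, squared-distance and hinge terms); the partial optimal transport coupling itself is not shown.

```python
import torch
import torch.nn.functional as F

def prototypical_compactness_loss(features, labels, margin=1.0):
    """Illustrative, not the paper's exact formulation: pull each sample towards its
    class prototype (intra-class compactness) and push prototypes apart by at least
    a margin (inter-class separability)."""
    classes = labels.unique()
    protos = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    # intra-class compactness: squared distance of each sample to its own prototype
    idx = torch.stack([(classes == l).nonzero(as_tuple=True)[0][0] for l in labels])
    compact = ((features - protos[idx]) ** 2).sum(dim=1).mean()
    # inter-class separability: hinge on pairwise prototype distances
    dists = torch.cdist(protos, protos)
    off_diag = dists[~torch.eye(len(classes), dtype=torch.bool)]
    separate = F.relu(margin - off_diag).mean() if len(off_diag) > 0 else features.new_zeros(())
    return compact + separate
```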


Author(s): P. J. Soto, G. A. O. P. Costa, R. Q. Feitosa, P. N. Happ, M. X. Ortega, et al.

Abstract. Deep learning classification models require large amounts of labeled training data to perform properly, but the production of reference data for most Earth observation applications is a labor intensive, costly process. In that sense, transfer learning is an option to mitigate the demand for labeled data. In many remote sensing applications, however, the accuracy of a deep learning-based classification model trained with a specific dataset drops significantly when it is tested on a different dataset, even after fine-tuning. In general, this behavior can be credited to the domain shift phenomenon. In remote sensing applications, domain shift can be associated with changes in the environmental conditions during the acquisition of new data, variations of objects’ appearances, geographical variability and different sensor properties, among other aspects. In recent years, deep learning-based domain adaptation techniques have been used to alleviate the domain shift problem. Recent improvements in domain adaptation technology rely on techniques based on Generative Adversarial Networks (GANs), such as the Cycle-Consistent Generative Adversarial Network (CycleGAN), which adapts images across different domains by learning nonlinear mapping functions between the domains. In this work, we exploit the CycleGAN approach for domain adaptation in a particular change detection application, namely, deforestation detection in the Amazon forest. Experimental results indicate that the proposed approach is capable of alleviating the effects associated with domain shift in the context of the target application.
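For reference, the core CycleGAN ingredient mentioned above is the cycle-consistency constraint between the two learned mappings; the minimal PyTorch sketch below uses toy generators and omits the adversarial and identity losses of the full model.

```python
import torch.nn as nn

# Hypothetical lightweight generators; the actual work builds on the standard
# CycleGAN setup, with one generator per mapping direction between domains.
def make_generator():
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1))

G_s2t = make_generator()   # source-domain image -> target-domain style
G_t2s = make_generator()   # target-domain image -> source-domain style
l1 = nn.L1Loss()

def cycle_consistency_loss(x_src, x_tgt, lam=10.0):
    # An image translated to the other domain and back should reconstruct itself.
    loss_src_cycle = l1(G_t2s(G_s2t(x_src)), x_src)
    loss_tgt_cycle = l1(G_s2t(G_t2s(x_tgt)), x_tgt)
    return lam * (loss_src_cycle + loss_tgt_cycle)
```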


2021
Author(s): Bin Wang, Gang Li, Chao Wu, WeiShan Zhang, Jiehan Zhou, et al.

Abstract Unsupervised federated domain adaptation uses the knowledge from several distributed unlabelled source domains to complete the learning on the unlabelled target domain. Some of the existing methods have limited effectiveness and involve frequent communication. This paper proposes a framework to solve the distributed multi-source domain adaptation problem, referred to as self-supervised federated domain adaptation (SFDA). Specifically, a multi-domain model generalization balance (MDMGB) is proposed to aggregate the models from multiple source domains in each round of communication. A weighted strategy based on centroid similarity is also designed for SFDA. SFDA conducts self-supervised training on the target domain to tackle domain shift. Compared with the classical federated adversarial domain adaptation algorithm, SFDA not only reduces communication cost and strengthens privacy protection, but also improves model accuracy.
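The abstract does not specify the aggregation formula; the PyTorch sketch below shows one plausible reading of a centroid-similarity-weighted aggregation of source models, with the cosine-similarity weighting and softmax normalization being assumptions rather than the authors' method.

```python
import torch

def aggregate_source_models(state_dicts, source_centroids, target_centroid):
    """Illustrative aggregation step: weight each source model by the cosine
    similarity between its feature centroid and the target-domain centroid,
    then average the parameters accordingly."""
    sims = torch.stack([torch.nn.functional.cosine_similarity(
        c.flatten(), target_centroid.flatten(), dim=0) for c in source_centroids])
    weights = torch.softmax(sims, dim=0)
    aggregated = {}
    for key in state_dicts[0]:
        aggregated[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return aggregated, weights
```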


2020, Vol. 34 (04), pp. 5940-5947
Author(s): Hui Tang, Kui Jia

Given labeled instances on a source domain and unlabeled ones on a target domain, unsupervised domain adaptation aims to learn a task classifier that can well classify target instances. Recent advances rely on domain-adversarial training of deep networks to learn domain-invariant features. However, due to an issue of mode collapse induced by the separate design of task and domain classifiers, these methods are limited in aligning the joint distributions of feature and category across domains. To overcome it, we propose a novel adversarial learning method termed Discriminative Adversarial Domain Adaptation (DADA). Based on an integrated category and domain classifier, DADA has a novel adversarial objective that encourages a mutually inhibitory relation between category and domain predictions for any input instance. We show that, under practical conditions, it defines a minimax game that can promote joint distribution alignment. Beyond the traditional closed set domain adaptation, we also extend DADA to the extremely challenging problem settings of partial and open set domain adaptation. Experiments show the efficacy of our proposed methods, and we achieve the new state of the art for all three settings on benchmark datasets.
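One plausible reading of the "integrated category and domain classifier" is a single head with K category logits plus one extra domain logit, so that raising the probability of a category necessarily lowers the probability of the domain output and vice versa. The PyTorch sketch below illustrates that idea only; the loss terms, the class count and the adversarial training schedule are assumptions, not the paper's exact objective.

```python
import torch.nn as nn
import torch.nn.functional as F

K = 5  # number of task categories (assumed)
feature_extractor = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
joint_classifier = nn.Linear(64, K + 1)  # K category logits + 1 domain logit

def joint_losses(x_src, y_src, x_tgt):
    p_src = joint_classifier(feature_extractor(x_src)).softmax(dim=1)
    p_tgt = joint_classifier(feature_extractor(x_tgt)).softmax(dim=1)
    # Source instances should receive probability mass on their true category,
    # target instances on the extra (K+1)-th "domain" output; because both shares
    # come from one softmax, pushing one up pushes the other down (mutual inhibition).
    loss_src = F.nll_loss(p_src[:, :K].clamp_min(1e-8).log(), y_src)
    loss_tgt = -p_tgt[:, K].clamp_min(1e-8).log().mean()
    return loss_src, loss_tgt
```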


2021, Vol. 9, pp. 1355-1373
Author(s): Guy Rotman, Amir Feder, Roi Reichart

Abstract Recent improvements in the predictive quality of natural language processing systems are often dependent on a substantial increase in the number of model parameters. This has led to various attempts at compressing such models, but existing methods have not considered the differences in the predictive power of various model components or in the generalizability of the compressed models. To understand the connection between model compression and out-of-distribution generalization, we define the task of compressing language representation models such that they perform best in a domain adaptation setting. We choose to address this problem from a causal perspective, attempting to estimate the average treatment effect (ATE) of a model component, such as a single layer, on the model’s predictions. Our proposed ATE-guided Model Compression scheme (AMoC) generates many model candidates, differing by the model components that were removed. Then, we select the best candidate through a stepwise regression model that utilizes the ATE to predict the expected performance on the target domain. AMoC outperforms strong baselines on dozens of domain pairs across three text classification and sequence tagging tasks.
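As a simplified illustration of the selection step, the sketch below regresses observed target-domain performance on per-candidate ATE features and ranks new candidates by predicted performance; ordinary least squares replaces the paper's stepwise regression, and the feature construction is assumed.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative pipeline (a simplification of AMoC, not the released implementation):
# 1) build candidates by removing one model component (e.g. a layer) at a time,
# 2) describe each candidate by ATE-based features of the removed component,
# 3) regress observed target performance on those features for known domain pairs,
#    and use the fit to rank candidates on a new, unseen pair.

def rank_candidates(ate_features, observed_scores, new_ate_features):
    """ate_features: (n_candidates, n_features) ATE-based descriptors of candidates
    observed_scores: target-domain scores measured for those candidates
    new_ate_features: descriptors of candidates for an unseen domain pair."""
    reg = LinearRegression().fit(np.asarray(ate_features), np.asarray(observed_scores))
    predicted = reg.predict(np.asarray(new_ate_features))
    best = int(np.argmax(predicted))   # candidate expected to transfer best
    return best, predicted
```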

