Differentially Private Optimal Transport: Application to Domain Adaptation

Author(s):  
Nam LeTien ◽  
Amaury Habrard ◽  
Marc Sebban

Optimal transport has received much attention during the past few years to deal with domain adaptation tasks. The goal is to transfer knowledge from a source domain to a target domain by finding a transportation of minimal cost moving the source distribution to the target one. In this paper, we address the challenging task of privacy-preserving domain adaptation by optimal transport. Using the Johnson-Lindenstrauss transform together with some noise, we present the first differentially private optimal transport model and show how it can be directly applied to both unsupervised and semi-supervised domain adaptation scenarios. Our theoretically grounded method allows the optimization of the transportation plan and the Wasserstein distance between the two distributions while protecting the data of both domains. We perform an extensive series of experiments on various benchmarks (VisDA, Office-Home and Office-Caltech datasets) that demonstrates the efficiency of our method compared to non-private strategies.
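
To make the recipe concrete, here is a minimal sketch of the general mechanism the abstract describes: project both domains with a shared Johnson-Lindenstrauss random matrix, add noise to the projected features, and only then compute the transport plan. It uses the POT library; the noise distribution and its scale are illustrative placeholders, not the authors' calibrated privacy mechanism.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def private_wasserstein(Xs, Xt, k=32, noise_scale=0.1, seed=0):
    """Sketch: shared JL projection + additive noise, then standard OT.
    The Laplace noise and its scale are illustrative, not a calibrated
    differential-privacy mechanism."""
    rng = np.random.default_rng(seed)
    d = Xs.shape[1]
    P = rng.normal(0.0, 1.0 / np.sqrt(k), size=(d, k))  # JL random matrix
    Zs = Xs @ P + rng.laplace(scale=noise_scale, size=(len(Xs), k))
    Zt = Xt @ P + rng.laplace(scale=noise_scale, size=(len(Xt), k))
    a = np.full(len(Zs), 1.0 / len(Zs))  # uniform sample weights
    b = np.full(len(Zt), 1.0 / len(Zt))
    M = ot.dist(Zs, Zt)                  # squared Euclidean ground cost
    G = ot.emd(a, b, M)                  # transportation plan
    return G, float(np.sum(G * M))       # plan and transport cost
```

Only the noisy low-dimensional projections ever enter the OT solver, so the transportation plan and the Wasserstein cost are computed without exposing either domain's original features.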

Author(s):  
Renjun Xu ◽  
Pelen Liu ◽  
Yin Zhang ◽  
Fang Cai ◽  
Jindong Wang ◽  
...  

Domain adaptation (DA) has achieved resounding success in learning a good classifier by leveraging labeled data from a source domain to adapt to an unlabeled target domain. However, in the more general setting where the target domain contains classes never observed in the source domain, namely Open Set Domain Adaptation (OSDA), existing DA methods fail because of the interference of the extra unknown classes. This is a much more challenging problem, since it can easily result in negative transfer due to the mismatch between the unknown and known classes. Existing approaches are susceptible to misclassification when unknown target-domain samples lie near the decision boundary learned from the labeled source domain. To overcome this, we propose Joint Partial Optimal Transport (JPOT), which fully utilizes not only the information of the labeled source domain but also the discriminative representation of the unknown class in the target domain. The proposed joint discriminative prototypical compactness loss can not only achieve intra-class compactness and inter-class separability, but also estimate the mean and variance of the unknown class through backpropagation, which remains intractable for previous methods due to their blindness to the structure of the unknown classes. To the best of our knowledge, this is the first optimal transport model for OSDA. Extensive experiments demonstrate that our proposed model can significantly boost the performance of open set domain adaptation on standard DA datasets.
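
The "partial" ingredient that JPOT builds on can be illustrated with POT's generic partial Wasserstein solver: transporting only a fraction of the total mass lets outlying (unknown-class) target samples remain unmatched instead of distorting the coupling. The toy data and the mass fraction below are illustrative assumptions; this shows that one ingredient, not the authors' full JPOT loss.

```python
import numpy as np
from ot import dist
from ot.partial import partial_wasserstein

rng = np.random.default_rng(0)
Xs = rng.normal(0, 1, (50, 16))               # source: known classes only
Xt = np.vstack([rng.normal(0, 1, (40, 16)),   # target: known-class samples
                rng.normal(5, 1, (10, 16))])  # target: far "unknown" samples

a = np.full(len(Xs), 1.0 / len(Xs))
b = np.full(len(Xt), 1.0 / len(Xt))
M = dist(Xs, Xt)

# Move only 80% of the mass: distant unknown-class target points tend to
# be left unmatched instead of distorting the coupling.
G = partial_wasserstein(a, b, M, m=0.8)
unmatched = b - G.sum(axis=0)  # residual mass, largest on the outliers
```

The residual target mass concentrates on the far-away samples, which is the intuition OSDA methods exploit to flag unknown classes.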


Symmetry ◽  
2020 ◽  
Vol 12 (12) ◽  
pp. 1994
Author(s):  
Ping Li ◽  
Zhiwei Ni ◽  
Xuhui Zhu ◽  
Juan Song ◽  
Wenying Wu

Domain adaptation aims to learn a robust classifier for the target domain using the source domain, even though the two domains often follow different distributions. To bridge the distribution shift between the two domains, most previous works align their feature distributions through feature transformation, among which optimal transport for domain adaptation has attracted researchers' interest, as it can exploit the local information of the two domains when mapping the source instances to the target ones by minimizing the Wasserstein distance between their feature distributions. However, it may weaken the feature discriminability of the source domain and thus degrade domain adaptation performance. To address this problem, this paper proposes a two-stage feature-based adaptation approach, referred to as optimal transport with dimensionality reduction (OTDR). In the first stage, we apply a dimensionality reduction that maximizes the intradomain variance while enforcing intraclass compactness in the source domain, to separate data samples as much as possible and enhance the feature discriminability of the source domain. In the second stage, we leverage an optimal transport-based technique to preserve the local information of the two domains. Notably, the desirable properties obtained in the first stage mitigate the degradation of the source domain's feature discriminability in the second stage. Extensive experiments on several cross-domain image datasets validate that OTDR is superior to its competitors in classification accuracy.
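
A two-stage pipeline of this shape is easy to sketch with scikit-learn and POT. Plain PCA stands in for the paper's discriminability-preserving projection (which additionally uses source labels), so both the projection and the regularization value below are illustrative assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, (100, 64))   # toy source features
Xt = rng.normal(0.5, 1.0, (120, 64))   # toy shifted target features

# Stage 1 (stand-in): the paper learns a projection that maximizes variance
# while keeping source classes compact; unsupervised PCA is a placeholder.
pca = PCA(n_components=20).fit(np.vstack([Xs, Xt]))
Zs, Zt = pca.transform(Xs), pca.transform(Xt)

# Stage 2: entropic OT maps the reduced source samples onto the target.
transport = ot.da.SinkhornTransport(reg_e=1.0)
transport.fit(Xs=Zs, Xt=Zt)
Zs_mapped = transport.transform(Xs=Zs)  # barycentric mapping of source
# A classifier trained on (Zs_mapped, source labels) is then applied to Zt.
```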


2020 ◽  
Vol 34 (07) ◽  
pp. 12975-12983
Author(s):  
Sicheng Zhao ◽  
Guangzhi Wang ◽  
Shanghang Zhang ◽  
Yang Gu ◽  
Yaxian Li ◽  
...  

Deep neural networks suffer from performance decay when there is domain shift between the labeled source domain and the unlabeled target domain, which motivates research on domain adaptation (DA). Conventional DA methods usually assume that the labeled data is sampled from a single source distribution. In practice, however, labeled data may be collected from multiple sources, and naively applying single-source DA algorithms may lead to suboptimal solutions. In this paper, we propose a novel multi-source distilling domain adaptation (MDDA) network, which not only considers the different distances between the multiple sources and the target, but also investigates the different similarities of the source samples to the target ones. Specifically, the proposed MDDA includes four stages: (1) pre-train the source classifiers separately using the training data from each source; (2) adversarially map the target into the feature space of each source respectively by minimizing the empirical Wasserstein distance between source and target; (3) select the source training samples that are closer to the target to fine-tune the source classifiers; and (4) classify each encoded target feature with the corresponding source classifier, and aggregate the different predictions using per-domain weights that reflect the discrepancy between each source and the target. Extensive experiments are conducted on public DA benchmarks, and the results demonstrate that the proposed MDDA significantly outperforms the state-of-the-art approaches. Our source code is released at: https://github.com/daoyuan98/MDDA.
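
Stage (4) is the easiest to sketch: aggregate per-source predictions with weights derived from each source's Wasserstein discrepancy to the target. The inverse-distance weighting rule and the sklearn-style predict_proba interface below are assumptions for illustration, not the paper's exact definition of its domain weights.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def aggregate_predictions(target_feats_per_src, clfs, src_feats_list):
    """Stage-4 sketch: weight each source classifier by the inverse of the
    empirical Wasserstein distance between its source and the target.
    Weighting rule and classifier interface are illustrative assumptions."""
    dists, probs = [], []
    for Zt, clf, Zs in zip(target_feats_per_src, clfs, src_feats_list):
        a = np.full(len(Zs), 1.0 / len(Zs))
        b = np.full(len(Zt), 1.0 / len(Zt))
        dists.append(ot.emd2(a, b, ot.dist(Zs, Zt)))  # scalar OT cost
        probs.append(clf.predict_proba(Zt))
    w = 1.0 / np.asarray(dists)
    w /= w.sum()  # closer source -> larger weight
    return sum(wi * pi for wi, pi in zip(w, probs))
```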


Author(s):  
Tanguy Kerdoncuff ◽  
Rémi Emonet ◽  
Marc Sebban

Domain Adaptation aims at benefiting from a labeled dataset drawn from a source distribution to learn a model from examples generated from a different but related target distribution. Creating a domain-invariant representation of the source and target domains is the most widely used technique. A simple and robust way to perform this task consists in (i) representing the two domains by subspaces described by their respective eigenvectors and (ii) seeking a mapping function which aligns them. In this paper, we propose to use Optimal Transport (OT) and its associated Wasserstein distance to perform this alignment. While the idea of using OT in domain adaptation is not new, the original contribution of this paper is two-fold: (i) we derive a generalization bound on the target error involving several Wasserstein distances. This prompts us to optimize the ground metric of OT to reduce the target risk; (ii) from this theoretical analysis, we design an algorithm (MLOT) which optimizes a Mahalanobis distance leading to a transportation plan that adapts better. Extensive experiments demonstrate the effectiveness of this original approach.
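
The core object MLOT optimizes, an OT problem whose ground metric is a Mahalanobis distance M_ij = (x_i - y_j)^T A (x_i - y_j), can be evaluated in a few lines with POT. Here the metric matrix A is a fixed input; learning A, which is the paper's contribution, is omitted.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def mahalanobis_ot_cost(Xs, Xt, A):
    """OT cost under a Mahalanobis ground metric defined by matrix A.
    In MLOT the matrix A is *learned*; here it is a fixed input, so this
    is only the inner evaluation step, not the full optimization."""
    L = np.linalg.cholesky(A)    # A = L L^T, assumed positive definite
    M = ot.dist(Xs @ L, Xt @ L)  # squared Mahalanobis distances
    a = np.full(len(Xs), 1.0 / len(Xs))
    b = np.full(len(Xt), 1.0 / len(Xt))
    return ot.emd2(a, b, M)      # scalar transport cost
```

Factoring A = LL^T turns the Mahalanobis cost into a squared Euclidean distance in the transformed space, so the standard OT solver applies unchanged.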


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6994
Author(s):  
Siqi Bai ◽  
Yongjie Luo ◽  
Qun Wan

Wireless fingerprinting localization (FL) systems identify locations by building radio fingerprint maps, aiming to provide satisfactory location solutions for complex environments. However, the radio map changes easily, and the cost of building a new one is high. One research focus is therefore to transfer knowledge from old radio maps to a new one. Feature-based transfer learning methods help by mapping the source fingerprints and the target fingerprints to a common hidden domain, and then minimizing the maximum mean discrepancy (MMD) between the empirical distributions in that latent domain. In this paper, optimal transport (OT)-based transfer learning is adopted to directly map the fingerprints from the source domain to the target domain by minimizing the Wasserstein distance, so that the data distributions of the two domains can be better matched and the positioning performance in the target domain is improved. Two channel models are used to simulate the transfer scenarios, and tests on public measured data further verify that OT-based transfer learning achieves better accuracy and performance when the radio map changes in FL, indicating the importance of the method in this field.
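
A direct source-to-target mapping of this kind is commonly realized as a barycentric projection of the OT plan; the sketch below shows that step on synthetic RSS fingerprints with POT. The toy radio maps, the entropic regularization value, and the uniform weights are all illustrative assumptions.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# Toy radio maps: rows are RSS fingerprints, columns are access points.
rng = np.random.default_rng(1)
F_old = rng.normal(-70, 6, (100, 8))         # source map (old survey)
F_new = F_old + rng.normal(3, 2, (100, 8))   # target map after a change

a = np.full(len(F_old), 1.0 / len(F_old))
b = np.full(len(F_new), 1.0 / len(F_new))
M = ot.dist(F_old, F_new)

# Entropic OT plan, then barycentric mapping: each old fingerprint moves
# to a weighted average of the new fingerprints it is coupled with.
G = ot.sinkhorn(a, b, M, reg=10.0)
F_mapped = (G / G.sum(axis=1, keepdims=True)) @ F_new
# Location labels of F_old can now be reused to train a model on F_mapped.
```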


Sensors ◽  
2020 ◽  
Vol 20 (12) ◽  
pp. 3606
Author(s):  
Han Sun ◽  
Xinyi Chen ◽  
Ling Wang ◽  
Dong Liang ◽  
Ningzhong Liu ◽  
...  

Deep neural networks have been successfully applied to domain adaptation, which uses the labeled data of a source domain to supplement useful information for a target domain. Deep Adaptation Network (DAN) is one such efficient framework; it utilizes Multi-Kernel Maximum Mean Discrepancy (MK-MMD) to align the feature distributions in a reproducing kernel Hilbert space. However, DAN does not perform very well in feature-level transfer, and the assumption that the source and target domains share classifiers is too strict in different adaptation scenarios. In this paper, we further improve the adaptability of DAN by incorporating Domain Confusion (DC) and Classifier Adaptation (CA). To achieve this, we propose a novel domain adaptation method named C2DAN. Our approach first enables Domain Confusion (DC) by using a domain discriminator for adversarial training. For Classifier Adaptation (CA), a residual block is added to the source domain classifier in order to learn the difference between the source classifier and the target classifier. Beyond validating our framework on the standard domain adaptation dataset Office-31, we also evaluate it on the Comprehensive Cars (CompCars) dataset, and the experimental results demonstrate the effectiveness of the proposed framework C2DAN.
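
The Classifier Adaptation idea, a target classifier expressed as a frozen source classifier plus a learned residual correction, can be sketched in a few lines of PyTorch. The layer sizes and the choice to apply the residual to the logits are placeholders, not the paper's exact architecture.

```python
import torch.nn as nn


class ResidualClassifierAdapter(nn.Module):
    """Sketch of the CA idea: target classifier = frozen source classifier
    + a small learned residual. Sizes are illustrative placeholders."""

    def __init__(self, source_classifier: nn.Module, num_classes: int):
        super().__init__()
        self.source_classifier = source_classifier
        for p in self.source_classifier.parameters():
            p.requires_grad = False            # keep the source head fixed
        self.residual = nn.Sequential(         # learns the classifier gap
            nn.Linear(num_classes, num_classes),
            nn.ReLU(),
            nn.Linear(num_classes, num_classes),
        )

    def forward(self, features):
        logits = self.source_classifier(features)
        return logits + self.residual(logits)  # skip connection
```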


Author(s):  
Yuguang Yan ◽  
Wen Li ◽  
Hanrui Wu ◽  
Huaqing Min ◽  
Mingkui Tan ◽  
...  

Heterogeneous domain adaptation (HDA) aims to exploit knowledge from a heterogeneous source domain to improve the learning performance in a target domain. Since the feature spaces of the source and target domains are different, transferring knowledge is extremely difficult. In this paper, we propose a novel semi-supervised algorithm for HDA by exploiting the theory of optimal transport (OT), a powerful tool originally designed for aligning two different distributions. To match the samples between heterogeneous domains, we propose to preserve the semantic consistency between the domains by incorporating label information into the entropic Gromov-Wasserstein discrepancy, a metric in OT for different metric spaces, resulting in a new semi-supervised scheme. Via the new scheme, the target samples and the transported source samples with the same label are enforced to follow similar distributions. Lastly, based on the Kullback-Leibler divergence, we develop an efficient algorithm to optimize the resultant problem. Comprehensive experiments on both synthetic and real-world datasets demonstrate the effectiveness of our proposed method.
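
Because Gromov-Wasserstein compares within-domain distance structures rather than raw features, it can couple domains whose feature spaces have different dimensions, which is exactly the HDA situation. The sketch below shows the plain entropic GW coupling with POT; the label-information term that makes the paper's scheme semi-supervised is omitted, and all sizes are toy values.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(0)
Xs = rng.normal(0, 1, (40, 30))  # source features, 30-dim
Xt = rng.normal(0, 1, (50, 12))  # target features, 12-dim (heterogeneous)

# GW works on intra-domain distance matrices, so the two feature spaces
# never need to share dimensions.
C1, C2 = ot.dist(Xs, Xs), ot.dist(Xt, Xt)
C1 /= C1.max()
C2 /= C2.max()
p = np.full(len(Xs), 1.0 / len(Xs))
q = np.full(len(Xt), 1.0 / len(Xt))

G = ot.gromov.entropic_gromov_wasserstein(
    C1, C2, p, q, loss_fun='square_loss', epsilon=5e-3)
# The paper additionally biases this coupling with label information so
# that same-label samples are matched; that term is omitted here.
```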


2020 ◽  
Vol 34 (07) ◽  
pp. 10655-10662
Author(s):  
Jongwon Choi ◽  
Youngjoon Choi ◽  
Jihoon Kim ◽  
Jinyeop Chang ◽  
Ilhwan Kwon ◽  
...  

We describe an unsupervised domain adaptation framework for images based on a transform to an abstract intermediate domain and an ensemble of classifiers seeking a consensus. The intermediate domain can be thought of as a latent domain to which both the source and target domains can be transferred easily. The proposed framework aligns both domains to the intermediate domain, which greatly improves the adaptation performance when the source and target domains are notably dissimilar. In addition, we propose an ensemble model trained by confusing multiple classifiers and letting them alternately reach a consensus to enhance the adaptation performance for ambiguous samples. To estimate the hidden intermediate domain and the unknown labels of the target domain simultaneously, we develop a training algorithm using a double-structured architecture. We validate the proposed framework in hard adaptation scenarios with real-world datasets, from simple synthetic domains to complex real-world domains. The proposed algorithm outperforms the previous state-of-the-art algorithms in various environments.
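
One way to picture the consensus step is as agreement-based pseudo-labeling over the ensemble: a target sample is trusted only when all classifiers agree confidently. This is a generic sketch of that idea; the agreement rule and the confidence threshold are assumptions, not the paper's training procedure.

```python
import numpy as np


def consensus_pseudo_labels(prob_list, threshold=0.9):
    """Generic consensus sketch: keep a target sample only when every
    classifier in the ensemble predicts the same class confidently.
    Threshold and agreement rule are illustrative assumptions."""
    preds = [p.argmax(axis=1) for p in prob_list]   # per-classifier labels
    confs = [p.max(axis=1) for p in prob_list]      # per-classifier confidence
    agree = np.all([pr == preds[0] for pr in preds], axis=0)
    confident = np.all([c >= threshold for c in confs], axis=0)
    mask = agree & confident
    return preds[0], mask  # pseudo-labels and which samples to trust
```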


Author(s):  
Tuan Nguyen ◽  
Trung Le ◽  
Nhan Dam ◽  
Quan Hung Tran ◽  
Truyen Nguyen ◽  
...  

Using the principle of imitation learning and the theory of optimal transport, we propose in this paper a novel model for unsupervised domain adaptation named Teacher Imitation Domain Adaptation with Optimal Transport (TIDOT). Our model includes two cooperative agents: a teacher and a student. The former agent is trained to be an expert on the labeled data in the source domain, whilst the latter one aims to work with the unlabeled data in the target domain. More specifically, optimal transport is applied to quantify the sum of the distance between the embedded distributions of the source and target data in the joint space and the distance between the predictive distributions of the two agents; by minimizing this quantity, TIDOT can mitigate not only the data shift but also the label shift. Comprehensive empirical studies show that TIDOT outperforms existing state-of-the-art methods on benchmark datasets.
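
A coupling over the joint space of features and predictions can be sketched by mixing two ground costs before calling the OT solver. The additive combination and the weight alpha below are illustrative assumptions about how such a joint cost might be assembled, not the paper's exact objective.

```python
import numpy as np
import ot  # POT: Python Optimal Transport


def joint_ot_cost(Zs, Zt, Ps, Pt, alpha=1.0):
    """Sketch of a joint-space coupling in the spirit of TIDOT: the ground
    cost mixes a feature term with a prediction-discrepancy term between
    the two agents' outputs. The mixing weight alpha is an assumption."""
    M_feat = ot.dist(Zs, Zt)        # embedded-feature distances
    M_pred = ot.dist(Ps, Pt)        # prediction distances
    M = M_feat + alpha * M_pred     # combined joint-space ground cost
    a = np.full(len(Zs), 1.0 / len(Zs))
    b = np.full(len(Zt), 1.0 / len(Zt))
    return ot.emd2(a, b, M)         # quantity to minimize during training
```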


Author(s):  
C. Daniel Batson

This book provides an example of how the scientific method can be used to address a fundamental question about human nature. For centuries—indeed for millennia—the egoism–altruism debate has echoed through Western thought. Egoism says that the motivation for everything we do, including all of our seemingly selfless acts of care for others, is to gain one or another self-benefit. Altruism, while not denying the force of self-benefit, says that under certain circumstances we can care for others for their sakes, not our own. Over the past half-century, social psychologists have turned to laboratory experiments to provide a scientific resolution of this human nature debate. The experiments focused on the possibility that empathic concern—other-oriented emotion elicited by and congruent with the perceived welfare of someone in need—produces altruistic motivation to remove that need. With carefully constructed experimental designs, these psychologists have tested the nature of the motivation produced by empathic concern, determining whether it is egoistic or altruistic. This series of experiments has provided an answer to a fundamental question about what makes us tick. Framed as a detective story, the book traces this scientific search for altruism through the numerous twists and turns that led to the conclusion that empathy-induced altruism is indeed part of our nature. It then examines the implications of this conclusion—negative implications as well as positive—both for our understanding of who we are as humans and for how we might create a more humane society.

