Multi-view Adversarially Learned Inference for Cross-domain Joint Distribution Matching

Author(s):  
Changying Du ◽  
Changde Du ◽  
Xingyu Xie ◽  
Chen Zhang ◽  
Hao Wang
Author(s):  
Yuan-Ting Hsieh ◽  
Shi-Yen Tao ◽  
Yao-Hung Hubert Tsai ◽  
Yi-Ren Yeh ◽  
Yu-Chiang Frank Wang

Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7036
Author(s):  
Chao Han ◽  
Xiaoyang Li ◽  
Zhen Yang ◽  
Deyun Zhou ◽  
Yiyang Zhao ◽  
...  

Domain adaptation aims to handle the distribution mismatch between training and testing data and has achieved dramatic progress in multi-sensor systems. Previous methods align the cross-domain distributions using statistics such as means and variances. Despite their appeal, such methods often fail to model the discriminative structures existing within testing samples. In this paper, we present a sample-guided adaptive class prototype method that requires no distribution matching strategy. Specifically, two adaptive measures are proposed. First, a modified nearest class prototype is introduced, which allows more diversity within the same class while keeping most of the class-wise discriminative information. Second, we put forward an easy-to-hard testing scheme that takes into account the different difficulties in recognizing target samples: easy samples are classified and selected to assist the prediction of hard samples. Extensive experiments verify the effectiveness of the proposed method.
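The abstract does not give implementation details, but the easy-to-hard nearest-prototype idea can be illustrated with a minimal sketch under stated assumptions: class prototypes are computed from source features, confidently assigned ("easy") target samples are pseudo-labeled and used to refine the prototypes, and the remaining ("hard") samples are classified against the refined prototypes. The confidence threshold, softmax weighting, and helper names below are illustrative choices, not the paper's method.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    """Mean feature vector per class (the class prototypes)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def easy_to_hard_predict(src_feat, src_labels, tgt_feat, num_classes, conf_thresh=0.8):
    """Illustrative easy-to-hard nearest-prototype classification (assumed scheme)."""
    protos = class_prototypes(src_feat, src_labels, num_classes)

    # Distance of every target sample to every class prototype.
    dists = np.linalg.norm(tgt_feat[:, None, :] - protos[None, :, :], axis=-1)
    probs = np.exp(-dists) / np.exp(-dists).sum(axis=1, keepdims=True)
    pseudo = probs.argmax(axis=1)
    conf = probs.max(axis=1)

    # "Easy" targets: confidently assigned samples are used to refine the prototypes.
    easy = conf >= conf_thresh
    if easy.any():
        refined = class_prototypes(
            np.concatenate([src_feat, tgt_feat[easy]]),
            np.concatenate([src_labels, pseudo[easy]]),
            num_classes,
        )
        # "Hard" targets are re-classified with the refined prototypes.
        hard_dists = np.linalg.norm(tgt_feat[~easy][:, None, :] - refined[None, :, :], axis=-1)
        pseudo[~easy] = hard_dists.argmin(axis=1)
    return pseudo
```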


2020 ◽  
Vol 2020 ◽  
pp. 1-15
Author(s):  
Xianghong Zhao ◽  
Jieyu Zhao ◽  
Cong Liu ◽  
Weiming Cai

Motor imagery brain-computer interfaces (BCIs) have demonstrated great potential and attracted worldwide attention. Due to the nonstationary nature of motor imagery signals, costly and tedious calibration sessions must be conducted before use, which hinders their adoption in real-life applications. In this paper, source subjects' data are exploited to perform calibration for target subjects: a model trained on source subjects is transferred to target subjects, and the critical problem to handle is the distribution shift. We find that classification performance suffers when only the marginal distributions of source and target are brought closer, since the discriminative directions of the source and target domains may still differ substantially. To solve this problem, we argue that joint distribution adaptation is indispensable, as it makes a classifier trained in the source domain perform well in the target domain. Specifically, we first propose a measure of the joint distribution discrepancy (JDD) between source and target. Experiments demonstrate that it aligns source and target data according to the classes they belong to, that it has a direct relationship with classification accuracy, and that it works well for transfer. Second, we propose a deep neural network with joint distribution matching for zero-training motor imagery BCI. It exploits both marginal and joint distribution adaptation to alleviate the distribution discrepancy across subjects and to obtain effective, generalized features in an aligned common space. Visualizations of intermediate layers illustrate how and why the network works well. Experiments on two datasets prove its effectiveness and strength compared with strong counterparts.
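The abstract does not state the exact form of the JDD measure, so the sketch below uses a common stand-in for joint distribution matching: a marginal MMD term plus class-conditional MMD terms computed with target pseudo-labels. The Gaussian kernel, weighting, and function names are assumptions for illustration only, not the authors' formulation.

```python
import torch

def gaussian_mmd(x, y, sigma=1.0):
    """Squared MMD with a Gaussian kernel between two sample batches."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def joint_discrepancy(src_feat, src_y, tgt_feat, tgt_pseudo_y, num_classes, lam=1.0):
    """Marginal MMD plus class-conditional MMD as an illustrative joint discrepancy."""
    loss = gaussian_mmd(src_feat, tgt_feat)  # marginal alignment term
    for c in range(num_classes):
        s, t = src_feat[src_y == c], tgt_feat[tgt_pseudo_y == c]
        if len(s) > 1 and len(t) > 1:  # skip classes absent from a batch
            loss = loss + lam * gaussian_mmd(s, t) / num_classes
    return loss
```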


2021 ◽  
Author(s):  
Wenxin Hou ◽  
Jindong Wang ◽  
Xu Tan ◽  
Tao Qin ◽  
Takahiro Shinozaki

Author(s):  
Xiaofeng Liu ◽  
Bo Hu ◽  
Linghao Jin ◽  
Xu Han ◽  
Fangxu Xing ◽  
...  

In this work, we propose a domain generalization (DG) approach that learns on several labeled source domains and transfers knowledge to a target domain that is inaccessible during training. Considering the inherent conditional and label shifts, we would expect to align both p(x|y) and p(y). However, the widely used domain-invariant feature learning (IFL) methods rely on aligning the marginal distribution p(x), which rests on the unrealistic assumption that p(y) is invariant across domains. We therefore propose a novel variational Bayesian inference framework that enforces conditional distribution alignment w.r.t. p(x|y) via prior distribution matching in a latent space, and that also takes the marginal label shift w.r.t. p(y) into consideration through posterior alignment. Extensive experiments on various benchmarks demonstrate that our framework is robust to label shift and significantly improves cross-domain accuracy, achieving superior performance over conventional IFL counterparts.
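The abstract leaves the variational objective unspecified; one rough illustration of "conditional distribution alignment via prior matching in a latent space" is a conditional-VAE-style term that pulls each sample's latent posterior toward a learnable class-wise prior p(z|y). The diagonal-Gaussian parameterization and names below are assumptions for illustration, not the authors' formulation.

```python
import torch

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for diagonal Gaussians, summed over latent dims."""
    return 0.5 * (
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p).pow(2)) / logvar_p.exp()
        - 1.0
    ).sum(dim=-1)

def conditional_prior_matching(mu, logvar, y, prior_mu, prior_logvar):
    """Match each sample's latent posterior q(z|x) to a class-wise latent prior p(z|y).

    mu, logvar:             encoder outputs per sample, shape (B, D)
    prior_mu, prior_logvar: learnable per-class prior parameters, shape (C, D)
    y:                      class labels, shape (B,)
    """
    return kl_diag_gaussians(mu, logvar, prior_mu[y], prior_logvar[y]).mean()
```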


2020 ◽  
Vol 412 ◽  
pp. 115-128
Author(s):  
Xiaona Jin ◽  
Xiaowei Yang ◽  
Bo Fu ◽  
Sentao Chen
