Weighted and Class-Specific Maximum Mean Discrepancy for Unsupervised Domain Adaptation

2020 ◽  
Vol 22 (9) ◽  
pp. 2420-2433
Author(s):  
Hongliang Yan ◽  
Zhetao Li ◽  
Qilong Wang ◽  
Peihua Li ◽  
Yong Xu ◽  
...  
Author(s):  
Pei Cao ◽  
Chi Zhang ◽  
Xiangjun Lu ◽  
Dawu Gu

Deep learning (DL)-based techniques have recently proven to be very successful when applied to profiled side-channel attacks (SCA). In a real-world profiled SCA scenario, attackers gain knowledge about the target device by obtaining access to a similar device prior to the attack. However, most state-of-the-art literature performs only proof-of-concept attacks, where the traces intended for profiling and attacking are acquired consecutively on the same fully-controlled device. This paper shows that even a small discrepancy between the profiling and attack traces (regarded as domain discrepancy) can cause a successful single-device attack to fail completely. To address the issue of domain discrepancy, we propose a Cross-Device Profiled Attack (CDPA), which introduces an additional fine-tuning phase after establishing a pre-trained model. The fine-tuning phase is designed to adjust the pre-trained network so that it learns a hidden representation that is not only discriminative but also domain-invariant. To obtain domain invariance, we adopt a maximum mean discrepancy (MMD) loss as a constraint term of the classic cross-entropy loss function. We show that the MMD loss can be easily calculated and embedded in a standard convolutional neural network. We evaluate our strategy on both publicly available datasets and multiple devices (eight Atmel XMEGA 8-bit microcontrollers and three SAKURA-G evaluation boards). The results demonstrate that CDPA can improve the performance of classic DL-based SCA by orders of magnitude, largely eliminating the impact of domain discrepancy caused by different devices.
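The MMD constraint described in this abstract can be illustrated with the standard biased kernel two-sample estimate. The sketch below is not the authors' implementation: the RBF kernel choice, the `gamma` bandwidth, and the sample shapes are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of a and b,
    # then the Gaussian (RBF) kernel exp(-gamma * ||x - y||^2).
    d2 = np.sum(a**2, 1)[:, None] + np.sum(b**2, 1)[None, :] - 2.0 * a @ b.T
    return np.exp(-gamma * d2)

def mmd2(source, target, gamma=1.0):
    """Biased estimate of squared MMD between two samples:
    mean(K_ss) + mean(K_tt) - 2 * mean(K_st)."""
    k_ss = rbf_kernel(source, source, gamma)
    k_tt = rbf_kernel(target, target, gamma)
    k_st = rbf_kernel(source, target, gamma)
    return k_ss.mean() + k_tt.mean() - 2.0 * k_st.mean()
```

In the fine-tuning setting the abstracts describe, a term like `mmd2(features_profiling, features_attack)` would be added to the cross-entropy loss, penalizing representations whose profiling and attack distributions differ.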


Geophysics ◽  
2020 ◽  
pp. 1-84
Author(s):  
Ji Chang ◽  
Jing Li ◽  
Yu Kang ◽  
Wenjun Lv ◽  
Ting Xu ◽  
...  

Lithology identification plays an essential role in geological exploration and reservoir evaluation. In recent years, machine learning-based logging lithology identification has received considerable attention due to its ability to fit complex models. Existing work develops machine learning models under the assumption that the data gathered from different wells come from the same probability distribution, so that a model trained on data from old wells can be directly applied to predict the lithologies of a new well without losing accuracy. In fact, due to variations in sedimentary environment and well-logging techniques, the data from different wells may not share the same probability distribution, so such direct application is unreliable. To prevent the accuracy from being degraded by this distribution difference, we integrate unsupervised domain adaptation into lithology identification, under the assumption that no lithology labels are available for a new well. Specifically, we propose a two-flow multi-layer neural network. We train our network with maximum mean discrepancy optimization, and training is terminated by an early-stopping criterion. Together, these ensure that the feature representations learned by our network are both domain-invariant and discriminative. Our method is evaluated from multiple perspectives on a total of 21 wells located in the Jiyang Depression, Bohai Bay Basin. The experimental results demonstrate that our method effectively mitigates the performance degradation caused by data distribution differences and outperforms the baselines by about 10%.
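The early-stopping criterion mentioned above can be sketched as a small stateful helper. The `patience` and `min_delta` parameters are generic conventions assumed here for illustration, not details taken from the paper.

```python
class EarlyStopping:
    """Signal a stop when a monitored validation metric (lower is better)
    fails to improve by at least `min_delta` for `patience` consecutive checks."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float("inf")
        self.bad_checks = 0

    def step(self, metric):
        # Returns True when training should stop.
        if metric < self.best - self.min_delta:
            self.best = metric
            self.bad_checks = 0
        else:
            self.bad_checks += 1
        return self.bad_checks >= self.patience
```

A training loop would call `step(validation_loss)` once per epoch and break out of the loop when it returns True, preventing the MMD-constrained network from overfitting the source wells.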


Author(s):  
Qiao Liu ◽  
Hui Xue

Unsupervised domain adaptation (UDA) has received increasing attention since it does not require labels in the target domain. Most existing UDA methods learn domain-invariant features by minimizing a discrepancy distance between domains computed under a chosen metric. However, these discrepancy-based methods cannot be robustly applied to unsupervised time series domain adaptation (UTSDA). That is because the discrepancy metrics in these methods capture only low-order and local statistics, which have limited expressive power for time series distributions and therefore cause domain matching to fail. In practice, real-world time series follow non-local distributions, i.e., distributions with non-stationary and non-monotonic statistics. In this paper, we propose an Adversarial Spectral Kernel Matching (AdvSKM) method, where a hybrid spectral kernel network is specifically designed as the inner kernel to reform the Maximum Mean Discrepancy (MMD) metric for UTSDA. The hybrid spectral kernel network can precisely characterize non-stationary and non-monotonic statistics in time series distributions. Embedding the hybrid spectral kernel network into MMD not only guarantees a precise discrepancy metric but also benefits domain matching. Moreover, the differentiable architecture of the spectral kernel network enables adversarial kernel learning, which brings more discriminative power to discrepancy matching. The results of extensive experiments on several real-world UTSDA tasks verify the effectiveness of our proposed method.
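The idea of building a kernel from spectral (frequency) content can be sketched via Bochner's theorem, which represents a stationary kernel as an expectation of cosines over sampled frequencies. This is a minimal fixed-frequency illustration; AdvSKM's actual hybrid spectral kernel network is learned and more elaborate, and the `freqs` sampling here is an assumption for the sketch.

```python
import numpy as np

def spectral_kernel(a, b, freqs):
    """Stationary kernel from sampled spectral frequencies:
    k(x, y) ~= mean_w cos(w . (x - y)), averaged over the rows of `freqs`.
    Uses cos(u - v) = cos(u)cos(v) + sin(u)sin(v) for a vectorized form."""
    pa = a @ freqs.T  # projections, shape (n_a, n_freq)
    pb = b @ freqs.T  # projections, shape (n_b, n_freq)
    return (np.cos(pa) @ np.cos(pb).T + np.sin(pa) @ np.sin(pb).T) / freqs.shape[0]
```

Plugging such a kernel into the MMD estimate (in place of a fixed RBF kernel) and making the frequencies trainable is the step that turns discrepancy computation into kernel learning; the adversarial part alternates between maximizing discriminative power of the kernel and minimizing the resulting discrepancy.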


Author(s):  
Atsutoshi Kumagai ◽  
Tomoharu Iwata

We propose a simple yet effective method for unsupervised domain adaptation. When training and test distributions are different, standard supervised learning methods perform poorly. Semi-supervised domain adaptation methods have been developed for the case where labeled data in the target domain are available. However, the target data are often unlabeled in practice. Therefore, unsupervised domain adaptation, which does not require labels for target data, is receiving a lot of attention. The proposed method minimizes the discrepancy between the source and target distributions of input features by transforming the feature space of the source domain. Since such unilateral transformations transfer knowledge in the source domain to the target one without reducing dimensionality, the proposed method can effectively perform domain adaptation without losing information to be transferred. With the proposed method, it is assumed that the transformed features and the original features differ by a small residual, so as to preserve the relationship between features and labels. This transformation is learned by aligning the higher-order moments of the source and target feature distributions based on the maximum mean discrepancy, which enables comparing two distributions without density estimation. Once the transformation is found, we learn supervised models using the transformed source data and their labels. We use two real-world datasets to demonstrate experimentally that the proposed method achieves better classification performance than existing methods for unsupervised domain adaptation.
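The residual transformation idea can be illustrated in its simplest first-moment form: shift the source features by a small residual toward the target statistics. The paper aligns higher-order moments via MMD; this sketch matches only the mean, and the `alpha` step size is an assumption for illustration.

```python
import numpy as np

def residual_align(source, target, alpha=1.0):
    """Unilateral residual transform of source features:
    t(x) = x + r, where r nudges the source mean toward the target mean.
    With alpha=1 the transformed source mean equals the target mean exactly."""
    residual = alpha * (target.mean(axis=0) - source.mean(axis=0))
    return source + residual
```

Note that the transform is applied only to the source domain (a unilateral transformation, as in the abstract), so a classifier trained on `residual_align(source, target)` with the original source labels can be applied to the target features unchanged.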


2020 ◽  
Vol 155 ◽  
pp. 113404 ◽  
Author(s):  
Peng Liu ◽  
Ting Xiao ◽  
Cangning Fan ◽  
Wei Zhao ◽  
Xianglong Tang ◽  
...  
