Federated Transfer Learning for Intelligent Fault Diagnostics Using Deep Adversarial Networks with Data Privacy

Author(s): Wei Zhang, Xiang Li
2021, pp. 147592172110292

Federated learning has been receiving increasing attention in recent years, as it improves model performance while preserving data privacy among different clients. Intelligent fault diagnostics can benefit greatly from this emerging technology, since private data generally cannot leave local storage in real industrial settings. While promising federated learning performance has been reported in the literature, most studies assume that data from different clients are independent and identically distributed. In real industrial scenarios, due to variations in machines and operating conditions, data distributions generally differ across clients, which significantly deteriorates federated learning performance. To address this issue, a federated transfer learning method is proposed in this article for machinery fault diagnostics. Under the constraint that data from different clients cannot be communicated, prior distributions are proposed to indirectly bridge the domain gap. In this way, client-invariant features can be extracted for diagnostics while data privacy is preserved. Experiments on two rotating machinery datasets are carried out for validation, and the results suggest the proposed method offers an effective and promising approach for federated transfer learning in fault diagnostic problems.
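The mechanism described above (aligning each client's features to a shared prior instead of exchanging raw data, then aggregating model weights on a server) can be illustrated with a minimal sketch. This is not the paper's implementation: the moment-matching penalty stands in for the adversarial alignment, and all names (`prior_alignment_penalty`, `fedavg`) are illustrative.

```python
import numpy as np

def prior_alignment_penalty(features):
    """Penalty pulling per-client feature statistics toward N(0, I).

    A simple stand-in for the adversarial alignment to a shared prior:
    match the first and second moments of the features to the standard
    Gaussian, so no client ever needs to see another client's data."""
    mu = features.mean(axis=0)
    var = features.var(axis=0)
    return float(np.sum(mu ** 2) + np.sum((var - 1.0) ** 2))

def fedavg(client_weights, client_sizes):
    """Standard FedAvg aggregation: size-weighted average of client weights."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: two clients whose feature distributions differ (domain shift).
rng = np.random.default_rng(0)
feats_a = rng.normal(0.0, 1.0, size=(100, 8))  # already close to the prior
feats_b = rng.normal(2.0, 1.0, size=(100, 8))  # shifted operating condition

# The shifted client incurs a larger alignment penalty, so its local
# training is pushed toward the shared prior.
assert prior_alignment_penalty(feats_b) > prior_alignment_penalty(feats_a)

# Server-side aggregation: only weights travel, never raw data.
w_global = fedavg([np.ones(4), np.zeros(4)], [100, 300])
```

The key design point mirrored here is that the shared prior acts as a common reference: each client aligns to it independently, so client-invariant features emerge without any cross-client data transfer.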



Author(s): Yuan Zhang, Regina Barzilay, Tommi Jaakkola

We introduce a neural method for transfer learning between two (source and target) classification tasks or aspects over the same domain. Rather than training on target labels, we use a few keywords pertaining to the source and target aspects that indicate sentence relevance, instead of document class labels. Documents are encoded by learning to embed and softly select relevant sentences in an aspect-dependent manner. A shared classifier is trained on the source encoded documents and labels, and applied to the target encoded documents. We ensure transfer through aspect-adversarial training, so that encoded documents are, as sets, aspect-invariant. Experimental results demonstrate that our approach outperforms different baselines and model variants on two datasets, yielding an improvement of 27% on a pathology dataset and 5% on a review dataset.
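The soft sentence selection described above can be sketched in a few lines: sentences are weighted by their relevance to the aspect's keywords, and the document encoding is the weighted pooling of sentence embeddings. This is a hedged illustration, not the authors' code; the scoring and the function names (`encode_document`) are assumptions.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def encode_document(sentence_vecs, keyword_scores):
    """Softly select aspect-relevant sentences, then pool.

    sentence_vecs: (n_sentences, dim) sentence embeddings.
    keyword_scores: per-sentence relevance to the aspect's keywords
    (in the paper these come from learned components; here they are
    given directly for illustration)."""
    weights = softmax(np.asarray(keyword_scores, dtype=float))
    return weights @ np.asarray(sentence_vecs, dtype=float)

# Toy document of three sentences; the second matches the aspect keywords
# most strongly, so it dominates the aspect-dependent encoding.
sents = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.5, 0.5]])
doc_vec = encode_document(sents, [0.1, 3.0, 0.1])
```

Because the selection is soft (a softmax rather than a hard pick), the encoder stays differentiable, which is what allows the aspect-adversarial objective to be trained end to end.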

