UnTran: Recognizing Unseen Activities with Unlabeled Data Using Transfer Learning

Author(s):  
Md Abdullah Al Hafiz Khan ◽  
Nirmalya Roy
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Shoubing Xiang ◽  
Jiangquan Zhang ◽  
Hongli Gao ◽  
Dalei Shi ◽  
Liang Chen

Current studies on intelligent bearing fault diagnosis based on transfer learning have been fruitful. However, these methods mainly focus on transfer fault diagnosis of bearings under different working conditions. In engineering practice, it is often difficult or even impossible to obtain a large amount of labeled data from some machines, and an intelligent diagnostic method trained on labeled data from one machine may not be able to classify unlabeled data from other machines, strongly hindering the application of these intelligent diagnostic methods in certain industries. In this study, a deep transfer learning method for bearing fault diagnosis, domain separation reconstruction adversarial networks (DSRAN), was proposed for transfer fault diagnosis between machines. In DSRAN, domain-difference and domain-invariant feature extractors are used to extract and separate domain-difference and domain-invariant features, respectively. Moreover, the idea of generative adversarial networks (GAN) was used to improve the network's learning of domain-invariant features. By using domain-invariant features, DSRAN can adapt to the distributions of the data in the source and target domains. Six transfer fault diagnosis experiments were performed to verify the effectiveness of the proposed method, and the average accuracy reached 89.68%. The results showed that a DSRAN model trained on labeled data obtained from one machine can be used to identify the health state of unlabeled data obtained from other machines.
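As a rough illustration of the ingredients this abstract describes, the sketch below computes three losses that a domain-separation network of this kind typically combines: an orthogonality ("difference") loss pushing domain-difference and domain-invariant features apart, a reconstruction loss, and a binary cross-entropy loss for an adversarial domain discriminator. The function names and exact formulations are illustrative assumptions, not the DSRAN paper's definitions.

```python
import numpy as np

def difference_loss(private_feats, shared_feats):
    # Encourage domain-difference ("private") and domain-invariant ("shared")
    # feature matrices to be orthogonal: squared Frobenius norm of their
    # cross-correlation is zero exactly when the subspaces do not overlap.
    return np.sum((private_feats.T @ shared_feats) ** 2)

def reconstruction_loss(x, x_hat):
    # Mean squared error between inputs and their reconstructions, so the
    # separated features together still describe the original signal.
    return np.mean((x - x_hat) ** 2)

def adversarial_loss(domain_probs, domain_labels):
    # Binary cross-entropy of a domain discriminator (source vs. target).
    # The shared encoder is trained to *fool* this discriminator, which is
    # what makes the shared features domain-invariant.
    eps = 1e-12
    p = np.clip(domain_probs, eps, 1 - eps)
    return -np.mean(domain_labels * np.log(p)
                    + (1 - domain_labels) * np.log(1 - p))
```

In a full model these terms would be weighted and minimized jointly, with the adversarial term optimized in the usual GAN min-max fashion.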


2012 ◽  
Vol 36 (2) ◽  
pp. 173-187 ◽  
Author(s):  
Huaxiang Zhang ◽  
Hua Ji ◽  
Xiaoqin Wang

2021 ◽  
Vol 7 (4) ◽  
pp. 66
Author(s):  
Juan Miguel Valverde ◽  
Vandad Imani ◽  
Ali Abdollahzadeh ◽  
Riccardo De Feo ◽  
Mithilesh Prakash ◽  
...  

(1) Background: Transfer learning refers to machine learning techniques that focus on acquiring knowledge from related tasks to improve generalization in the tasks of interest. In magnetic resonance imaging (MRI), transfer learning is important for developing strategies that address the variation in MR images from different imaging protocols or scanners. Additionally, transfer learning is beneficial for reusing machine learning models that were trained on different (but related) tasks for the task of interest. The aim of this review is to identify research directions, gaps in knowledge, applications, and widely used strategies among the transfer learning approaches applied in MR brain imaging; (2) Methods: We performed a systematic literature search for articles that applied transfer learning to MR brain imaging tasks. We screened 433 studies for their relevance, and we categorized and extracted relevant information, including task type, application, availability of labels, and machine learning methods. Furthermore, we closely examined brain MRI-specific transfer learning approaches and other methods that tackled issues relevant to medical imaging, including privacy, unseen target domains, and unlabeled data; (3) Results: We found 129 articles that applied transfer learning to MR brain imaging tasks. The most frequent applications were dementia-related classification tasks and brain tumor segmentation. The majority of articles utilized transfer learning techniques based on convolutional neural networks (CNNs). Only a few approaches utilized methodology that was clearly specific to brain MRI, or considered privacy issues, unseen target domains, or unlabeled data. We proposed a new categorization to group specific, widely used approaches such as pretraining and fine-tuning CNNs; (4) Discussion: There is increasing interest in transfer learning for brain MRI. 
Well-known public datasets have clearly contributed to the popularity of Alzheimer’s diagnostics/prognostics and tumor segmentation as applications. Likewise, the availability of pretrained CNNs has promoted their utilization. Finally, the majority of the surveyed studies did not examine in detail the interpretation of their strategies after applying transfer learning, and did not compare their approach with other transfer learning approaches.
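The pretrain-and-fine-tune recipe that dominates the surveyed articles can be caricatured in a few lines: keep the pretrained feature extractor frozen and update only a newly initialized task head. The tiny numpy network below is an illustrative stand-in (the "pretrained" frozen weights are random here), not any specific model from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" backbone, kept frozen during fine-tuning (random here, for
# illustration), and a freshly initialized task head that will be trained.
W_backbone = np.abs(rng.normal(size=(4, 8)))  # frozen feature extractor
W_head = np.zeros((8, 2))                     # trainable classifier head

def forward(x):
    feats = np.maximum(x @ W_backbone, 0.0)   # ReLU features, frozen layer
    return feats @ W_head, feats

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(x, y_onehot):
    logits, _ = forward(x)
    p = softmax(logits)
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))

def finetune_step(x, y_onehot, lr=0.5):
    """One gradient step that updates only the head; W_backbone never changes."""
    global W_head
    logits, feats = forward(x)
    grad = feats.T @ (softmax(logits) - y_onehot) / len(x)
    W_head = W_head - lr * grad
```

In practice the same idea is expressed in a deep learning framework by loading pretrained weights and disabling gradients on the backbone layers; with more target data, later backbone layers are often unfrozen as well.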


2019 ◽  
Author(s):  
Qi Yuan ◽  
Alejandro Santana-Bonilla ◽  
Martijn Zwijnenburg ◽  
Kim Jelfs

The chemical space for novel electronic donor-acceptor oligomers with targeted properties was explored using deep generative models and transfer learning. A General Recurrent Neural Network model was trained on the ChEMBL database to generate chemically valid SMILES strings. The parameters of the General Recurrent Neural Network were fine-tuned via transfer learning using the electronic donor-acceptor database from the Computational Material Repository to generate novel donor-acceptor oligomers. Six different transfer learning models were developed with different subsets of the donor-acceptor database as training sets. We concluded that electronic properties such as HOMO-LUMO gaps and dipole moments of the training sets can be learned using the SMILES representation with deep generative models, and that the chemical space of the training sets can be efficiently explored. This approach identified approximately 1700 new molecules that have promising electronic properties (HOMO-LUMO gap <2 eV and dipole moment <2 Debye), six times more than in the original database. Amongst the molecular transformations, the deep generative model has learned how to produce novel molecules by trading off between selected atomic substitutions (such as halogenation or methylation) and molecular features such as the spatial extension of the oligomer. The method can be extended as a plausible source of new chemical combinations to effectively explore the chemical space for targeted properties.
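The screening criterion quoted in this abstract (keep molecules with a HOMO-LUMO gap below 2 eV and a dipole moment below 2 Debye) amounts to a simple property filter over the generated candidates. The sketch below shows that step with placeholder records; the molecule identifiers and property values are invented for illustration and are not data from the paper.

```python
# Hypothetical (molecule, HOMO-LUMO gap in eV, dipole moment in Debye) records.
# "mol_a" and "mol_b" are placeholder labels, not SMILES from the study.
candidates = [
    ("c1ccccc1", 5.1, 0.0),   # gap too large: rejected
    ("mol_a",    1.8, 1.2),   # meets both thresholds: kept
    ("mol_b",    1.5, 2.5),   # dipole too large: rejected
]

def promising(gap_ev, dipole_d, gap_max=2.0, dipole_max=2.0):
    # Selection criterion quoted in the abstract:
    # HOMO-LUMO gap < 2 eV and dipole moment < 2 Debye.
    return gap_ev < gap_max and dipole_d < dipole_max

hits = [mol for mol, gap, dip in candidates if promising(gap, dip)]
```

In the reported workflow this filter would run downstream of the fine-tuned generative model, after property prediction for each generated SMILES string.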

