Active Transfer Learning with Zero-Shot Priors: Reusing Past Datasets for Future Tasks

Author(s):  
E. Gavves ◽  
T. Mensink ◽  
T. Tommasi ◽  
C. G. M. Snoek ◽  
T. Tuytelaars
2015 ◽  
Vol 9 (4) ◽  
pp. 595-607 ◽  
Author(s):  
Jie Xin ◽  
Zhiming Cui ◽  
Pengpeng Zhao ◽  
Tianxu He

Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2760
Author(s):  
Seungmin Oh ◽  
Akm Ashiquzzaman ◽  
Dongsu Lee ◽  
Yeonggwang Kim ◽  
Jinsul Kim

In recent years, various studies have begun to use deep learning models for research in the field of human activity recognition (HAR). However, the development of such models has lagged severely because training deep learning models requires a large amount of labeled data. In fields such as HAR, data are difficult to collect, and manual labeling involves high cost and effort. Existing methods rely heavily on manual data collection and on labeling performed by human administrators, which makes the data gathering process slow and prone to human labeling bias. To address these problems, we propose a new solution for the existing data gathering methods that reduces the labeling effort on new data by reusing previously learned data through a semi-supervised active transfer learning method. This method achieved 95.9% performance while also reducing labeling compared to random sampling or standard active transfer learning methods.
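The combination described above, querying a human only for uncertain samples (active learning) while pseudo-labeling confident ones (semi-supervised learning), can be sketched in a minimal form. The sketch below is a hypothetical illustration, not the paper's actual pipeline: it uses a toy nearest-centroid classifier for binary classification, and the names `active_semi_supervised_round`, `query_k`, and `conf` are assumptions introduced here for clarity.

```python
import numpy as np

def fit_centroids(X, y):
    """Tiny stand-in classifier: one centroid per class (hypothetical)."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_proba(centroids, X):
    """P(class 1) from relative distance to the two class centroids."""
    d0 = np.linalg.norm(X - centroids[0], axis=1)
    d1 = np.linalg.norm(X - centroids[1], axis=1)
    return d0 / (d0 + d1 + 1e-12)

def active_semi_supervised_round(X_lab, y_lab, X_pool, y_oracle,
                                 query_k=2, conf=0.9):
    centroids = fit_centroids(X_lab, y_lab)
    p1 = predict_proba(centroids, X_pool)
    # Uncertainty: 1 when p1 is near 0.5, 0 when the model is sure.
    unc = 1.0 - np.abs(p1 - 0.5) * 2
    # Active step: ask the human oracle only for the most uncertain samples.
    query = np.argsort(-unc)[:query_k]
    # Semi-supervised step: pseudo-label the confident samples for free.
    confident = (p1 > conf) | (p1 < 1 - conf)
    confident[query] = False  # never pseudo-label what we just queried
    pseudo_y = (p1 > 0.5).astype(int)
    X_new = np.vstack([X_lab, X_pool[query], X_pool[confident]])
    y_new = np.concatenate([y_lab, y_oracle[query], pseudo_y[confident]])
    return X_new, y_new, len(query)
```

Each round grows the labeled set while charging the human annotator only for `query_k` labels; the rest come from the model's own confident predictions, which is how labeling cost is reduced relative to random sampling.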


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Yongtae Kim ◽  
Youngsoo Kim ◽  
Charles Yang ◽  
Kundo Park ◽  
Grace X. Gu ◽  
...  

Neural network-based generative models have been actively investigated as an inverse design method for finding novel materials in a vast design space. However, the applicability of conventional generative models is limited because they cannot access data outside the range of their training sets. Advanced generative models devised to overcome this limitation also suffer from weak predictive power on unseen domains. In this study, we propose a deep neural network-based forward design approach that enables an efficient search for superior materials far beyond the domain of the initial training set. This approach compensates for the weak predictive power of neural networks on unseen domains through gradual updates of the neural network with active transfer learning and data augmentation methods. We demonstrate the potential of our framework with a grid composite optimization problem that has an astronomical number of possible design configurations. Results show that our proposed framework can provide excellent designs close to the global optima, even with the addition of a very small dataset corresponding to less than 0.5% of the initial training dataset size.
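The forward-design loop described above, train a surrogate, let it nominate promising designs, evaluate only those with the expensive simulator, and fold the results back into the training set, can be sketched in miniature. This is a hypothetical illustration under strong simplifying assumptions: a least-squares linear surrogate stands in for the paper's neural network, `true_performance` is an invented toy objective over 6-bit "grid" designs, and the function names are introduced here for clarity.

```python
import numpy as np

def true_performance(designs):
    """Stand-in for the expensive simulation (hypothetical linear objective)."""
    w = np.array([3., -1., 2., 0.5, -2., 1.5])
    return designs @ w

def fit_surrogate(X, y):
    """Least-squares linear surrogate for the design -> performance map."""
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def active_transfer_round(X_train, y_train, candidates, top_k=4):
    coef = fit_surrogate(X_train, y_train)
    pred = candidates @ coef
    # Query only the designs the surrogate predicts to be best...
    best = np.argsort(-pred)[:top_k]
    # ...evaluate them with the expensive simulator, and extend the data.
    X_new = np.vstack([X_train, candidates[best]])
    y_new = np.concatenate([y_train, true_performance(candidates[best])])
    return X_new, y_new, coef

# Enumerate all 2^6 binary grid designs; seed training with a tiny basis set.
all_designs = np.array([[int(b) for b in format(i, "06b")]
                        for i in range(64)], dtype=float)
X = np.vstack([np.eye(6), np.zeros(6), np.ones(6)])
y = true_performance(X)
for _ in range(3):
    X, y, coef = active_transfer_round(X, y, all_designs)
best_design = all_designs[np.argmax(all_designs @ coef)]
```

The key property mirrored from the abstract is data efficiency: each round adds only `top_k` simulator evaluations, a small fraction of the full 64-design space, yet the gradually updated surrogate steers the search toward the global optimum.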


Author(s):  
David Kale ◽  
Marjan Ghazvininejad ◽  
Anil Ramakrishna ◽  
Jingrui He ◽  
Yan Liu

2020 ◽  
Vol 24 (2) ◽  
pp. 363-383 ◽  
Author(s):  
Jingmei Li ◽  
Weifei Wu ◽  
Di Xue
