FT-MDnet: A Deep-Frozen Transfer Learning Framework for Person Search

Author(s):  
Ronghua Hu ◽  
Tian Wang ◽  
Yi Zhou ◽  
Hichem Snoussi ◽  
Abel Cherouat
Author(s):  
Yin Zhang ◽  
Derek Zhiyuan Cheng ◽  
Tiansheng Yao ◽  
Xinyang Yi ◽  
Lichan Hong ◽  
...  

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Vishu Gupta ◽  
Kamal Choudhary ◽  
Francesca Tavazza ◽  
Carelyn Campbell ◽  
Wei-keng Liao ◽  
...  

Artificial intelligence (AI) and machine learning (ML) have been increasingly used in materials science to build predictive models and accelerate discovery. For selected properties, availability of large databases has also facilitated application of deep learning (DL) and transfer learning (TL). However, unavailability of large datasets for a majority of properties prohibits widespread application of DL/TL. We present a cross-property deep-transfer-learning framework that leverages models trained on large datasets to build models on small datasets of different properties. We test the proposed framework on 39 computational and two experimental datasets and find that the TL models with only elemental fractions as input outperform ML/DL models trained from scratch even when they are allowed to use physical attributes as input, for 27/39 (≈ 69%) computational and both experimental datasets. We believe that the proposed framework can be widely useful to tackle the small data challenge in applying AI/ML in materials science.
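The core idea described above, pretraining on a large source-property dataset and reusing the learned representation on a small dataset of a different property, can be sketched as follows. This is a minimal illustration only, assuming a PyTorch MLP over elemental-fraction vectors; the layer sizes, the checkpoint name, and the fine_tune helper are hypothetical and are not taken from the paper.

```python
# Minimal sketch of cross-property transfer learning (PyTorch assumed; names illustrative).
import torch
import torch.nn as nn

N_ELEMENTS = 86  # length of the elemental-fraction input vector (assumption)

class PropertyNet(nn.Module):
    """MLP mapping an elemental-fraction vector to a scalar property."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(N_ELEMENTS, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
        )
        self.head = nn.Linear(128, 1)

    def forward(self, x):
        return self.head(self.features(x))

# 1) Pretrain on a large source-property dataset (e.g. formation energies) -- not shown.
source_model = PropertyNet()
# source_model.load_state_dict(torch.load("source_property.pt"))  # hypothetical checkpoint

# 2) Transfer: copy the feature layers, keep them frozen, and train a new head
#    on the small target-property dataset.
target_model = PropertyNet()
target_model.features.load_state_dict(source_model.features.state_dict())
for p in target_model.features.parameters():
    p.requires_grad = False  # freeze the transferred representation

optimizer = torch.optim.Adam(target_model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def fine_tune(loader, epochs=50):
    """Train only the head on the small target dataset."""
    for _ in range(epochs):
        for x, y in loader:  # x: (batch, N_ELEMENTS) fractions, y: (batch, 1) property values
            optimizer.zero_grad()
            loss = loss_fn(target_model(x), y)
            loss.backward()
            optimizer.step()
```

In this scheme only the small head is trained on the target property; unfreezing the feature layers at a reduced learning rate is a common variant of the same idea.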


Author(s):  
James Brownlow ◽  
Charles Chu ◽  
Guandong Xu ◽  
Ben Culbert ◽  
Bin Fu ◽  
...  

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Jianye Zhou ◽  
Xinyu Yang ◽  
Lin Zhang ◽  
Siyu Shao ◽  
Gangying Bian

To realize high-precision and high-efficiency machine fault diagnosis, a novel deep learning framework that combines transfer learning and transposed convolution is proposed. Compared with existing methods, it trains faster, requires fewer training samples, and achieves higher accuracy. First, the raw data collected by multiple sensors are combined into an image and normalized to facilitate model training. Next, transposed convolution is used to expand the image resolution, and the resulting images are fed into the transfer learning model for training and fine-tuning. The proposed method uses 512 time series to conduct experiments on two main mechanical datasets, bearings and gears from a variable-speed gearbox, which verifies its effectiveness and versatility. We obtain advanced results on both gearbox datasets, with test accuracy improved from 98.07% to 99.99%.
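A rough sketch of the pipeline described above (sensor data arranged as an image, upsampled by a transposed convolution, then classified by a fine-tuned pretrained network) is given below. PyTorch and a ResNet-18 backbone are assumptions for illustration; the paper's actual backbone, channel counts, and hyperparameters are not specified here.

```python
# Sketch only: transposed-convolution upsampling feeding a pretrained CNN for fine-tuning.
import torch
import torch.nn as nn
from torchvision import models

class UpsampleAndClassify(nn.Module):
    """Expand a small multi-sensor 'image' with a transposed convolution,
    then classify fault types with a pretrained CNN backbone."""
    def __init__(self, n_classes, in_channels=1):
        super().__init__()
        # kernel_size=4, stride=2, padding=1 doubles the spatial resolution and
        # maps the single sensor channel to the 3 channels the backbone expects.
        self.upsample = nn.ConvTranspose2d(in_channels, 3, kernel_size=4, stride=2, padding=1)
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = nn.Linear(backbone.fc.in_features, n_classes)  # new task head
        self.backbone = backbone

    def forward(self, x):  # x: (batch, 1, H, W) normalized sensor image
        return self.backbone(self.upsample(x))

model = UpsampleAndClassify(n_classes=10)           # number of fault classes is illustrative
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # low LR to fine-tune transferred layers
criterion = nn.CrossEntropyLoss()
```

Training then proceeds as ordinary supervised fine-tuning over the normalized sensor images and their fault labels.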


AI Magazine ◽  
2011 ◽  
Vol 32 (1) ◽  
pp. 15 ◽  
Author(s):  
Matthew E. Taylor ◽  
Peter Stone

Transfer learning has recently gained popularity due to the development of algorithms that can successfully generalize information across multiple tasks. This article focuses on transfer in the context of reinforcement learning domains, a general learning framework where an agent acts in an environment to maximize a reward signal. The goals of this article are to (1) familiarize readers with the transfer learning problem in reinforcement learning domains, (2) explain why the problem is both interesting and difficult, (3) present a selection of existing techniques that demonstrate different solutions, and (4) provide representative open problems in the hope of encouraging additional research in this exciting area.
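To make the transfer setting concrete, the sketch below shows one simple scheme the survey's framing admits: learn a tabular Q-function on a source task, copy it into a target task through a hand-designed inter-task mapping, and continue learning. The env interface (reset/actions/step) and the mapping functions are hypothetical, and this is only one of many approaches such an article surveys.

```python
# Minimal illustration of transfer in tabular Q-learning (env interface is hypothetical).
import random
from collections import defaultdict

def q_learning(env, q=None, episodes=500, alpha=0.1, gamma=0.99, eps=0.1):
    """Standard tabular Q-learning; `q` may be pre-filled by transfer."""
    q = q if q is not None else defaultdict(float)
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            actions = env.actions(s)
            if random.random() < eps:
                a = random.choice(actions)                      # explore
            else:
                a = max(actions, key=lambda act: q[(s, act)])   # exploit
            s2, r, done = env.step(s, a)
            best_next = max(q[(s2, a2)] for a2 in env.actions(s2)) if not done else 0.0
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

def transfer(q_source, map_state, map_action):
    """Map source (state, action) values into the target task's table
    via a hand-designed inter-task mapping, then keep learning from there."""
    q_target = defaultdict(float)
    for (s, a), v in q_source.items():
        q_target[(map_state(s), map_action(a))] = v
    return q_target
```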

