Fuzzy transfer learning of human activities in heterogeneous feature spaces

Author(s):  
David Ada Adama ◽  
Ahmad Lotfi ◽  
Robert Ranson
Author(s):  
Nicholas McCarthy ◽  
Mohammad Karzand ◽  
Freddy Lecue

Flight delays impact airlines, airports and passengers. Delay prediction is crucial to the decision-making of all players in commercial aviation, and in particular to airlines meeting their on-time performance objectives. Although many machine learning approaches have been tried, they fail to (i) predict delays in minutes with low error (under 15 minutes) and (ii) generalize to small carriers, i.e., low-cost airlines characterized by limited data. This work presents a Long Short-Term Memory (LSTM) approach to flight delay prediction, modelled as the sequence of flights a particular aircraft performs across multiple airports throughout the day. We then propose a transfer learning approach between heterogeneous feature spaces to train a prediction model for a smaller airline using data from another, larger airline. Our approach is demonstrated to be robust and accurate for low-cost airlines in Europe.
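The sequence construction this abstract describes — grouping one aircraft's flights for a day and ordering them in time, so that delays can propagate along the sequence an LSTM consumes — might be sketched as follows. This is a minimal illustration with hypothetical field names, not the authors' pipeline:

```python
from collections import defaultdict

def build_tail_sequences(flights):
    """Group flight records by (aircraft tail number, day) and order them by
    scheduled departure, yielding the per-aircraft daily sequences a
    sequence model would consume. All field names are hypothetical."""
    groups = defaultdict(list)
    for f in flights:
        groups[(f["tail"], f["date"])].append(f)
    for key in groups:
        groups[key].sort(key=lambda f: f["sched_dep"])
    # One training example = the ordered legs flown by one aircraft in one day.
    return {k: [(f["origin"], f["dest"], f["delay_min"]) for f in v]
            for k, v in groups.items()}

flights = [
    {"tail": "EI-ABC", "date": "2019-06-01", "sched_dep": "09:10",
     "origin": "DUB", "dest": "STN", "delay_min": 5},
    {"tail": "EI-ABC", "date": "2019-06-01", "sched_dep": "06:30",
     "origin": "STN", "dest": "DUB", "delay_min": 12},
]
seqs = build_tail_sequences(flights)
# The 06:30 leg sorts first, so its 12-minute delay precedes the later leg
# in the sequence and can inform the prediction for it.
print(seqs[("EI-ABC", "2019-06-01")])
```

The grouping key (tail number, date) reflects the abstract's framing: it is the aircraft's day of rotations, not an individual flight, that carries the delay dynamics.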


2015 ◽  
Vol 24 (11) ◽  
pp. 4096-4108 ◽  
Author(s):  
Xinxiao Wu ◽  
Han Wang ◽  
Cuiwei Liu ◽  
Yunde Jia

2020 ◽  
Author(s):  
Fernando Pereira Dos Santos ◽  
Moacir Antonelli Ponti

Feature transfer learning aims to reuse knowledge acquired on a source dataset and apply it to another target dataset and/or task. A prerequisite for such transfer is the quality of the feature spaces obtained, for which deep learning methods are widely applied since they provide discriminative and general descriptors. In this context, the main questions are: what to transfer, how to transfer, and when to transfer. We address these questions through distinct learning paradigms, transfer learning techniques, and several datasets and tasks. Our contributions are: an analysis of the multiple descriptors contained in supervised deep networks; a new generalization metric that can be applied to any model and evaluation system; and a new architecture with a loss function for semi-supervised deep networks, in which all available data contribute to learning.
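The basic feature-reuse pattern the abstract builds on — freeze a feature extractor learned on the source data, then train only a simple classifier on the target task — can be sketched in a toy form. Here the "deep" extractor is stood in by a fixed linear map and the target classifier is nearest-centroid; the weights and data are made up for illustration:

```python
import math

def extract_features(x, weights):
    """Stand-in for a frozen, pre-trained feature extractor: a fixed linear
    map. In practice this would be an intermediate layer of a deep network
    trained on the source dataset."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def nearest_centroid_fit(feats, labels):
    """Train only the lightweight target classifier: one centroid per class."""
    by_class = {}
    for f, y in zip(feats, labels):
        by_class.setdefault(y, []).append(f)
    return {y: [sum(col) / len(col) for col in zip(*fs)]
            for y, fs in by_class.items()}

def predict(centroids, f):
    return min(centroids, key=lambda y: math.dist(centroids[y], f))

W = [[1.0, 0.0], [0.0, 1.0]]                      # frozen "source" weights (hypothetical)
target_x = [[0.0, 0.1], [0.1, 0.0], [1.0, 0.9], [0.9, 1.0]]
target_y = ["a", "a", "b", "b"]
feats = [extract_features(x, W) for x in target_x]
cents = nearest_centroid_fit(feats, target_y)
print(predict(cents, extract_features([0.05, 0.05], W)))  # → "a"
```

Whether this reuse helps (the "when to transfer" question) depends on how well the frozen descriptors generalize to the target data, which is exactly what a generalization metric over feature spaces tries to quantify.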


Author(s):  
Yong Luo ◽  
Yonggang Wen ◽  
Tongliang Liu ◽  
Dacheng Tao

Transfer learning aims to improve the performance of a target learning task by leveraging information from (or transferring knowledge between) other related tasks. Recently, transfer distance metric learning (TDML) has attracted much interest, but most methods assume that the feature representations of the source and target tasks are the same. Hence, they are not suitable for applications in which the data come from heterogeneous domains (different feature spaces, modalities or even semantics). Although some existing heterogeneous transfer learning (HTL) approaches can handle such domains, they lack flexibility in real-world applications, and the learned transformations are often restricted to be linear. We therefore develop a general and flexible heterogeneous TDML (HTDML) framework based on a knowledge fragment transfer strategy. In the proposed HTDML, any (linear or nonlinear) distance metric learning algorithm can be employed to learn the source metric beforehand. A set of knowledge fragments is then extracted from the pre-learned source metric to help target metric learning. In addition, either a linear or a nonlinear distance metric can be learned for the target domain. Extensive experiments on both scene classification and object recognition demonstrate the superiority of the proposed method.
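A toy reading of the knowledge-fragment idea, with made-up numbers and a scalar target domain for brevity: each row of the pre-learned source metric's linear map is treated as one "fragment" (a real-valued function of a source sample), and the target map is fitted so that, on co-occurring source/target pairs, the target projection mimics each fragment's output. This is a sketch of the strategy, not the paper's algorithm:

```python
def fragment_outputs(L_source, xs):
    # Each row of L_source is one knowledge fragment: it maps a
    # source-domain sample to one coordinate of the source metric space.
    return [[sum(l * x for l, x in zip(row, xi)) for row in L_source]
            for xi in xs]

L_source = [[1.0, 2.0], [0.5, -1.0]]        # pre-learned source map (hypothetical)
xs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # source-domain samples
zs = [1.0, 2.0, 3.0]                        # co-occurring target samples (scalar for brevity)

T = fragment_outputs(L_source, xs)          # fragment targets: T[i][k] for sample i, fragment k
# Least-squares fit of one scalar per fragment so the target projection
# a[k] * z mimics fragment k on the co-occurring pairs.
a = [sum(z * T[i][k] for i, z in enumerate(zs)) / sum(z * z for z in zs)
     for k in range(len(L_source))]
print(a)  # a[0] == 1.0 here: the first fragment is mimicked exactly
```

The appeal of the fragment view is the decoupling it states: the source metric can come from any (linear or nonlinear) metric learner, since only the fragments' input–output behaviour is transferred, and the target-side fit could equally use a nonlinear regressor in place of the scalar map above.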

