Genetic transfer learning

2010 ◽  
Vol 37 (10) ◽  
pp. 6997-7002 ◽  
Author(s):  
Barış Koçer ◽  
Ahmet Arslan

2019 ◽  
Author(s):  
Jamie Nicholas Shelley

This paper and its accompanying Python/C++ framework are the product of the author's perceived problems with narrow (discrimination-based) artificial intelligence (AI). The framework attempts to develop a genetic transfer of experience through potential structural expressions, using a common regulation/exchange value ('energy') to create a model in which the neural architecture and all unit processes are co-dependently developed. These expressions are born from fractal definitions, stochastically tuned, and managed by genetic experience; successful routes are maintained through global rules (stability of signal propagation/function across the external state, the internal immediate state, and a genetic bias towards selection of previous expressions). These principles aim to create a diverse and robust network, hopefully reducing the need for transfer learning and for computationally expensive translations as demand on compute increases.


SPE Journal ◽  
2021 ◽  
pp. 1-22
Author(s):  
Faliang Yin ◽  
Xiaoming Xue ◽  
Chengze Zhang ◽  
Kai Zhang ◽  
Jianfa Han ◽  
...  

Summary: Production optimization led by computational intelligence can greatly improve oilfield economic effectiveness. However, it faces a huge computational challenge because of the expensive black-box objective function and the high-dimensional design variables. Many low-fidelity methods based on simplified physical models or data-driven models have been proposed to reduce evaluation costs. These methods can approximate the global fitness landscape to a certain extent, but it is difficult to ensure their accuracy and correlation in local areas. Multifidelity methods have been proposed to balance the advantages of the two, but most current methods rely on complex computational models. Through a simple but efficient shortcut, our work aims to establish a novel production-optimization framework that uses genetic transfer learning to accelerate convergence and improve the quality of the optimal solution using results from different fidelities. Net present value (NPV) is a widely used standard for comprehensively evaluating the economic value of a strategy in production optimization. On the basis of NPV, we first establish a multifidelity optimization model that can synthesize the reference information from high-fidelity tasks and the approximate results from low-fidelity tasks. Then, we introduce the concept of relative fidelity as an indicator for quantifying the dynamic reliability of low-fidelity methods, and further propose a two-mode multifidelity genetic transfer learning framework that balances computing resources for tasks with different fidelity levels. The multitasking mode takes the elite solution as the transfer medium and forms a closed-loop feedback system through the information exchange between low- and high-fidelity tasks running in parallel. The sequential-transfer mode, a one-way algorithm, transfers the elite solutions archived in the previous mode, as a population, to the high-fidelity domain for further optimization.
This framework is suitable for population-based optimization algorithms with variable search direction and step size. The core work of this paper is to realize the framework by means of differential evolution (DE), for which we propose the multifidelity transfer differential evolution (MTDE) algorithm. Corresponding to the multitasking and sequential-transfer modes of the framework, MTDE includes two transfer modes: transfer based on the base vector (b-transfer) and transfer based on the population (p-transfer). The b-transfer mode incorporates the unique advantages of DE into fidelity switching, whereas the p-transfer mode adaptively carries the population over for further high-fidelity local search. Finally, the production-optimization performance of MTDE is validated on the egg model and two real field cases, in which black-oil and streamline models are used to obtain the high- and low-fidelity results, respectively. We also compare the convergence curves and optimization results with those of a single-fidelity method and a greedy multifidelity method. The results show that the proposed algorithm has a faster convergence rate and yields a higher-quality well-control strategy. The adaptive capacity of p-transfer is also demonstrated in three distinct cases. At the end of the paper, we discuss the generalization potential of the proposed framework.
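The two transfer modes can be illustrated with a toy differential-evolution loop. The sketch below is not the authors' MTDE implementation: `high_fid` and `low_fid` are simple analytic stand-ins for the black-oil and streamline simulators, and b-transfer and p-transfer are reduced to their core idea (the low-fidelity elite serves as the DE base vector, and archived elites seed a final high-fidelity population).

```python
import random

def de_generation(pop, fit, obj, F=0.5, CR=0.9, base=None):
    # One DE/rand/1/bin generation (maximizing). `base`, if given, replaces
    # the random base vector: a minimal stand-in for b-transfer, where an
    # elite from the other fidelity level steers the mutation.
    n, d = len(pop), len(pop[0])
    for i in range(n):
        r1, r2, r3 = random.sample([j for j in range(n) if j != i], 3)
        b_vec = base if base is not None else pop[r1]
        j_rand = random.randrange(d)
        trial = [b_vec[k] + F * (pop[r2][k] - pop[r3][k])
                 if (random.random() < CR or k == j_rand) else pop[i][k]
                 for k in range(d)]
        f_trial = obj(trial)
        if f_trial > fit[i]:
            pop[i], fit[i] = trial, f_trial
    return pop, fit

# Toy objectives standing in for the simulators: high fidelity is the
# "truth", low fidelity is a cheap, correlated but biased approximation.
def high_fid(x): return -sum((xi - 1.0) ** 2 for xi in x)
def low_fid(x):  return -sum((xi - 0.9) ** 2 for xi in x)

random.seed(0)
d, n = 5, 20
lo_pop = [[random.uniform(-5, 5) for _ in range(d)] for _ in range(n)]
hi_pop = [[random.uniform(-5, 5) for _ in range(d)] for _ in range(n)]
lo_fit = [low_fid(x) for x in lo_pop]
hi_fit = [high_fid(x) for x in hi_pop]

# Multitasking mode: both tasks evolve in parallel; each generation the
# low-fidelity elite is injected into the high-fidelity task (b-transfer)
# and the high-fidelity elite is archived.
archive = []
for gen in range(60):
    lo_pop, lo_fit = de_generation(lo_pop, lo_fit, low_fid)
    elite = lo_pop[lo_fit.index(max(lo_fit))]
    hi_pop, hi_fit = de_generation(hi_pop, hi_fit, high_fid, base=elite)
    archive.append(hi_pop[hi_fit.index(max(hi_fit))])

# Sequential-transfer mode (p-transfer): archived elites become the
# initial population for a final high-fidelity-only refinement.
p_pop = [list(x) for x in archive[-n:]]
p_fit = [high_fid(x) for x in p_pop]
for gen in range(40):
    p_pop, p_fit = de_generation(p_pop, p_fit, high_fid)

best = max(p_fit)
print(round(best, 4))
```

The real framework schedules compute between fidelity levels via the relative-fidelity indicator; this sketch fixes the schedule (one generation each per loop) purely for readability.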


2019 ◽  
Author(s):  
Qi Yuan ◽  
Alejandro Santana-Bonilla ◽  
Martijn Zwijnenburg ◽  
Kim Jelfs

The chemical space of novel electronic donor-acceptor oligomers with targeted properties was explored using deep generative models and transfer learning. A general recurrent neural network (RNN) model was trained on the ChEMBL database to generate chemically valid SMILES strings. The parameters of the general RNN were fine-tuned via transfer learning, using the electronic donor-acceptor database from the Computational Materials Repository, to generate novel donor-acceptor oligomers. Six transfer-learning models were developed, each with a different subset of the donor-acceptor database as its training set. We conclude that electronic properties of the training sets, such as HOMO-LUMO gaps and dipole moments, can be learned from the SMILES representation with deep generative models, and that the chemical space of the training sets can be efficiently explored. This approach identified approximately 1,700 new molecules with promising electronic properties (HOMO-LUMO gap < 2 eV and dipole moment < 2 Debye), six times more than in the original database. Among the molecular transformations, the deep generative model learned to produce novel molecules by trading off selected atomic substitutions (such as halogenation or methylation) against molecular features such as the spatial extension of the oligomer. The method can be extended as a plausible source of new chemical combinations for effectively exploring the chemical space for targeted properties.
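The pretrain-then-fine-tune recipe above can be sketched without any deep-learning stack. The toy below is not the authors' RNN: a character-level bigram table stands in for the network, a handful of hand-picked SMILES strings stand in for ChEMBL and the donor-acceptor database, and fine-tuning is reduced to reweighting the model's statistics toward the small targeted set.

```python
import random
from collections import defaultdict

class CharBigramLM:
    # Toy character-level generative model: a bigram count table plays the
    # role of the recurrent network, so the pretrain -> fine-tune recipe
    # stays visible without any neural-network machinery.
    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(float))

    def train(self, smiles_list, weight=1.0):
        # `weight` > 1 during fine-tuning lets the small targeted set
        # outvote the broad pretraining statistics (the transfer step).
        for s in smiles_list:
            seq = "^" + s + "$"  # start/end sentinels
            for a, b in zip(seq, seq[1:]):
                self.counts[a][b] += weight

    def sample(self, rng, max_len=20):
        out, ch = [], "^"
        for _ in range(max_len):
            nxt = self.counts[ch]
            ch = rng.choices(list(nxt), weights=list(nxt.values()))[0]
            if ch == "$":
                break
            out.append(ch)
        return "".join(out)

rng = random.Random(0)
general = ["CCO", "CCN", "c1ccccc1", "CC(=O)O", "CCCC"]  # stand-in for ChEMBL
targeted = ["c1ccccc1N", "c1ccccc1O"]   # stand-in for the donor-acceptor set

lm = CharBigramLM()
lm.train(general)             # "pretraining" on the broad corpus
lm.train(targeted, weight=5)  # "fine-tuning": reweight toward target chemistry

samples = [lm.sample(rng) for _ in range(10)]
print(samples)
```

In the paper the same two-stage idea is carried out with a real RNN over SMILES tokens and gradient-based fine-tuning; the bigram table only makes the data flow explicit. Note that a bigram model, unlike the trained RNN, cannot guarantee chemically valid output.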


2014 ◽  
Author(s):  
Hiroshi Kanayama ◽  
Youngja Park ◽  
Yuta Tsuboi ◽  
Dongmook Yi

2020 ◽  
Author(s):  
Pathikkumar Patel ◽  
Bhargav Lad ◽  
Jinan Fiaidhi

During the last few years, RNN models have been used extensively, and they have proven to be well suited to sequence and text data. RNNs have achieved state-of-the-art performance in several applications such as text classification, sequence-to-sequence modelling, and time-series forecasting. In this article, we review different machine learning and deep learning based approaches for text data and examine the results obtained from these methods. This work also explores the use of transfer learning in NLP and how it affects model performance on a specific application: sentiment analysis.
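The transfer-learning idea mentioned above, reusing representations learned on a source task and training only a small head on the target task, can be sketched in a few lines. This is a toy illustration, not any of the reviewed models: random vectors stand in for pretrained word embeddings (which in practice would come from a language model), and "fine-tuning" trains only a logistic-regression head for sentiment while the embedding layer stays frozen.

```python
import math
import random

random.seed(0)

# Stand-in for pretrained word embeddings (e.g. taken from an RNN language
# model trained on a large corpus): frozen here, only the head is trained.
vocab = "good great love bad awful hate movie film the a".split()
dim = 8
emb = {w: [random.gauss(0, 1) for _ in range(dim)] for w in vocab}

def featurize(text):
    # Mean-pool the frozen word vectors into one sentence vector.
    vecs = [emb[w] for w in text.split() if w in emb]
    return [sum(v[k] for v in vecs) / len(vecs) for k in range(dim)]

train = [("good great movie", 1), ("love the film", 1),
         ("bad awful film", 0), ("hate the movie", 0)]
X = [featurize(t) for t, _ in train]
y = [label for _, label in train]

# Fine-tune only the classifier head (logistic regression via SGD);
# the "pretrained" embedding layer stays frozen: the essence of transfer.
w, b, lr = [0.0] * dim, 0.0, 0.5
for _ in range(500):
    for xi, yi in zip(X, y):
        z = sum(wk * xk for wk, xk in zip(w, xi)) + b
        g = 1 / (1 + math.exp(-z)) - yi  # gradient of logistic loss
        w = [wk - lr * g * xk for wk, xk in zip(w, xi)]
        b -= lr * g

preds = [int(sum(wk * xk for wk, xk in zip(w, xi)) + b > 0) for xi in X]
print(preds)
```

Real sentiment pipelines replace the random embeddings with weights from a pretrained model and may unfreeze them for further fine-tuning; the frozen-feature variant shown here is the cheapest point on that spectrum.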


2007 ◽  
Author(s):  
Nicholas A. Gorski ◽  
John E. Laird
