Recurrent Neural Networks for Learning Mixed kth-Order Markov Chains

Author(s):
Wang Xiangrui,
Narendra S. Chaudhari

2007, Vol 10 (2)

Author(s):
Igor Lorenzato Almeida,
Denise Regina Pechmann,
Adelmo Luis Cechin

This paper presents a new approach for the analysis of gene expression, by extracting a Markov Chain from trained Recurrent Neural Networks (RNNs). A large amount of microarray data is being generated, since array technologies have been widely used to monitor the expression patterns of thousands of genes simultaneously. Microarray data is highly specialized and involves several variables that are complex to express and analyze. The challenge is to discover how to extract useful information from these data sets. This work therefore proposes the use of RNNs for data modeling, due to their ability to learn complex temporal non-linear data. Once a model is obtained for the data, it is possible to extract the acquired knowledge and to represent it through a Markov Chain model. Markov Chains are easily visualized in the form of state graphs, which show the influences among the gene expression levels and their changes over time.
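The abstract describes representing the knowledge learned from expression data as a Markov Chain whose states are expression levels. As a minimal sketch of that representation (not the paper's RNN extraction procedure), the snippet below estimates first-order transition probabilities from hypothetical discretized expression trajectories; the state names and example sequences are assumptions for illustration.

```python
from collections import defaultdict

def markov_chain(sequences, states):
    """Estimate first-order transition probabilities from discretized
    state sequences (e.g. expression levels binned into low/mid/high)."""
    counts = {s: defaultdict(int) for s in states}
    for seq in sequences:
        # Count each observed transition (a -> b) along the trajectory.
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    probs = {}
    for s, row in counts.items():
        total = sum(row.values())
        # Normalize counts into probabilities; empty row if state unseen.
        probs[s] = {t: c / total for t, c in row.items()} if total else {}
    return probs

# Hypothetical discretized expression trajectories for one gene.
seqs = [["low", "mid", "high", "high"],
        ["low", "mid", "mid", "high"]]
chain = markov_chain(seqs, ["low", "mid", "high"])
print(chain["mid"])  # transition probabilities out of the 'mid' state
```

Each non-empty row of `chain` is one node of the state graph the abstract mentions, with the outgoing edge weights giving the transition probabilities.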


2020 ◽  
Author(s):  
Dean Sumner ◽  
Jiazhen He ◽  
Amol Thakkar ◽  
Ola Engkvist ◽  
Esben Jannik Bjerrum

SMILES randomization, a form of data augmentation, has previously been shown to increase the performance of deep learning models compared to non-augmented baselines. Here, we propose a novel data augmentation method we call "Levenshtein augmentation", which considers local SMILES sub-sequence similarity between reactants and their respective products when creating training pairs. The performance of Levenshtein augmentation was tested using two state-of-the-art models: transformer and sequence-to-sequence based recurrent neural networks with attention. Levenshtein augmentation demonstrated an increase in performance over non-augmented data, and over data augmented by conventional SMILES randomization, when used for training of baseline models. Furthermore, Levenshtein augmentation seemingly results in what we define as attentional gain: an enhancement in the pattern recognition capabilities of the underlying network with respect to molecular motifs.
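The augmentation method is named after the Levenshtein (edit) distance between reactant and product SMILES strings. As a minimal sketch of that underlying similarity measure (not the paper's pairing algorithm itself), the snippet below computes the classic dynamic-programming edit distance; the example SMILES pair is an assumption for illustration.

```python
def levenshtein(a, b):
    """Edit distance between strings a and b: the minimum number of
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

# Hypothetical reactant/product SMILES pair: ethanol -> acetaldehyde.
print(levenshtein("CCO", "CC=O"))  # 1 (one character inserted)
```

A small distance between reactant and product SMILES indicates the local sub-sequence similarity the augmentation method exploits when creating training pairs.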


Author(s):
Faisal Ladhak,
Ankur Gandhe,
Markus Dreyer,
Lambert Mathias,
Ariya Rastrow,
...
