An inhibitory weight initialization improves the speed and quality of recurrent neural networks learning

1997 ◽  
Vol 16 (3) ◽  
pp. 207-224 ◽  
Author(s):  
J.P. Draye ◽  
D. Pavisic ◽  
G. Cheron ◽  
G. Libert
2021 ◽  
Vol 48 (4) ◽  
pp. 37-40
Author(s):  
Nikolas Wehner ◽  
Michael Seufert ◽  
Joshua Schuler ◽  
Sarah Wassermann ◽  
Pedro Casas ◽  
...  

This paper addresses the problem of Quality of Experience (QoE) monitoring for web browsing. In particular, the inference of common Web QoE metrics such as Speed Index (SI) is investigated. Based on a large dataset collected with open web-measurement platforms on different device types, a unique feature set is designed and used to estimate the RUMSI, an efficient approximation to SI, with machine-learning-based regression and classification approaches. Results indicate that the RUMSI can be estimated accurately, and that recurrent neural networks in particular are highly suitable for the task, as they capture the network dynamics more precisely.
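As a rough illustration of the regression approach described above, the sketch below feeds a short sequence of per-interval network features into an LSTM and regresses a single RUMSI value per page load. The feature count, layer sizes, and names are assumptions for illustration only; the paper's exact feature set and hyperparameters are not specified here.

```python
# Minimal sketch of an LSTM-based RUMSI regressor (assumed architecture,
# not the authors' exact model).
import torch
import torch.nn as nn

class RumsiRegressor(nn.Module):
    def __init__(self, n_features: int, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                       # x: (batch, time_steps, n_features)
        _, (h_n, _) = self.lstm(x)              # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1]).squeeze(-1)   # one RUMSI estimate per sequence

# Example: 32 page loads, 20 time steps of 8 network-level features each.
model = RumsiRegressor(n_features=8)
x = torch.randn(32, 20, 8)
y_hat = model(x)                                # predicted RUMSI values, shape (32,)
```

Training such a model against ground-truth RUMSI labels with a standard regression loss (e.g., MSE) matches the regression framing in the abstract; the classification variant would swap the scalar head for class logits.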


2019 ◽ 
Author(s):  
Josep Arús-Pous ◽  
Simon Johansson ◽  
Oleksii Prykhodko ◽  
Esben Jannik Bjerrum ◽  
Christian Tyrchan ◽  
...  

Recurrent Neural Networks (RNNs) trained on a set of molecules represented as unique (canonical) SMILES strings have shown the capacity to create large chemical spaces of valid and meaningful structures. Herein we perform an extensive benchmark on models trained with subsets of GDB-13 of different sizes (1 million, 10,000 and 1,000), with different SMILES variants (canonical, randomized and DeepSMILES), with two different recurrent cell types (LSTM and GRU) and with different hyperparameter combinations. To guide the benchmarks, new metrics were developed that characterize the generated chemical space with respect to its uniformity, closedness and completeness. Results show that models using LSTM cells trained with 1 million randomized SMILES, a non-unique molecular string representation, are able to generate larger chemical spaces than the other approaches and represent the target chemical space more accurately. Specifically, a model trained with randomized SMILES was able to generate almost all molecules from GDB-13 with a quasi-uniform probability. Models trained on smaller samples show an even bigger improvement when trained with randomized SMILES. Additionally, models were trained on molecules obtained from ChEMBL and again illustrate that training with randomized SMILES leads to models with a better representation of the drug-like chemical space. Namely, the model trained with randomized SMILES was able to generate at least double the number of unique molecules with the same distribution of properties compared to one trained with canonical SMILES.
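The key ingredient here, randomized SMILES, can be produced with RDKit. The sketch below is a minimal illustration (assuming RDKit's doRandom option, not the authors' exact pipeline): each call may emit a different, equally valid SMILES string for the same molecule, so a model trained on them sees many string variants per structure.

```python
# Minimal sketch of SMILES randomization with RDKit (illustrative only).
from rdkit import Chem

def randomized_smiles(smiles: str) -> str:
    mol = Chem.MolFromSmiles(smiles)
    # canonical=False + doRandom=True picks a random atom ordering
    # for the output string, yielding a non-unique representation.
    return Chem.MolToSmiles(mol, canonical=False, doRandom=True)

canonical = "CC(=O)Oc1ccccc1C(=O)O"  # aspirin, canonical SMILES
variants = {randomized_smiles(canonical) for _ in range(10)}
print(variants)  # several distinct strings, all parsing to the same molecule
```

Feeding such variants to a character-level RNN language model is the essence of the benchmarked randomized-SMILES setup; the randomization acts as data augmentation over a fixed set of molecules.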


Author(s):  
Annie K Lamar

We investigate the generation of metrically accurate Homeric poetry using recurrent neural networks (RNNs). We assess two models: a basic encoder-decoder RNN and the hierarchical recurrent encoder-decoder model (HRED). We evaluate the quality of the generated lines of poetry using quantitative metrical analysis and expert evaluation. This evaluation reveals that while the basic encoder-decoder is able to capture complex poetic meter, it underperforms in terms of semantic coherence. The HRED model, however, produces more semantically coherent lines of poetry but is unable to capture the meter. Our research highlights the importance of expert evaluation and suggests that future research should focus on encoder-decoder models that balance various types of input, both immediate and long-range.
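For orientation, a basic encoder-decoder of the kind evaluated here can be sketched as below (vocabulary and layer sizes are illustrative assumptions, not the authors' values): one RNN encodes the previous line of poetry, and a second RNN generates the next line conditioned on that encoding.

```python
# Minimal sketch of a basic encoder-decoder RNN for line-by-line generation.
import torch
import torch.nn as nn

class Seq2Seq(nn.Module):
    def __init__(self, vocab_size: int, emb: int = 128, hidden: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.encoder = nn.GRU(emb, hidden, batch_first=True)
        self.decoder = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, src, tgt):
        _, h = self.encoder(self.embed(src))           # encode the previous line
        dec_out, _ = self.decoder(self.embed(tgt), h)  # condition the next line on it
        return self.out(dec_out)                       # logits over the vocabulary

model = Seq2Seq(vocab_size=1000)
src = torch.randint(0, 1000, (4, 12))  # 4 source lines, 12 tokens each
tgt = torch.randint(0, 1000, (4, 12))
logits = model(src, tgt)               # (4, 12, 1000)
```

HRED extends this pattern with a third, line-level RNN over the per-line encodings, which lets context from several previous lines condition each generated line and, per the evaluation above, improves coherence at the cost of meter.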


Author(s):  
G. J. Anaya-Lopez ◽  
C. Cardenas-Angelat ◽  
D. Jimenez-Soria ◽  
M.C. Aguayo-Torres ◽  
N. Guerra-Melgares ◽  
...  


Author(s):  
Arunmozhi Mourougappane ◽  
Suresh Jaganathan

Sentiment analysis and classification have become a key trend in analyzing the nature and quality of products, people's emotions, and opinions about products and movies. Sentiment analysis is a classification task: it labels an opinion or review as positive or negative. The task is hard because labeled data are expensive and difficult to gather; moreover, sarcastic text and homonyms are difficult to identify, so reviews may be misclassified. Recurrent Neural Networks offer a way to identify sarcastic words and words with multiple meanings.
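As an illustration of the RNN approach the abstract points to, the sketch below is a generic LSTM sentiment classifier (an assumed architecture, not the authors' model): the LSTM reads the tokenized review and a linear layer maps its final hidden state to positive/negative logits.

```python
# Minimal sketch of an LSTM sentiment classifier (illustrative only).
import torch
import torch.nn as nn

class SentimentRNN(nn.Module):
    def __init__(self, vocab_size: int, emb: int = 100, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.classify = nn.Linear(hidden, 2)   # positive vs. negative

    def forward(self, tokens):                 # tokens: (batch, seq_len) word ids
        _, (h_n, _) = self.lstm(self.embed(tokens))
        return self.classify(h_n[-1])          # logits, shape (batch, 2)

model = SentimentRNN(vocab_size=5000)
reviews = torch.randint(0, 5000, (8, 30))      # 8 tokenized reviews, 30 tokens each
logits = model(reviews)
```

Because the LSTM's hidden state accumulates context across the whole review, it can in principle pick up cues for sarcasm and word-sense distinctions that bag-of-words classifiers miss, which is the motivation stated above.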


