Molecular de-novo design through deep reinforcement learning

2017 ◽  
Vol 9 (1) ◽  
Author(s):  
Marcus Olivecrona ◽  
Thomas Blaschke ◽  
Ola Engkvist ◽  
Hongming Chen
Author(s):  
Thomas Blaschke ◽  
Ola Engkvist ◽  
Jürgen Bajorath ◽  
Hongming Chen

In de novo molecular design, recurrent neural networks (RNNs) have been shown to be effective for sampling and generating novel chemical structures. Using reinforcement learning (RL), an RNN can be tuned with a scoring function to target a particular region of chemical space with optimized desirable properties. However, ligands generated by current RL methods tend to have relatively low diversity, and optimizing towards particular properties sometimes even yields duplicate structures. Here, we propose a new method to address this low-diversity issue in RL. Memory-assisted RL extends the established RL approach with the introduction of a so-called memory unit.
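The abstract does not specify how the memory unit operates. As a hypothetical illustration only, one simple realization is a store of previously generated structures that damps the reward for repeats; the class and method names below are ours, not from the paper:

```python
# Hypothetical sketch of a memory unit for memory-assisted RL.
# MemoryUnit / adjust_score are illustrative names, not from the paper.

class MemoryUnit:
    """Stores previously generated structures and damps the reward
    for molecules the agent has already produced."""

    def __init__(self, penalty=0.5):
        self.seen = {}          # structure -> times generated so far
        self.penalty = penalty  # multiplicative damping per repeat

    def adjust_score(self, smiles, score):
        count = self.seen.get(smiles, 0)
        self.seen[smiles] = count + 1
        # Each repeat shrinks the reward, pushing the agent
        # toward unexplored regions of chemical space.
        return score * (self.penalty ** count)

mem = MemoryUnit(penalty=0.5)
print(mem.adjust_score("c1ccccc1O", 1.0))  # first occurrence: 1.0
print(mem.adjust_score("c1ccccc1O", 1.0))  # repeat: 0.5
```

In a full RL loop the adjusted score, not the raw score, would be fed back as the reward signal after each sampled batch.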


2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Xuhan Liu ◽  
Kai Ye ◽  
Herman W. T. van Vlijmen ◽  
Michael T. M. Emmerich ◽  
Adriaan P. IJzerman ◽  
...  

In polypharmacology, drugs are required to bind to multiple specific targets, for example to enhance efficacy or to reduce resistance formation. Although deep learning has achieved breakthroughs in de novo design for drug discovery, most applications focus on a single drug target when generating drug-like active molecules. In reality, however, drug molecules often interact with more than one target, with desired (polypharmacology) or undesired (toxicity) effects. In a previous study we proposed a method named DrugEx that integrates an exploration strategy into RNN-based reinforcement learning to improve the diversity of the generated molecules. Here, we extend the DrugEx algorithm with multi-objective optimization to generate drug-like molecules towards multiple targets, or towards one specific target while avoiding off-targets (in this study, the two adenosine receptors A1AR and A2AAR and the potassium ion channel hERG). In our model, an RNN serves as the agent and machine learning predictors as the environment. Both the agent and the environment are pre-trained and then interact within a reinforcement learning framework. Concepts from evolutionary algorithms are merged into the method, with crossover and mutation operations implemented by the same deep learning model as the agent. During the training loop, the agent generates a batch of SMILES-based molecules. Subsequently, the scores for all objectives provided by the environment are used to construct Pareto ranks of the generated molecules, applying a non-dominated sorting algorithm and a Tanimoto-based crowding distance computed on chemical fingerprints. GPU acceleration is used to speed up the Pareto optimization. The final reward of each molecule is calculated from its Pareto rank with a ranking selection algorithm. The agent is trained under the guidance of this reward so that, after convergence, it generates the desired molecules. Altogether, we demonstrate the generation of compounds with diverse predicted selectivity profiles towards multiple targets, offering the potential of high efficacy and low toxicity.
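The Pareto ranking step described above can be sketched in a few lines. The following is a minimal illustrative implementation of non-dominated sorting plus a Tanimoto similarity on fingerprint bit sets; function names are ours, and the GPU-accelerated DrugEx implementation differs:

```python
# Illustrative sketch of Pareto ranking for multi-objective scoring.
# Scores are tuples, one entry per objective; higher is better.

def dominates(a, b):
    """a dominates b if it is >= in every objective and > in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated_ranks(scores):
    """Return the Pareto rank (0 = best front) of each score vector."""
    ranks = [0] * len(scores)
    remaining = set(range(len(scores)))
    rank = 0
    while remaining:
        # A point is on the current front if nothing left dominates it.
        front = {i for i in remaining
                 if not any(dominates(scores[j], scores[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

scores = [(0.9, 0.1), (0.5, 0.5), (0.2, 0.2)]  # per-objective scores
print(non_dominated_ranks(scores))  # -> [0, 0, 1]
```

Within a front, a Tanimoto-based crowding distance would then favor molecules whose fingerprints are least similar to their neighbors, preserving chemical diversity.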


2020 ◽  
Author(s):  
Thomas Blaschke ◽  
Ola Engkvist ◽  
Jürgen Bajorath ◽  
Hongming Chen



2019 ◽  
Author(s):  
Niclas Ståhl ◽  
Göran Falkman ◽  
Alexander Karlsson ◽  
Gunnar Mathiason ◽  
Jonas Boström

In medicinal chemistry programs it is key to design and make compounds that are efficacious and safe. This is a long, complex and difficult multi-parameter optimization process, often including several properties with orthogonal trends. New methods for the automated design of compounds against profiles of multiple properties are therefore of great value. Here we present a fragment-based reinforcement learning approach, built on an actor-critic model, for the generation of novel molecules with optimal properties. The actor and the critic are both modelled with bidirectional long short-term memory (LSTM) networks. The method learns to generate new compounds with desired properties by starting from an initial set of lead molecules and then improving them by replacing some of their fragments. A balanced binary tree built on fragment similarity is used in the generative process to bias the output towards structurally similar molecules. A case study demonstrates that 93% of the generated molecules are chemically valid and a third satisfy the targeted objectives, whereas none of the molecules in the initial set did.
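The fragment-replacement step can be illustrated with a toy sketch. Here a molecule is treated as a list of fragment strings and one fragment is swapped for the most similar alternative in a library; all names are ours, the similarity is a crude placeholder on characters rather than a chemical fingerprint, and a simple argmax stands in for the paper's balanced binary tree:

```python
# Toy sketch of biased fragment replacement. Real fragments would be
# SMILES substructures; the balanced similarity tree from the paper is
# approximated here by choosing the most similar library fragment.
import random

def replace_fragment(fragments, library, similarity, rng=random):
    """Swap one randomly chosen fragment for its most similar alternative."""
    idx = rng.randrange(len(fragments))
    old = fragments[idx]
    # Bias the generative step toward structurally similar replacements.
    candidates = [f for f in library if f != old]
    new = max(candidates, key=lambda f: similarity(old, f))
    return fragments[:idx] + [new] + fragments[idx + 1:]

def sim(a, b):
    """Placeholder similarity: Tanimoto on the character sets of the strings."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

lead = ["c1ccccc1", "C(=O)O"]            # toy lead molecule as two fragments
library = ["c1ccncc1", "C(=O)N", "CCO"]  # toy fragment library
print(replace_fragment(lead, library, sim, random.Random(0)))
```

In the actual method, the critic's value estimate of the resulting molecule would decide whether such a replacement is reinforced.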


Author(s):  
Dieter Buyst ◽  
V. Gheerardijn ◽  
J. Van Den Begin ◽  
A. Madder ◽  
J. C. Martins

Author(s):  
Laura Díaz-Casado ◽  
Israel Serrano-Chacón ◽  
Laura Montalvillo-Jiménez ◽  
Francisco Corzana ◽  
Agatha Bastida ◽  
...  

Nature ◽  
2021 ◽  
Author(s):  
Alfredo Quijano-Rubio ◽  
Hsien-Wei Yeh ◽  
Jooyoung Park ◽  
Hansol Lee ◽  
Robert A. Langan ◽  
...  
Keyword(s):  
De Novo ◽  
