Impact of different protonation states on virtual screening performance against cruzain

Author(s): Viviane Corrêa Santos, Augusto César Broilo Campos, Birgit J. Waldner, Klaus R. Liedl, Rafaela Salgado Ferreira

2020
Author(s): Fergus Imrie, Anthony R. Bradley, Charlotte M. Deane

An essential step in the development of virtual screening methods is the use of established sets of actives and decoys for benchmarking and training. However, the decoy molecules in commonly used sets are biased, meaning that methods often exploit these biases to separate actives from decoys rather than learning how to perform molecular recognition. This fundamental issue prevents generalisation and hinders virtual screening method development. We have developed a deep learning method (DeepCoy) that generates decoys to a user’s preferred specification, either to remove such biases or to construct sets with a defined bias. We validated DeepCoy using two established benchmarks, DUD-E and DEKOIS 2.0. For all DUD-E targets and 80 of the 81 DEKOIS 2.0 targets, our generated decoy molecules more closely matched the active molecules’ physicochemical properties while introducing no discernible additional risk of false negatives. The DeepCoy decoys improved the Deviation from Optimal Embedding (DOE) score by an average of 81% and 66%, respectively, decreasing from 0.163 to 0.032 for DUD-E and from 0.109 to 0.038 for DEKOIS 2.0. Furthermore, the generated decoys are harder to distinguish from actives than the original decoy molecules are when docking with AutoDock Vina, with virtual screening performance falling from an AUC ROC of 0.71 to 0.63. The code is available at https://github.com/oxpig/DeepCoy. Generated molecules can be downloaded from http://opig.stats.ox.ac.uk/resources.
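To illustrate the property-matching and screening metrics discussed above, the sketch below compares the average physicochemical properties of an active set against a decoy set and computes an ROC AUC from screening scores. It is a minimal example using RDKit and scikit-learn, not the DeepCoy or DOE implementation; the property selection and helper names are assumptions made for illustration.

```python
# Minimal sketch (not DeepCoy/DOE): compare active vs. decoy physicochemical
# property distributions and compute a screening ROC AUC.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen
from sklearn.metrics import roc_auc_score

def properties(smiles):
    """Return a small physicochemical property vector for one molecule."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None
    return np.array([
        Descriptors.MolWt(mol),             # molecular weight
        Crippen.MolLogP(mol),               # lipophilicity
        Descriptors.NumHDonors(mol),        # H-bond donors
        Descriptors.NumHAcceptors(mol),     # H-bond acceptors
        Descriptors.NumRotatableBonds(mol), # flexibility
    ])

def mean_property_gap(active_smiles, decoy_smiles):
    """Absolute difference between the mean property vectors of the two sets,
    a crude proxy for how well the decoys match the actives."""
    a = np.mean([p for s in active_smiles if (p := properties(s)) is not None], axis=0)
    d = np.mean([p for s in decoy_smiles if (p := properties(s)) is not None], axis=0)
    return np.abs(a - d)

def screening_auc(labels, scores):
    """ROC AUC of a virtual screen: labels are 1 for actives, 0 for decoys;
    scores should increase with predicted likelihood of activity
    (e.g. negated docking energies)."""
    return roc_auc_score(labels, scores)
```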


2019, Vol. 59 (9), pp. 3655-3666
Author(s): Yunierkis Perez-Castillo, Stellamaris Sotomayor-Burneo, Karina Jimenes-Vargas, Mario Gonzalez-Rodriguez, Maykel Cruz-Monteagudo, ...

2015
Author(s): Yunierkis Pérez-Castillo, Aliuska Morales-Helguera, M. Natália D. S. Cordeiro, Eduardo Tejera, Cesar Paz-y-Miño, ...

Author(s): Jocelyn Sunseri, David Koes

Virtual screening, the task of predicting which compounds within a specified compound library bind to a target molecule (typically a protein), is fundamental to drug discovery. Doing virtual screening well provides tangible practical benefits, including reduced drug development costs, faster time to therapeutic viability, and fewer unforeseen side effects. As with most applied computational tasks, the algorithms currently used for virtual screening involve inherent tradeoffs between speed and accuracy. Furthermore, even theoretically rigorous, computationally intensive methods may fail to account for effects that determine whether a given compound will ultimately be usable as a drug. Here we investigate the virtual screening performance of the recently released Gnina molecular docking software, which uses deep convolutional networks to score protein-ligand structures. We find that, on average, Gnina outperforms conventional empirical scoring: its default scoring outperforms the empirical AutoDock Vina scoring function on 89 of the 117 targets in the DUD-E and LIT-PCBA virtual screening benchmarks, with a median 1% early enrichment factor more than twice that of Vina. However, we also find that bias lingers in these sets, even when they are not used directly to train models, and this bias obscures the extent to which machine learning models achieve their performance through a sophisticated interpretation of molecular interactions versus fitting to uninformative, simplistic property distributions.
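For readers unfamiliar with the 1% early enrichment factor cited above, the sketch below shows one common way to compute the enrichment factor at a chosen fraction of the ranked library. The score convention (higher means more active-like) and the function name are illustrative assumptions, not Gnina's own evaluation code.

```python
# Illustrative sketch of the early enrichment factor (EF), e.g. EF1%.
import numpy as np

def enrichment_factor(labels, scores, fraction=0.01):
    """Enrichment factor at the given fraction of the ranked library.

    labels: 1 for actives, 0 for decoys/inactives.
    scores: predicted scores, higher meaning more likely active.
    """
    labels = np.asarray(labels)
    order = np.argsort(scores)[::-1]           # rank best-scoring compounds first
    n_top = max(1, int(round(fraction * len(labels))))
    hits_top = labels[order][:n_top].sum()     # actives recovered in the early ranks
    hit_rate_top = hits_top / n_top
    hit_rate_all = labels.sum() / len(labels)  # random expectation
    return hit_rate_top / hit_rate_all

# Example: an EF1% of 10 means the top 1% of the ranked library contains
# ten times more actives than would be expected by chance.
```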


2009, Vol. 23 (9), pp. 471-478
Author(s): Robert D. Clark, Jennifer K. Shepphird, John Holliday

2014, Vol. 18 (3), pp. 637-654
Author(s): Yunierkis Pérez-Castillo, Maykel Cruz-Monteagudo, Cosmin Lazar, Jonatan Taminau, Mathy Froeyen, ...
