CACHE (Critical Assessment of Computational Hit-finding Experiments): A public-private partnership benchmarking initiative to enable the development of computational methods for hit-finding

Author(s):  
Suzanne Ackloo ◽  
Rima Al-awar ◽  
Rommie E. Amaro ◽  
Cheryl H. Arrowsmith ◽  
Hatylas Azevedo ◽  
...  

Computational approaches in drug discovery and development hold great promise, with artificial intelligence methods in widespread contemporary use, but the experimental validation of these new approaches is frequently inadequate. We are initiating Critical Assessment of Computational Hit-finding Experiments (CACHE) as a public benchmarking project that aims to accelerate the development of small molecule hit-finding algorithms by competitive assessment. Compounds will be identified by participants using a wide range of computational methods for dozens of protein targets selected for different types of prediction scenarios, as well as for their potential biological or pharmaceutical relevance. Community-generated predictions will be tested centrally and rigorously at one or more experimental hubs, and all data, including the chemical structures of experimentally tested compounds, will be made publicly available without restrictions. The ability of a range of computational approaches to find novel compounds will be evaluated, compared, and published. The overarching goal of CACHE is to accelerate the development of computational chemistry methods by providing rapid and unbiased feedback to those developing methods, with an ancillary and valuable benefit of identifying new compound-protein binding pairs for biologically interesting targets. The initiative builds on the power of crowdsourcing and expands the open science paradigm for drug discovery.

2020 ◽  
Author(s):  
Tim Becker ◽  
Kevin Yang ◽  
Juan C Caicedo ◽  
Bridget K Wagner ◽  
Vlado C Dancik ◽  
...  

Recent advances in deep learning enable using chemical structures and phenotypic profiles to accurately predict assay results for compounds virtually, reducing the time and cost of screens in the drug discovery process. The relative strength of high-throughput data sources - chemical structures, images (Cell Painting), and gene expression profiles (L1000) - has been unknown. Here we compare their ability to predict the activity of compounds structurally different from those used in training, using a sparse dataset of 16,979 chemicals tested in 376 assays for a total of 542,648 readouts. Deep learning-based feature extraction from chemical structures provided a remarkable ability to predict assay activity for structures dissimilar to those used for training. Image-based profiling performed even better, though it requires wet-lab experimentation; it also outperformed gene expression profiling, at lower cost. Furthermore, the three profiling modalities are complementary, and together can predict a wide range of diverse bioactivity, including cell-based and biochemical assays. Our study shows that, for many assays, predicting compound activity from phenotypic profiles and chemical structures is an accurate and efficient way to identify potential treatments in the early stages of the drug discovery process.
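The structure-based prediction idea in this study can be sketched, in much simplified form, as a nearest-neighbor lookup in fingerprint space: a query compound inherits the assay label of its most structurally similar training compound. This toy example uses Tanimoto similarity over binary fingerprints; the fingerprints and activity labels are hypothetical, and this is not the deep learning model of the paper.

```python
def tanimoto(a, b):
    """Tanimoto similarity between two binary fingerprints (sets of on-bits)."""
    inter = len(a & b)
    union = len(a) + len(b) - inter
    return inter / union if union else 0.0

def predict_activity(query_fp, training):
    """Predict an assay label for a query compound from its most similar
    training compound (1-nearest neighbor in fingerprint space)."""
    best_fp, best_label = max(training, key=lambda item: tanimoto(query_fp, item[0]))
    return best_label, tanimoto(query_fp, best_fp)

# Hypothetical fingerprints (sets of on-bit indices) and activity labels.
training = [
    ({1, 4, 7, 9}, "active"),
    ({2, 3, 8}, "inactive"),
    ({1, 4, 6, 9, 11}, "active"),
]
label, sim = predict_activity({1, 4, 7, 11}, training)
print(label, round(sim, 2))  # → active 0.6
```

In practice, fingerprints would come from a cheminformatics toolkit (e.g. Morgan/ECFP bits) and a learned model replaces the nearest-neighbor rule, but the similarity-to-label intuition is the same.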


2020 ◽  
Vol 8 (3) ◽  
pp. 147-152
Author(s):  
Johannes Breuer ◽  
Tim Wulf ◽  
M. Rohangis Mohseni

The rise of new technologies and platforms, such as mobile devices and streaming services, has substantially changed the media entertainment landscape and continues to do so. Since its subject of study is changing constantly and rapidly, research on media entertainment has to be quick to adapt. This need to quickly react and adapt not only relates to the questions researchers need to ask but also to the methods they need to employ to answer those questions. Over the last few years, the field of computational social science has been developing and using methods for the collection and analysis of data that can be used to study the use, content, and effects of entertainment media. These methods provide ample opportunities for this area of research and can help in overcoming some of the limitations of self-report data and manual content analyses that most of the research on media entertainment is based on. However, they also have their own set of challenges that researchers need to be aware of and address to make (full) use of them. This thematic issue brings together studies employing computational methods to investigate different types and facets of media entertainment. These studies cover a wide range of entertainment media, data types, and analysis methods, and clearly highlight the potential of computational approaches to media entertainment research. At the same time, the articles also include a critical perspective, openly discuss the challenges and limitations of computational methods, and provide useful suggestions for moving this nascent field forward.


2018 ◽  
Author(s):  
Christopher Southan ◽  
Joanna L Sharman ◽  
Elena Faccenda ◽  
Adam J Pawson ◽  
Simon D Harding ◽  
...  

Connecting chemistry to pharmacology (c2p) has been an objective of GtoPdb and its precursor IUPHAR-DB since 2003. This has been achieved by populating our database with expert-curated relationships between documents, assays, quantitative results, chemical structures, their locations within the documents, and the protein targets in the assays (D-A-R-C-P). A wide range of challenges associated with this are described in this perspective, using illustrative examples from GtoPdb entries. Our selection process begins with judgements of pharmacological relevance and scientific quality. Even though we have a stringent focus for our small-data extraction, we note that assessing the quality of papers has become more difficult over the last 15 years. We discuss ambiguity issues with the resolution of authors' descriptions of A-R-C-P entities to standardised identifiers. We also describe developments that have made this somewhat easier over the same period, both in the publication ecosystem and through enhancements of our internal processes in recent years. This perspective concludes with a look at challenges for the future, including the wider capture of mechanistic nuances and possible impacts of text mining on automated entity extraction.


Life ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 1070
Author(s):  
Mohammad M. Al-Sanea ◽  
Garri Chilingaryan ◽  
Narek Abelyan ◽  
Arsen Sargsyan ◽  
Sargis Hovhannisyan ◽  
...  

The vascular endothelial growth factor receptor 2 (VEGFR-2) is widely recognized as a potent therapeutic target for the development of angiogenesis-related tumor treatments. Tumor growth, metastasis, and multidrug resistance depend strongly on angiogenesis, so the discovery of small molecules targeting VEGFR-2 with anti-angiogenic activity is of high interest to anti-cancer research. Multiple small-molecule inhibitors of VEGFR-2 are approved for the treatment of different types of cancer; one of the most recent, tivozanib, was approved by the FDA for the treatment of relapsed or refractory advanced renal cell carcinoma (RCC). However, endogenous and acquired resistance, compound toxicity, and a wide range of side effects remain critical issues, leading to short-lived clinical effects and the failure of antiangiogenic drugs. We applied a combination of computational methods and approaches for drug design and discovery with the goal of finding novel small-molecule inhibitors of VEGFR-2 as alternatives to the chemical scaffolds and components of known inhibitors. From studying several of these compounds, derivatives of pyrido[1,2-a]pyrimidin-4-one and isoindoline-1,3-dione in particular were identified.


2016 ◽  
Vol 14 (04) ◽  
pp. 1650018 ◽  
Author(s):  
Nolen Joy Perualila-Tan ◽  
Ziv Shkedy ◽  
Willem Talloen ◽  
Hinrich W. H. Göhlmann ◽  
Marijke Van Moerbeke ◽  
...  

The modern process of discovering candidate molecules in the early drug discovery phase includes a wide range of approaches to extract vital information from the intersection of biology and chemistry. A typical strategy in compound selection involves compound clustering based on chemical similarity to obtain representative chemically diverse compounds (not incorporating potency information). In this paper, we propose an integrative clustering approach that makes use of both biological (compound efficacy) and chemical (structural features) data sources for the purpose of discovering a subset of compounds with aligned structural and biological properties. The datasets are integrated at the similarity level by assigning complementary weights to produce a weighted similarity matrix, serving as a generic input to any clustering algorithm. This new analysis workflow is a semi-supervised method: after the clusters are determined, a secondary analysis finds differentially expressed genes associated with the derived integrated cluster(s) to further explain the compound-induced biological effects inside the cell. In this paper, datasets from two drug development oncology projects are used to illustrate the usefulness of the weighted similarity-based clustering approach to integrate multi-source high-dimensional information to aid drug discovery. Compounds that are structurally and biologically similar to the reference compounds are discovered using this proposed integrative approach.
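The similarity-level integration described above can be sketched as a weighted sum of a chemical and a biological similarity matrix, which then feeds any clustering algorithm. The matrices, the weight, and the toy connected-components clustering below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def weighted_similarity(s_chem, s_bio, w):
    """Combine chemical and biological similarity matrices with weight w
    (w=1 uses only chemistry, w=0 only biology)."""
    return w * s_chem + (1.0 - w) * s_bio

def threshold_clusters(sim, cutoff):
    """Toy stand-in for a clustering algorithm: compounds fall in the same
    cluster when connected by similarities >= cutoff (union-find over edges)."""
    n = sim.shape[0]
    labels = list(range(n))
    def find(i):
        while labels[i] != i:
            labels[i] = labels[labels[i]]  # path compression
            i = labels[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if sim[i, j] >= cutoff:
                labels[find(i)] = find(j)
    return [find(i) for i in range(n)]

# Hypothetical 4-compound similarity matrices in [0, 1].
s_chem = np.array([[1.0, 0.9, 0.2, 0.1],
                   [0.9, 1.0, 0.3, 0.2],
                   [0.2, 0.3, 1.0, 0.8],
                   [0.1, 0.2, 0.8, 1.0]])
s_bio  = np.array([[1.0, 0.7, 0.1, 0.0],
                   [0.7, 1.0, 0.2, 0.1],
                   [0.1, 0.2, 1.0, 0.9],
                   [0.0, 0.1, 0.9, 1.0]])

s = weighted_similarity(s_chem, s_bio, w=0.5)
print(threshold_clusters(s, cutoff=0.6))  # compounds {0,1} and {2,3} group together
```

Because the combined matrix is a generic similarity input, any standard algorithm (hierarchical clustering on the corresponding distance matrix, spectral clustering, etc.) could replace the toy thresholding step.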


2015 ◽  
Vol 4 (5) ◽  
pp. 1159-1172 ◽  
Author(s):  
Nigel Greene ◽  
William Pennie

Computational approaches offer the attraction of being both fast and cheap to run, able to process thousands of chemical structures in a few minutes. As with all new technology, there is a tendency for these approaches to be hyped, and claims of reliability and performance may be exaggerated. So just how good are these computational methods?


2020 ◽  
Vol 27 (35) ◽  
pp. 5856-5886 ◽  
Author(s):  
Chen Wang ◽  
Lukasz Kurgan

Therapeutic activity of a significant majority of drugs is determined by their interactions with proteins. Databases of drug-protein interactions (DPIs) primarily focus on the therapeutic protein targets, while knowledge of the off-targets is fragmented and partial. One way to bridge this knowledge gap is to employ computational methods to predict protein targets for a given drug molecule, or interacting drugs for given protein targets. We survey a comprehensive set of 35 methods that were published in high-impact venues and that predict DPIs based on similarity between drugs and similarity between protein targets. We analyze the internal databases of known DPIs that these methods utilize to compute similarities, and investigate how they are linked to the 12 publicly available source databases. We discuss the contents, impact, and relationships between these internal and source databases, as well as the timeline of their releases and publications. The 35 predictors exploit and often combine three types of similarities that consider drug structures, drug profiles, and target sequences. We review the predictive architectures of these methods and their impact, and we explain how their internal DPI databases are linked to the source databases. We also include a detailed timeline of the development of these predictors and discuss the underlying limitations of the current resources and predictive tools. Finally, we provide several recommendations concerning the future development of the related databases and methods.
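A common scheme among such similarity-based predictors is guilt-by-association: a candidate drug-target pair scores highly when a similar drug is already known to interact with a similar target. This is a minimal sketch of that idea with hypothetical similarity values, not any specific surveyed method:

```python
def predict_interaction_score(drug_sim, target_sim, known_pairs, drug, target):
    """Score a candidate (drug, target) pair by the best-supported known
    interaction: max over known pairs of drug similarity x target similarity."""
    return max(
        (drug_sim[drug][d] * target_sim[target][t] for d, t in known_pairs),
        default=0.0,
    )

# Hypothetical similarity tables (e.g. from structure and sequence comparison)
# and one known drug-protein interaction.
drug_sim = {"d1": {"d1": 1.0, "d2": 0.8}, "d2": {"d1": 0.8, "d2": 1.0}}
target_sim = {"t1": {"t1": 1.0, "t2": 0.5}, "t2": {"t1": 0.5, "t2": 1.0}}
known = [("d1", "t1")]

# d2 resembles d1 (0.8) and t2 resembles t1 (0.5), so (d2, t2) scores 0.4.
print(predict_interaction_score(drug_sim, target_sim, known, "d2", "t2"))
```

The surveyed predictors differ mainly in how the two similarity matrices are built (drug structures, drug profiles, target sequences) and in how scores over known pairs are aggregated; the max-product rule here is one simple choice.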


Author(s):  
Georgiana Uță ◽  
Denisa Ștefania Manolescu ◽  
Speranța Avram

Background: Currently, pharmacological management of Alzheimer's disease is based on several chemical structures, represented by acetylcholinesterase and N-methyl-D-aspartate (NMDA) receptor ligands, with still-unclear molecular mechanisms but severe side effects. For this reason, a challenge for Alzheimer's disease treatment remains to identify new drugs with reduced side effects. Recently, natural compounds, in particular certain chemical compounds identified in the essential oils of peppermint, sage, grapes, and sea buckthorn, have attracted increased interest as possible therapeutics. Objectives: In this paper, we summarize data from the recent literature on several chemical compounds extracted from Salvia officinalis L. with therapeutic potential in Alzheimer's disease. Methods: In addition to the wide range of experimental methods performed in vivo and in vitro, we also present some in silico studies of medicinal compounds. Results: Through this mini-review, we present the latest information regarding the therapeutic characteristics of natural compounds isolated from Salvia officinalis L. in Alzheimer's disease. Conclusion: Based on the information presented, phytotherapy is a reliable therapeutic approach in neurodegenerative disease.


2020 ◽  
Vol 20 (19) ◽  
pp. 1651-1660
Author(s):  
Anuraj Nayarisseri

Drug discovery is one of the most complicated processes, and establishing a single drug may require multidisciplinary efforts to design efficient and commercially viable compounds. The main purpose of drug design is to identify a chemical compound or inhibitor that can bind to an active site of a specific cavity on a target protein. Traditional drug design involved various experiment-based approaches, including random screening of chemicals found in nature or synthesized directly in chemical laboratories. Besides the long design cycles, high cost is also a major concern. Modern computer-based algorithms, including structure-based drug design, have substantially accelerated the drug design and discovery process. Over the past decade, remarkable progress has been made in all areas of drug design and discovery. Computer-aided drug design (CADD) tools shorten the conventional cycle, generate chemically more stable and worthy compounds, and hence reduce drug discovery costs. This special editorial edition comprises seven research and review articles that emphasize computational approaches alongside experimental ones, from assessing binding affinity in chemical biology to de novo drug design. This set of articles illuminates the roles of systems biology and the evaluation of ligand affinity in drug design and discovery for the future.

