Precision Psychiatry: Machine learning as a tool to find new pharmacological targets

Author(s):  
João Rema ◽  
Filipa Novais ◽  
Diogo Telles-Correia

Background: There is an increasing amount of data arising from the neurobehavioral sciences and medical records that cannot be adequately analyzed by traditional research methods. New drugs are developed at a slow rate and remain unsatisfactory for the majority of neurobehavioral disorders. Machine learning (ML) techniques, by contrast, can incorporate psychopathological, computational, cognitive, and neurobiological knowledge, leading to refinements in detection, diagnosis, prognosis, treatment, research, and support. Machine and deep learning methods are currently used to accelerate the discovery of new pharmacological targets and drugs. Objective: The present work reviews current evidence regarding the contribution of machine learning to the discovery of new drug targets. Methods: Scientific articles from PubMed, SCOPUS, EMBASE, and the Web of Science Core Collection published until May 2021 were included in this review. Results: The most significant areas of research are schizophrenia, depression and anxiety, Alzheimer's disease, and substance use disorders. ML techniques have pinpointed candidate target genes and pathways, new molecular substances, and several biomarkers for psychiatric disorders. Drug repositioning studies using ML have identified multiple drug candidates as promising therapeutic agents. Conclusion: Next-generation ML techniques and subsequent deep learning may power new findings in the discovery of pharmacological agents by bridging the gap between biological data and chemical drug information.

2018 ◽  
Vol 15 (1) ◽  
pp. 6-28 ◽  
Author(s):  
Javier Pérez-Sianes ◽  
Horacio Pérez-Sánchez ◽  
Fernando Díaz

Background: Automated compound testing is currently the de facto standard method for drug screening, but it has not brought the large increase in the number of new drugs that was expected. Computer-aided compound search, known as virtual screening, has shown benefits to this field as a complement, or even an alternative, to robotic drug discovery. There are different methods and approaches to address this problem, and most of them fall under one of the main screening strategies. Machine learning, however, has established itself as a virtual screening methodology in its own right, and it may grow in popularity with the new trends in artificial intelligence. Objective: This paper attempts to provide a comprehensive and structured review that collects the most important proposals made so far in this area of research. Particular attention is given to some recent developments carried out in the machine learning field, especially the deep learning approach, which is pointed out as a future key player in the virtual screening landscape.


2019 ◽  
Vol 20 (1) ◽  
Author(s):  
Mahroo Moridi ◽  
Marzieh Ghadirinia ◽  
Ali Sharifi-Zarchi ◽  
Fatemeh Zare-Mirakabad

Abstract Background: De novo drug discovery is a time-consuming and expensive process. Nowadays, drug repositioning is utilized as a common strategy to discover new indications for existing drugs. This strategy is mostly used in cases with a limited number of candidate drug-disease pairs; in other words, existing approaches are not scalable to large numbers of drugs and diseases. Most in-silico methods focus mainly on linear approaches, while non-linear models are still scarce for new indication prediction. Therefore, applying non-linear computational approaches offers an opportunity to predict possible drug repositioning candidates. Results: In this study, we present a non-linear method for drug repositioning. We extract four drug features and two disease features to find the semantic relations between drugs and diseases, and we utilize deep learning to extract an efficient representation for each feature. These representations reduce the dimension and heterogeneity of biological data. Then, we assess the performance of different combinations of drug features to introduce a pipeline for drug repositioning. In the available database, different numbers of known drug-disease associations correspond to each combination of drug features. Our assessment shows that as the number of drug features increases, the number of available drugs decreases; nevertheless, the proposed method is as accurate with large numbers of drug features as with small numbers. Conclusion: Our pipeline predicts new indications for existing drugs systematically, more cost-effectively, and on a shorter timeline. We assess the pipeline's ability to discover potential drug-disease associations based on cross-validation experiments and some clinical trial studies.
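The representation-learning step described above can be illustrated in miniature with a one-hidden-layer autoencoder that compresses a toy drug-feature matrix into a compact code. This is a generic sketch, not the authors' actual pipeline; the matrix sizes, learning rate, and epoch count are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden=8, lr=0.01, epochs=500):
    """Train a one-hidden-layer autoencoder by plain gradient descent
    on the mean squared reconstruction error; return encoder weights."""
    n, d = X.shape
    W1 = rng.normal(0, 0.1, (d, n_hidden))   # encoder weights
    W2 = rng.normal(0, 0.1, (n_hidden, d))   # decoder weights
    for _ in range(epochs):
        H = np.tanh(X @ W1)                  # low-dimensional code
        X_hat = H @ W2                       # reconstruction
        err = X_hat - X
        gW2 = H.T @ err / n                  # decoder gradient
        gH = err @ W2.T * (1 - H ** 2)       # backprop through tanh
        gW1 = X.T @ gH / n                   # encoder gradient
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1

# toy stand-in for a drug feature table: 100 drugs x 30 raw features
X = rng.normal(size=(100, 30))
W1 = train_autoencoder(X)
codes = np.tanh(X @ W1)      # 8-dimensional representation per drug
print(codes.shape)
```

Once each feature type is compressed this way, the compact codes can be concatenated and fed to a downstream predictor, which is the role such representations play in a repositioning pipeline.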


2020 ◽  
Author(s):  
Xue Zhang ◽  
Weijia Xiao ◽  
Wangxin Xiao

ABSTRACT Essential genes are necessary for the survival or reproduction of a living organism. The prediction and analysis of gene essentiality can advance our understanding of basic life and human diseases, and further boost the development of new drugs. Wet-lab methods for identifying essential genes are often costly, time-consuming, and laborious. As a complement, computational methods have been proposed to predict essential genes by integrating multiple biological data sources. Most of these methods are evaluated on model organisms. However, prediction methods for human essential genes are still limited, and the relationship between human gene essentiality and different biological information still needs to be explored. In addition, exploring suitable deep learning techniques to overcome the limitations of traditional machine learning methods and improve prediction accuracy is also important and interesting. We propose a deep learning based method, DeepSF, to predict human essential genes. DeepSF integrates sequence features derived from DNA and protein sequence data with features extracted or learned from different types of functional data, such as gene ontology, protein complexes, protein domains, and the protein-protein interaction network. More than 200 features from these biological data are extracted or learned and integrated to train a cost-sensitive deep neural network utilizing multiple deep learning techniques. The experimental results of 10-fold cross-validation show that DeepSF can accurately predict human gene essentiality with an average AUC of 95.17%, an area under the precision-recall curve (auPRC) of 92.21%, an accuracy of 91.59%, and an F1 measure of about 78.71%. In addition, comparison experiments show that DeepSF significantly outperforms several popular traditional machine learning models (SVM, Random Forest, and Adaboost) and performs slightly better than a recent deep learning model (DeepHE).
We have demonstrated that the proposed method, DeepSF, is effective for predicting human essential genes. Deep learning techniques are promising at both the feature learning and classification levels for the task of essential gene prediction.
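The cost-sensitive idea underlying this line of work can be sketched with a weighted binary cross-entropy loss, where mistakes on the rare positive class (essential genes) are penalized more heavily. This is a generic illustration, not DeepSF's implementation; the toy labels and the negative-to-positive weight ratio are assumptions:

```python
import numpy as np

def weighted_bce(y_true, y_prob, pos_weight):
    """Binary cross-entropy in which errors on the positive class
    are up-weighted by pos_weight (pos_weight=1 gives the plain loss)."""
    eps = 1e-12
    y_prob = np.clip(y_prob, eps, 1 - eps)
    loss = -(pos_weight * y_true * np.log(y_prob)
             + (1 - y_true) * np.log(1 - y_prob))
    return loss.mean()

# toy imbalanced labels: 1 essential gene among 10
y = np.array([1, 0, 0, 0, 0, 0, 0, 0, 0, 0])
p = np.full(10, 0.5)                 # an uninformative classifier

# a common weight choice: the ratio of negatives to positives
w = (y == 0).sum() / (y == 1).sum()

loss_plain = weighted_bce(y, p, pos_weight=1.0)
loss_weighted = weighted_bce(y, p, pos_weight=w)
print(loss_plain, loss_weighted)
```

Because the weighted loss magnifies the penalty on the minority class, gradient descent on it pushes the network toward recalling essential genes rather than defaulting to the majority "non-essential" label.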


Author(s):  
Xue Zhang ◽  
Wangxin Xiao ◽  
Weijia Xiao

Abstract Motivation: Accurately predicting essential genes using computational methods can greatly reduce the effort of finding them via wet experiments, at both the time and resource scales, and further accelerate the process of drug discovery. Several computational methods have been proposed for predicting essential genes in model organisms by integrating multiple biological data sources, either via centrality measures or machine learning based methods. However, methods aiming to predict human essential genes are still limited, and their performance still needs improvement. In addition, most machine learning based essential gene prediction methods lack mechanisms to handle the imbalanced learning issue inherent in the essential gene prediction problem, which might be one factor affecting their performance. Results: We propose a deep learning based method, DeepHE, to predict human essential genes by integrating features derived from sequence data and the protein-protein interaction (PPI) network. A deep learning based network embedding method is utilized to automatically learn features from the PPI network. In addition, 89 sequence features are derived from the DNA sequence and protein sequence of each gene. These two types of features are integrated to train a multilayer neural network, and a cost-sensitive technique is used to address the imbalanced learning problem when training the deep neural network. The experimental results for predicting human essential genes show that our proposed method, DeepHE, can accurately predict human gene essentiality with an average AUC higher than 94%, an area under the precision-recall curve (AP) higher than 90%, and an accuracy higher than 90%. We also compared DeepHE with several widely used traditional machine learning models (SVM, Naïve Bayes, Random Forest, Adaboost).
The experimental results show that DeepHE greatly outperforms the compared machine learning models. Conclusions: We demonstrate that human essential genes can be accurately predicted by designing an effective machine learning algorithm and integrating representative features captured from available biological data. The proposed deep learning framework is effective for such a task. Availability and Implementation: The Python code will be freely available upon the acceptance of this manuscript at https://github.com/xzhang2016/ Contact: [email protected]


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Tamer N. Jarada ◽  
Jon G. Rokne ◽  
Reda Alhajj

Abstract Background: Drug repositioning is an emerging approach in pharmaceutical research for identifying novel therapeutic potential in approved drugs and discovering therapies for untreated diseases. Due to its time and cost efficiency, drug repositioning plays an instrumental role in optimizing the drug development process compared to the traditional de novo drug discovery process. Advances in genomics, together with the enormous growth of large-scale publicly available data and the availability of high-performance computing capabilities, have further motivated the development of computational drug repositioning approaches. More recently, the rise of machine learning techniques, together with the availability of powerful computers, has made computational drug repositioning an area of intense activity. Results: In this study, a novel deep learning based framework, SNF-NN, is presented, in which novel drug-disease interactions are predicted using drug-related similarity information, disease-related similarity information, and known drug-disease interactions. Heterogeneous similarity information related to drugs and diseases is fed to the proposed framework in order to predict novel drug-disease interactions. SNF-NN uses similarity selection, similarity network fusion, and a highly tuned novel neural network model to predict new drug-disease interactions. The robustness of SNF-NN is evaluated by comparing its performance with nine baseline machine learning methods. The proposed framework outperforms all baseline methods (AUC-ROC = 0.867 and AUC-PR = 0.876) using stratified 10-fold cross-validation. To further demonstrate the reliability and robustness of SNF-NN, two datasets are used to fairly validate the proposed framework's performance against seven recent state-of-the-art methods for drug-disease interaction prediction.
SNF-NN achieves remarkable performance in stratified 10-fold cross-validation, with AUC-ROC ranging from 0.879 to 0.931 and AUC-PR from 0.856 to 0.903. Moreover, the efficiency of SNF-NN is verified by validating predicted unknown drug-disease interactions against clinical trials and published studies. Conclusion: Computational drug repositioning research can significantly benefit from integrating similarity measures in heterogeneous networks and deep learning models for predicting novel drug-disease interactions. The data and implementation of SNF-NN are available at http://pages.cpsc.ucalgary.ca/~tnjarada/snf-nn.php.
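The evaluation protocol reported here (stratified 10-fold cross-validation with AUC-ROC and AUC-PR) can be sketched generically with scikit-learn. The synthetic imbalanced data and the logistic-regression stand-in below are assumptions for illustration, not SNF-NN itself:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

# synthetic stand-in for a drug-disease interaction table:
# 500 candidate pairs, 20 similarity-derived features, ~20% known interactions
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.8], random_state=0)

aucs, aps = [], []
skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))            # AUC-ROC
    aps.append(average_precision_score(y[test_idx], scores))   # AUC-PR
print(f"AUC-ROC {np.mean(aucs):.3f}  AUC-PR {np.mean(aps):.3f}")
```

Stratification keeps the known-interaction rate roughly constant across folds, which matters because drug-disease interaction data are heavily imbalanced and AUC-PR is sensitive to the positive-class prevalence in each test fold.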


2018 ◽  
Vol 20 (5) ◽  
pp. 1878-1912 ◽  
Author(s):  
Ahmet Sureyya Rifaioglu ◽  
Heval Atas ◽  
Maria Jesus Martin ◽  
Rengul Cetin-Atalay ◽  
Volkan Atalay ◽  
...  

Abstract The identification of interactions between drugs/compounds and their targets is crucial for the development of new drugs. In vitro screening experiments (i.e. bioassays) are frequently used for this purpose; however, experimental approaches are insufficient to explore novel drug-target interactions, mainly because of feasibility problems, as they are labour-intensive, costly, and time-consuming. A computational field known as 'virtual screening' (VS) has emerged in the past decades to aid experimental drug discovery studies by statistically estimating unknown bio-interactions between compounds and biological targets. These methods use the physico-chemical and structural properties of compounds and/or target proteins along with experimentally verified bio-interaction information to generate predictive models. Lately, sophisticated machine learning techniques have been applied in VS to elevate predictive performance. The objective of this study is to examine and discuss the recent applications of machine learning techniques in VS, including deep learning, which became highly popular after giving rise to epochal developments in the fields of computer vision and natural language processing. The past 3 years have witnessed an unprecedented amount of research considering the application of deep learning in biomedicine, including computational drug discovery. In this review, we first describe the main instruments of VS methods, including compound and protein features (i.e. representations and descriptors), frequently used libraries and toolkits for VS, bioactivity databases, and gold-standard data sets for system training and benchmarking. We subsequently review recent VS studies with a strong emphasis on deep learning applications. Finally, we discuss the present state of the field, including the current challenges, and suggest future directions.
We believe that this survey will provide insight to the researchers working in the field of computational drug discovery in terms of comprehending and developing novel bio-prediction methods.


Author(s):  
Teresa Reynolds Sousa ◽  
João Rema ◽  
Sergio Machado ◽  
Filipa Novais

Background: The therapeutic options for neurobehavioral disorders are still limited, and in many cases, they lack a satisfactory balance between efficacy and side effects. Objective: This work aims to review current evidence regarding the potential contribution of psychedelics and hallucinogens to the discovery of new drugs for treating different psychiatric disorders. Discussion: Ayahuasca/N,N-dimethyltryptamine (DMT), lysergic acid diethylamide (LSD), and psilocybin have evidence supporting their use in depression, and psilocybin and ayahuasca have also shown good results in treatment-resistant depression. In randomized controlled trials (RCTs) conducted with anxious patients, there were symptomatic improvements with psilocybin and LSD. Psilocybin diminished Yale–Brown Obsessive Compulsive Scale (Y-BOCS) scores in a small obsessive–compulsive disorder (OCD) sample. The evidence is less robust regarding substance use disorders, but it suggests a possible role for LSD and psilocybin in alcohol use disorders and for psilocybin in tobacco addiction. In a clinical setting, these substances seem to be safe and well-tolerated. Their mechanisms of action are not fully elucidated, but there seems to be a preponderant role of 5-hydroxytryptamine (5-HT) 2A receptor agonism, as well as connectivity changes within the default mode network (DMN) and amygdala, and some other molecular modifications. Conclusion: The studies underlying these conclusions have small samples and are heterogeneous in their methods. However, the results suggest that the use of psychedelics and hallucinogens could be considered in some disorders. More studies are needed to reinforce their evidence as potential new drugs.


2021 ◽  
Vol 17 ◽  
Author(s):  
Prashanth Kulkarni ◽  
Manjappa Mahadevappa ◽  
Srikar Chilakamarri

Artificial intelligence technology is emerging as a promising entity in cardiovascular medicine, potentially improving diagnosis and patient care. In this article, we review the literature on artificial intelligence and its utility in cardiology. We provide a detailed description of the concepts behind artificial intelligence tools such as machine learning, deep learning, and cognitive computing. This review discusses the current evidence, applications, prospects, and limitations of artificial intelligence in cardiology.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Kalyanaraman Vaidyanathan ◽  
Chuangqi Wang ◽  
Amanda Krajnik ◽  
Yudong Yu ◽  
Moses Choi ◽  
...  

Abstract Machine learning approaches have shown great promise in biology and medicine, discovering hidden information that furthers our understanding of complex biological and pathological processes. In this study, we developed a deep learning-based machine learning algorithm to meaningfully process image data and facilitate studies in vascular biology and pathology. Vascular injury and atherosclerosis are characterized by neointima formation caused by the aberrant accumulation and proliferation of vascular smooth muscle cells (VSMCs) within the vessel wall. Understanding how to control VSMC behaviors would promote the development of therapeutic targets to treat vascular diseases. However, the response to drug treatments among VSMCs with the same diseased vascular condition is often heterogeneous. Here, to identify the heterogeneous responses to drug treatments, we created an in vitro experimental model system using VSMC spheroids and developed a machine learning-based computational method called HETEROID (heterogeneous spheroid). First, we established a VSMC spheroid model that mimics neointima-like formation and the structure of arteries. Then, to identify the morphological subpopulations of drug-treated VSMC spheroids, we used a machine learning framework that combines deep learning-based spheroid segmentation and morphological clustering analysis. Our machine learning approach successfully showed that FAK, Rac, Rho, and Cdc42 inhibitors differentially affect spheroid morphology, suggesting that multiple drug responses of VSMC spheroid formation exist. Overall, our HETEROID pipeline enables detailed quantitative drug characterization of morphological changes in neointima formation, as occurs in vivo, via single-spheroid analysis.
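The morphological-clustering step can be illustrated in miniature: cluster per-spheroid shape descriptors and check whether distinct drug-response phenotypes separate. The descriptors (area, perimeter, circularity) and the two simulated phenotype groups below are hypothetical stand-ins, not data or code from the study:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# hypothetical morphology descriptors per segmented spheroid:
# [area, perimeter, circularity]; two simulated drug-response phenotypes
compact = rng.normal([1.0, 3.5, 0.9], 0.05, size=(40, 3))
irregular = rng.normal([1.4, 6.0, 0.5], 0.05, size=(40, 3))
features = np.vstack([compact, irregular])

# unsupervised clustering recovers the morphological subpopulations
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(sorted(np.bincount(labels)))
```

In practice the descriptors would come from a segmentation network rather than a simulator, but the principle is the same: if a drug induces a distinct morphology, its spheroids should occupy their own cluster in descriptor space.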


2021 ◽  
Author(s):  
Revathi B. S. ◽  
A. Meena Kowshalya

Abstract Image Captioning is the process of generating textual descriptions of an image. These descriptions need to be syntactically and semantically correct. Image Captioning has potential advantages in many applications, such as image indexing techniques, devices for visually impaired persons, social media, and several other natural language processing applications. Image Captioning is a popular research area in which numerous opportunities for new findings exist in the preparation of datasets, the generation of language models, and the development and evaluation of captioning models. This paper extensively surveys the literature, from the very earliest work onward, covering the advent of Artificial Intelligence, the Machine Learning pathway, the photography era, early Deep Learning, and current Deep Learning methodology for Image Captioning. This survey will help novice researchers understand the roadmap to current techniques.

