A multimodal deep learning framework for predicting drug–drug interaction events

2020 ◽  
Vol 36 (15) ◽  
pp. 4316-4322 ◽  
Author(s):  
Yifan Deng ◽  
Xinran Xu ◽  
Yang Qiu ◽  
Jingbo Xia ◽  
Wen Zhang ◽  
...  

Abstract

Motivation: Drug–drug interactions (DDIs) are one of the major concerns in pharmaceutical research. Many machine learning-based methods have been proposed for DDI prediction, but most of them predict only whether two drugs interact. Studies have revealed that DDIs can cause different subsequent events, and predicting DDI-associated events is more useful for investigating the mechanisms behind combined drug usage or adverse reactions.

Results: In this article, we collect DDIs from the DrugBank database and extract 65 categories of DDI events by dependency analysis and event trimming. We propose a multimodal deep learning framework named DDIMDL that combines diverse drug features with deep learning to build a model for predicting DDI-associated events. DDIMDL first constructs deep neural network (DNN)-based sub-models, one for each of four types of drug features: chemical substructures, targets, enzymes and pathways. It then adopts a joint DNN framework that combines the sub-models to learn cross-modality representations of drug–drug pairs and predict DDI events. In computational experiments, DDIMDL achieves high accuracy and efficiency, and outperforms state-of-the-art DDI event prediction methods and baseline methods. Among all the drug features, the chemical substructures appear to be the most informative. With the combination of substructures, targets and enzymes, DDIMDL achieves an accuracy of 0.8852 and an area under the precision–recall curve of 0.9208.

Availability and implementation: The source code and data are available at https://github.com/YifanDengWHU/DDIMDL.

Supplementary information: Supplementary data are available at Bioinformatics online.
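The architecture described above lends itself to a compact illustration. Below is a minimal PyTorch sketch of the general pattern (one DNN sub-model per feature modality, fused by a joint head over the 65 event classes); the layer sizes, dropout rate and feature dimensions are illustrative assumptions, not the published DDIMDL hyperparameters.

```python
# Minimal sketch of a multimodal architecture in the spirit of DDIMDL:
# one DNN sub-model per drug-feature modality, fused by a joint head.
# Feature dimensions and layer sizes are illustrative, not the paper's.
import torch
import torch.nn as nn

NUM_EVENT_TYPES = 65  # DDI event categories reported in the abstract

class SubModel(nn.Module):
    """DNN over one modality (e.g. substructure, target or enzyme profile)."""
    def __init__(self, in_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Dropout(0.3),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class MultimodalDDI(nn.Module):
    """Joint network combining the per-modality representations."""
    def __init__(self, modality_dims):
        super().__init__()
        self.submodels = nn.ModuleList(SubModel(d) for d in modality_dims)
        self.head = nn.Sequential(
            nn.Linear(256 * len(modality_dims), 256), nn.ReLU(),
            nn.Linear(256, NUM_EVENT_TYPES),
        )
    def forward(self, modalities):
        # Each element of `modalities` is a feature vector for one drug pair
        # in one modality (e.g. concatenated fingerprints of the two drugs).
        z = torch.cat([m(x) for m, x in zip(self.submodels, modalities)], dim=-1)
        return self.head(z)  # event logits; apply softmax for probabilities

# Example with three hypothetical modalities (substructures, targets, enzymes)
model = MultimodalDDI([1024, 600, 200])
feats = [torch.randn(8, 1024), torch.randn(8, 600), torch.randn(8, 200)]
logits = model(feats)  # shape: (8, 65)
```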

2018 ◽  
Vol 35 (13) ◽  
pp. 2216-2225 ◽  
Author(s):  
Abdurrahman Elbasir ◽  
Balasubramanian Moovarkumudalvan ◽  
Khalid Kunji ◽  
Prasanna R Kolatkar ◽  
Raghvendra Mall ◽  
...  

Abstract

Motivation: Protein structure determination has primarily been performed using X-ray crystallography. To overcome the high cost, high attrition rate and series of trial-and-error settings, many in-silico methods have been developed to predict the crystallization propensity of a protein from its sequence. However, the majority of these methods build their predictors by extracting features from protein sequences, which is computationally expensive and can explode the feature space. We propose DeepCrystal, a deep learning framework for sequence-based protein crystallization prediction. It uses deep learning to identify proteins that can produce diffraction-quality crystals, without the need to manually engineer additional biochemical and structural features from the sequence. Our model is based on convolutional neural networks, which exploit frequently occurring k-mers and sets of k-mers from protein sequences to distinguish proteins that will yield diffraction-quality crystals from those that will not.

Results: Our model surpasses previous sequence-based protein crystallization predictors in terms of recall, F-score, accuracy and Matthews correlation coefficient (MCC) on three independent test sets. DeepCrystal achieves average improvements of 1.4% and 12.1% in recall compared with its closest competitors, Crysalis II and Crysf, respectively. In addition, DeepCrystal attains average improvements of 2.1% and 6.0% in F-score, 1.9% and 3.9% in accuracy, and 3.8% and 7.0% in MCC with respect to Crysalis II and Crysf on the independent test sets.

Availability and implementation: The standalone source code and models are available at https://github.com/elbasir/DeepCrystal, and a web server is available at https://deeplearning-protein.qcri.org.

Supplementary information: Supplementary data are available at Bioinformatics online.
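As a rough illustration of the k-mer idea, the sketch below uses parallel 1D convolutions of different widths over an embedded amino-acid sequence, so that each kernel width acts as a detector for k-mers of that length. The vocabulary size, sequence length, kernel widths and channel counts are assumptions for the sketch, not DeepCrystal's published configuration.

```python
# Illustrative sequence-CNN sketch: parallel 1D convolutions with
# different kernel widths act as detectors for frequently occurring
# k-mers. All sizes below are assumptions, not published hyperparameters.
import torch
import torch.nn as nn

AA_VOCAB = 21      # 20 amino acids + padding/unknown
MAX_LEN = 800      # assumed maximum sequence length

class KmerCNN(nn.Module):
    def __init__(self, kernel_sizes=(2, 3, 5, 7), channels=64):
        super().__init__()
        self.embed = nn.Embedding(AA_VOCAB, 32, padding_idx=0)
        self.convs = nn.ModuleList(
            nn.Conv1d(32, channels, k, padding=k // 2) for k in kernel_sizes
        )
        self.fc = nn.Linear(channels * len(kernel_sizes), 1)

    def forward(self, seq_idx):
        x = self.embed(seq_idx).transpose(1, 2)          # (B, 32, L)
        # Max-pool each k-mer detector over the whole sequence.
        pooled = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.fc(torch.cat(pooled, dim=1))         # crystallization logit

model = KmerCNN()
batch = torch.randint(1, AA_VOCAB, (4, MAX_LEN))         # 4 encoded sequences
print(torch.sigmoid(model(batch)).shape)                 # (4, 1) probabilities
```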


2019 ◽  
Vol 35 (24) ◽  
pp. 5191-5198 ◽  
Author(s):  
Xiangxiang Zeng ◽  
Siyi Zhu ◽  
Xiangrong Liu ◽  
Yadi Zhou ◽  
Ruth Nussinov ◽  
...  

Abstract

Motivation: Traditional drug discovery and development are often time-consuming and high risk. Repurposing/repositioning of approved drugs offers a relatively low-cost, high-efficiency approach toward the rapid development of efficacious treatments. The emergence of large-scale, heterogeneous biological networks has offered unprecedented opportunities for developing in silico drug repositioning approaches. However, capturing the highly non-linear, heterogeneous structure of these networks has been challenging for most existing drug repositioning approaches.

Results: In this study, we developed a network-based deep learning approach, termed deepDR, for in silico drug repurposing by integrating 10 networks: one drug–disease, one drug–side-effect, one drug–target and seven drug–drug networks. Specifically, deepDR learns high-level features of drugs from the heterogeneous networks via a multi-modal deep autoencoder. The learned low-dimensional drug representations, together with clinically reported drug–disease pairs, are then encoded and decoded collectively via a variational autoencoder to infer new candidate indications for approved drugs. We found that deepDR achieved high performance [area under the receiver operating characteristic curve (AUROC) = 0.908], outperforming conventional network-based and machine learning-based approaches. Importantly, deepDR-predicted drug–disease associations were validated against the ClinicalTrials.gov database (AUROC = 0.826), and we showcased several novel deepDR-predicted candidates among approved drugs for Alzheimer's disease (e.g. risperidone and aripiprazole) and Parkinson's disease (e.g. methylphenidate and pergolide).

Availability and implementation: Source code and data can be downloaded from https://github.com/ChengF-Lab/deepDR.

Supplementary information: Supplementary data are available at Bioinformatics online.
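The two-stage design can be caricatured as follows: a multi-modal autoencoder fuses the per-network drug features into one embedding, and a variational autoencoder over drug–disease association vectors, conditioned on that embedding, scores candidate indications. This is a hedged sketch of the general idea only; the dimensions, the conditioning scheme and the class names (MultiModalAE, AssociationVAE) are assumptions, not the deepDR code.

```python
# Hedged two-stage sketch: (1) fuse per-network drug features with a
# multi-modal autoencoder; (2) a VAE over drug-disease association
# vectors, conditioned on the fused embedding. Dimensions are assumed.
import torch
import torch.nn as nn

class MultiModalAE(nn.Module):
    """Fuses several network-derived drug feature vectors into one embedding."""
    def __init__(self, dims, latent=128):
        super().__init__()
        self.encoders = nn.ModuleList(nn.Linear(d, 64) for d in dims)
        self.fuse = nn.Linear(64 * len(dims), latent)
        self.decoders = nn.ModuleList(nn.Linear(latent, d) for d in dims)
    def forward(self, xs):
        h = [torch.relu(e(x)) for e, x in zip(self.encoders, xs)]
        z = self.fuse(torch.cat(h, -1))
        return z, [d(z) for d in self.decoders]  # embedding + reconstructions

class AssociationVAE(nn.Module):
    """VAE over a drug's disease-association vector, conditioned on its embedding."""
    def __init__(self, n_diseases, latent=64, cond=128):
        super().__init__()
        self.enc = nn.Linear(n_diseases + cond, 2 * latent)
        self.dec = nn.Linear(latent + cond, n_diseases)
    def forward(self, assoc, cond):
        mu, logvar = self.enc(torch.cat([assoc, cond], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(torch.cat([z, cond], -1)), mu, logvar  # disease logits

# Toy usage with three hypothetical feature networks and 1,000 diseases
ae = MultiModalAE([500, 300, 400])
z, _ = ae([torch.randn(2, 500), torch.randn(2, 300), torch.randn(2, 400)])
vae = AssociationVAE(n_diseases=1000)
scores, mu, logvar = vae(torch.rand(2, 1000), z)
```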


Sensors ◽  
2021 ◽  
Vol 21 (18) ◽  
pp. 6108
Author(s):  
Sukhan Lee ◽  
Yongjun Yang

Deep learning approaches that estimate full 3D object orientations, in addition to object classes, are limited in accuracy because the continuous nature of three-axis orientation variation is difficult to learn with sufficient generalization by either regression or classification. This paper presents a novel progressive deep learning framework, herein referred to as 3D POCO Net, that offers high accuracy in estimating orientations about the three rotational axes while keeping network complexity low. 3D POCO Net comprises four PointNet-based networks that independently represent the object class and the three individual rotation axes. The four independent networks are linked by association subnetworks, trained to progressively map the global features learned by the individual networks one after another so as to fine-tune them. High accuracy is achieved by combining high-precision classification over a large number of orientation classes with regression based on a weighted sum of the classification outputs, while high efficiency is maintained by the progressive framework, in which the many orientation classes are partitioned across the independent networks linked by association subnetworks. We implemented 3D POCO Net for full three-axis orientation variation and trained it on about 146 million orientation variations augmented from the ModelNet10 dataset. The test results show an orientation regression error of about 2.5° with about 90% object classification accuracy for general three-axis orientation estimation and object classification. Furthermore, we demonstrate that a pre-trained 3D POCO Net can serve as an orientation representation platform on which the orientations and object classes of partial point clouds from occluded objects are learned via transfer learning.
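The key trick of combining classification with regression can be shown in isolation. The sketch below decodes a continuous angle for one rotation axis as the probability-weighted mean of discrete orientation-bin centers; the bin count is an assumption, and a production version would need a circular mean to handle the 0°/360° wrap-around.

```python
# Sketch of "regression as a weighted sum of classification outputs" for
# one rotation axis: the network predicts a distribution over discrete
# orientation bins; a continuous angle is the probability-weighted mean
# of the bin centers. Bin count is an assumption; a circular mean would
# be needed near the 0/360-degree wrap-around (omitted for brevity).
import torch

NUM_BINS = 360                               # e.g. 1-degree orientation classes
bin_centers = torch.arange(NUM_BINS) * (360.0 / NUM_BINS)

def decode_angle(logits: torch.Tensor) -> torch.Tensor:
    """Continuous angle estimate from per-bin classification logits."""
    probs = torch.softmax(logits, dim=-1)    # (B, NUM_BINS)
    return probs @ bin_centers               # (B,) weighted sum of bin centers

logits = torch.randn(4, NUM_BINS)            # stand-in for a PointNet head
print(decode_angle(logits))
```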


2020 ◽  
Author(s):  
Raniyaharini R ◽  
Madhumitha K ◽  
Mishaa S ◽  
Virajaravi R

2020 ◽  
Author(s):  
Jinseok Lee

BACKGROUND: Coronavirus disease (COVID-19) has spread explosively worldwide since the beginning of 2020. According to a multinational consensus statement from the Fleischner Society, computed tomography (CT) can be used as a relevant screening tool owing to its high sensitivity for detecting early pneumonic changes. However, physicians are extremely busy fighting COVID-19 in this era of worldwide crisis, so it is crucial to accelerate the development of artificial intelligence (AI) diagnostic tools that support them.

OBJECTIVE: We aimed to rapidly develop an AI technique to diagnose COVID-19 pneumonia on CT and differentiate it from non-COVID pneumonia and non-pneumonia diseases.

METHODS: A simple 2D deep learning framework, named the fast-track COVID-19 classification network (FCONet), was developed to diagnose COVID-19 pneumonia from a single chest CT image. FCONet was developed by transfer learning, using one of four state-of-the-art pre-trained deep learning models (VGG16, ResNet50, InceptionV3 or Xception) as a backbone. For training and testing FCONet, we collected 3,993 chest CT images of patients with COVID-19 pneumonia, other pneumonia, and non-pneumonia diseases from Wonkwang University Hospital, Chonnam National University Hospital, and the Italian Society of Medical and Interventional Radiology public database. These CT images were split into training and test sets at a ratio of 8:2. On the test set, the diagnostic performance for COVID-19 pneumonia was compared among the four pre-trained FCONet models. In addition, we evaluated the FCONet models on an external test dataset extracted from the embedded low-quality chest CT images of COVID-19 pneumonia in recently published papers.

RESULTS: Of the four pre-trained FCONet models, ResNet50 showed excellent diagnostic performance (sensitivity 99.58%, specificity 100%, accuracy 99.87%) and outperformed the other three pre-trained models on the test set. On the external test dataset of low-quality CT images, the detection accuracy of the ResNet50 model was again the highest (96.97%), followed by Xception, InceptionV3 and VGG16 (90.71%, 89.38% and 87.12%, respectively).

CONCLUSIONS: FCONet, a simple 2D deep learning framework that operates on a single chest CT image, provides excellent diagnostic performance in detecting COVID-19 pneumonia. Based on our test dataset, the ResNet50-based FCONet appears to be the best model, as it outperformed the FCONet models based on VGG16, Xception and InceptionV3.
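The transfer-learning recipe the abstract describes (a pre-trained backbone with a new classification head) is standard; a minimal PyTorch/torchvision sketch with a ResNet50 backbone and a three-class head (COVID-19 pneumonia, other pneumonia, non-pneumonia) follows. Freezing the backbone and the input preprocessing shown here are illustrative choices, not details confirmed by the paper.

```python
# Minimal transfer-learning sketch: a pre-trained ResNet50 backbone with
# a new 3-class head. The original work compared four backbones; the
# freezing strategy and input handling below are illustrative only.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():                     # optionally freeze backbone
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)    # new trainable head

# A single CT slice, replicated to 3 channels to match ImageNet input.
x = torch.randn(1, 3, 224, 224)
logits = model(x)                                # (1, 3) class scores
```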

