DeepDrug: A general graph-based deep learning framework for drug relation prediction

2020 ◽  
Author(s):  
Xusheng Cao ◽  
Rui Fan ◽  
Wanwen Zeng

Abstract Computational approaches for the accurate prediction of drug-related interactions, such as drug-drug interactions (DDIs) and drug-target interactions (DTIs), are in high demand among biochemical researchers because of their efficiency and cost-effectiveness. Although many methods have been proposed to predict DDIs and DTIs respectively, their success is still limited by the lack of a systematic evaluation of the intrinsic properties embedded in the corresponding structures. In this paper, we develop a deep learning framework, named DeepDrug, to overcome these shortcomings by using graph convolutional networks to learn graphical representations of drugs and proteins, such as molecular fingerprints and residual structures, in order to boost the prediction accuracy. We benchmark our method on binary-class DDI, multi-class DDI and binary-class DTI classification tasks using several datasets. We then demonstrate that DeepDrug outperforms other state-of-the-art published methods in terms of both accuracy and robustness in predicting DDIs and DTIs with varying ratios of positive to negative training data. Finally, we visualize the structural features learned by DeepDrug, which display compatible and accordant patterns in chemical properties, providing additional evidence to support the strong predictive power of DeepDrug. We believe that DeepDrug is an efficient tool for the accurate prediction of DDIs and DTIs and provides a promising path toward understanding the underlying mechanisms of these biochemical relations. The source code of DeepDrug can be downloaded from https://github.com/wanwenzeng/deepdrug.
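As a concrete illustration (not the released DeepDrug code), the sketch below shows how a graph convolutional encoder can turn a drug's molecular graph into a fixed-size embedding and how two embeddings can be combined to score a drug-drug (or drug-target) pair. The layer sizes, the symmetric adjacency normalization and the class names are assumptions made purely for illustration.

```python
# Hypothetical sketch of a GCN drug encoder with a pairwise relation head.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_atoms, in_dim) atom features; adj: (num_atoms, num_atoms) adjacency
        adj_hat = adj + torch.eye(adj.size(0))                 # add self-loops
        deg_inv_sqrt = adj_hat.sum(dim=1).pow(-0.5)
        norm_adj = deg_inv_sqrt[:, None] * adj_hat * deg_inv_sqrt[None, :]
        return torch.relu(self.linear(norm_adj @ x))           # aggregate neighbors, then transform

class DrugEncoder(nn.Module):
    def __init__(self, atom_dim=32, hidden_dim=64):
        super().__init__()
        self.gcn1 = GCNLayer(atom_dim, hidden_dim)
        self.gcn2 = GCNLayer(hidden_dim, hidden_dim)

    def forward(self, x, adj):
        h = self.gcn2(self.gcn1(x, adj), adj)
        return h.mean(dim=0)                                   # mean-pool atoms into a graph embedding

class PairClassifier(nn.Module):
    """Scores a drug-drug (or drug-target) pair from two graph embeddings."""
    def __init__(self, hidden_dim=64, num_classes=2):
        super().__init__()
        self.encoder = DrugEncoder(hidden_dim=hidden_dim)
        self.head = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, xa, adja, xb, adjb):
        za, zb = self.encoder(xa, adja), self.encoder(xb, adjb)
        return self.head(torch.cat([za, zb]))                  # logits over relation classes
```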

Author(s):  
J. Joshua Thomas ◽  
Tran Huu Ngoc Tran ◽  
Gilberto Pérez Lechuga ◽  
Bahari Belaton

Applying deep learning to pervasive graph data is significant because of the unique characteristics of graphs. Recently, substantial research effort has focused on this area, greatly advancing graph-analysis techniques. In this study, the authors comprehensively review different kinds of deep learning methods applied to graphs. They organize the existing literature into sub-components, covering graph convolutional networks, graph autoencoders, and recent trends in the chemoinformatics research area, including molecular fingerprints and drug discovery. They further experiment with a variational autoencoder (VAE), analyze how these methods apply to drug-target interaction (DTI) prediction, give a brief outline of how they assist the drug discovery pipeline, and discuss potential research directions.
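As a toy illustration of the VAE component mentioned above (dimensions and architecture are assumptions, not taken from the chapter), a variational autoencoder over a binary molecular fingerprint could look like this:

```python
# Hypothetical sketch: a small VAE over a 2048-bit molecular fingerprint.
import torch
import torch.nn as nn

class FingerprintVAE(nn.Module):
    def __init__(self, fp_dim=2048, latent_dim=64):
        super().__init__()
        self.enc = nn.Linear(fp_dim, 256)
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                 nn.Linear(256, fp_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(z), mu, logvar

def vae_loss(x, recon_logits, mu, logvar):
    # Bernoulli reconstruction term plus KL divergence to a standard normal prior.
    bce = nn.functional.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

The learned latent vectors can then serve as compact drug representations for a downstream DTI classifier.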


2020 ◽  
Vol 8 ◽  
Author(s):  
Adil Khadidos ◽  
Alaa O. Khadidos ◽  
Srihari Kannan ◽  
Yuvaraj Natarajan ◽  
Sachi Nandan Mohanty ◽  
...  

In this paper, a data mining model on a hybrid deep learning framework is designed to diagnose the medical conditions of patients infected with the coronavirus disease 2019 (COVID-19) virus. The hybrid deep learning model, named DeepSense, combines a convolutional neural network (CNN) with a recurrent neural network (RNN). It is designed as a series of layers that extract and classify the features of COVID-19 infection from the lungs. Computed tomography images are used as the input data, and the classifier is designed to ease the classification process by learning the multidimensional input data through Expert Hidden layers. The model is validated against medical image datasets to predict infections using deep learning classifiers. The results show that the DeepSense classifier achieves higher accuracy than conventional deep learning and machine learning classifiers. The proposed method is validated against three different datasets with training splits of 70%, 80%, and 90%, and it specifically characterizes the quality of the diagnostic method adopted for the prediction of COVID-19 infection in a patient.
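A minimal sketch of a CNN-plus-RNN hybrid of the kind described, assuming each patient contributes a sequence of CT slices; the layer sizes and the class name DeepSenseSketch are illustrative and do not reproduce the published DeepSense architecture.

```python
# Hypothetical CNN + RNN hybrid: a CNN embeds each CT slice, a GRU aggregates the slice sequence.
import torch
import torch.nn as nn

class DeepSenseSketch(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                          # per-slice feature vector of size 32
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        # x: (batch, slices, 1, H, W) stack of CT slices per patient
        b, t = x.shape[:2]
        feats = self.cnn(x.flatten(0, 1)).flatten(1)          # (batch * slices, 32)
        _, h = self.rnn(feats.view(b, t, -1))                 # final hidden state: (1, batch, 64)
        return self.fc(h.squeeze(0))                          # class logits per patient
```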


2019 ◽  
Vol 11 (6) ◽  
pp. 684 ◽  
Author(s):  
Maria Papadomanolaki ◽  
Maria Vakalopoulou ◽  
Konstantinos Karantzalos

Deep learning architectures have received much attention in recent years, demonstrating state-of-the-art performance in several segmentation, classification and other computer vision tasks. Most of these deep networks are based on either convolutional or fully convolutional architectures. In this paper, we propose a novel object-based deep-learning framework for semantic segmentation in very high-resolution satellite data. In particular, we exploit object-based priors integrated into a fully convolutional neural network by incorporating an anisotropic diffusion data preprocessing step and an additional loss term during the training process. Under this constrained framework, the goal is to enforce that pixels belonging to the same object are classified into the same semantic category. We compared the novel object-based framework thoroughly with the currently dominating convolutional and fully convolutional deep networks. In particular, numerous experiments were conducted on the publicly available ISPRS WGII/4 benchmark datasets, namely Vaihingen and Potsdam, for validation and inter-comparison based on a variety of metrics. Quantitatively, the experimental results indicate that, overall, the proposed object-based framework outperformed the current state-of-the-art fully convolutional networks by more than 1% in terms of overall accuracy, while intersection-over-union results improved for all semantic categories. Qualitatively, man-made classes with stricter geometry, such as buildings, benefited most from our method, especially along object boundaries, highlighting the great potential of the developed approach.
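A hedged sketch of how the object-based constraint could be expressed as an extra loss term: alongside the usual cross-entropy, penalize disagreement between each pixel's predicted class distribution and the mean distribution of the (precomputed) object it belongs to. The weighting factor lambda_obj and the per-object variance penalty are assumptions, not the paper's exact formulation.

```python
# Hypothetical object-consistency loss for semantic segmentation.
import torch
import torch.nn.functional as F

def object_consistency_loss(logits, labels, object_ids, lambda_obj=0.1):
    # logits: (B, C, H, W); labels: (B, H, W); object_ids: (B, H, W) segment index per pixel
    ce = F.cross_entropy(logits, labels)
    probs = logits.softmax(dim=1)
    consistency = 0.0
    for b in range(logits.size(0)):
        for obj in object_ids[b].unique():
            mask = object_ids[b] == obj                       # pixels belonging to this object
            p = probs[b][:, mask]                             # (C, n_pixels_in_object)
            consistency = consistency + ((p - p.mean(dim=1, keepdim=True)) ** 2).mean()
    return ce + lambda_obj * consistency / logits.size(0)
```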


2018 ◽  
Vol 35 (13) ◽  
pp. 2216-2225 ◽  
Author(s):  
Abdurrahman Elbasir ◽  
Balasubramanian Moovarkumudalvan ◽  
Khalid Kunji ◽  
Prasanna R Kolatkar ◽  
Raghvendra Mall ◽  
...  

Abstract Motivation: Protein structure determination has primarily been performed using X-ray crystallography. To overcome the expensive cost, high attrition rate and series of trial-and-error settings, many in-silico methods have been developed to predict the crystallization propensity of a protein from its sequence. However, the majority of these methods build their predictors by extracting features from protein sequences, which is computationally expensive and can explode the feature space. We propose DeepCrystal, a deep learning framework for sequence-based protein crystallization prediction. It uses deep learning to identify proteins that can produce diffraction-quality crystals without the need to manually engineer additional biochemical and structural features from the sequence. Our model is based on convolutional neural networks, which exploit frequently occurring k-mers and sets of k-mers from the protein sequences to distinguish proteins that will yield diffraction-quality crystals from those that will not. Results: Our model surpasses previous sequence-based protein crystallization predictors in terms of recall, F-score, accuracy and Matthews correlation coefficient (MCC) on three independent test sets. DeepCrystal achieves average improvements of 1.4% and 12.1% in recall when compared to its closest competitors, Crysalis II and Crysf, respectively. In addition, DeepCrystal attains average improvements of 2.1% and 6.0% in F-score, 1.9% and 3.9% in accuracy, and 3.8% and 7.0% in MCC w.r.t. Crysalis II and Crysf on the independent test sets. Availability and implementation: The standalone source code and models are available at https://github.com/elbasir/DeepCrystal and a web server is also available at https://deeplearning-protein.qcri.org. Supplementary information: Supplementary data are available at Bioinformatics online.
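A minimal sketch (not the released DeepCrystal code) of how parallel 1-D convolutions can act as learned k-mer detectors over a one-hot encoded protein sequence; the filter counts, kernel widths and class name are assumptions.

```python
# Hypothetical sequence-only crystallization predictor built from 1-D convolutions.
import torch
import torch.nn as nn

class CrystallizationCNN(nn.Module):
    def __init__(self, num_amino_acids=20, kernel_sizes=(3, 5, 7), filters=64):
        super().__init__()
        # One convolution per kernel width, each scanning for informative k-mers.
        self.convs = nn.ModuleList(
            nn.Conv1d(num_amino_acids, filters, k, padding=k // 2) for k in kernel_sizes
        )
        self.fc = nn.Linear(filters * len(kernel_sizes), 1)   # diffraction-quality vs. not

    def forward(self, x):
        # x: (batch, 20, seq_len) one-hot encoded protein sequence
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        return self.fc(torch.cat(pooled, dim=1))              # raw logit for crystallization
```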


Author(s):  
Subasish Das ◽  
Anandi Dutta ◽  
Karen Dixon ◽  
Lisa Minjares-Kyle ◽  
George Gillette

Motorcyclists are vulnerable highway users. Unlike passenger vehicle occupants, motorcycle riders have neither the protective structural surroundings nor the advanced restraints that are mandatory safety features in cars and light trucks. Per vehicle mile traveled, motorcyclist fatalities occur 27 times more frequently than passenger car occupant fatalities in traffic crashes. In addition, there were 4,976 motorcycle crash-related fatalities in the U.S. in 2014, more than twice the number of motorcycle rider fatalities that occurred in 1997. This shows that, in addition to current efforts, research needs to be conducted with additional resources and in newer directions. This paper investigated five years (2010–2014) of Louisiana at-fault motorcycle rider-involved crashes using deep learning, a capable tool for mapping a high-dimensional input into a lower-dimensional output. The current study contributes to the existing injury severity modeling literature by developing a deep learning framework, named DeepScooter, to predict motorcycle-involved crash severities. The final deep learning model predicts severity types with 100% accuracy on the training data and 94% accuracy on the test data, performance not attained by the statistical and machine learning methods compared. The intensity of severities was found to be more likely associated with rider ejection, two-way roadways with no physical separation, curved roadway alignments, and weekends. It is anticipated that the DeepScooter framework and the findings will provide significant contributions to the area of motorcycle safety.
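For illustration only (the abstract does not specify the architecture), a crash-severity classifier of this kind could be as simple as a feed-forward network over encoded crash attributes; the input width and class count below are placeholders.

```python
# Hypothetical severity classifier over one-hot/numeric crash features
# (rider ejection, roadway separation, alignment, day of week, ...).
import torch.nn as nn

severity_net = nn.Sequential(
    nn.Linear(40, 128), nn.ReLU(),     # 40 encoded crash attributes (assumed)
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 5),                  # logits over five assumed severity classes
)
```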


2021 ◽  
Vol 4 ◽  
Author(s):  
Ruqian Hao ◽  
Khashayar Namdar ◽  
Lin Liu ◽  
Farzad Khalvati

Brain tumor is one of the leading causes of cancer-related death globally among children and adults. Precise classification of brain tumor grade (low-grade versus high-grade glioma) at an early stage plays a key role in successful prognosis and treatment planning. With recent advances in deep learning, artificial intelligence-enabled brain tumor grading systems can assist radiologists in the interpretation of medical images within seconds. The performance of deep learning techniques is, however, highly dependent on the size of the annotated dataset, and it is extremely challenging to label a large quantity of medical images given the complexity and volume of medical data. In this work, we propose a novel transfer learning-based active learning framework to reduce the annotation cost while maintaining the stability and robustness of model performance for brain tumor classification. In this retrospective study, we employed a 2D slice-based approach to train and fine-tune our model on a magnetic resonance imaging (MRI) training dataset of 203 patients and a validation dataset of 66 patients, which served as the baseline. With our proposed method, the model achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 82.89% on a separate test dataset of 66 patients, 2.92% higher than the baseline AUC, while saving at least 40% of the labeling cost. To further examine the robustness of our method, we created a balanced dataset, which underwent the same procedure. On this dataset the model achieved an AUC of 82% compared with 78.48% for the baseline, which confirms the robustness and stability of our proposed transfer learning framework augmented with active learning while significantly reducing the size of the training data.
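A hedged sketch of the active learning step described above: starting from a pretrained backbone, repeatedly query the least-confident unlabeled MRI slices for annotation, add them to the training set, and fine-tune. The query size and the uncertainty criterion are assumptions.

```python
# Hypothetical least-confidence query selection for an active learning loop.
import torch

def least_confident_indices(model, unlabeled_loader, k=50):
    """Return the indices of the k unlabeled slices the model is least confident about."""
    model.eval()
    confidences, indices = [], []
    with torch.no_grad():
        for idx, x in unlabeled_loader:                    # loader yields (index, image) pairs
            probs = model(x).softmax(dim=1)
            confidences.append(probs.max(dim=1).values)    # probability of the top class
            indices.append(idx)
    confidences, indices = torch.cat(confidences), torch.cat(indices)
    return indices[confidences.argsort()[:k]]              # lowest-confidence slices first
```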


2020 ◽  
Author(s):  
Obaidur Rahaman ◽  
Alessio Gagliardi

The ability to predict material properties without the need for resource-consuming experimental efforts can immensely accelerate material and drug discovery. Although ab initio methods can be reliable and accurate in making such predictions, they are computationally too expensive at a large scale. Recent advancements in artificial intelligence and machine learning, as well as the availability of large quantum-mechanics-derived datasets, enable us to train models on these datasets as benchmarks and to make fast predictions on much larger datasets. The success of these machine learning models depends strongly on machine-readable fingerprints of the molecules that capture their chemical properties as well as topological information. In this work we propose a common deep learning-based framework to combine different types of molecular fingerprints to enhance prediction accuracy. A Graph Neural Network (GNN), the Many-Body Tensor Representation (MBTR) and a set of simple Molecular Descriptors (MD) were used to predict the total energies, Highest Occupied Molecular Orbital (HOMO) energies and Lowest Unoccupied Molecular Orbital (LUMO) energies of a dataset containing ~62k large organic molecules with complex aromatic rings and remarkably diverse functional groups. The results demonstrate that a combination of the best performing molecular fingerprints can produce better results than the individual ones. The simple and flexible deep learning framework developed in this work can be easily adapted to incorporate other types of molecular fingerprints.
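A minimal sketch of the fingerprint-combination idea (all dimensions are assumptions): each fingerprint type gets its own sub-network, and the hidden representations are concatenated before a regression head that predicts a single target property such as the total, HOMO or LUMO energy.

```python
# Hypothetical late-fusion network combining GNN embeddings, MBTR vectors and molecular descriptors.
import torch
import torch.nn as nn

class CombinedFingerprintNet(nn.Module):
    def __init__(self, gnn_dim=128, mbtr_dim=500, md_dim=30):
        super().__init__()
        self.gnn_branch = nn.Sequential(nn.Linear(gnn_dim, 64), nn.ReLU())
        self.mbtr_branch = nn.Sequential(nn.Linear(mbtr_dim, 64), nn.ReLU())
        self.md_branch = nn.Sequential(nn.Linear(md_dim, 16), nn.ReLU())
        self.head = nn.Linear(64 + 64 + 16, 1)             # one predicted property per molecule

    def forward(self, gnn_emb, mbtr, md):
        h = torch.cat([self.gnn_branch(gnn_emb),
                       self.mbtr_branch(mbtr),
                       self.md_branch(md)], dim=1)
        return self.head(h)
```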


Author(s):  
Bing Yu ◽  
Haoteng Yin ◽  
Zhanxing Zhu

Timely and accurate traffic forecasting is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid- and long-term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a novel deep learning framework, Spatio-Temporal Graph Convolutional Networks (STGCN), to tackle the time series prediction problem in the traffic domain. Instead of applying regular convolutional and recurrent units, we formulate the problem on graphs and build the model with complete convolutional structures, which enables much faster training with fewer parameters. Experiments show that our model STGCN effectively captures comprehensive spatio-temporal correlations by modeling multi-scale traffic networks and consistently outperforms state-of-the-art baselines on various real-world traffic datasets.
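A hedged sketch of a spatio-temporal block in the STGCN spirit (not the authors' implementation): a 1-D convolution along the time axis per sensor, followed by a graph convolution that mixes information across the road network's normalized adjacency. Channel counts and the kernel width are assumptions.

```python
# Hypothetical spatio-temporal convolution block for traffic forecasting.
import torch
import torch.nn as nn

class STBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_t=3):
        super().__init__()
        self.temporal = nn.Conv2d(in_ch, out_ch, kernel_size=(1, kernel_t))  # convolve over time only
        self.spatial = nn.Linear(out_ch, out_ch)

    def forward(self, x, norm_adj):
        # x: (batch, channels, num_nodes, time); norm_adj: (num_nodes, num_nodes) normalized adjacency
        h = torch.relu(self.temporal(x))                    # temporal features, shorter time axis
        h = h.permute(0, 3, 2, 1)                           # (batch, time', nodes, channels)
        h = torch.relu(self.spatial(norm_adj @ h))          # mix neighboring sensors on the graph
        return h.permute(0, 3, 2, 1)                        # back to (batch, channels, nodes, time')
```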


2017 ◽  
Author(s):  
Pooya Mobadersany ◽  
Safoora Yousefi ◽  
Mohamed Amgad ◽  
David A Gutman ◽  
Jill S Barnholtz-Sloan ◽  
...  

Abstract Cancer histology reflects underlying molecular processes and disease progression, and contains rich phenotypic information that is predictive of patient outcomes. In this study, we demonstrate a computational approach for learning patient outcomes from digital pathology images, using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how this approach can integrate information from both histology images and genomic biomarkers to predict time-to-event patient outcomes, and demonstrate performance surpassing the current clinical paradigm for predicting the survival of patients diagnosed with glioma. We also provide techniques to visualize the tissue patterns learned by these deep learning survival models, and establish a framework for addressing intratumoral heterogeneity and training data deficits.
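As a small illustration of coupling a deep feature extractor with a traditional survival model (the exact formulation in the study may differ), a Cox partial-likelihood loss over predicted log-risk scores can be written as follows; the sorting convention is an assumption and ties are ignored.

```python
# Hypothetical Cox partial-likelihood loss for a deep survival model.
import torch

def cox_partial_likelihood(risk, time, event):
    # risk: (N,) predicted log-risk scores; time: (N,) follow-up times; event: (N,) 1 if death observed
    order = torch.argsort(time, descending=True)            # so each risk set is a prefix
    risk, event = risk[order], event[order].float()
    log_cumsum = torch.logcumsumexp(risk, dim=0)            # log of the summed hazards in each risk set
    # negative partial log-likelihood, averaged over observed events
    return -((risk - log_cumsum) * event).sum() / event.sum().clamp(min=1)
```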


2015 ◽  
Vol 44 (4) ◽  
pp. e32-e32 ◽  
Author(s):  
Sai Zhang ◽  
Jingtian Zhou ◽  
Hailin Hu ◽  
Haipeng Gong ◽  
Ligong Chen ◽  
...  
