Compound2Drug – a Machine/Deep Learning Tool for Predicting the Bioactivity of PubChem Compounds

Author(s):  
Ben Geoffrey A S ◽  
Pavan Preetham Valluri ◽  
Akhil Sanker ◽  
Rafal Madaj ◽  
Host Antony Davidd ◽  
...  

<p>Network data is composed of nodes and edges. Machine learning/deep learning algorithms have been applied successfully to network data for node classification and link prediction in social networks, where they power highly customized suggestions for users. Similarly, one can apply machine learning/deep learning algorithms to biological network data to generate predictions of scientific usefulness. In the present work, a compound–drug target interaction data set from BindingDB has been used to train machine learning/deep learning algorithms that predict the drug targets for any PubChem compound queried by the user. The user inputs the PubChem Compound ID (CID) of the compound whose predicted biological activity they wish to investigate, and the tool outputs the RCSB PDB IDs of the predicted drug targets. The tool also incorporates a feature to perform automated <i>in silico</i> modelling of the compound and the predicted drug targets to uncover their protein–ligand interaction profiles. The program fetches the structures of the compound and the predicted drug targets, prepares them for molecular docking using the standard AutoDock scripts that are part of MGLTools, performs molecular docking and protein–ligand interaction profiling of the targets and the compound, and stores the visualized results in the user's working folder. The program is hosted, supported and maintained at the following GitHub repository:</p> <p><a href="https://github.com/bengeof/Compound2Drug">https://github.com/bengeof/Compound2Drug</a></p>
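The prediction step described above can be illustrated with a minimal similarity-based sketch. This is not the Compound2Drug code: the fingerprints, the PDB IDs, and the nearest-neighbour rule below are all made up for demonstration, whereas the tool itself trains machine/deep learning models on BindingDB data.

```python
# Illustrative sketch only: a similarity-based target predictor.
# All fingerprints and PDB IDs here are hypothetical.

def tanimoto(a, b):
    """Tanimoto similarity between two sets of fingerprint bit positions."""
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def predict_target(query_fp, training_pairs):
    """Return the PDB ID whose training compound is most similar to the query."""
    best_id, best_sim = None, -1.0
    for fp, pdb_id in training_pairs:
        sim = tanimoto(query_fp, fp)
        if sim > best_sim:
            best_id, best_sim = pdb_id, sim
    return best_id

# Hypothetical training data: (fingerprint bit set, RCSB PDB ID of target)
training = [
    ({1, 4, 9, 12}, "1ABC"),
    ({2, 4, 7, 15}, "2XYZ"),
]
print(predict_target({1, 4, 9, 13}, training))  # "1ABC" (most similar compound)
```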

2020 ◽  


2021 ◽  
Author(s):  
Ben Geoffrey A S ◽  
Rafal Madaj ◽  
Akhil Sanker ◽  
Pavan Preetham Valluri ◽  
Harshmeet Singh

Network data is composed of nodes and edges. Machine learning/deep learning algorithms have been applied successfully to network data for node classification and link prediction in social networks, where they power highly customized suggestions for users. Similarly, one can apply machine learning/deep learning algorithms to biological network data to generate predictions of scientific usefulness. In the presented work, a compound–drug target interaction network data set from BindingDB has been used to train a deep learning neural network, and multi-class classification has been implemented to classify a PubChem compound queried by the user into class labels of PDB IDs. In this way, target interaction prediction for PubChem compounds is carried out using deep learning. The user inputs the PubChem Compound ID (CID) of the compound whose predicted biological activity they wish to investigate, and the tool outputs the RCSB PDB IDs of the predicted drug target interactions for the input CID. Further, the tool optimizes the user's compound of interest toward drug-likeness properties through a deep learning based structure and drug-likeness optimization protocol. The tool also incorporates a feature to perform automated in silico modelling of the compounds and the predicted drug targets to uncover their protein–ligand interaction profiles. The program is hosted, supported and maintained at the following GitHub repository:

https://github.com/bengeof/Compound2DeNovoDrugPropMax
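The drug-likeness objective mentioned above can be made concrete with Lipinski's rule of five, a common drug-likeness screen. The tool's own optimizer is a deep learning protocol; this sketch only shows the kind of property criterion such an optimizer might target, and the property values are hypothetical.

```python
# Illustrative sketch: Lipinski rule-of-five screen, a common drug-likeness
# criterion an optimizer might target. The property values are hypothetical;
# the actual tool uses a deep learning optimization protocol.

def lipinski_pass(props):
    """props: dict with molecular weight, logP, and H-bond donor/acceptor counts."""
    violations = sum([
        props["mol_weight"] > 500,
        props["logp"] > 5,
        props["h_donors"] > 5,
        props["h_acceptors"] > 10,
    ])
    return violations <= 1  # at most one violation is commonly tolerated

candidate = {"mol_weight": 342.4, "logp": 2.1, "h_donors": 2, "h_acceptors": 5}
print(lipinski_pass(candidate))  # True
```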


2021 ◽  
Author(s):  
Ben Geoffrey A S ◽  
Rafal Madaj ◽  
Akhil Sanker ◽  
Pavan Preetham Valluri

Network data is composed of nodes and edges. Machine learning/deep learning algorithms have been applied successfully to network data for node classification and link prediction in social networks, where they power highly customized suggestions for users. Similarly, one can apply machine learning/deep learning algorithms to biological network data to generate predictions of scientific usefulness. In the presented work, a compound–drug target interaction network data set from BindingDB has been used to train a deep learning neural network, and multi-class classification has been implemented to classify a PubChem compound queried by the user into class labels of PDB IDs. In this way, target interaction prediction for PubChem compounds is carried out using deep learning. The user inputs the PubChem Compound ID (CID) of the compound whose predicted biological activity they wish to investigate, and the tool outputs the RCSB PDB IDs of the predicted drug target interactions for the input CID. Further, the tool optimizes the user's compound of interest toward drug-likeness properties through a deep learning based structure and drug-likeness optimization protocol. The tool also incorporates a feature to perform automated in silico modelling of the compounds and the predicted drug targets to uncover their protein–ligand interaction profiles.

The program is hosted, supported and maintained at the following GitHub repository:

https://github.com/bengeof/Compound2DeNovoDrugPropMax

Anticipating the rise of quantum computing and quantum machine learning in drug discovery, we use the PennyLane interface to quantum hardware to turn classical Keras layers in our machine/deep learning models into quantum layers, introducing quantum layers into the classical models to produce a quantum–classical machine/deep learning hybrid version of our tool; the corresponding code is provided below:

https://github.com/bengeof/QPoweredCompound2DeNovoDrugPropMax
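The hybrid idea above, a classical layer feeding a parameterized quantum circuit whose expectation values feed the next layer, can be illustrated without quantum hardware: a single qubit rotated from |0⟩ by RY(θ) has ⟨Z⟩ = cos θ exactly, so one qubit per feature can be simulated in closed form. This stand-alone simulation is only a conceptual stand-in for a PennyLane QNode; the function and values are illustrative, not the tool's actual layer.

```python
import math

# Conceptual stand-in for a "quantum layer": each input feature is encoded as
# a rotation angle on one qubit, and the layer outputs that qubit's <Z>
# expectation. For RY(theta)|0>, <Z> = cos(theta) exactly, so a one-qubit
# circuit can be simulated classically in closed form.

def quantum_layer(inputs, weights):
    """Map each (input, weight) pair to <Z> after an RY(input * weight) rotation."""
    return [math.cos(x * w) for x, w in zip(inputs, weights)]

features = [0.0, math.pi / 2, math.pi]   # outputs of a preceding classical layer
weights = [1.0, 1.0, 0.5]                # trainable layer parameters
print(quantum_layer(features, weights))  # [1.0, ~0.0, ~0.0]
```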


Molecules ◽  
2020 ◽  
Vol 25 (22) ◽  
pp. 5277
Author(s):  
Lauv Patel ◽  
Tripti Shukla ◽  
Xiuzhen Huang ◽  
David W. Ussery ◽  
Shanzhi Wang

The advancements of information technology and related processing techniques have created a fertile base for progress in many scientific fields and industries. In drug discovery and development, machine learning techniques have been used to develop novel drug candidates. Methods for designing drug targets and for novel drug discovery now routinely combine machine learning and deep learning algorithms to enhance the efficiency, efficacy, and quality of the developed outputs. The generation and incorporation of big data, through technologies such as high-throughput screening and high-throughput computational analysis of databases used for both lead and target discovery, has increased the reliability of techniques incorporating machine learning and deep learning. The use of such virtual screening and encompassing online information has also been highlighted in developing lead synthesis pathways. In this review, machine learning and deep learning algorithms utilized in drug discovery and associated techniques are discussed, and applications and methods that produce promising results are reviewed.


2021 ◽  
Author(s):  
Ben Geoffrey ◽  
Rafal Madaj ◽  
Pavan Preetham Valluri ◽  
Akhil Sanker

The past decade has seen a surge in the application of data science, machine learning, deep learning, and AI methods to drug discovery. The presented work assembles a variety of AI methods for drug discovery and incorporates in silico techniques to provide a holistic tool for automated drug discovery. When drug candidates must be identified for a particular drug target of interest, the user provides the tool with target signatures in the form of an amino acid sequence or its corresponding nucleotide sequence. The tool collects the data registered on PubChem required to perform an automated QSAR, and with the validated QSAR model, prediction and drug lead generation are carried out; this protocol we call Target2Drug. It is followed by a protocol we call Target2DeNovoDrug, wherein novel molecules with likely activity against the target are generated de novo using a generative LSTM model. Drug discovery often requires that generated molecules possess certain properties, such as drug-likeness; therefore, to optimize the generated de novo molecules toward the required drug-like properties we use a deep learning model called DeepFMPO, and this protocol we call Target2DeNovoDrugPropMax. This is followed by fast automated AutoDock-Vina based in silico modelling and profiling of the interaction between the optimized drug leads and the drug target, and then by automated execution of a Molecular Dynamics protocol for the complex with the best protein–ligand interaction from the AutoDock-Vina based virtual screening. The results are stored in the user's working folder.
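The staged protocol above (QSAR, de novo generation, property optimization, docking, then dynamics) can be sketched as a chain of stages. Every function below is a hypothetical stub standing in for a real component (PubChem-trained QSAR, generative LSTM, DeepFMPO, AutoDock-Vina); none of this is the tool's actual code.

```python
# Hypothetical sketch of the staged Target2DeNovoDrugPropMax flow described
# above. Every function is a stand-in stub for the real component.

def build_qsar_model(target_sequence):
    return {"target": target_sequence, "validated": True}   # stub QSAR model

def generate_de_novo(model, n):
    return [f"molecule_{i}" for i in range(n)]              # stub LSTM output

def optimize_drug_likeness(molecules):
    return [m + "_optimized" for m in molecules]            # stub DeepFMPO step

def dock_and_rank(molecules, target_sequence):
    return sorted(molecules)[0]                             # stub Vina ranking

def pipeline(target_sequence, n_candidates=3):
    model = build_qsar_model(target_sequence)
    leads = generate_de_novo(model, n_candidates)
    optimized = optimize_drug_likeness(leads)
    best = dock_and_rank(optimized, target_sequence)
    return best  # the best complex would then go on to molecular dynamics

print(pipeline("MKTAYIAKQR"))  # molecule_0_optimized
```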
The code is maintained, supported, and provided for use in the following GitHub repository: https://github.com/bengeof/Target2DeNovoDrugPropMax. Anticipating the rise of quantum computing and quantum machine learning in drug discovery, we use the PennyLane interface to quantum hardware to turn classical Keras layers in our machine/deep learning models into quantum layers, introducing quantum layers into our classical models to produce a quantum–classical machine/deep learning hybrid version of our tool; the corresponding code is provided at https://github.com/bengeof/QPoweredTarget2DeNovoDrugPropMax.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Urban area mapping is an important application of remote sensing that aims at both estimating land cover under urban areas and detecting change in it. A major challenge in analyzing Synthetic Aperture Radar (SAR) based remote sensing data is the strong similarity of highly vegetated urban areas and oriented urban targets to actual vegetation, which leads to misclassification of urban area as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets as vegetation with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), have been implemented along with the deep learning model DeepLabv3+ for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. In the current work, it has been shown that a pre-trained deep learning model, DeepLabv3+, outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and an overall pixel accuracy of 85.65% have been achieved with DeepLabv3+; Random Forest performs best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trail with overall accuracies of 77.01% and 76.47%, respectively. The highest precision of 0.9228 is recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF give comparable results with precisions of 0.8977 and 0.8958, respectively.
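The metrics reported above have simple definitions: overall pixel accuracy is the fraction of correctly classified pixels (the trace of the confusion matrix over its sum), and per-class precision is the diagonal entry over its column sum. A sketch with a made-up three-class (urban/vegetation/water) confusion matrix:

```python
# Sketch of the segmentation metrics quoted above; the 3x3 confusion matrix
# (classes: urban, vegetation, water) is made-up data, not the study's results.

def overall_pixel_accuracy(confusion):
    """Correctly classified pixels over all pixels (trace / total)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return correct / total

def class_precision(confusion, k):
    """Diagonal entry over the column sum for class k."""
    predicted_k = sum(confusion[i][k] for i in range(len(confusion)))
    return confusion[k][k] / predicted_k

conf = [
    [90, 8, 2],    # true urban pixels: 90 correct, 8 called vegetation, 2 water
    [12, 85, 3],   # true vegetation pixels
    [1, 4, 95],    # true water pixels
]
print(overall_pixel_accuracy(conf))        # 0.9
print(round(class_precision(conf, 0), 4))  # 0.8738 (urban precision)
```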


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background: The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, with relatively recent developments in big data and deep learning and increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose: To provide a review of the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods: Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on the contingency tables. Results: The specificity of the deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85% for the deep learning algorithms for detecting abnormalities, compared to 75% and 91% for radiology experts, respectively. In the comparison between radiology professionals and deep learning algorithms, the pooled specificity and sensitivity were 91% and 81% for the deep learning models and 85% and 73% for the radiology professionals (p < 0.000), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion: Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
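The pooled estimates above are built from per-study sensitivities and specificities, each computed from a 2×2 contingency table. A sketch of those per-study quantities, with hypothetical counts:

```python
# Sketch of the per-study quantities pooled in the meta-analysis described
# above; the contingency-table counts are hypothetical.

def sensitivity(tp, fn):
    """True-positive rate: TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: TN / (TN + FP)."""
    return tn / (tn + fp)

tp, fp, fn, tn = 89, 9, 11, 91   # hypothetical 2x2 contingency table
print(sensitivity(tp, fn))  # 0.89
print(specificity(tn, fp))  # 0.91
```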


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5953 ◽  
Author(s):  
Parastoo Alinia ◽  
Ali Samadani ◽  
Mladen Milosevic ◽  
Hassan Ghasemzadeh ◽  
Saman Parvaneh

Automated lying-posture tracking is important in preventing bed-related disorders, such as pressure injuries, sleep apnea, and lower-back pain. Prior research studied in-bed lying posture tracking using sensors of different modalities (e.g., accelerometers and pressure sensors). However, there remain significant gaps in research regarding how to design efficient in-bed lying posture tracking systems. These gaps can be articulated through several research questions, as follows. First, can we design a single-sensor, pervasive, and inexpensive system that can accurately detect lying postures? Second, what computational models are most effective in the accurate detection of lying postures? Finally, what physical configuration of the sensor system is most effective for lying posture tracking? To answer these research questions, in this article we propose a comprehensive approach to designing a sensor system that uses a single accelerometer along with machine learning algorithms for in-bed lying posture classification. We design two categories of machine learning algorithms, based on deep learning and on traditional classification with handcrafted features, to detect lying postures. We also investigate which wearing sites are most effective for the accurate detection of lying postures. We extensively evaluate the performance of the proposed algorithms on nine different body locations and four human lying postures using two datasets. Our results show that a system with a single accelerometer can be used with either deep learning or traditional classifiers to accurately detect lying postures. The best models in our approach achieve an F1 score that ranges from 95.2% to 97.8%, with a coefficient of variation from 0.03 to 0.05. The results also identify the thighs and chest as the most salient body sites for lying posture tracking.
Our findings in this article suggest that, because accelerometers are ubiquitous and inexpensive sensors, they can be a viable source of information for pervasive monitoring of in-bed postures.
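The two summary statistics quoted above, the F1 score (harmonic mean of precision and recall) and the coefficient of variation (standard deviation over mean), can be sketched as follows; the per-posture precision/recall values are hypothetical, not the study's data.

```python
import statistics

# Sketch of the summary statistics reported above. The per-posture
# precision/recall values are hypothetical.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

def coefficient_of_variation(values):
    """Population standard deviation divided by the mean."""
    return statistics.pstdev(values) / statistics.mean(values)

per_posture_f1 = [f1_score(0.97, 0.95), f1_score(0.96, 0.98),
                  f1_score(0.94, 0.96), f1_score(0.98, 0.97)]
print(round(coefficient_of_variation(per_posture_f1), 3))
```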


2018 ◽  
Vol 8 (4) ◽  
pp. 34 ◽  
Author(s):  
Vishal Saxena ◽  
Xinyu Wu ◽  
Ira Srivastava ◽  
Kehan Zhu

The ongoing revolution in deep learning is redefining the nature of computing, driven by the increasing volume of pattern classification and cognitive tasks. Specialized digital hardware for deep learning still holds its predominance due to the flexibility offered by software implementations and the maturity of algorithms. However, it is increasingly desired that cognitive computing occur at the edge, i.e., on hand-held devices that are energy constrained, which is energy prohibitive when employing digital von Neumann architectures. Recent explorations in digital neuromorphic hardware have shown promise, but offer a lower neurosynaptic density than is needed for scaling to applications such as intelligent cognitive assistants (ICA). Large-scale integration of nanoscale emerging memory devices with Complementary Metal Oxide Semiconductor (CMOS) mixed-signal integrated circuits can herald a new generation of neuromorphic computers that will transcend the von Neumann bottleneck for cognitive computing tasks. Such hybrid Neuromorphic System-on-a-Chip (NeuSoC) architectures promise machine learning capability in a chip-scale form factor and several orders of magnitude improvement in energy efficiency. Practical demonstration of such architectures has been limited, as the performance of emerging memory devices falls short of the behavior expected from idealized memristor-based analog synapses, or weights, and novel machine learning algorithms are needed to take advantage of the actual device behavior. In this article, we review the challenges involved and present a pathway to realizing large-scale mixed-signal NeuSoCs, from device arrays and circuits to spike-based deep learning algorithms with 'brain-like' energy efficiency.
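The analog in-memory computation at the heart of such a memristor-crossbar NeuSoC is a vector-matrix multiply performed in place: input voltages drive the rows, device conductances act as synaptic weights, and the summed column currents are the outputs (Ohm's and Kirchhoff's laws). This classical simulation, with arbitrary values, only illustrates the computation the hardware performs:

```python
# Conceptual sketch of a memristor-crossbar multiply-accumulate: column
# currents I_j = sum_i V_i * G_ij. Values are arbitrary; this is a classical
# simulation of the analog operation, not device-level hardware code.

def crossbar_mac(voltages, conductances):
    """Return the column currents of a crossbar with the given row voltages."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

v = [1.0, 0.5]        # input voltages on the rows
g = [[0.2, 0.4],      # conductance (synaptic weight) matrix
     [0.6, 0.1]]
print(crossbar_mac(v, g))  # [0.5, 0.45]
```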

