Predicting the potency of anti-Alzheimer drug combinations using machine learning

2020 ◽  
Author(s):  
Thomas J Anastasio

ABSTRACT
BACKGROUND: Clinical trials of single drugs for the treatment of Alzheimer Disease (AD) have been notoriously unsuccessful. Combinations of repurposed drugs could provide effective treatments for AD. The challenge is to identify potentially potent combinations.
OBJECTIVE: To use machine learning (ML) to extract the knowledge from two leading AD databases, and then use the machine to predict which combinations of the drugs in common between the two databases would be the most effective as treatments for AD.
METHODS: Three-layered neural networks (NNs) having compound, gated units in their internal layer were trained using ML to predict the cognitive scores of participants in either database, given the other data fields including age, demographic variables, comorbidities, and drugs taken.
RESULTS: The predictions from the separately trained NNs were strongly correlated. The best drug combinations, jointly determined from both sets of predictions, were high in NSAID, anticoagulant, lipid-lowering, and antihypertensive drugs, and female hormones.
CONCLUSION: The results suggest that AD, as a multifactorial disorder, could be effectively treated using a combination of repurposed drugs.

Processes ◽  
2021 ◽  
Vol 9 (2) ◽  
pp. 264
Author(s):  
Thomas J. Anastasio

Clinical trials of single drugs intended to slow the progression of Alzheimer’s Disease (AD) have been notoriously unsuccessful. Combinations of repurposed drugs could provide effective treatments for AD. The challenge is to identify potentially effective combinations. To meet this challenge, machine learning (ML) was used to extract the knowledge from two leading AD databases, and then “the machine” predicted which combinations of the drugs in common between the two databases would be the most effective as treatments for AD. Specifically, three-layered artificial neural networks (ANNs) with compound, gated units in their internal layer were trained using ML to predict the cognitive scores of participants, separately in either database, given other data fields including age, demographic variables, comorbidities, and drugs taken. The predictions from the separately trained ANNs were strongly and statistically significantly correlated. The best drug combinations, jointly determined from both sets of predictions, were high in nonsteroidal anti-inflammatory drugs; anticoagulant, lipid-lowering, and antihypertensive drugs; and female hormones. The results suggest that the neurodegenerative processes that underlie AD and other dementias could be effectively treated using a combination of repurposed drugs. Predicted drug combinations could be evaluated in clinical trials.
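As an illustration of the architecture described above, here is a minimal NumPy sketch of a three-layer network whose internal units are compound and gated. The layer sizes, the multiplicative-gate scheme, and the random parameters are assumptions for illustration only, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_forward(x, params):
    """Three-layer network with compound, gated internal units:
    each hidden unit's candidate activation is multiplied by a
    sigmoid gate computed from the same input."""
    W_h, b_h, W_g, b_g, W_o, b_o = params
    h = np.tanh(x @ W_h + b_h)        # candidate hidden activation
    g = sigmoid(x @ W_g + b_g)        # gate in (0, 1)
    hidden = g * h                    # gated hidden layer
    return hidden @ W_o + b_o         # linear output: predicted cognitive score

# hypothetical input fields: age, demographics, comorbidities, drugs taken
n_in, n_hidden = 8, 4
params = (rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
          rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
          rng.normal(0, 0.1, (n_hidden, 1)), np.zeros(1))

x = rng.normal(size=(5, n_in))        # five hypothetical participants
y_hat = gated_forward(x, params)
print(y_hat.shape)                    # one predicted score per participant
```

Training such a network (e.g., by gradient descent on squared error against observed cognitive scores) would follow standard supervised ML practice.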


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253789
Author(s):  
Magdalyn E. Elkin ◽  
Xingquan Zhu

As of March 30, 2021, over 5,193 COVID-19 clinical trials have been registered through ClinicalTrials.gov. Among them, 191 trials were terminated, suspended, or withdrawn (indicating the cessation of the study), while 909 trials have been completed. In this study, we propose to study the underlying factors of COVID-19 trial completion vs. cessation, and to design predictive models that accurately predict whether a COVID-19 trial may complete or cease in the future. We collect 4,441 COVID-19 trials from ClinicalTrials.gov to build a testbed, and design four types of features to characterize clinical trial administration, eligibility, study information, criteria, drug types, and study keywords, as well as embedding features commonly used in state-of-the-art machine learning. Our study shows that drug features and study keywords are the most informative features, but all four types of features are essential for accurate trial prediction. Using predictive models, our approach achieves more than a 0.87 AUC (Area Under the Curve) score and 0.81 balanced accuracy in correctly predicting COVID-19 clinical trial completion vs. cessation. Our research shows that computational methods can deliver effective features for understanding the differences between completed and ceased COVID-19 trials. Such models can also predict COVID-19 trial status with satisfactory accuracy, helping stakeholders better plan trials and minimize costs.
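The two evaluation metrics reported above can be computed directly. Below is a minimal NumPy sketch of AUC (via the rank, or Mann-Whitney, formulation) and balanced accuracy, applied to toy completion-vs-cessation labels and scores; the data are illustrative, not the paper's.

```python
import numpy as np

def auc_score(y_true, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the fraction of (positive, negative) pairs ranked correctly."""
    y_true, scores = np.asarray(y_true), np.asarray(scores)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()   # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity and specificity, robust to class imbalance."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    sens = np.mean(y_pred[y_true == 1] == 1)
    spec = np.mean(y_pred[y_true == 0] == 0)
    return 0.5 * (sens + spec)

# toy labels (1 = completed, 0 = ceased) and hypothetical model scores
y = np.array([1, 1, 1, 0, 0, 1, 0, 1])
s = np.array([0.9, 0.8, 0.7, 0.4, 0.3, 0.6, 0.55, 0.2])
print(auc_score(y, s))                             # 0.8
print(balanced_accuracy(y, (s >= 0.5).astype(int)))
```

Balanced accuracy matters here because completed trials heavily outnumber ceased ones, so plain accuracy would overstate performance.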


Sequence classification is one of the in-demand research topics in the field of Natural Language Processing (NLP). Classifying a set of images or texts into an appropriate category or class is a complex task that many Machine Learning (ML) models fail to accomplish accurately, ending up under-fitting the given dataset. Some of the ML algorithms used in text classification are KNN, Naïve Bayes, Support Vector Machines, Convolutional Neural Networks (CNNs), Recursive CNNs, Recurrent Neural Networks (RNNs), and Long Short-Term Memory (LSTM) networks. For this experimental study, LSTM and a few other algorithms were chosen for a comparative study. The dataset used is the SMS Spam Collection Dataset from Kaggle, with 150 additional entries added from other sources. The two possible class labels for the data points are spam and ham. Each entry consists of the class label and a few sentences of text, followed by a few extraneous fields that are eliminated. After converting the text to the required format, the models are run and then evaluated using various metrics. In the experimental studies, the LSTM gives much better classification accuracy than the other machine learning models: F1-scores in the high nineties were achieved using LSTM for classifying the text. The other models showed much lower F1-scores and cosine similarities, indicating that they underperformed on the dataset. Another notable observation is that the LSTM produced fewer false positives and false negatives than any other model.
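To make the LSTM concrete, here is a minimal NumPy sketch of a single LSTM cell stepped over a toy token-embedding sequence, ending in a sigmoid readout for P(spam). The sizes, random weights, and gate layout are illustrative assumptions, not the study's trained model.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step. Gate order in the stacked weights: input,
    forget, candidate, output. Shapes: W (n_in, 4H), U (H, 4H), b (4H,)."""
    z = x_t @ W + h_prev @ U + b
    H = h_prev.shape[-1]
    i = sigmoid(z[0*H:1*H])           # input gate
    f = sigmoid(z[1*H:2*H])           # forget gate
    g = np.tanh(z[2*H:3*H])           # candidate cell state
    o = sigmoid(z[3*H:4*H])           # output gate
    c = f * c_prev + i * g            # new cell state carries long-range context
    h = o * np.tanh(c)                # new hidden state
    return h, c

n_in, H, T = 16, 8, 5                 # toy embedding size, hidden size, sequence length
W = rng.normal(0, 0.1, (n_in, 4*H))
U = rng.normal(0, 0.1, (H, 4*H))
b = np.zeros(4*H)

h, c = np.zeros(H), np.zeros(H)
for t in range(T):                    # run over a toy token-embedding sequence
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)

# a sigmoid over a linear readout of the final hidden state gives P(spam)
p_spam = sigmoid(h @ rng.normal(0, 0.1, H))
print(round(float(p_spam), 3))
```

The gating is what lets the LSTM keep or discard information across a whole message, which the simpler bag-of-words models compared in the study cannot do.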


2019 ◽  
Vol 25 (2) ◽  
pp. 145-167 ◽  
Author(s):  
Nicholas Guttenberg ◽  
Nathaniel Virgo ◽  
Alexandra Penn

Natural evolution gives the impression of leading to an open-ended process of increasing diversity and complexity. If our goal is to produce such open-endedness artificially, this suggests an approach driven by evolutionary metaphor. On the other hand, techniques from machine learning and artificial intelligence are often considered too narrow to provide the sort of exploratory dynamics associated with evolution. In this article, we hope to bridge that gap by reviewing common barriers to open-endedness in the evolution-inspired approach and how they are dealt with in the evolutionary case—collapse of diversity, saturation of complexity, and failure to form new kinds of individuality. We then show how these problems map onto similar ones in the machine learning approach, and discuss how the same insights and solutions that alleviated those barriers in evolutionary approaches can be ported over. At the same time, the form these issues take in the machine learning formulation suggests new ways to analyze and resolve barriers to open-endedness. Ultimately, we hope to inspire researchers to be able to interchangeably use evolutionary and gradient-descent-based machine learning methods to approach the design and creation of open-ended systems.


Author(s):  
Vishal Shah ◽  
Neha Sajnani

In recent years, machine learning has come to play a vital role in our everyday lives: it can help us route somewhere, find things we were not aware of, or schedule appointments in seconds. On the other side of the coin, mobile phones are evolving and competing in the same field. Taking an optimistic view, by applying machine learning on our mobile devices we can make our lives better and even move society forward. Image classification is one of the most common and trending topics in machine learning. Among the different types of deep learning models, Convolutional Neural Networks (CNNs), which are composed of multiple processing layers that learn representations of data at several levels of abstraction, have demonstrated high performance on image classification and are among the best-performing AI models in recent years. Here, we trained a simple CNN, conducted experiments on the Fashion-MNIST and Flower Recognition datasets, and analyzed techniques for integrating the trained model into the Android platform.
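As a sketch of the core CNN operation on a Fashion-MNIST-sized input, here is a minimal valid-mode 2-D convolution followed by a ReLU. The synthetic image and the edge-detecting kernel are illustrative assumptions, not the trained network from the study.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r in range(oh):
        for c in range(ow):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel)
    return out

# a 28x28 synthetic "image" like Fashion-MNIST: dark left half, bright right half
img = np.zeros((28, 28))
img[:, 14:] = 1.0

# a 3x3 vertical-edge filter; a trained CNN learns such kernels from data
edge = np.array([[-1., 0., 1.],
                 [-1., 0., 1.],
                 [-1., 0., 1.]])

fmap = np.maximum(conv2d(img, edge), 0.0)   # convolution + ReLU
print(fmap.shape)                           # 28 - 3 + 1 = 26 per side
```

Stacking such layers (with learned kernels, pooling, and a final dense classifier) yields the "simple CNN" architecture described above; on-device deployment typically converts the trained model to a mobile-friendly format.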


Author(s):  
Shafagat Mahmudova

This study examines machine learning for software based on Soft Computing technology. It analyzes the components of Soft Computing and studies their use in software, along with their advantages and challenges. Machine learning and its features are highlighted, the functions and features of neural networks are clarified, and recommendations are given.


2020 ◽  
Author(s):  
Jingbai Li ◽  
Patrick Reiser ◽  
André Eberhard ◽  
Pascal Friederich ◽  
Steven Lopez

<p>Photochemical reactions are being increasingly used to construct complex molecular architectures with mild and straightforward reaction conditions. Computational techniques are increasingly important to understand the reactivities and chemoselectivities of photochemical isomerization reactions because they offer molecular bonding information along the excited-state(s) of photodynamics. These photodynamics simulations are resource-intensive and are typically limited to 1–10 picoseconds and 1,000 trajectories due to high computational cost. Most organic photochemical reactions have excited-state lifetimes exceeding 1 picosecond, which places them beyond the reach of such computational studies. Westermayr <i>et al.</i> demonstrated that a machine learning approach could significantly lengthen photodynamics simulation times for a model system, methylenimmonium cation (CH<sub>2</sub>NH<sub>2</sub><sup>+</sup>).</p><p>We have developed a Python-based code, Python Rapid Artificial Intelligence <i>Ab Initio</i> Molecular Dynamics (PyRAI<sup>2</sup>MD), to accomplish the unprecedented 10 ns <i>cis-trans</i> photodynamics of <i>trans</i>-hexafluoro-2-butene (CF<sub>3</sub>–CH=CH–CF<sub>3</sub>) in 3.5 days. The same simulation would take approximately 58 years with ground-truth multiconfigurational dynamics. We proposed an innovative scheme combining Wigner sampling, geometrical interpolations, and short-time quantum chemical trajectories to effectively sample the initial data, facilitating the adaptive sampling to generate an informative and data-efficient training set with 6,232 data points. Our neural networks achieved chemical accuracy (mean absolute error of 0.032 eV). Our 4,814 trajectories reproduced the S<sub>1</sub> half-life (60.5 fs), the photochemical product ratio (<i>trans</i>: <i>cis</i> = 2.3: 1), and autonomously discovered a pathway towards a carbene.
The neural networks have also shown the capability of generalizing the full potential energy surface with chemically incomplete data (<i>trans</i> → <i>cis</i> but not <i>cis</i> → <i>trans</i> pathways) that may offer future automated photochemical reaction discoveries.</p>


2020 ◽  
Vol 20 (28) ◽  
pp. 2634-2647
Author(s):  
Dong-Dong Li ◽  
Pan Yu ◽  
Wei Xiao ◽  
Zhen-Zhong Wang ◽  
Lin-Guo Zhao

Berberine, as a representative isoquinoline alkaloid, exhibits significant hypolipidemic activity in both animal models and clinical trials. Recently, a large number of studies on the lipid-lowering mechanism of berberine and studies for improving its hypolipidemic activity have been reported, but for the most part, they have been either incomplete or not comprehensive. In addition, there have been few specific reviews on the lipid-reducing effect of berberine. In this paper, the physicochemical properties, the lipid-lowering mechanism, and studies of the modification of berberine are all discussed to promote the development of berberine as a lipid-lowering agent. Subsequently, this paper provides some insights into the deficiencies of berberine in lipid-lowering drug research, and based on this situation, some proposals are put forward.


2020 ◽  
Author(s):  
Joseph Prinable ◽  
Peter Jones ◽  
David Boland ◽  
Alistair McEwan ◽  
Cindy Thamrin

BACKGROUND: The ability to continuously monitor breathing metrics may have implications for general health as well as respiratory conditions such as asthma. However, few studies have focused on breathing, due to a lack of available wearable technologies. OBJECTIVE: To examine the performance of two machine learning algorithms in extracting breathing metrics from a finger-based pulse oximeter, which is amenable to long-term monitoring. METHODS: Pulse oximetry data were collected from 11 healthy and 11 asthma subjects who breathed at a range of controlled respiratory rates. UNET and Long Short-Term Memory (LSTM) algorithms were applied to the data, and the results were compared against breathing metrics derived from respiratory inductance plethysmography measured simultaneously as a reference. RESULTS: Both the UNET and LSTM models provided breathing metrics that were strongly correlated with those from the reference signal (all p<0.001, except for the inspiratory:expiratory ratio). The following relative mean biases (95% confidence intervals) were observed (UNET vs LSTM): inspiration time 1.89 (-52.95, 56.74)% vs 1.30 (-52.15, 54.74)%, expiration time -3.70 (-55.21, 47.80)% vs -4.97 (-56.84, 46.89)%, inspiratory:expiratory ratio -4.65 (-87.18, 77.88)% vs -5.30 (-87.07, 76.47)%, inter-breath interval -2.39 (-32.76, 27.97)% vs -3.16 (-33.69, 27.36)%, and respiratory rate 2.99 (-27.04, 33.02)% vs 3.69 (-27.17, 34.56)%. CONCLUSIONS: Both machine learning models showed strong correlation and good comparability with the reference, with low bias though wide variability, for deriving breathing metrics in asthma and healthy cohorts. Future efforts should focus on improving the performance of these models, e.g., by increasing the size of the training dataset at the lower breathing rates. CLINICALTRIAL: Sydney Local Health District Human Research Ethics Committee (#LNR\16\HAWKE99 ethics approval).
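The relative mean bias with a 95% interval reported above follows the Bland-Altman style of agreement analysis. Here is a minimal NumPy sketch on toy inspiration times; the values are illustrative, not the study's data.

```python
import numpy as np

def relative_bias_loa(reference, estimate):
    """Relative (percentage) differences between an estimated breathing
    metric and its reference, with the mean bias and Bland-Altman style
    95% limits of agreement (mean +/- 1.96 SD of the differences)."""
    reference = np.asarray(reference, float)
    estimate = np.asarray(estimate, float)
    rel = 100.0 * (estimate - reference) / reference   # percent difference
    bias = rel.mean()
    sd = rel.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# toy inspiration times (s): reference plethysmography vs a hypothetical model
ref = np.array([1.2, 1.5, 1.8, 2.0, 1.6])
est = np.array([1.25, 1.45, 1.9, 1.95, 1.65])
bias, (lo, hi) = relative_bias_loa(ref, est)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A small bias with a wide (lo, hi) interval reproduces the pattern reported above: the models are accurate on average but variable breath to breath.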


2021 ◽  
Vol 0 (0) ◽  
Author(s):  
Idris Kharroubi ◽  
Thomas Lim ◽  
Xavier Warin

Abstract: We study the approximation of backward stochastic differential equations (BSDEs for short) with a constraint on the gains process. We first discretize the constraint by applying a so-called facelift operator at the times of a grid. We show that this discretely constrained BSDE converges to the continuously constrained one as the mesh of the grid converges to zero. We then focus on the approximation of the discretely constrained BSDE. For this we adopt a machine learning approach. We show that the facelift can be approximated by an optimization problem over a class of neural networks, under constraints on the neural network and its derivative. We then derive an algorithm converging to the discretely constrained BSDE as the number of neurons goes to infinity. We end with numerical experiments.
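For readers unfamiliar with the facelift, here is a sketch of the standard definition from the constrained-BSDE literature; the notation below is an assumption for illustration and may differ from the operator used in this paper.

```latex
% Support function of a closed convex constraint set K
\delta_K(u) = \sup_{z \in K} \langle z, u \rangle ,
% Facelift operator applied to a function \varphi
F[\varphi](x) = \sup_{u \in \operatorname{dom}(\delta_K)}
  \bigl[ \varphi(x + u) - \delta_K(u) \bigr] .
```

Intuitively, $F[\varphi]$ is the smallest function dominating $\varphi$ whose growth is compatible with the constraint set, which is why applying it at grid times enforces the gains-process constraint discretely.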

