An interpretable flux-based machine learning model of drug interactions across metabolic space and time

2021 ◽  
Author(s):  
Carolina H Chung ◽  
Sriram Chandrasekaran

Drug combinations are a promising strategy to counter antibiotic resistance. However, current experimental and computational approaches do not account for the entire complexity involved in combination therapy design, such as the effect of the growth environment, drug order, and time interval. To address these limitations, we present an approach that uses genome-scale metabolic modeling and machine learning to explain and guide combination therapy design. Our approach (a) accommodates diverse data types, (b) accurately predicts drug interactions in various growth conditions, (c) accounts for time- and order-specific interactions, and (d) identifies mechanistic factors driving drug interactions. The entropy in bacterial stress response, time between treatments, and gluconeogenesis activation were the most predictive features of combination therapy outcomes across time scales and growth conditions. Analysis of the vast landscape of condition-specific drug interactions revealed promising new drug combinations and a tradeoff in the efficacy between simultaneous and sequential combination therapies.
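The abstract's last analytical step, identifying which mechanistic features best predict combination-therapy outcomes, can be illustrated with a generic feature-importance regression. A minimal sketch, assuming illustrative feature names and synthetic data (not the study's genome-scale model or measurements):

```python
# Hedged sketch: predict a drug-interaction score from simulated metabolic
# features with a random forest, then rank features by importance, mirroring
# the feature-analysis step described above. All names/data are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["stress_response_entropy", "treatment_interval_h",
                 "gluconeogenesis_flux", "tca_flux", "ppp_flux"]
X = rng.normal(size=(200, len(feature_names)))
# Synthetic interaction score: signal carried by the first three features only.
y = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 1.0 * X[:, 2] + 0.1 * rng.normal(size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
importances = dict(zip(feature_names, model.feature_importances_))
ranked = sorted(importances, key=importances.get, reverse=True)
```

On this toy data the three signal-bearing features dominate the ranking, which is the same kind of evidence the study uses to single out stress-response entropy, treatment interval, and gluconeogenesis activation.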

CrystEngComm ◽  
2021 ◽  
Author(s):  
Yifan Dang ◽  
Can Zhu ◽  
Motoki Ikumi ◽  
Masaki Takaishi ◽  
Wancheng Yu ◽  
...  

A time-dependent recipe designed by an adaptive control method can consistently maintain the optimal growth conditions despite the unsteady growth environment.
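The adaptive-control idea, a time-dependent recipe updated by feedback so the process tracks its target despite an unsteady environment, can be sketched with a toy PI loop. The plant model, gains, and variable names below are assumptions for illustration only:

```python
# Minimal sketch: a feedback controller builds a time-dependent recipe
# (a heater-power schedule) that holds a drifting growth condition near
# its setpoint. Plant dynamics and gains are illustrative assumptions.
import numpy as np

setpoint = 1000.0          # target growth temperature (arbitrary units)
temp = 950.0               # initial condition
kp, ki = 0.4, 0.05         # proportional/integral gains (assumed)
integral = 0.0
recipe = []                # the time-dependent recipe being built

for step in range(200):
    drift = 0.1 * step     # unsteady environment: slowly growing heat loss
    error = setpoint - temp
    integral += error
    power = kp * error + ki * integral
    recipe.append(power)
    temp += 0.5 * power - 0.02 * (temp - 900.0) - 0.05 * drift  # toy plant
```

The integral term is what lets the recipe keep compensating as the disturbance grows, which is the essence of maintaining "optimal growth conditions despite the unsteady growth environment."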


2020 ◽  
Vol 15 ◽  
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial neural network-driven framework with multiple levels of representation, in which non-linear modules are combined so that representations are transformed from a lower level to a progressively more abstract one. Though DL is used widely in almost every field, it has brought a particular breakthrough in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with classical machine learning, but at times each is used on its own. DL is often a better platform than classical machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is one of the most discussed fields among scientists and researchers these days for diagnosing and solving various biological problems. However, deep learning models need further refinement and experimental validation to be more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed, and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discussed some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights into DL methods, data types, and the selection of DL models for disease diagnosis.


2021 ◽  
Vol 14 (5) ◽  
pp. 472
Author(s):  
Tyler C. Beck ◽  
Kyle R. Beck ◽  
Jordan Morningstar ◽  
Menny M. Benjamin ◽  
Russell A. Norris

Roughly 2.8% of annual hospitalizations in the United States are a result of adverse drug interactions, representing more than 245,000 hospitalizations. Drug–drug interactions commonly arise from major cytochrome P450 (CYP) inhibition. Various approaches are routinely employed to reduce the incidence of adverse interactions, such as altering drug dosing schemes and/or minimizing the number of drugs prescribed; often, however, a reduction in the number of medications cannot be achieved without impacting therapeutic outcomes. Nearly 80% of drugs fail in development due to pharmacokinetic issues, underscoring the importance of examining cytochrome interactions during preclinical drug design. In this review, we examined the physicochemical and structural properties of small-molecule inhibitors of CYPs 3A4, 2D6, 2C19, 2C9, and 1A2. Although CYP inhibitors tend to have distinct physicochemical properties and structural features, these descriptors alone are insufficient to predict the probability and affinity of major cytochrome inhibition. Machine learning-based in silico approaches may be employed as a more robust and accurate way of predicting CYP inhibition. These various approaches are highlighted in the review.
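The review's core claim, that physicochemical descriptors feed usefully into an ML classifier of CYP inhibition, can be sketched in a few lines. The descriptor set, the toy labeling rule, and the data below are assumptions, not the datasets or models the review surveys:

```python
# Illustrative sketch: a simple classifier predicting major CYP inhibition
# from molecular descriptors (MW, logP, TPSA, aromatic ring count).
# Descriptors and labels are synthetic assumptions for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 300
mw = rng.uniform(150, 600, n)        # molecular weight
logp = rng.uniform(-1, 6, n)         # lipophilicity
tpsa = rng.uniform(20, 140, n)       # topological polar surface area
rings = rng.integers(0, 5, n)        # aromatic ring count
X = np.column_stack([mw, logp, tpsa, rings])
# Toy label: large, lipophilic molecules tend to inhibit (a rough trend only).
y = ((mw > 350) & (logp > 2.5)).astype(int)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
train_acc = clf.score(X, y)
```

A real workflow would use curated inhibition data per isoform and richer structural fingerprints; this only shows the descriptor-to-label shape of the problem.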


Molecules ◽  
2019 ◽  
Vol 24 (15) ◽  
pp. 2747 ◽  
Author(s):  
Eliane Briand ◽  
Ragnar Thomsen ◽  
Kristian Linnet ◽  
Henrik Berg Rasmussen ◽  
Søren Brunak ◽  
...  

The human carboxylesterase 1 (CES1), responsible for the biotransformation of many diverse therapeutic agents, may contribute to the occurrence of adverse drug reactions and therapeutic failure through drug interactions. The present study addresses the issue of potential drug interactions resulting from the inhibition of CES1. Based on an ensemble of 10 crystal structures complexed with different ligands and a set of 294 known CES1 ligands, we used docking (AutoDock Vina) and machine learning methodologies (LDA, QDA and a multilayer perceptron), considering the different energy terms from the scoring function, to assess the best combination for identifying CES1 inhibitors. The protocol was then applied to a library of 1114 FDA-approved drugs, and eight drugs were selected for in vitro CES1 inhibition assays. An inhibitory effect was observed for diltiazem (IC50 = 13.9 µM). Three other drugs (benztropine, iloprost and treprostinil) exhibited weak CES1 inhibitory effects, with IC50 values of 298.2 µM, 366.8 µM and 391.6 µM, respectively. In conclusion, the binding site of CES1 is relatively flexible and can adapt its conformation to different types of ligands. Combining ensemble docking and machine learning approaches improves the prediction of CES1 inhibitors compared to a docking study using only one crystal structure.
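The paper's classification step, feeding per-pose energy terms into a linear classifier such as LDA, can be sketched as follows. The energy-term names and synthetic values are illustrative assumptions, not Vina's actual output for CES1 ligands:

```python
# Sketch: combine docking energy terms (averaged over a structure ensemble)
# and let LDA, one of the classifiers named in the study, separate inhibitors
# from non-inhibitors. Term names and data are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
n = 294                              # matches the known-ligand set size
# Columns stand in for steric, hydrophobic, and hydrogen-bond terms
# averaged over the 10-structure ensemble (illustrative only).
inhib = rng.normal([-2.0, -1.5, 0.5, -1.0, -0.8], 0.5, size=(n // 2, 5))
noninh = rng.normal([-1.0, -0.8, 0.8, -0.4, -0.2], 0.5, size=(n - n // 2, 5))
X = np.vstack([inhib, noninh])
y = np.array([1] * (n // 2) + [0] * (n - n // 2))

lda = LinearDiscriminantAnalysis().fit(X, y)
acc = lda.score(X, y)
```

Using the separate energy terms rather than the single composite docking score is what gives the classifier room to outperform a one-structure docking ranking.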


Author(s):  
Dhamanpreet Kaur ◽  
Matthew Sobiesk ◽  
Shubham Patil ◽  
Jin Liu ◽  
Puran Bhagat ◽  
...  

Abstract Objective This study seeks to develop a fully automated method of generating synthetic data from a real dataset that could be employed by medical organizations to distribute health data to researchers, reducing the need for access to real data. We hypothesize the application of Bayesian networks will improve upon the predominant existing method, medBGAN, in handling the complexity and dimensionality of healthcare data. Materials and Methods We employed Bayesian networks to learn probabilistic graphical structures and simulated synthetic patient records from the learned structure. We used the University of California Irvine (UCI) heart disease and diabetes datasets as well as the MIMIC-III diagnoses database. We evaluated our method through statistical tests, machine learning tasks, preservation of rare events, disclosure risk, and the ability of a machine learning classifier to discriminate between the real and synthetic data. Results Our Bayesian network model outperformed or equaled medBGAN in all key metrics. Notable improvement was achieved in capturing rare variables and preserving association rules. Discussion Bayesian networks generated data sufficiently similar to the original data with minimal risk of disclosure, while offering additional transparency, computational efficiency, and capacity to handle more data types in comparison to existing methods. We hope this method will allow healthcare organizations to efficiently disseminate synthetic health data to researchers, enabling them to generate hypotheses and develop analytical tools. Conclusion We conclude the application of Bayesian networks is a promising option for generating realistic synthetic health data that preserves the features of the original data without compromising data privacy.
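The core mechanism, learning a Bayesian network's conditional probability tables from real records and then drawing synthetic records by ancestral sampling, can be shown on a toy network. The three-variable structure, variable names, and data below are illustrative assumptions, not the study's learned structures or the UCI/MIMIC-III data:

```python
# Minimal sketch: fit CPTs of a tiny fixed-structure Bayesian network
# (age_group -> diagnosis -> lab_flag) by counting, then sample synthetic
# patient records from the learned CPTs. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(3)

# Toy "real" dataset: diagnosis depends on age, lab result on diagnosis.
n = 5000
age = rng.integers(0, 2, n)
diag = (rng.random(n) < np.where(age == 1, 0.6, 0.2)).astype(int)
lab = (rng.random(n) < np.where(diag == 1, 0.8, 0.1)).astype(int)

# "Learn" the conditional probability tables by counting.
p_age = age.mean()
p_diag = [diag[age == a].mean() for a in (0, 1)]
p_lab = [lab[diag == d].mean() for d in (0, 1)]

# Ancestral sampling: draw each variable given its sampled parents.
m = 5000
s_age = (rng.random(m) < p_age).astype(int)
s_diag = (rng.random(m) < np.take(p_diag, s_age)).astype(int)
s_lab = (rng.random(m) < np.take(p_lab, s_diag)).astype(int)
```

Because the synthetic rows are drawn from the learned distribution rather than copied from patients, conditional statistics are preserved while no real record is reproduced, the transparency and privacy property the abstract emphasizes. The real method additionally learns the graph structure from data rather than fixing it by hand.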


2020 ◽  
Vol 12 (7) ◽  
pp. 1218
Author(s):  
Laura Tuşa ◽  
Mahdi Khodadadzadeh ◽  
Cecilia Contreras ◽  
Kasra Rafiezadeh Shahi ◽  
Margret Fuchs ◽  
...  

Due to the extensive drilling performed every year in exploration campaigns for the discovery and evaluation of ore deposits, drill-core mapping is becoming an essential step. While valuable mineralogical information is extracted during core logging by on-site geologists, the process is time-consuming and dependent on the observer and their individual background. Hyperspectral short-wave infrared (SWIR) data are used in the mining industry as a tool to complement traditional logging techniques and to provide a rapid and non-invasive analytical method for mineralogical characterization. Additionally, Scanning Electron Microscopy-based image analyses using a Mineral Liberation Analyser (SEM-MLA) provide exhaustive high-resolution mineralogical maps, but can only be performed on small areas of the drill-cores. We propose to use machine learning algorithms to combine the two data types and upscale the quantitative SEM-MLA mineralogical data to drill-core scale. This way, quasi-quantitative maps over entire drill-core samples are obtained. Our upscaling approach increases result transparency and reproducibility by employing physics-based data acquisition (hyperspectral imaging) combined with mathematical models (machine learning). The procedure is tested on five drill-core samples with varying training data using random forests, support vector machines and neural network regression models. The obtained mineral abundance maps are further used for the extraction of mineralogical parameters such as mineral association.
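The upscaling step, regressing from SWIR spectra to SEM-MLA mineral abundances where both exist, then predicting abundances everywhere spectra exist, can be sketched with one of the paper's named regressors. Band count, mineral set, and spectra below are illustrative assumptions, not the study's sensors or samples:

```python
# Sketch: train a random forest to map SWIR-like spectra to mineral
# abundance vectors on the few SEM-MLA-measured pixels, then "upscale"
# by predicting abundances for unmeasured pixels. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
n_train, n_bands = 300, 50            # pixels co-registered with SEM-MLA
abund = rng.dirichlet([2.0, 2.0, 1.0], size=n_train)  # e.g. quartz, mica, kaolinite
endmembers = rng.random((3, n_bands))                 # toy endmember spectra
spectra = abund @ endmembers + 0.01 * rng.normal(size=(n_train, n_bands))

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(spectra, abund)

# Upscaling: predict abundance maps over the rest of the core.
new_abund = rng.dirichlet([2.0, 2.0, 1.0], size=1000)
new_spectra = new_abund @ endmembers + 0.01 * rng.normal(size=(1000, n_bands))
pred = rf.predict(new_spectra)
```

Because each tree averages training abundance vectors that sum to one, the predicted fractions remain a valid composition, which is convenient for the quasi-quantitative maps the abstract describes.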


Geophysics ◽  
2019 ◽  
Vol 84 (2) ◽  
pp. O39-O47 ◽  
Author(s):  
Ryan Smith ◽  
Tapan Mukerji ◽  
Tony Lupo

Predicting well production in unconventional oil and gas settings is challenging due to the combined influence of engineering, geologic, and geophysical inputs on well productivity. We have developed a machine-learning workflow that incorporates geophysical and geologic data, as well as engineering completion parameters, into a model for predicting well production. The study area is in southwest Texas in the lower Eagle Ford Group. We make use of a time-series method known as functional principal component analysis to summarize the well-production time series. Next, we use random forests, a machine-learning regression technique, in combination with our summarized well data to predict the full time series of well production. The inputs to this model are geologic, geophysical, and engineering data. We are then able to predict the well-production time series with 65%–76% accuracy. This method incorporates disparate data types into a robust, predictive model that predicts well production in unconventional resources.
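The two-stage workflow, summarizing each production curve with functional principal component scores, then regressing those scores on well inputs with a random forest, can be sketched as follows. The data, dimensions, and decline-curve model are illustrative assumptions; discrete PCA on sampled curves stands in for a full functional PCA:

```python
# Sketch: (1) compress production time series into a few PC scores,
# (2) predict the scores from geologic/engineering inputs with a random
# forest, (3) rebuild full predicted curves from the predicted scores.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
n_wells, n_months, n_inputs = 150, 36, 6
inputs = rng.normal(size=(n_wells, n_inputs))   # geologic + completion data
t = np.arange(n_months)
# Toy production: exponential decline whose level/rate depend on inputs.
level = 100 + 20 * inputs[:, 0]
decline = 0.05 + 0.02 * (inputs[:, 1] > 0)
prod = level[:, None] * np.exp(-decline[:, None] * t)

# Discrete stand-in for functional PCA on the centered curves.
mean_curve = prod.mean(axis=0)
U, S, Vt = np.linalg.svd(prod - mean_curve, full_matrices=False)
k = 3
scores = U[:, :k] * S[:k]

rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(inputs, scores)
pred_scores = rf.predict(inputs)
pred_prod = mean_curve + pred_scores @ Vt[:k]
```

Predicting a handful of scores instead of 36 monthly values is what makes the regression tractable while still returning a full time series per well.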


2021 ◽  
Author(s):  
Agata Blasiak ◽  
Anh TL Truong ◽  
Alexandria Remus ◽  
Lissa Hooi ◽  
Shirley Gek Kheng Seah ◽  
...  

Objectives: We aimed to harness IDentif.AI 2.0, a clinically actionable AI platform, to rapidly pinpoint and prioritize optimal combination therapy regimens against COVID-19. Methods: A pool of starting candidate therapies was developed in collaboration with a community of infectious disease clinicians and included EIDD-1931 (metabolite of EIDD-2801), baricitinib, ebselen, selinexor, masitinib, nafamostat mesylate, telaprevir (VX-950), SN-38 (metabolite of irinotecan), imatinib mesylate, remdesivir, lopinavir, and ritonavir. Following the initial drug pool assessment, a focused, 6-drug pool was interrogated at 3 dosing levels per drug, representing nearly 10,000 possible combination regimens. IDentif.AI 2.0 paired prospective, experimental validation of multi-drug efficacy on a SARS-CoV-2 live virus (propagated, original strain and B.1.351 variant) and Vero E6 assay with a quadratic optimization workflow. Results: Within 3 weeks, IDentif.AI 2.0 realized a list of combination regimens, ranked by efficacy, for clinical go/no-go regimen recommendations. IDentif.AI 2.0 revealed EIDD-1931 to be a strong candidate upon which multiple drug combinations can be derived. Conclusions: IDentif.AI 2.0 rapidly revealed promising drug combinations for clinical translation. It pinpointed dose-dependent drug synergy behavior that can inform trial design and help realize positive treatment outcomes. IDentif.AI 2.0 represents an actionable path towards rapidly optimizing combination therapy following pandemic emergence.
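The quadratic-optimization idea, fitting a second-order response surface of efficacy over drug doses and ranking dose combinations by the fitted surface, can be sketched for two drugs at three dosing levels. The dose grid, response model, and synergy term below are illustrative assumptions, not IDentif.AI data:

```python
# Sketch: fit a quadratic response surface (with an interaction term
# capturing dose-dependent synergy) to measured efficacies on a 3x3 dose
# grid, then rank dose pairs by the fitted surface. Data are synthetic.
import numpy as np

rng = np.random.default_rng(6)
doses = np.array([0.0, 1.0, 2.0])    # 3 dosing levels per drug
d1, d2 = np.meshgrid(doses, doses)
d1, d2 = d1.ravel(), d2.ravel()
# Toy measured efficacy: main effects + synergy (d1*d2) + curvature + noise.
eff = (10 * d1 + 8 * d2 + 6 * d1 * d2
       - 2 * d1**2 - 1.5 * d2**2 + rng.normal(0, 0.5, d1.size))

# Quadratic model: eff ~ b0 + b1*d1 + b2*d2 + b3*d1*d2 + b4*d1^2 + b5*d2^2
A = np.column_stack([np.ones_like(d1), d1, d2, d1 * d2, d1**2, d2**2])
coef, *_ = np.linalg.lstsq(A, eff, rcond=None)

pred = A @ coef
best = int(np.argmax(pred))
best_pair = (d1[best], d2[best])
```

The interaction coefficient (b3) is the fitted stand-in for the dose-dependent synergy the abstract highlights; with more drugs the same surface is fitted over a higher-dimensional dose grid, which is how a small experiment can rank thousands of unmeasured regimens.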

