Computational Prediction of the Isoform Specificity of Cytochrome P450 Substrates by an Improved Bayesian Method

2019
Author(s):
Hao Dai
Yu-Xi Zheng
Xiao-Qi Shan
Yan-Yi Chu
Wei Wang
...

Abstract Cytochrome P450 (CYP) is the most important family of drug-metabolizing enzymes in humans. Each CYP isoform can metabolize a large number of compounds, and if a patient takes more than one drug during treatment, some of those drugs may be metabolized by the same CYP isoform, leading to potential drug-drug interactions and side effects. It is therefore necessary to investigate the isoform specificity of CYP substrates. In this study, we constructed a data set consisting of 10 major CYP isoforms associated with 776 substrates, and used machine learning methods to build predictive models based on structural and physicochemical properties of the substrates. We also propose a new method, the Improved Bayesian method, which is suited to small data sets and yields more stable and accurate predictive models than traditional machine learning models. With this method, we achieved an accuracy of 86% on the independent test, significantly better than existing models. We believe that our proposed method will facilitate the understanding of drug metabolism and help the large-scale analysis of drug-drug interactions.
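The abstract does not spell out the Improved Bayesian method itself. As a rough illustration of the overall setup (predicting a substrate's metabolizing isoform from physicochemical descriptors), here is a minimal sketch using a standard Gaussian Naive Bayes classifier as a stand-in; all data shapes and labels below are synthetic, not the paper's actual data set:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)

# Synthetic stand-in: 776 substrates x 8 physicochemical descriptors
# (real features would be computed from substrate structures, e.g.
# logP, molecular weight, polar surface area), each labeled with one
# of 10 CYP isoforms.
X = rng.normal(size=(776, 8))
y = rng.integers(0, 10, size=776)
X += y[:, None] * 0.5  # inject a weak, learnable class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = GaussianNB().fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
print(f"independent-test accuracy: {acc:.2f}")
```

A generative Bayesian model of this kind estimates per-class feature distributions, which is one reason such approaches tend to remain stable on small data sets.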

2021
Author(s):
Norberto Sánchez-Cruz
Jose L. Medina-Franco

Epigenetic targets are a significant focus of drug discovery research, as demonstrated by the eight approved epigenetic drugs for the treatment of cancer and the increasing availability of chemogenomic data related to epigenetics. These data represent a large body of structure-activity relationships that has not yet been exploited for the development of predictive models to support medicinal chemistry efforts. Herein, we report the first large-scale study of 26,318 compounds with a quantitative measure of biological activity for 55 protein targets with epigenetic activity. Through a systematic comparison of machine learning models trained on molecular fingerprints of different design, we built highly accurate predictive models for the epigenetic target profiling of small molecules. The models were thoroughly validated, showing mean precisions up to 0.952 on the epigenetic target prediction task. Our results indicate that the models reported herein have considerable potential to identify small molecules with epigenetic activity, and they have therefore been implemented as a freely accessible and easy-to-use web application.
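The fingerprint designs compared in the study are not reproduced in the abstract. As a hedged sketch of the general approach (a random forest trained on binary fingerprint bits to predict activity against a single epigenetic target), with synthetic bit vectors standing in for real fingerprints:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: 2000 compounds x 64 fingerprint bits; activity
# against one hypothetical target depends on the first five bits.
X = rng.integers(0, 2, size=(2000, 64))
y = (X[:, :5].sum(axis=1) >= 3).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
prec = precision_score(y_te, clf.predict(X_te))
print(f"test precision: {prec:.3f}")
```

Target profiling across 55 targets would repeat this per target (or use a multi-label model) and report the mean precision, as the study does.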


2019
Vol 78 (5)
pp. 617-628
Author(s):
Erika Van Nieuwenhove
Vasiliki Lagou
Lien Van Eyck
James Dooley
Ulrich Bodenhofer
...

Objectives Juvenile idiopathic arthritis (JIA) is the most common class of childhood rheumatic diseases, with distinct disease subsets that may have diverging pathophysiological origins. Both adaptive and innate immune processes have been proposed as primary drivers, which may account for the observed clinical heterogeneity, but few high-depth studies have been performed.
Methods Here we profiled the adaptive immune system of 85 patients with JIA and 43 age-matched controls with in-depth flow cytometry and machine learning approaches.
Results Immune profiling identified immunological changes in patients with JIA. This immune signature was shared across a broad spectrum of childhood inflammatory diseases. The immune signature was identified in clinically distinct subsets of JIA, but was accentuated in patients with systemic JIA and in patients with active disease. Despite the extensive overlap in the immunological spectrum exhibited by healthy children and patients with JIA, machine learning analysis of the data set proved capable of discriminating patients with JIA from healthy controls with ~90% accuracy.
Conclusions These results pave the way for large-scale longitudinal immune-phenotyping studies of JIA. The ability to discriminate between patients with JIA and healthy individuals provides proof of principle for the use of machine learning to identify immune signatures that are predictive of treatment response group.
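The abstract does not name the specific classifier used. A minimal sketch of the discrimination task (patients versus controls from immune-subset features, scored with stratified cross-validation) might look like the following; the feature values here are synthetic stand-ins, and logistic regression is an assumption, not the paper's stated model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)
# Synthetic stand-in: 85 patients + 43 controls, 20 immune-subset
# frequencies per child, with a modest mean shift in the patient group.
n_pat, n_ctrl, n_feat = 85, 43, 20
X = np.vstack([rng.normal(loc=0.4, size=(n_pat, n_feat)),
               rng.normal(loc=0.0, size=(n_ctrl, n_feat))])
y = np.array([1] * n_pat + [0] * n_ctrl)

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

Stratified folds matter here because the classes are imbalanced (85 vs. 43), so each fold preserves the patient/control ratio.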


2021
Vol ahead-of-print (ahead-of-print)
Author(s):
Lam Hoang Viet Le
Toan Luu Duc Huynh
Bryan S. Weber
Bao Khac Quoc Nguyen

Purpose This paper aims to identify the disproportionate impacts of the COVID-19 pandemic on labor markets.
Design/methodology/approach The authors conduct a large-scale survey of 16,000 firms from 82 industries in Ho Chi Minh City, Vietnam, and analyze the data set using different machine-learning methods.
Findings First, job losses and reductions in state-owned enterprises have been significantly larger than in other types of organizations. Second, employees of foreign direct investment enterprises suffer significantly lower labor income than other groups. Third, the adverse effects of the COVID-19 pandemic on the labor market are heterogeneous across industries and geographies. Finally, firms with high revenue in 2019 are more likely to adopt preventive measures, including reductions in their labor forces. The authors also find a significant correlation between firms' revenue and labor reduction, as both traditional econometrics and machine-learning techniques suggest.
Originality/value This study has two main policy implications. First, although government support through taxes has been provided, the authors highlight evidence that there may be additional benefit in targeting firms with characteristics associated with layoffs or other negative labor responses. Second, the authors show which firm characteristics are associated with particular labor market responses such as layoffs, which may help target stimulus packages. Although the COVID-19 pandemic affects most industries and occupations, heterogeneous firm responses suggest that several varieties of targeted policies are possible: targeting firms that are likely to reduce their labor forces, or firms likely to face reduced revenue. In this paper, the authors outline several industries and firm characteristics that appear more directly linked to reduced employee counts or negative labor responses, which may lead to more cost-effective stimulus.


2017
Vol 36 (3)
pp. 267-269
Author(s):
Matt Hall
Brendon Hall

The Geophysical Tutorial in the October issue of The Leading Edge was the first we've done on the topic of machine learning. Brendon Hall's article ( Hall, 2016 ) showed readers how to take a small data set — wireline logs and geologic facies data from nine wells in the Hugoton natural gas and helium field of southwest Kansas ( Dubois et al., 2007 ) — and predict the facies in two wells for which the facies data were not available. The article demonstrated with 25 lines of code how to explore the data set, then create, train and test a machine learning model for facies classification, and finally visualize the results. The workflow took a deliberately naive approach using a support vector machine model. It achieved a sort of baseline accuracy rate — a first-order prediction, if you will — of 0.42. That might sound low, but it's not unusual for a naive approach to this kind of problem. For comparison, random draws from the facies distribution score 0.16, which is therefore the true baseline.
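The workflow described above, reduced to its skeleton (a deliberately naive support vector machine compared against a stratified random baseline), can be sketched as follows. The log features and facies labels here are synthetic stand-ins, not the Hugoton data:

```python
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-ins: 5 wireline-log features and 9 facies classes.
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 9, size=1000)
X += y[:, None] * 0.3  # inject a weak class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

svm = SVC().fit(X_tr, y_tr)  # naive: default hyperparameters, no tuning
dummy = DummyClassifier(strategy="stratified", random_state=0).fit(X_tr, y_tr)

svm_acc = svm.score(X_te, y_te)
base_acc = dummy.score(X_te, y_te)
print(f"SVM accuracy: {svm_acc:.2f}  random baseline: {base_acc:.2f}")
```

The `DummyClassifier` with the `stratified` strategy plays the role of "random draws from the facies distribution," giving the true baseline against which the SVM's first-order prediction should be judged.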


Author(s):  
Daniel Elton
Zois Boukouvalas
Mark S. Butrico
Mark D. Fuge
Peter W. Chung

We present a proof of concept that machine learning techniques can be used to predict the properties of CNOHF energetic molecules from their molecular structures. We focus on a small but diverse dataset consisting of 109 molecular structures spread across ten compound classes. Until now, candidate molecules for energetic materials have been screened using predictions from expensive quantum simulations and thermochemical codes. We present a comprehensive comparison of machine learning models and several molecular featurization methods: sum over bonds, custom descriptors, Coulomb matrices, bag of bonds, and fingerprints. The best featurization was sum over bonds (bond counting), and the best model was kernel ridge regression. Despite having a small data set, we obtain acceptable errors and Pearson correlations for the out-of-sample prediction of detonation pressure, detonation velocity, explosive energy, heat of formation, density, and other properties. By including another dataset with 309 additional molecules in our training, we show that the error can be pushed lower, although the convergence with the number of molecules is slow. Our work paves the way for future applications of machine learning in this domain, including automated lead generation and the interpretation of machine learning models to obtain novel chemical insights.
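As a hedged illustration of the best-performing combination named above (kernel ridge regression on sum-over-bonds, i.e. bond-count, features), the sketch below uses synthetic bond counts and a synthetic target property; the kernel choice and all numbers are assumptions for illustration only:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in: 109 molecules x 6 bond-count features (e.g.
# counts of C-H, C-C, C-N, N-O, C=O, O-H bonds) and a property that
# is roughly additive in the bond counts, plus noise.
X = rng.integers(0, 10, size=(109, 6)).astype(float)
w = np.array([1.0, -0.5, 2.0, 0.8, -1.2, 0.3])  # hypothetical weights
y = X @ w + rng.normal(scale=0.5, size=109)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
krr = KernelRidge(kernel="linear", alpha=0.1).fit(X_tr, y_tr)
r = np.corrcoef(y_te, krr.predict(X_te))[0, 1]
print(f"Pearson r on held-out molecules: {r:.3f}")
```

Bond counting works well here precisely because many energetic-material properties are approximately additive over bond contributions, which a (kernel) ridge model can capture with very few training molecules.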


2021
Author(s):
Benjamin Domingue
Dimiter Dimitrov

A recently developed framework of measurement, referred to as the Delta-scoring (or D-scoring) method (DSM; e.g., Dimitrov 2016, 2018, 2020), is gaining attention in the field of educational measurement and is widely used in large-scale assessments at the National Center for Assessment in Saudi Arabia. The D-scores obtained under the DSM range from 0 to 1 and indicate how much (what proportion) of the ability measured by a test of binary items the examinee demonstrates. This study examines whether the D-scale is an interval scale and how D-scores compare to IRT ability scores (thetas) in terms of intervalness, by testing the axioms of additive conjoint measurement (ACM). The testing approach is ConjointChecks (Domingue, 2014), which implements a Bayesian method for evaluating whether the axioms are violated in a given empirical item response data set. The results indicate that D-scores, computed under the DSM, produce fewer violations of the ordering axioms of ACM than do the IRT "theta" scores. The conclusion is that the DSM produces a dependable D-scale in terms of the essential property of intervalness.
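ConjointChecks itself implements a Bayesian test, which is not reproduced here. As a simplified, deterministic illustration of what an ACM ordering-axiom check does, the sketch below counts violations of single cancellation: in a person-group by item matrix of expected scores, the ordering of any two rows must be consistent across all columns, and likewise for columns. The matrices are toy examples:

```python
import numpy as np

def violates_single_cancellation(P):
    """Count pairs of rows (and pairs of columns) of the expected-score
    matrix P whose ordering flips somewhere, i.e. violates the single
    cancellation (independence) axiom of ACM."""
    viol = 0
    n_r, n_c = P.shape
    for i in range(n_r):
        for j in range(i + 1, n_r):
            d = P[i] - P[j]
            if (d > 0).any() and (d < 0).any():
                viol += 1
    for i in range(n_c):
        for j in range(i + 1, n_c):
            d = P[:, i] - P[:, j]
            if (d > 0).any() and (d < 0).any():
                viol += 1
    return viol

# An additive matrix f(row) + g(col) satisfies the axiom by construction.
row = np.array([0.1, 0.3, 0.5])
col = np.array([0.0, 0.2, 0.4])
additive = row[:, None] + col[None, :]
print(violates_single_cancellation(additive))  # → 0

# Swapping two entries creates crossing orderings and hence violations.
crossed = additive.copy()
crossed[0, 0], crossed[2, 0] = crossed[2, 0], crossed[0, 0]
print(violates_single_cancellation(crossed))
```

The study's comparison amounts to asking which scale, D-scores or thetas, yields an expected-score structure with fewer such violations, evaluated probabilistically rather than with this hard count.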


2019
Author(s):
Jihyeun Lee
Surendra Kumar
Sang-Yoon Lee
Sung Jean Park
Mi-hyun Kim

S100A9 is a potential therapeutic target for various diseases, including prostate cancer, colorectal cancer, and Alzheimer's disease. However, the sparsity of atomic-level data, such as the protein-protein interactions of S100A9 with MD2/TLR4/CD147, makes rational design of S100A9 inhibitors more challenging. Herein we report the first predictive models of S100A9 inhibitory effect, built by applying machine learning classifiers to 2D molecular descriptors. The models were optimized through both feature selectors and classifiers to produce the top eight random forest models with robust predictability as well as high cost-effectiveness. Notably, the optimal feature sets were obtained after reducing 2798 features to dozens of features by chopping fingerprint bits. In addition, the high efficiency of the compact feature sets allowed us to further screen a large-scale dataset (over 6,000,000 compounds) within a week. Through a consensus vote of the top models, 46 hits (hit rate = 0.000713%) were identified as potential S100A9 inhibitors. We expect that our models will facilitate the drug discovery process by providing high predictive power as well as cost reduction, and will give insights into the design of novel drugs targeting S100A9.
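The pipeline described above (feature reduction, several random forest models, and a consensus vote over a screening library) can be sketched as follows. All dimensions, the selector, and the label-generating rule are synthetic stand-ins rather than the paper's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
# Synthetic stand-in: 500 training compounds x 200 2D descriptors,
# with a binary inhibition label driven by a few descriptors.
X = rng.normal(size=(500, 200))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

# Reduce hundreds of descriptors to a compact feature set, then train
# several forests with different seeds.
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
X_sel = selector.transform(X)
models = [RandomForestClassifier(n_estimators=100, random_state=s).fit(X_sel, y)
          for s in range(3)]

# Screen a "library": hits are compounds every model calls active.
X_screen = rng.normal(size=(10000, 200))
votes = np.mean([m.predict(selector.transform(X_screen)) for m in models], axis=0)
hits = np.flatnonzero(votes == 1.0)
print(f"{hits.size} consensus hits out of {X_screen.shape[0]}")
```

Compact feature sets are what make million-compound screens tractable: the per-compound cost is dominated by computing and transforming descriptors, so cutting 2798 features to dozens pays off directly.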


2021
Vol 11 (4)
pp. 1529
Author(s):
Xiaohong Sun
Jinan Gu
Meimei Wang
Yanhua Meng
Huichao Shi

In the wheel hub industry, quality control of the product surface determines the subsequent processing, and it can be realized through hub defect image recognition based on deep learning. Although existing deep learning methods have reached human-level performance, they rely on large-scale training sets and are unable to cope with situations in which no samples of a defect class are available. Therefore, in this paper, a generalized zero-shot learning framework for hub defect image recognition was built. First, a reverse mapping strategy was adopted to reduce the hubness problem, then a domain adaptation measure was employed to alleviate the projection domain shift problem, and finally, a scaling calibration strategy was used to avoid the recognition preference for seen defects. The proposed model was validated on two data sets, VOC2007 and a self-built hub defect data set, and the results showed that the method performed better than current popular methods.
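The paper's scaling calibration strategy is not detailed in the abstract. A generic version of the idea (down-weighting seen-class scores before the argmax, so that classifiers trained only on seen defect classes do not systematically ignore unseen ones) looks like the following; the class scores and the factor gamma are toy values:

```python
import numpy as np

def calibrated_predict(scores, seen_mask, gamma=0.7):
    """Scale seen-class scores by gamma < 1 before taking the argmax,
    reducing the bias toward seen classes in generalized zero-shot
    recognition."""
    adj = scores * np.where(seen_mask, gamma, 1.0)
    return adj.argmax(axis=1)

# Toy scores for 4 defect classes (first 3 seen, last unseen); models
# trained on seen classes tend to over-score them.
scores = np.array([[0.5, 0.3, 0.1, 0.45],
                   [0.6, 0.2, 0.1, 0.58]])
seen = np.array([True, True, True, False])

print(calibrated_predict(scores, seen, gamma=1.0))  # uncalibrated: seen class wins
print(calibrated_predict(scores, seen, gamma=0.7))  # unseen class wins
```

In practice gamma would be tuned on a validation split so that accuracy on seen and unseen defects is balanced.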


Author(s):  
Álinson S. Xavier
Feng Qiu
Shabbir Ahmed

Security-constrained unit commitment (SCUC) is a fundamental problem in power systems and electricity markets. In practical settings, SCUC is repeatedly solved via mixed-integer linear programming (MIP), sometimes multiple times per day, with only minor changes in input data. In this work, we propose a number of machine learning techniques to effectively extract information from previously solved instances in order to significantly improve the computational performance of MIP solvers when solving similar instances in the future. Based on statistical data, we predict redundant constraints in the formulation, good initial feasible solutions, and affine subspaces where the optimal solution is likely to lie, leading to a significant reduction in problem size. Computational results on a diverse set of realistic and large-scale instances show that using the proposed techniques, SCUC can be solved on average 4.3 times faster with optimality guarantees and 10.2 times faster without optimality guarantees, with no observed reduction in solution quality. Out-of-distribution experiments provide evidence that the method is somewhat robust against data-set shift. Summary of Contribution. The paper describes a novel computational method, based on a combination of mixed-integer linear programming (MILP) and machine learning (ML), to solve a challenging and fundamental optimization problem in the energy sector. The method advances the state-of-the-art, not only for this particular problem, but also, more generally, in solving discrete optimization problems via ML. We expect that the techniques presented can be readily used by practitioners in the energy sector and adapted, by researchers in other fields, to other challenging operations research problems that are solved routinely.
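Of the three predictions described above (redundant constraints, initial feasible solutions, and affine subspaces), the warm-start idea is the easiest to sketch. The k-nearest-neighbor retrieval below is a hypothetical stand-in for the paper's actual predictor, with toy load profiles and commitment schedules:

```python
import numpy as np

def knn_warm_start(past_loads, past_solutions, new_load, k=3):
    """Predict an initial commitment schedule for a new SCUC instance
    by majority vote over the k previously solved instances whose load
    profiles are closest (Euclidean distance) to the new one."""
    d = np.linalg.norm(past_loads - new_load, axis=1)
    nearest = np.argsort(d)[:k]
    return (past_solutions[nearest].mean(axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(0)
n_hist, n_hours, n_units = 200, 24, 10
past_loads = rng.uniform(0.5, 1.5, size=(n_hist, n_hours))
# Toy "solutions": unit u is committed when mean load exceeds its threshold.
thresholds = np.linspace(0.7, 1.3, n_units)
past_solutions = (past_loads.mean(axis=1)[:, None] > thresholds).astype(int)

new_load = rng.uniform(0.5, 1.5, size=n_hours)
start = knn_warm_start(past_loads, past_solutions, new_load)
print("predicted commitment:", start)
```

A MIP solver would receive this 0/1 vector as a starting feasible solution (or as a hint); because consecutive days have only minor changes in input data, such retrieved solutions are often close to optimal.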

