Modelling of S&P 500 Index Price Based on U.S. Economic Indicators: Machine Learning Approach

2021 ◽  
Vol 32 (4) ◽  
pp. 362-375
Author(s):  
Ligita Gasparėnienė ◽  
Rita Remeikiene ◽  
Aleksejus Sosidko ◽  
Vigita Vėbraitė

In order to forecast stock prices based on economic indicators, many studies have been conducted using well-known statistical methods. Since around 2010, as computing power has improved, new machine learning methods have also come into use. It is therefore interesting to examine how well these algorithms, which draw on a variety of mathematical and statistical techniques, can predict the stock market. The purpose of this article is to model the monthly price of the S&P 500 index based on U.S. economic indicators using statistical, machine learning, and deep learning approaches, and to compare the metrics of the resulting models. After indicator selection based on data visualization, multicollinearity tests, and statistical significance tests, 3 of the 27 indicators remained. The main finding of the research is that the authors improved on the baseline statistical linear regression model by 19 percent using a Random Forest machine learning algorithm; the resulting model predicted the S&P 500 index with an accuracy of 97.68%.
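The comparison described above can be illustrated with a minimal sketch: fit a linear regression baseline and a Random Forest on the selected monthly indicators and compare their out-of-sample errors. The data file and column names below are hypothetical placeholders, not the paper's actual indicators or preprocessing.

```python
# Minimal sketch of the baseline-vs-Random-Forest comparison described above.
# The CSV file and column names are hypothetical; the paper's exact indicators
# and validation scheme are not reproduced here.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("monthly_indicators.csv")                # hypothetical input file
X = df[["indicator_1", "indicator_2", "indicator_3"]]     # the 3 selected indicators
y = df["sp500_close"]

# Keep the chronological order of the monthly observations when splitting.
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle=False, test_size=0.2)

baseline = LinearRegression().fit(X_train, y_train)
forest = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_train, y_train)

for name, model in [("linear regression", baseline), ("random forest", forest)]:
    mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
    print(f"{name}: accuracy ~ {100 * (1 - mape):.2f}%")
```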

2020 ◽  
Author(s):  
Prashant Kumar ◽  
Paul A. Adamczyk ◽  
Xiaolin Zhang ◽  
Ramon Bonela Andrade ◽  
Philip A. Romero ◽  
...  

Abstract: In order to make renewable fuels and chemicals from microbes, new methods are required to engineer microbes more intelligently. Computational approaches to engineering strains for enhanced chemical production typically rely on detailed mechanistic models (e.g., kinetic/stoichiometric models of metabolism), which require many experimental datasets for their parameterization, while experimental methods may require screening large mutant libraries to explore the design space for the few mutants with desired behaviors. To address these limitations, we developed an active and machine learning approach (ActiveOpt) to intelligently guide experiments toward an optimal phenotype with minimal measured datasets. ActiveOpt was applied to two separate case studies to evaluate its potential to increase valine yields and neurosporene productivity in Escherichia coli. In both cases, ActiveOpt identified the best-performing strain in fewer experiments than the original case studies used. This work demonstrates that machine and active learning approaches have the potential to greatly facilitate metabolic engineering efforts and help them achieve their objectives rapidly.
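The core idea of experiment-guiding approaches like ActiveOpt is an active-learning loop: fit a model to the measurements made so far, pick the next design it judges most promising, measure it, and repeat. The sketch below is a generic loop using a Gaussian-process surrogate; it is not the ActiveOpt algorithm itself, and the design space, measurement function, and acquisition rule are all assumptions for illustration.

```python
# Generic active-learning loop in the spirit of guided strain optimization.
# ActiveOpt's actual features and selection rule are described in the paper;
# everything below (designs, measurement, acquisition) is a placeholder.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

def measure_yield(design):
    """Placeholder for a wet-lab measurement, e.g. valine yield of a strain."""
    return -np.sum((design - 0.6) ** 2) + rng.normal(scale=0.01)

candidates = rng.uniform(size=(200, 4))                 # hypothetical strain designs
measured_X = [candidates[0]]
measured_y = [measure_yield(candidates[0])]

for _ in range(10):                                     # small experimental budget
    model = GaussianProcessRegressor().fit(measured_X, measured_y)
    mean, std = model.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + std)]             # most promising untried design
    measured_X.append(nxt)
    measured_y.append(measure_yield(nxt))

best = measured_X[int(np.argmax(measured_y))]
print("best design found:", best, "with yield:", max(measured_y))
```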


Author(s):  
Nabil Belacel ◽  
Miroslava Cuperlovic-Culf

Early and accurate Alzheimer’s disease (AD) diagnosis remains a challenge. Recently, increasing efforts have been focused on utilizing metabolomics data to discover biomarkers for screening and diagnosis of AD. Several machine learning approaches have been explored for classifying the blood metabolomics profiles of cognitively healthy subjects and AD patients. Differentiation between AD, mild cognitive impairment (MCI), and cognitively healthy subjects remains difficult. In this paper, we propose a new machine learning approach for selecting a subset of features that improves classification rates between these three levels of cognitive disorder. Our experimental results demonstrate that using these selected metabolic markers improves the performance of several classifiers compared to the classification accuracy obtained on the complete metabolomics dataset. The results indicate that our algorithms are effective in discovering a panel of AD and MCI biomarkers from metabolomics data, suggesting the possibility of developing a noninvasive blood-based diagnostic test for AD and MCI.
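The evaluation pattern described here, comparing classifiers trained on all metabolites versus on a selected panel, can be sketched as follows. The paper's own selection algorithm is not reproduced; a generic univariate filter stands in for it, and the data file and column names are hypothetical.

```python
# Sketch: compare classifier accuracy on the full metabolomics profile versus a
# selected metabolite panel (AD / MCI / healthy labels). Feature selection here
# is a generic filter, not the paper's method; the dataset is hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("blood_metabolomics.csv")                  # hypothetical dataset
X = df.drop(columns="diagnosis")
y = df["diagnosis"]                                         # labels: AD / MCI / healthy

for name, clf in [("SVM", SVC()), ("random forest", RandomForestClassifier())]:
    full = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5).mean()
    panel = cross_val_score(
        make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), clf),
        X, y, cv=5,
    ).mean()
    print(f"{name}: all metabolites {full:.3f} vs selected panel {panel:.3f}")
```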


2017 ◽  
Author(s):  
Sabrina Jaeger ◽  
Simone Fulle ◽  
Samo Turk

Inspired by natural language processing techniques, we here introduce Mol2vec, an unsupervised machine learning approach to learn vector representations of molecular substructures. Similarly to Word2vec models, where vectors of closely related words lie close together in the vector space, Mol2vec learns vector representations of molecular substructures that point in similar directions for chemically related substructures. Compounds can finally be encoded as vectors by summing up the vectors of their individual substructures and, for instance, fed into supervised machine learning approaches to predict compound properties. The underlying substructure vector embeddings are obtained by training an unsupervised machine learning approach on a so-called corpus of compounds that consists of all available chemical matter. The resulting Mol2vec model is pre-trained once, yields dense vector representations, and overcomes drawbacks of common compound feature representations such as sparseness and bit collisions. The prediction capabilities are demonstrated on several compound property and bioactivity data sets and compared with results obtained for Morgan fingerprints as a reference compound representation. Mol2vec can be easily combined with ProtVec, which applies the same Word2vec concept to protein sequences, resulting in a proteochemometric approach that is alignment-independent and can thus also be easily used for proteins with low sequence similarities.
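The Mol2vec idea can be sketched with a Word2vec model trained on "sentences" of substructure identifiers, after which a compound vector is simply the sum of its substructure vectors. The toy corpus below is a made-up placeholder; in Mol2vec the identifiers come from the Morgan algorithm applied to a large compound collection.

```python
# Hedged sketch of the Mol2vec idea: train Word2vec on substructure-identifier
# "sentences", then sum the substructure vectors to encode a compound.
# The corpus below is a tiny illustrative placeholder, not real chemical data.
import numpy as np
from gensim.models import Word2Vec

corpus = [
    ["sub_1", "sub_7", "sub_12", "sub_7"],   # compound 1 as substructure IDs
    ["sub_3", "sub_7", "sub_9"],             # compound 2
    ["sub_1", "sub_12", "sub_5", "sub_9"],   # compound 3
]

model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

def compound_vector(substructures, model):
    """Encode a compound as the sum of its substructure embeddings."""
    return np.sum([model.wv[s] for s in substructures if s in model.wv], axis=0)

vec = compound_vector(corpus[0], model)
print(vec.shape)   # (100,) dense vector, ready to feed into a supervised model
```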


2019 ◽  
Author(s):  
Oskar Flygare ◽  
Jesper Enander ◽  
Erik Andersson ◽  
Brjánn Ljótsson ◽  
Volen Z Ivanov ◽  
...  

**Background:** Previous attempts to identify predictors of treatment outcomes in body dysmorphic disorder (BDD) have yielded inconsistent findings. One way to increase precision and clinical utility could be to use machine learning methods, which can incorporate multiple non-linear associations in prediction models. **Methods:** This study used a random forests machine learning approach to test whether it is possible to reliably predict remission from BDD in a sample of 88 individuals who had received internet-delivered cognitive behavioral therapy for BDD. The random forest models were compared to traditional logistic regression analyses. **Results:** Random forests correctly identified 78% of participants as remitters or non-remitters at post-treatment. The accuracy of prediction was lower at subsequent follow-ups (68%, 66%, and 61% correctly classified at the 3-, 12-, and 24-month follow-ups, respectively). Depressive symptoms, treatment credibility, working alliance, and initial severity of BDD were among the most important predictors at the beginning of treatment. By contrast, the logistic regression models did not identify consistent and strong predictors of remission from BDD. **Conclusions:** The results provide initial support for the clinical utility of machine learning approaches in predicting outcomes of patients with BDD. **Trial registration:** ClinicalTrials.gov ID: NCT02010619.
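The comparison reported here, random forests versus logistic regression plus a ranking of predictor importance, can be sketched as below. The predictor names and data file are hypothetical and do not reproduce the study's variables or its validation scheme.

```python
# Hedged sketch of the random-forest vs logistic-regression comparison for
# predicting remission from BDD. Data file and predictor names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("bdd_treatment.csv")                       # hypothetical data, n = 88
X = df[["depressive_symptoms", "treatment_credibility",
        "working_alliance", "baseline_bdd_severity"]]
y = df["remitted_post_treatment"]

rf = RandomForestClassifier(n_estimators=1000, random_state=0)
lr = LogisticRegression(max_iter=1000)

print("random forest accuracy:", cross_val_score(rf, X, y, cv=5).mean())
print("logistic regression accuracy:", cross_val_score(lr, X, y, cv=5).mean())

# Variable importances give the kind of predictor ranking reported in the paper.
rf.fit(X, y)
for name, imp in sorted(zip(X.columns, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```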


2021 ◽  
Vol 23 (4) ◽  
pp. 2742-2752
Author(s):  
Tamar L. Greaves ◽  
Karin S. Schaffarczyk McHale ◽  
Raphael F. Burkart-Radke ◽  
Jason B. Harper ◽  
Tu C. Le

Machine learning models were developed for an organic reaction in ionic liquids and validated on a selection of ionic liquids.


Author(s):  
Jeffrey G Klann ◽  
Griffin M Weber ◽  
Hossein Estiri ◽  
Bertrand Moal ◽  
Paul Avillach ◽  
...  

Abstract **Introduction:** The Consortium for Clinical Characterization of COVID-19 by EHR (4CE) is an international collaboration addressing COVID-19 with federated analyses of electronic health record (EHR) data. **Objective:** We sought to develop and validate a computable phenotype for COVID-19 severity. **Methods:** Twelve 4CE sites participated. First, we developed an EHR-based severity phenotype consisting of six code classes, and we validated it on patient hospitalization data from the 12 4CE clinical sites against the outcomes of ICU admission and/or death. We also piloted an alternative machine-learning approach and compared selected predictors of severity to the 4CE phenotype at one site. **Results:** The full 4CE severity phenotype had a pooled sensitivity of 0.73 and specificity of 0.83 for the combined outcome of ICU admission and/or death. The sensitivity of individual code categories for acuity varied considerably across sites, by up to 0.65. At one pilot site, the expert-derived phenotype had a mean AUC of 0.903 (95% CI: 0.886, 0.921), compared to an AUC of 0.956 (95% CI: 0.952, 0.959) for the machine-learning approach. Billing codes were poor proxies for ICU admission, with precision and recall as low as 49% compared to chart review. **Discussion:** We developed a severity phenotype using six code classes that proved resilient to coding variability across international institutions. In contrast, machine-learning approaches may overfit hospital-specific orders. Manual chart review revealed discrepancies even in the gold-standard outcomes, possibly due to heterogeneous pandemic conditions. **Conclusion:** We developed an EHR-based severity phenotype for COVID-19 in hospitalized patients and validated it at 12 international sites.
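A code-class-based phenotype of this kind is essentially a rule: flag a hospitalization as severe if any of its codes falls into one of the defined classes, then score the flag against the ICU/death outcome. The sketch below illustrates that evaluation; the six 4CE code classes are defined in the paper, and the sets and patient records here are hypothetical placeholders.

```python
# Hedged sketch: evaluate a code-class severity phenotype against an
# ICU-admission/death outcome. Code classes and patient records are placeholders.
SEVERITY_CODE_CLASSES = {
    "icu_procedure": {"proc_A", "proc_B"},
    "ventilation":   {"proc_C"},
    # ... four further code classes in the actual 4CE phenotype
}

def is_severe(patient_codes):
    """Flag a hospitalization as severe if any code falls in a severity class."""
    return any(patient_codes & codes for codes in SEVERITY_CODE_CLASSES.values())

def sensitivity_specificity(patients):
    tp = sum(is_severe(p["codes"]) and p["icu_or_death"] for p in patients)
    fn = sum(not is_severe(p["codes"]) and p["icu_or_death"] for p in patients)
    tn = sum(not is_severe(p["codes"]) and not p["icu_or_death"] for p in patients)
    fp = sum(is_severe(p["codes"]) and not p["icu_or_death"] for p in patients)
    return tp / (tp + fn), tn / (tn + fp)

patients = [
    {"codes": {"proc_A", "lab_X"}, "icu_or_death": True},
    {"codes": {"lab_Y"}, "icu_or_death": False},
]
print(sensitivity_specificity(patients))
```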


2021 ◽  
Vol 7 (1) ◽  
pp. 16-19
Author(s):  
Owes Khan ◽  
Geri Shahini ◽  
Wolfram Hardt

Automotive technologies are ever-increasingly becoming digital. Highly autonomous driving, together with digital E/E control mechanisms, includes thousands of software applications, which are called software components. Together with industry requirements and rigorous software development processes, mapping components into a software pool becomes very difficult. This article analyses and discusses the possibilities of integrating machine learning approaches into our previously introduced concept of mapping software components through a common software pool.


Catalysts ◽  
2020 ◽  
Vol 10 (3) ◽  
pp. 291 ◽  
Author(s):  
Anamya Ajjolli Nagaraja ◽  
Philippe Charton ◽  
Xavier F. Cadet ◽  
Nicolas Fontaine ◽  
Mathieu Delsaut ◽  
...  

The metabolic engineering of pathways has been used extensively to produce molecules of interest on an industrial scale. Methods like gene regulation or substrate channeling have helped to improve the desired product yield. Cell-free systems are used to overcome the weaknesses of engineered strains. One of the challenges in a cell-free system is selecting the optimized enzyme concentrations for optimal yield. Here, a machine learning approach is used to select the enzyme concentrations for the upper part of glycolysis. The artificial neural network (ANN) approach is known to be inefficient at extrapolating predictions outside the box: high predicted values will bump into a sort of “glass ceiling”. In order to explore this “glass ceiling” space, we developed a new methodology named glass ceiling ANN (GC-ANN). Principal component analysis (PCA) and data classification methods are used to derive a rule for a high flux, and an ANN is used to predict the flux through the pathway from input data consisting of 121 balances of four enzymes in the upper part of glycolysis. The outcomes of this study are (i) in silico selection of optimum enzyme concentrations for a maximum flux through the pathway and (ii) experimental in vitro validation of the “out-of-the-box” fluxes predicted using this new approach. Surprisingly, flux improvements of up to 63% were obtained. Gratifyingly, these improvements are coupled with a cost decrease of up to 25% for the assay.
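The flux-prediction step can be sketched as an ANN regressor trained on enzyme-concentration balances, together with a PCA of the inputs as used for the rule derivation. The GC-ANN extrapolation strategy itself is specific to the paper and is not reproduced here; the data file, enzyme column names, and network size below are assumptions.

```python
# Hedged sketch: ANN regression of pathway flux from four enzyme concentrations,
# plus a PCA of the inputs. Data file, column names, and hyperparameters are
# hypothetical; the GC-ANN methodology itself is not reproduced.
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("glycolysis_balances.csv")       # 121 enzyme balances (hypothetical file)
X = df[["enzyme_1", "enzyme_2", "enzyme_3", "enzyme_4"]]
y = df["flux"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ann = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
)
ann.fit(X_train, y_train)
print("held-out R^2:", ann.score(X_test, y_test))

# PCA of the enzyme balances, used in the paper together with classification
# to derive a rule characterizing high-flux regions of the design space.
pcs = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(X))
print(pcs[:3])
```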


2020 ◽  
Vol 5 (8) ◽  
pp. 62
Author(s):  
Clint Morris ◽  
Jidong J. Yang

Generating meaningful inferences from crash data is vital to improving highway safety. Classic statistical methods are fundamental to crash data analysis and are often valued for their interpretability. However, given the complexity of crash mechanisms and the associated heterogeneity, classic statistical methods, which lack versatility, might not be sufficient for granular crash analysis because of the high-dimensional features involved in crash-related data. In contrast, machine learning approaches, which are more flexible in structure and capable of harnessing the richer data sources available today, emerge as a suitable alternative. With the aid of new methods for model interpretation, complex machine learning models, previously considered enigmatic, can be properly interpreted. In this study, two machine learning techniques, Linear Discriminant Analysis and eXtreme Gradient Boosting, were explored to classify three major types of multi-vehicle crashes (i.e., rear-end, same-direction sideswipe, and angle) that occurred on Interstate 285 in Georgia. The study demonstrated the utility and versatility of modern machine learning methods in the context of crash analysis, particularly in understanding the potential features underlying different crash patterns on freeways.
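The two classifiers named above can be applied to a three-class crash-type problem roughly as sketched below. The feature set and data file are hypothetical and do not reproduce the study's variables or tuning.

```python
# Hedged sketch: Linear Discriminant Analysis and XGBoost applied to a
# three-class crash-type problem (rear-end, sideswipe, angle). The data file,
# features, and hyperparameters are hypothetical.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("i285_crashes.csv")                        # hypothetical data
X = df.drop(columns="crash_type")
y = df["crash_type"].map({"rear_end": 0, "sideswipe": 1, "angle": 2})

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
xgb = XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.1).fit(X_train, y_train)

print("LDA accuracy:", lda.score(X_test, y_test))
print("XGBoost accuracy:", xgb.score(X_test, y_test))

# Feature importances from the boosted model hint at which features underlie
# the different crash patterns.
print(dict(zip(X.columns, xgb.feature_importances_)))
```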

