Deep Reinforcement Learning Agent for S&P 500 Stock Selection

Axioms, 2020, Vol. 9 (4), p. 130
Author(s): Tommi Huotari, Jyrki Savolainen, Mikael Collan

This study investigated the performance of a trading agent based on a convolutional neural network model in portfolio management. The results showed that with real-world data the agent could produce relevant trading results, while its behavior corresponded to that of a high-risk taker. The data set was broad in comparison with earlier reported research: it covered the full set of S&P 500 stock data for twenty-one years, supplemented with selected financial ratios. The results presented are new in terms of both the size of the data set and the model used. They provide direction and offer insight into how deep learning methods may be used in constructing automatic trading systems.
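As an illustration of the kind of architecture such a study involves, the following is a minimal sketch of a convolutional network that maps a window of per-stock features to portfolio weights. The feature count, window length, layer sizes, and softmax allocation head are assumptions made for this example, not the authors' actual model.

```python
# A hedged sketch: a CNN scoring each stock from a lookback window of features
# (e.g. prices plus financial ratios) and converting scores to portfolio weights.
import torch
import torch.nn as nn

N_STOCKS, N_FEATURES, WINDOW = 500, 4, 30   # assumed sizes, for illustration only

class PortfolioCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(N_FEATURES, 8, kernel_size=(1, 5)), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=(1, WINDOW - 4)), nn.ReLU(),
        )
        self.head = nn.Conv2d(16, 1, kernel_size=1)   # one score per stock

    def forward(self, x):                 # x: (batch, features, stocks, window)
        scores = self.head(self.conv(x))  # -> (batch, 1, stocks, 1)
        # Softmax over stocks yields long-only weights that sum to 1.
        return torch.softmax(scores.squeeze(-1).squeeze(1), dim=-1)

model = PortfolioCNN()
weights = model(torch.randn(2, N_FEATURES, N_STOCKS, WINDOW))
print(weights.shape, weights.sum(dim=-1))   # (2, 500), each row sums to 1
```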

2020
Author(s): Nikolas Popper, Melanie Zechmeister, Dominik Brunmeir, Claire Rippinger, Nadine Weibrecht, ...

Abstract: We generate synthetic data documenting COVID-19 cases in Austria by means of an agent-based simulation model. The model simulates the transmission of the SARS-CoV-2 virus in a statistical replica of the population and reproduces typical patient pathways on an individual basis, while simultaneously integrating historical data on the implementation and expiration of population-wide countermeasures. The resulting data align semantically and statistically with an official epidemiological case-reporting data set and provide an easily accessible, consistent and augmented alternative. Our synthetic data set offers additional insight into the spread of the epidemic by synthesizing information that cannot be recorded in reality.
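To make the mechanism concrete, here is a toy agent-based sketch in which individual agents change infection state through random contacts and new infections are logged as a synthetic case-reporting series. Population size, contact counts, and probabilities are invented for illustration; the authors' model is far richer, with individual patient pathways and policy timelines.

```python
# Toy agent-based epidemic with a synthetic "reported cases" log (assumed parameters).
import numpy as np

rng = np.random.default_rng(42)
N, DAYS, CONTACTS = 10_000, 120, 8
P_INFECT, P_RECOVER = 0.03, 0.1
state = np.zeros(N, dtype=np.int8)                 # 0 susceptible, 1 infected, 2 recovered
state[rng.choice(N, size=10, replace=False)] = 1   # seed infections

daily_cases = []
for day in range(DAYS):
    infected = np.flatnonzero(state == 1)
    # Each infected agent meets CONTACTS random agents; susceptible contacts
    # become infected with probability P_INFECT.
    contacts = rng.integers(0, N, size=infected.size * CONTACTS)
    new = np.unique(contacts[(state[contacts] == 0)
                             & (rng.random(contacts.size) < P_INFECT)])
    state[new] = 1
    daily_cases.append(new.size)                   # the synthetic case record
    # Agents infected at the start of the day recover with probability P_RECOVER.
    state[infected[rng.random(infected.size) < P_RECOVER]] = 2

print("peak daily cases:", max(daily_cases), "on day", int(np.argmax(daily_cases)))
```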


2017, Vol. 23 (1/2), pp. 46-65
Author(s): Dinuka Herath, Joyce Costello, Fabian Homberg

Purpose: This paper aims to simulate how “disorganization” affects team problem solving. The prime objective is to determine how team problem solving varies between organized and disorganized environments, also considering motivational aspects. Design/methodology/approach: Using agent-based modeling, the authors draw on a real-world data set of 226 volunteers at five different types of non-profit organizations in Southwest England to define some attributes of the agents. The authors introduce the concepts of natural, structural and functional disorganization, and operationalize natural and functional disorganization. Findings: The simulations show that “disorganization” is more conducive to problem-solving efficiency than “organization”, given enough flexibility (range) to search for and acquire resources. The findings further demonstrate that teams with access to resources above their hierarchical level (i.e., better-quality resources) tend to perform better than teams with only limited access to resources. Originality/value: The nuanced categories of “(dis)organization” allow comparisons between various structural limitations, generating insights for improving the way managers structure teams for better problem solving.


2021, pp. 1-13
Author(s): Hailin Liu, Fangqing Gu, Zixian Lin

Transfer learning methods exploit similarities between different datasets to improve performance on a target task by transferring knowledge from source tasks. “What to transfer” is a main research issue in transfer learning. Existing transfer learning methods generally need to acquire the shared parameters by integrating human knowledge. However, in many real applications, which parameters can be shared is unknown beforehand. A transfer learning model is essentially a special multi-objective optimization problem. Consequently, this paper proposes a novel auto-sharing parameter technique for transfer learning based on multi-objective optimization and solves the optimization problem with a multi-swarm particle swarm optimizer. Each task objective is optimized by its own sub-swarm. The current best particle of the target task's sub-swarm is used to guide the search of the source tasks' particles, and vice versa. The target and source tasks are thus solved jointly by sharing the information of the best particles, which acts as an inductive bias. Experiments on several synthetic data sets and two real-world data sets (a school data set and a landmine data set) show that the proposed algorithm is effective.
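A minimal sketch of the multi-swarm scheme described above: two sub-swarms each optimize one task objective, and each swarm's velocity update includes an extra attraction toward the other swarm's best particle. The objective functions, coefficients, and the exact form of the cross-swarm guidance term are illustrative assumptions, not the paper's implementation.

```python
# Two-swarm PSO where each swarm's best particle guides the other swarm.
import numpy as np

rng = np.random.default_rng(0)
DIM, N, STEPS = 5, 20, 200

def target_loss(w):   # stand-in for the target-task objective
    return np.sum((w - 1.0) ** 2)

def source_loss(w):   # stand-in for a related source-task objective
    return np.sum((w - 1.2) ** 2)

def make_swarm():
    pos = rng.normal(size=(N, DIM))
    return {"pos": pos, "vel": np.zeros((N, DIM)),
            "pbest": pos.copy(), "pbest_val": np.full(N, np.inf)}

def step(swarm, loss, other_gbest, w=0.7, c1=1.4, c2=1.4, c3=0.6):
    vals = np.array([loss(p) for p in swarm["pos"]])
    improved = vals < swarm["pbest_val"]
    swarm["pbest"][improved] = swarm["pos"][improved]
    swarm["pbest_val"][improved] = vals[improved]
    gbest = swarm["pbest"][np.argmin(swarm["pbest_val"])]
    r1, r2, r3 = rng.random((3, N, DIM))
    swarm["vel"] = (w * swarm["vel"]
                    + c1 * r1 * (swarm["pbest"] - swarm["pos"])
                    + c2 * r2 * (gbest - swarm["pos"])
                    + c3 * r3 * (other_gbest - swarm["pos"]))  # cross-task guidance
    swarm["pos"] += swarm["vel"]
    return gbest

tgt, src = make_swarm(), make_swarm()
g_tgt = g_src = np.zeros(DIM)
for _ in range(STEPS):
    g_tgt = step(tgt, target_loss, g_src)   # source best guides target swarm
    g_src = step(src, source_loss, g_tgt)   # target best guides source swarm
print("target best value:", tgt["pbest_val"].min())
```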


2019, Vol. 5 (1), pp. 444-467
Author(s): Katherine A. Crawford

Abstract: Ostia, the ancient port of Rome, had a rich religious landscape. How processional rituals further contributed to this landscape, however, has seen little consideration. This is largely due to a lack of evidence that attests to the routes taken by processional rituals. The present study aims to address existing problems in studying processions by questioning what factors motivated processional movement routes. A novel computational approach that integrates GIS, urban network analysis, and agent-based modelling is introduced. This multi-layered approach is used to question how spectators served as attractors in the creation of a processional landscape, using Ostia’s Campo della Magna Mater as a case study. The analysis of these results is subsequently used to gain new insight into how a greater processional landscape was created surrounding the sanctuary of the Magna Mater.


2021, pp. 004912412098618
Author(s): Tim de Leeuw, Steffen Keijl

Although multiple organizational-level databases are frequently combined into one data set, there is no overview of the matching methods (MMs) that are utilized, because the vast majority of studies do not report how this was done. Furthermore, it is unclear what the differences between the utilized methods are, and whether research findings might be influenced by the method chosen. This article describes four commonly used methods for matching databases and their potential issues. An empirical comparison of these methods, applied to regularly used organizational-level databases, reveals large differences in the number of observations obtained. Furthermore, empirical analyses show that several of the methods produce both systematic and random errors. These errors can result in erroneous estimations of regression coefficients in terms of direction and/or size, as well as cases in which truly significant relationships are found to be insignificant. This shows that research findings can be influenced by the MM used, which argues in favor of establishing a preferred method as well as more transparency about the utilized method in future studies. This article provides insight into the matching process and methods, suggests a preferred method, and should aid researchers, reviewers, and editors with both combining multiple databases and describing and assessing them.
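To see why the choice of matching method matters, consider this small illustration of exact name matching versus matching on a normalized key. The firm names and normalization rules are invented for the example; the article itself compares four methods on real organizational-level databases.

```python
# Exact vs. normalized-key matching of two toy firm-level tables (invented data).
import re
import pandas as pd

db_a = pd.DataFrame({"name": ["Acme Corp.", "Globex, Inc.", "Initech LLC"],
                     "sales": [100, 250, 80]})
db_b = pd.DataFrame({"name": ["ACME Corporation", "Globex Inc", "Initech LLC"],
                     "employees": [20, 55, 12]})

def normalize(name):
    """Lowercase, drop punctuation, and strip common legal suffixes."""
    s = re.sub(r"[^\w\s]", "", name.lower())
    return re.sub(r"\b(corp|corporation|inc|llc)\b", "", s).strip()

exact = db_a.merge(db_b, on="name")        # only 1 of 3 firms matches exactly
for df in (db_a, db_b):
    df["key"] = df["name"].map(normalize)
normalized = db_a.merge(db_b, on="key")    # all 3 firms match on the cleaned key
print(len(exact), len(normalized))         # -> 1 3
```

The point of the illustration: a stricter method silently drops observations (systematic error), while a looser one risks false matches, and either choice can shift downstream regression estimates.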


2017, Vol. 29 (2), pp. 375-383
Author(s): K. L. Ong, D. P. Beall, M. Frohbergh, E. Lau, J. A. Hirsch

Abstract Summary: The 5-year period following 2009 saw a steep reduction in vertebral augmentation volume and was associated with elevated mortality risk in vertebral compression fracture (VCF) patients. The risk of mortality following a VCF diagnosis was 85.1% at 10 years and was found to be lower for balloon kyphoplasty (BKP) and vertebroplasty (VP) patients. Introduction: BKP and VP are associated with lower mortality risks than non-surgical management (NSM) of VCF. VP-versus-sham trials published in 2009 sparked controversy over the procedure's effectiveness, leading to diminished referral volumes. We hypothesized that lower BKP/VP utilization would lead to a greater mortality risk for VCF patients. Methods: BKP/VP utilization was evaluated for VCF patients in the 100% US Medicare data set (2005–2014). Survival and morbidity were analyzed by the Kaplan-Meier method and compared between NSM, BKP, and VP using Cox regression with adjustment by propensity score and various factors. Results: The cohort included 261,756 BKP (12.6%) and 117,232 VP (5.6%) patients, together comprising 20% of the VCF patient population in 2005, peaking at 24% in 2007–2008, and declining to 14% in 2014. The propensity-adjusted mortality risk for VCF patients was 4% (95% CI, 3–4%; p < 0.001) greater in 2010–2014 versus 2005–2009. The 10-year risk of mortality for the overall cohort was 85.1%. The BKP and VP cohorts had a 19% (95% CI, 19–19%; p < 0.001) and 7% (95% CI, 7–8%; p < 0.001) lower propensity-adjusted 10-year mortality risk than the NSM cohort, respectively. The BKP cohort had a 13% (95% CI, 12–13%; p < 0.001) lower propensity-adjusted 10-year mortality risk than the VP cohort. Conclusions: Changes in treatment patterns following the 2009 VP publications led to fewer augmentation procedures. In turn, the 5-year period following 2009 was associated with elevated mortality risk in VCF patients. This provides insight into the implications of treatment pattern changes and associated mortality risks.
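The analysis pipeline the Methods section describes can be sketched with the lifelines library: Kaplan-Meier survival per treatment group and a Cox proportional-hazards model with covariate adjustment. The data below are synthetic and the study's propensity-score adjustment is omitted, so this shows only the shape of the workflow, not a reproduction of the study.

```python
# Kaplan-Meier and Cox regression on synthetic VCF-style data (assumed effect sizes).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "treatment": rng.choice(["NSM", "BKP", "VP"], size=n),
    "age": rng.integers(65, 95, size=n),
})
# Synthetic survival times with a modest benefit for augmentation, for illustration.
base = rng.exponential(5.0, size=n)
df["years"] = base * np.where(df["treatment"] == "NSM", 1.0, 1.3)
df["died"] = (df["years"] < 10).astype(int)
df["years"] = df["years"].clip(upper=10)   # administrative censoring at 10 years

kmf = KaplanMeierFitter()
for grp, sub in df.groupby("treatment"):
    kmf.fit(sub["years"], event_observed=sub["died"], label=grp)
    print(grp, "10-year survival:", round(kmf.survival_function_.iloc[-1, 0], 3))

cph = CoxPHFitter()                        # hazard ratios by treatment, adjusted for age
cph.fit(pd.get_dummies(df, columns=["treatment"], drop_first=True, dtype=float),
        duration_col="years", event_col="died")
cph.print_summary()
```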


Author(s): Shaoqiang Wang, Shudong Wang, Song Zhang, Yifan Wang

Abstract: This work aims to automatically detect dynamic EEG signals in order to reduce the time cost of epilepsy diagnosis. In recognizing epileptic electroencephalogram (EEG) signals, traditional machine learning and statistical methods require manual feature engineering to achieve good results on a single data set, and the manually selected features may carry bias and cannot be guaranteed to remain valid and generalizable on real-world data. In practical applications, deep learning methods can free practitioners from feature engineering to a certain extent: as long as data quality and quantity keep growing, the model can learn automatically and keep improving. In addition, deep learning can extract many features that are difficult for humans to perceive, making the resulting algorithm more robust. Based on the design idea of the ResNeXt deep neural network, this paper designs a Time-ResNeXt network structure suitable for time-series EEG epilepsy detection. The accuracy of Time-ResNeXt in detecting EEG epilepsy reaches 91.50%. The Time-ResNeXt network structure achieves state-of-the-art performance on the benchmark Bern-Barcelona dataset and has great potential for improving clinical practice.
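A hedged sketch of a ResNeXt-style residual block adapted to one-dimensional EEG input is shown below; the cardinality, channel widths, and classifier head are assumptions for illustration, since the paper's exact Time-ResNeXt configuration is not given in the abstract.

```python
# ResNeXt-style 1D bottleneck block (grouped convolution = split-transform-merge).
import torch
import torch.nn as nn

class ResNeXtBlock1d(nn.Module):
    """Bottleneck: 1x1 reduce -> grouped 3-tap conv -> 1x1 expand, with identity shortcut."""
    def __init__(self, channels, cardinality=8, bottleneck=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, bottleneck, kernel_size=1, bias=False),
            nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv1d(bottleneck, bottleneck, kernel_size=3, padding=1,
                      groups=cardinality, bias=False),   # the ResNeXt cardinality idea
            nn.BatchNorm1d(bottleneck), nn.ReLU(inplace=True),
            nn.Conv1d(bottleneck, channels, kernel_size=1, bias=False),
            nn.BatchNorm1d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.body(x))   # residual connection

# Tiny end-to-end classifier for (batch, 1 channel, samples) EEG windows.
model = nn.Sequential(
    nn.Conv1d(1, 32, kernel_size=7, stride=2, padding=3), nn.ReLU(),
    ResNeXtBlock1d(32), ResNeXtBlock1d(32),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),   # e.g. focal vs. non-focal window
)
logits = model(torch.randn(4, 1, 512))
print(logits.shape)   # torch.Size([4, 2])
```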


New Medit, 2021, Vol. 20 (1)
Author(s):

Most employee satisfaction studies do not consider the current digital transformation of the social world. The aim of this research is to provide insight into employee satisfaction in agribusiness by means of coaching, motivation, emotional salary and social media, using a value-chain methodology. The model is tested empirically by analysing a survey data set of 381 observations from Spanish agribusiness firms along the agri-food value chain. The results show that flexible remuneration in the form of emotional salary is a determinant of employee satisfaction. Additionally, motivation is relevant in the production–commercialisation link and coaching in the production–transformation link. Whole-of-chain employees showed the greatest satisfaction with the use of social media in personnel management. Findings also confirmed that employees will stay when a job is satisfying. This study contributes to the literature by investigating the effect of current social and digital business skills on employee satisfaction in the agri-food value chain.

