Speed-vs-Accuracy Tradeoff in Collective Estimation: An Adaptive Exploration-Exploitation Case

Author(s):
Mohsen Raoufi, Heiko Hamann, Pawel Romanczuk

2010, Vol 31 (3), pp. 130-137
Author(s):
Hagen C. Flehmig, Michael B. Steinborn, Karl Westhoff, Robert Langner

Previous research suggests a relationship between neuroticism (N) and the speed-accuracy tradeoff in speeded performance: high-N individuals were observed to perform less efficiently than low-N individuals and, as compensation, to overemphasize response speed at the expense of accuracy. This study examined N-related performance differences in the serial mental addition and comparison task (SMACT) in 99 individuals, comparing several performance measures (i.e., response speed, accuracy, and variability), retest reliability, and practice effects. N was negatively correlated with mean reaction time but positively correlated with error percentage, indicating that high-N individuals tended to be faster but less accurate than low-N individuals. That this relationship strengthened after practice supports the reliability of the findings. There was, however, no relationship between N and distractibility (assessed via measures of reaction time variability). Our main findings are in line with processing efficiency theory, extending the relationship between N and working style to sustained, self-paced speeded mental addition.


2005
Author(s):
Neta Moye, Lucy L. Gilson, Jill E. Perry-Smith

1997
Author(s):
Jeffry S. Kellogg, Xiangen Hu, William Marks

Mathematics, 2021, Vol 9 (16), pp. 1839
Author(s):
Broderick Crawford, Ricardo Soto, José Lemus-Romani, Marcelo Becerra-Rozas, José M. Lanza-Gutiérrez, ...

One of the central issues that must be resolved for a metaheuristic optimization process to work well is the dilemma of balancing exploration and exploitation. Metaheuristics (MH) that achieve this balance can be called balanced MH. In previous work, a Q-Learning (QL) integration framework was proposed for selecting the metaheuristic operators conducive to this balance, in particular for selecting binarization schemes when a continuous metaheuristic solves binary combinatorial problems. In this work, the use of that framework is extended to other recent metaheuristics, demonstrating that integrating QL into operator selection improves the exploration-exploitation balance. Specifically, the Whale Optimization Algorithm and the Sine-Cosine Algorithm are tested on the Set Covering Problem, showing statistically significant improvements both in this balance and in the quality of the solutions.
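
The abstract does not detail how QL chooses among binarization schemes, so the following Python sketch only illustrates the general idea under stated assumptions: each scheme is treated as a QL action, the state is a coarse search phase, and the reward is a toy fitness improvement. The scheme labels ('S1'...'V2'), the 'explore'/'exploit' state, and the reward signal are placeholders of this sketch, not details from the paper.

import random

class QLSchemeSelector:
    """Q-Learning over a discrete set of binarization schemes (illustrative sketch)."""

    def __init__(self, schemes, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.schemes = schemes
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # One Q-value per (search-phase, scheme) pair.
        self.q = {(s, a): 0.0 for s in ('explore', 'exploit') for a in schemes}

    def select(self, state):
        # Epsilon-greedy choice over binarization schemes.
        if random.random() < self.epsilon:
            return random.choice(self.schemes)
        return max(self.schemes, key=lambda a: self.q[(state, a)])

    def update(self, state, scheme, reward, next_state):
        # One-step Q-Learning update.
        best_next = max(self.q[(next_state, a)] for a in self.schemes)
        target = reward + self.gamma * best_next
        self.q[(state, scheme)] += self.alpha * (target - self.q[(state, scheme)])

# Toy driver: pretend scheme 'V1' tends to yield the largest fitness
# improvement, so the selector should learn to prefer it over time.
selector = QLSchemeSelector(schemes=['S1', 'S2', 'V1', 'V2'])
state = 'explore'
for t in range(2000):
    scheme = selector.select(state)
    improvement = random.gauss(0.5 if scheme == 'V1' else 0.1, 0.05)
    next_state = 'exploit' if t > 1000 else 'explore'
    selector.update(state, scheme, improvement, next_state)
    state = next_state

print(max(selector.q, key=selector.q.get))  # expected to involve 'V1'

In the paper's framework the state and reward would presumably be derived from the metaheuristic's actual diversity and fitness metrics; they are stubbed here only to keep the sketch self-contained and runnable.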


Author(s):
Humoud Alsabah, Agostino Capponi, Octavio Ruiz Lacedelli, Matt Stern

We introduce a reinforcement learning framework for retail robo-advising. The robo-advisor does not know the investor's risk preference but learns it over time by observing her portfolio choices in different market environments. We develop an exploration-exploitation algorithm that trades off costly solicitations of portfolio choices from the investor against autonomous trading decisions based on stale estimates of the investor's risk aversion. We show that the approximate value function constructed by the algorithm converges to the value function of an omniscient robo-advisor over a number of periods that is polynomial in the sizes of the state and action spaces. By correcting for the investor's mistakes, the robo-advisor may outperform a stand-alone investor, regardless of the investor's opportunity cost for making portfolio decisions.
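
The paper's algorithm itself is not reproduced in the abstract, so the Python sketch below only illustrates the solicit-versus-trade structure it describes. The fixed staleness threshold, the true risk-aversion value, the solicitation cost, and the payoff function are all assumptions of this sketch, not the authors' model.

import random

def robo_advisor_episode(periods=200, solicitation_cost=0.05, staleness_limit=10):
    """Illustrative solicit-vs-trade loop; not the paper's algorithm."""
    true_risk_aversion = 3.0   # investor's preference, unknown to the advisor
    estimate = 1.0             # advisor's (initially poor) estimate
    staleness = 0              # periods since the estimate was refreshed
    total_reward = 0.0

    for _ in range(periods):
        if staleness >= staleness_limit:
            # Exploration: pay the solicitation cost to observe a fresh,
            # noisy signal of the investor's true risk aversion.
            estimate = true_risk_aversion + random.gauss(0.0, 0.1)
            staleness = 0
            total_reward -= solicitation_cost
        else:
            # Exploitation: trade autonomously on the stale estimate;
            # the payoff shrinks as the estimation error grows.
            error = abs(estimate - true_risk_aversion)
            total_reward += max(0.0, 1.0 - error)
            staleness += 1
    return total_reward

print(robo_advisor_episode())

The paper's algorithm presumably decides when to solicit adaptively, based on the value of information; the fixed staleness threshold here is used only to make the cost-versus-staleness tradeoff concrete.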

