Discovering Optimal Training Policies: A New Experimental Paradigm

2012 ◽  
Author(s):  
Robert V. Lindsey ◽  
Michael C. Mozer ◽  
Harold Pashler

2016 ◽  
Vol 32 (1) ◽  
pp. 17-38 ◽  
Author(s):  
Florian Schmitz ◽  
Karsten Manske ◽  
Franzis Preckel ◽  
Oliver Wilhelm

Abstract. The Balloon Analogue Risk Task (BART; Lejuez et al., 2002) is one of the most popular behavioral tasks for assessing risk-taking in the laboratory. Previous research has shown that the conventionally computed score is predictive but neglects available information in the data. We suggest a number of alternative scores that are motivated by theories of risk-taking and that exploit more of the available data. These scores can be grouped into four categories: (1) risk-taking, (2) task performance, (3) impulsive decision making, and (4) reinforcement sequence modulation. Their theoretical rationale is detailed, and their validity is tested within the nomological network of risk-taking, deviance, and scholastic achievement. Two multivariate studies were conducted, one with youths (n = 435) and one with adolescents/young adults (n = 316). Additionally, we tested formal models suggested for the BART that decompose observed behavior into a set of meaningful parameters. A simulation study with parameter recovery was conducted, and the data from the two studies were reanalyzed using the models. Most scores were reliable and differentially predictive of criterion variables and may be used in basic research. However, task specificity and the generally moderate validity do not warrant use of the experimental paradigm for diagnostic purposes.
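The "conventionally computed score" the abstract refers to is typically the adjusted average number of pumps, i.e. the mean pump count over balloons that did not explode (Lejuez et al., 2002). A minimal sketch, with hypothetical trial data for illustration only:

```python
# Sketch of the conventional BART score: the adjusted average number of
# pumps, i.e. the mean pump count over balloons that did NOT explode.
# Trial data below are hypothetical placeholders.

def adjusted_average_pumps(trials):
    """trials: list of (pumps, exploded) tuples for one participant."""
    unexploded = [pumps for pumps, exploded in trials if not exploded]
    if not unexploded:
        return 0.0
    return sum(unexploded) / len(unexploded)

trials = [(12, False), (20, True), (8, False), (15, False), (25, True)]
score = adjusted_average_pumps(trials)  # mean of 12, 8, 15
```

The exploded trials are excluded because their pump counts are censored: the participant might have pumped further had the balloon not burst. The alternative scores proposed in the article exploit exactly this discarded information.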


2017 ◽  
Vol 4 (3) ◽  
pp. 259-273 ◽  
Author(s):  
Fawn C. Caplandies ◽  
Ben Colagiuri ◽  
Suzanne G. Helfer ◽  
Andrew L. Geers

Author(s):  
Tobias Alf Kroll ◽  
A. Alexandre Trindade ◽  
Amber Asikis ◽  
Melissa Salas ◽  
Marcy Lau ◽  
...  

2020 ◽  
Author(s):  
Kate Ergo ◽  
Luna De Vilder ◽  
Esther De Loof ◽  
Tom Verguts

Recent years have witnessed a steady increase in the number of studies investigating the role of reward prediction errors (RPEs) in declarative learning. Specifically, several experimental paradigms have shown that RPEs drive declarative learning, with larger and more positive RPEs enhancing it. However, it is unknown whether this RPE must derive from the participant’s own response, or whether any RPE is sufficient to obtain the learning effect. To test this, we generated RPEs within a single experimental paradigm that combined an agency and a non-agency condition. We observed no interaction between RPE and agency, suggesting that any RPE (irrespective of its source) can drive declarative learning. This result holds implications for theories of declarative learning.
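An RPE in the sense used above is conventionally formalized as the difference between the received reward and the current reward expectation, as in a simple delta rule. A minimal sketch with illustrative values (not taken from the study):

```python
# Sketch of a reward prediction error (RPE) under a simple delta rule:
# the RPE is the received reward minus the current expectation, and the
# expectation is then updated in proportion to the RPE.

def delta_rule_update(expectation, reward, learning_rate=0.1):
    rpe = reward - expectation            # prediction error
    expectation += learning_rate * rpe    # expectation moves toward reward
    return rpe, expectation

rpe, v = delta_rule_update(expectation=0.5, reward=1.0)  # rpe = 0.5
```

A "larger and more positive RPE" in this framing simply means the reward exceeded the expectation by a wider margin; the study's question is whether the source of that surprise (own response or not) matters for the learning effect.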


1988 ◽  
Author(s):  
Alexander H. Levis ◽  
Jeff T. Casey ◽  
Anne-Claire Louvet

2021 ◽  
Vol 54 (3) ◽  
pp. 1-18
Author(s):  
Petr Spelda ◽  
Vit Stritecky

As our epistemic ambitions grow, common and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into a training and a testing set and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, then an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The article asks how we can justify this contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach “elsewhere” in space and time or deploy ML models in non-benign environments. The article argues that the only viable version of the contract can be based on optimality (instead of on reliability, which cannot be justified without circularity) and aligns this position with Schurz's optimality justification. It is shown that when dealing with inaccessible or unstable ground truths (“elsewhere” and non-benign targets), the optimality justification undergoes a slight change, which should make us reflect critically on our epistemic ambitions. Therefore, the study of ML robustness should involve not only the heuristics that lead to acceptable accuracies on testing sets, but also the justification of human inductive predictions or generalisations about the uniformity between ML models and targets. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
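The experimental paradigm the article describes can be sketched in miniature. Everything below is a hypothetical placeholder: the synthetic data, the deliberately trivial majority-class "model", and the acceptance threshold that stands in for the a posteriori contract.

```python
import random

# Sketch of the ML experimental paradigm: split the data, fit on the
# training set, and let held-out accuracy ground the "contract" that
# licenses deployment. The model is a trivial majority-class classifier.

def train_test_split(data, test_fraction=0.25, seed=0):
    rng = random.Random(seed)
    shuffled = data[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

def fit_majority(train):
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)  # most frequent label

def accuracy(model_label, test):
    return sum(1 for _, y in test if y == model_label) / len(test)

data = [(x, x % 2) for x in range(100)]   # synthetic (feature, label) pairs
train, test = train_test_split(data)
acc = accuracy(fit_majority(train), test)
deployable = acc >= 0.5                   # the "contract" threshold
```

The article's point is visible even in this toy: the final line infers from testing-set accuracy to behaviour in a target environment, and nothing inside the paradigm itself justifies that inductive step when the target is "elsewhere" or non-benign.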


Author(s):  
Chenguang Li ◽  
Hongjun Yang ◽  
Long Cheng

Abstract. As a relatively new physiological signal of the brain, functional near-infrared spectroscopy (fNIRS) is being used more and more in the brain–computer interface field, especially for motor imagery tasks. However, classification accuracy based on this signal is relatively low. To improve it, this paper proposes a new experimental paradigm and uses only fNIRS signals to complete the classification task for six subjects. Notably, the experiment is carried out in a non-laboratory environment, and the motor imagery movements are carefully designed. While imagining the motions, the subjects also subvocalize the movements to prevent distraction. Accordingly, following the motor-area theory of the cerebral cortex, the positions of the fNIRS probes are slightly adjusted compared with other methods. Next, the signals are classified with nine classification methods, and the different features and classification methods are compared. The results show that under this new experimental paradigm, classification accuracies of 89.12% and 88.47% can be achieved using the support vector machine and random forest methods, respectively, which shows that the paradigm is effective. Finally, by selecting the five channels with the largest variance after empirical mode decomposition of the original signal, similar classification results can be achieved.
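The final step, keeping only the five highest-variance channels, can be sketched as below. This is an illustration under assumptions: the empirical mode decomposition step is omitted, and the channel signals are synthetic placeholders, not fNIRS recordings.

```python
# Sketch of variance-based channel selection: rank channels by signal
# variance and keep the top five. EMD preprocessing is omitted here;
# the signals are synthetic placeholders.

def variance(signal):
    mean = sum(signal) / len(signal)
    return sum((s - mean) ** 2 for s in signal) / len(signal)

def top_k_channels(channels, k=5):
    """channels: dict mapping channel name -> list of samples."""
    ranked = sorted(channels, key=lambda name: variance(channels[name]),
                    reverse=True)
    return ranked[:k]

# Synthetic data: channel ch_i ramps with slope i, so variance grows with i.
channels = {f"ch{i}": [j * i * 0.01 for j in range(50)] for i in range(1, 11)}
selected = top_k_channels(channels)  # the five highest-variance channels
```

Using variance as the ranking criterion reflects the intuition that channels whose signal barely changes carry little discriminative information for the classifier.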

