Backward versus Forward Blocking: Evidence for Performance-Based Models of Human Contingency Learning

2011 · Vol 109 (3) · pp. 1001-1016
Author(s): David Luque, Miguel A. Vadillo

Two types of theories are usually invoked to account for cue-interaction effects in human contingency learning: performance-based theories, such as the comparator hypothesis and statistical models, and learning-based theories, such as associative models. Interestingly, performance-based models predict that two important cue-interaction effects, forward and backward blocking, should affect responding in a similar manner, whereas learning-based models predict that the effect of forward blocking should be larger than the effect of backward blocking. Previous experiments on this question have suffered from serious methodological problems, and their results have been contradictory. The present experiment was designed to explore potential asymmetries between forward and backward blocking. Analyses yielded similar effect sizes for the two effects, thereby favoring the explanation offered by performance-based models.
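To make the predicted asymmetry concrete, the sketch below (our illustration, not the authors' materials; the trial counts and learning-rate parameters are hypothetical) simulates a standard Rescorla-Wagner learner, a prototypical learning-based associative model, on forward-blocking (A+ then AB+) and backward-blocking (AB+ then A+) designs. Because the model updates weights only for cues present on a trial, it reproduces forward blocking but leaves the blocked cue's weight untouched in the backward design.

```python
import numpy as np

def rescorla_wagner(trials, alpha=0.3, beta=1.0, n_cues=2):
    """Simulate Rescorla-Wagner learning.

    trials: list of (present_cues, outcome) pairs, where present_cues is a
    set of cue indices and outcome is 0 or 1.
    """
    w = np.zeros(n_cues)
    for cues, outcome in trials:
        present = list(cues)
        prediction = w[present].sum()       # summed prediction from present cues
        error = outcome - prediction        # prediction error
        w[present] += alpha * beta * error  # update only the cues present on this trial
    return w

A, B = 0, 1  # cue indices: A = blocking cue, B = (to-be-)blocked cue

# Forward blocking: A+ training first, then AB+ compound trials.
forward = [({A}, 1)] * 20 + [({A, B}, 1)] * 20
# Backward blocking: AB+ compound trials first, then A+ trials.
backward = [({A, B}, 1)] * 20 + [({A}, 1)] * 20

w_fwd = rescorla_wagner(forward)
w_bwd = rescorla_wagner(backward)
print(f"Forward design:  w(B) = {w_fwd[B]:.3f}")  # near 0: B is blocked
print(f"Backward design: w(B) = {w_bwd[B]:.3f}")  # ~0.5: untouched by the A+ phase
```

Performance-based models such as the comparator hypothesis instead re-evaluate cue B relative to cue A at test, so both designs reduce responding to B similarly, which is the pattern the equal effect sizes reported here support.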

2002 · Vol 55 (4b) · pp. 289-310
Author(s): Jan De Houwer, Tom Beckers

Over the past 20 years, human contingency learning has resurfaced as an important topic within experimental psychology. This renewed interest was sparked mainly by the proposal that associative models of Pavlovian conditioning might also apply to human contingency learning—a proposal that has led to many new empirical findings and theoretical developments. We provide a brief review of these recent developments and try to point to issues that need to be addressed in future research.


2018 · Vol 30 (1) · pp. 25-41
Author(s): Clara R. Grabitz, Katherine S. Button, Marcus R. Munafò, Dianne F. Newbury, Cyril R. Pernet, ...

Genetics and neuroscience are two areas of science that pose particular methodological problems because they involve detecting weak signals (i.e., small effects) in noisy data. In recent years, increasing numbers of studies have attempted to bridge these disciplines by looking for genetic factors associated with individual differences in behavior, cognition, and brain structure or function. However, different methodological approaches to guarding against false positives have evolved in the two disciplines. To explore methodological issues affecting neurogenetic studies, we conducted an in-depth analysis of 30 consecutive articles in 12 top neuroscience journals that reported on genetic associations in nonclinical human samples. It was often difficult to estimate effect sizes in neuroimaging paradigms. Where effect sizes could be calculated, the studies reporting the largest effect sizes tended to have two features: (i) they had the smallest samples and were generally underpowered to detect genetic effects, and (ii) they did not fully correct for multiple comparisons. Furthermore, only a minority of studies used statistical methods for multiple comparisons that took into account correlations between phenotypes or genotypes, and only nine studies included a replication sample or explicitly set out to replicate a prior finding. Finally, presentation of methodological information was not standardized and was often distributed across Methods sections and Supplementary Material, making it challenging to assemble basic information from many studies. Space limits imposed by journals could mean that highly complex statistical methods were described in only a superficial fashion. In summary, methods that have become standard in the genetics literature (stringent statistical standards, use of large samples, and replication of findings) are not always adopted when behavioral, cognitive, or neuroimaging phenotypes are used, leading to an increased risk of false-positive findings. Studies need to correct not just for the number of phenotypes collected but also for the number of genotypes examined, genetic models tested, and subsamples investigated. The field would benefit from more widespread use of methods that take into account correlations between the factors corrected for, such as spectral decomposition or permutation approaches. Replication should become standard practice; this, together with the need for larger sample sizes, will entail greater emphasis on collaboration between research groups. We conclude with some specific suggestions for standardized reporting in this area.
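As one concrete instance of the correlation-aware corrections mentioned above, the sketch below (our illustration, not taken from the article; the simulated data and sample sizes are hypothetical) estimates the effective number of independent tests from the eigenvalues of the phenotype correlation matrix, in the spirit of the spectral-decomposition approach associated with Nyholt (2004). Highly correlated phenotypes yield fewer effective tests than nominal tests, giving a less punishing threshold than a naive Bonferroni correction.

```python
import numpy as np

def effective_tests(pheno):
    """Estimate the effective number of independent tests (M_eff) from the
    eigenvalues of the phenotype correlation matrix via spectral decomposition.

    pheno: (n_subjects, n_phenotypes) array of phenotype measurements.
    """
    corr = np.corrcoef(pheno, rowvar=False)
    eigvals = np.linalg.eigvalsh(corr)
    m = len(eigvals)
    # Strong correlations concentrate variance in a few eigenvalues,
    # raising their variance and shrinking M_eff below m.
    return 1 + (m - 1) * (1 - np.var(eigvals, ddof=1) / m)

rng = np.random.default_rng(0)
# Simulate 200 subjects on 10 phenotypes driven by a shared latent factor.
latent = rng.normal(size=(200, 1))
pheno = 0.7 * latent + 0.3 * rng.normal(size=(200, 10))

m_eff = effective_tests(pheno)
alpha = 0.05
print(f"M_eff = {m_eff:.2f} (vs. 10 nominal tests)")
print(f"Corrected threshold: {alpha / m_eff:.4f} instead of Bonferroni {alpha / 10:.4f}")
```

A permutation approach, the other option mentioned, would instead shuffle genotype labels and take the maximum test statistic across all phenotypes in each permutation, handling the correlation structure without distributional assumptions.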


2017 · Vol 43 (1) · pp. 81-93
Author(s): Joaquín Morís, Itxaso Barberia, Miguel A. Vadillo, Ainhoa Andrades, Francisco J. López

2011 · Vol 23 (1) · pp. 59-68
Author(s): Daniel A. Sternberg, James L. McClelland

2011 · Vol 37 (3) · pp. 308-316
Author(s): Andy J. Wills, Steven Graham, Zhisheng Koh, Ian P. L. McLaren, Matthew D. Rolland
