Attentional lapses cannot explain the worst performance rule

2020 ◽  
Author(s):  
Christoph Löffler ◽  
Gidon T. Frischkorn ◽  
Jan Rummel ◽  
Dirk Hagemann ◽  
Anna-Lena Schubert

The worst performance rule describes the often-observed phenomenon that individuals' slowest responses in a task are more predictive of their intelligence than their fastest or average responses. To explain this phenomenon, Larson and Alderton (1990) suggested that occasional lapses of attention might result in slower reaction times. Because less intelligent individuals are more likely to experience lapses of attention, they should show a more heavily skewed reaction time distribution, causing an increase in correlations between reaction times and intelligence across the percentiles of the distribution. The attentional lapses account has been well-received, not least because of its intuitive appeal, but has never been subjected to a direct empirical test. Using state-of-the-art hierarchical modeling approaches to quantify and test the worst performance rule, we investigated in a sample of 98 participants whether different behavioral, self-report, and neural measures of attentional control accounted for the phenomenon. Notably, no measure of attentional lapses accounted for increasing covariances between intelligence and reaction time from best to worst performance. Hence, our results challenge the attentional lapses account of the worst performance rule.
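The quantile-wise correlation pattern at the heart of the worst performance rule can be illustrated with a short simulation. This is a purely hypothetical sketch, not the authors' model or data: it assumes lapses occur more often for lower-ability simulated participants and add a long right tail to their reaction times, then correlates ability with each RT quantile.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_trials = 200, 400

# Hypothetical latent ability; lower ability raises the rate of
# attentional lapses (rates chosen only for illustration).
ability = rng.normal(size=n_subj)
lapse_rate = 0.15 - 0.05 * (ability - ability.min()) / np.ptp(ability)

quantiles = [0.1, 0.3, 0.5, 0.7, 0.9]
rt_q = np.empty((n_subj, len(quantiles)))
for i in range(n_subj):
    base = rng.normal(500, 50, n_trials)           # regular responses (ms)
    lapses = rng.random(n_trials) < lapse_rate[i]  # occasional lapses
    rts = base + lapses * rng.exponential(400, n_trials)
    rt_q[i] = np.quantile(rts, quantiles)

# Under the lapses account, the RT-ability correlation should grow
# in magnitude from the fastest toward the slowest quantiles.
for q, col in zip(quantiles, rt_q.T):
    print(f"q={q:.1f}  r={np.corrcoef(ability, col)[0, 1]: .2f}")
```

Because only the slow tail carries lapse trials, the simulated correlation is near zero at the fastest quantile and clearly negative at the slowest one, reproducing the worst-performance-rule pattern.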

2021 ◽  
Vol 10 (1) ◽  
pp. 2
Author(s):  
Christoph Löffler ◽  
Gidon T. Frischkorn ◽  
Jan Rummel ◽  
Dirk Hagemann ◽  
Anna-Lena Schubert

The worst performance rule (WPR) describes the phenomenon that individuals’ slowest responses in a task are often more predictive of their intelligence than their fastest or average responses. To explain this phenomenon, it was previously suggested that occasional lapses of attention during task completion might be associated with particularly slow reaction times. Because less intelligent individuals should experience lapses of attention more frequently, their reaction time distributions should be more heavily skewed than those of more intelligent people. Consequently, the correlation between intelligence and reaction times should increase from the lowest to the highest quantile of the response time distribution. This attentional lapses account has some intuitive appeal, but has not yet been tested empirically. Using a hierarchical modeling approach, we investigated whether the WPR pattern would disappear when including different behavioral, self-report, and neural measurements of attentional lapses as predictors. In a sample of N = 85, we found that attentional lapses accounted for the WPR, but effect sizes of single covariates were mostly small to very small. We replicated these results in a reanalysis of a much larger previously published data set. Our findings lend empirical support to the attentional lapses account of the WPR.


2021 ◽  
Author(s):  
Christoph Löffler ◽  
Gidon T. Frischkorn ◽  
Jan Rummel ◽  
Dirk Hagemann ◽  
Anna-Lena Schubert

The worst performance rule (WPR) describes the phenomenon that individuals’ slowest responses in a task are often more predictive of their intelligence than their fastest or average responses. To explain this phenomenon, Larson and Alderton (1990) suggested that occasional lapses of attention during task completion might be associated with particularly slow reaction times. Because less intelligent individuals should experience lapses of attention more frequently, their reaction time distributions should be more heavily skewed than those of more intelligent people. Consequently, the correlation between intelligence and reaction times should increase from the lowest to the highest quantile of the response time distribution. This attentional lapses account has some intuitive appeal, but has not yet been tested empirically. Using a hierarchical modeling approach, we investigated whether the WPR pattern would disappear when including different behavioral, self-report, and neural measurements of attentional lapses as predictors. In a sample of N = 85, we found that attentional lapses accounted for the WPR, but effect sizes of single covariates were mostly small to very small. We replicated these results in a reanalysis of a larger previously published data set (N = 352). Our findings lend empirical support to the attentional lapses account of the WPR.


2017 ◽  
Vol 33 (5) ◽  
pp. 345-351 ◽  
Author(s):  
Beate Dombert ◽  
Jan Antfolk ◽  
Lisa Kallvik ◽  
Angelo Zappalà ◽  
Michael Osterheider ◽  
...  

Abstract. Pedophilia – a disorder of sexual preference with primary sexual interest in prepubescent children – is forensically relevant yet difficult to detect using self-report methods. The present study evaluated the criterion validity of the Choice Reaction Time (CRT) task to differentiate between a sample of child sex offenders with a presumably high rate of pedophilic individuals and three control groups (other sex offenders, non-sex offenders, and community controls, all male; N = 233). The CRT task required locating, as quickly as possible, a dot superimposed on images depicting men, women, girls, or boys, as well as scrambled pictures. We used two picture sets, the Not Real People (NRP) set and the Virtual People Set (VPS). We predicted that sexually relevant pictures would elicit longer reaction times, in interaction with participant group. Both CRTs showed main effects of stimulus explicitness and preferred stimulus gender. The CRT-NRP also yielded an interaction effect of participant group and stimulus maturity, while the CRT-VPS showed a tendency in this direction. The overall effect size was moderate. Results offer support for the usefulness of the CRT task in the forensic assessment of child sex offenders.


Author(s):  
Matthias Bluemke ◽  
Joerg Zumbach

Aggressive tendencies are commonly assessed either by explicit measures (self-report questionnaires) or by implicit measures that require the speeded classification of quickly presented stimuli and the recording and analysis of reaction times. We explored the psychometric properties of implicit measures that assess aggressiveness objectively: the Implicit Association Test (IAT) and its derivative, the Single-Target IAT (ST-IAT). While the IAT focused on the automatic attitude towards aggressiveness, the ST-IAT focused on the self-concept. This feasibility study describes in methodological detail how a diverse range of game players can be recruited to take these measures with common web-browser technology, even though reaction-time measurement in the range of a few hundred milliseconds is at stake. Self-reported and objective characteristics of users of violent, less violent, and no games differed. The results are partly in line with what can be expected on the basis of psychological theorizing, but structural equation modelling shows that the implicit measures of attitudes and self-concept differ in quality. Pitfalls and challenges for internet studies of computer players involving reaction-time measures are pointed out.


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Troy C. Dildine ◽  
Elizabeth A. Necka ◽  
Lauren Y. Atlas

Abstract. Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals’ association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.


2006 ◽  
Vol 21 (4) ◽  
pp. 519-532 ◽  
Author(s):  
Edward M. Vega ◽  
Daniel K. O’Leary

The Conflict Tactics Scales (CTS) have used different formats intended to maximize the accuracy and disclosure of relationship aggression. The original CTS presented items in a hierarchical order, seeking to establish a “context of legitimation.” The CTS2 presented items in an interspersed order to reduce denial response sets. The current study used computer administration of the CTS and sought to determine whether the two presentation formats described above result in differing self-reports of aggression and to explore possible causes of such an effect, such as differences in reaction time. Results indicated that item order did not significantly affect reports of aggression, but increasing participants’ reaction times by experimental manipulation of the minimum item display duration resulted in increased self-reports of aggression.


2020 ◽  
Author(s):  
Troy C. Dildine ◽  
Elizabeth A. Necka ◽  
Lauren Yvette Atlas

Self-report is the gold standard for measuring pain. However, decisions about pain can vary substantially within and between individuals. We measured whether self-reported pain is accompanied by metacognition and variations in confidence, similar to perceptual decision-making in other modalities. Eighty healthy volunteers underwent acute thermal pain and provided pain ratings followed by confidence judgments on continuous visual analogue scales. We investigated whether eye fixations and reaction time during pain rating might serve as implicit markers of confidence. Confidence varied across trials and increased confidence was associated with faster pain rating reaction times. The association between confidence and fixations varied across individuals as a function of the reliability of individuals’ association between temperature and pain. Taken together, this work indicates that individuals can provide metacognitive judgments of pain and extends research on confidence in perceptual decision-making to pain.


Author(s):  
Mohammad Shams-Ahmar ◽  
Peter Thier

Express saccades, a distinct fast mode of visually guided saccades, are probably underpinned by a specific pathway that is at least partially different from the one underlying regular saccades. Whether and how this pathway deals with information on the subjective value of a saccade target is unknown. We studied the influence of varying reward expectancies and compared it with the impact of a temporal gap between the disappearance of the fixation dot and the appearance of the target on the visually guided saccades of two rhesus macaques (Macaca mulatta). We found that increasing reward expectancy increased the probability and decreased the reaction time of express saccades. The latter influence was stronger in the later parts of the reaction time distribution of express saccades, satisfactorily captured by a linear shift model of change in the saccadic reaction time distribution. Although different in strength, increasing reward expectancy and inserting a temporal gap resulted in similar effects on saccadic reaction times, suggesting that these two factors summon the same mechanism to facilitate saccadic reaction times.


GeroPsych ◽  
2011 ◽  
Vol 24 (4) ◽  
pp. 169-176 ◽  
Author(s):  
Philippe Rast ◽  
Daniel Zimprich

In order to model within-person (WP) variance in a reaction time task, we applied a mixed location scale model using data from 335 participants in the second wave of the Zurich Longitudinal Study on Cognitive Aging. The age of the respondents and their performance in another reaction time task were used to explain individual differences in the WP variance. To account for larger variances due to slower reaction times, we also used the average of the predicted individual reaction time (RT) as a predictor for WP variability. In this way, WP variability was modeled as a function of the mean. At the same time, older participants were more variable, and those with better performance in another RT task were more consistent in their responses.
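The core idea of a location–scale model — explaining not only mean performance but also each person's within-person variability — can be sketched in simplified form. The following is a hypothetical illustration, not the authors' model or data: it simulates person-level predictors and log within-person SDs, then fits only the "scale" submodel by ordinary least squares, ignoring the random effects a full mixed model would include.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 335  # sample size chosen to echo the abstract; values are invented

# Hypothetical person-level predictors: age, speed in another RT task,
# and each person's mean RT in the focal task (the "location" part).
age = rng.uniform(60, 90, n)
other_rt = rng.normal(600, 80, n)
mean_rt = 400 + 2.0 * age + 0.3 * other_rt + rng.normal(0, 30, n)

# Simulated log within-person SDs that grow with age, mean RT, and
# slower performance in the other task (the "scale" part).
log_wp_sd = (-1.0 + 0.02 * age + 0.002 * mean_rt + 0.001 * other_rt
             + rng.normal(0, 0.1, n))

# Scale submodel: regress log within-person variability on the
# predictors, mirroring how a location-scale model explains WP variance.
X = np.column_stack([np.ones(n), age, other_rt, mean_rt])
coef, *_ = np.linalg.lstsq(X, log_wp_sd, rcond=None)
print(dict(zip(["intercept", "age", "other_rt", "mean_rt"],
               np.round(coef, 4))))
```

Modeling the log of the within-person SD keeps the fitted variance positive and lets predictors act multiplicatively on variability, which is the usual convention for the scale part of such models.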


2006 ◽  
Vol 20 (3) ◽  
pp. 186-194 ◽  
Author(s):  
Susanne Mayr ◽  
Michael Niedeggen ◽  
Axel Buchner ◽  
Guido Orgs

Responding to a stimulus that had to be ignored previously is usually slowed down (negative priming effect). This study investigates the reaction time and ERP effects of the negative priming phenomenon in the auditory domain. Thirty participants had to categorize sounds as musical instruments or animal voices. Reaction times were slowed down in the negative priming condition relative to two control conditions. This effect was stronger for slow reactions (above intraindividual median) than for fast reactions (below intraindividual median). ERP analysis revealed a parietally located negativity in the negative priming condition compared to the control conditions between 550 and 730 ms poststimulus. This replicates the findings of Mayr, Niedeggen, Buchner, and Pietrowsky (2003). The ERP correlate was more pronounced for slow trials (above intraindividual median) than for fast trials (below intraindividual median). The dependency of the negative priming effect size on reaction time level, found in the reaction time analysis as well as in the ERP analysis, is consistent with both the inhibition and the episodic retrieval accounts of negative priming. A methodological artifact explanation of this effect-size dependency is discussed and discarded.

