The official soundtrack to “Five shades of grey”: Generalization in multimodal distractor-based retrieval

2020 ◽  
Vol 82 (7) ◽  
pp. 3479-3489
Author(s):  
Lars-Michael Schöpper ◽  
Tarini Singh ◽  
Christian Frings

Abstract When responding to two events in a sequence, the repetition or change of stimuli and the accompanying response can benefit or interfere with response execution: Full repetition leads to benefits in performance, while partial repetition leads to costs. Additionally, even distractor stimuli can be integrated with a response and can, upon repetition, lead to benefits or interference. Recently, it has been suggested that not only identical but also perceptually similar distractors retrieve a previous response (Singh et al., Attention, Perception, & Psychophysics, 78(8), 2307-2312, 2016): Participants discriminated four visual shapes appearing in five different shades of grey, the latter being irrelevant for task execution. Exact distractor repetitions yielded the strongest distractor-based retrieval effect, which decreased with increasing dissimilarity between shades of grey. In the current study, we expand these findings by conceptually replicating Singh et al. (2016) using multimodal stimuli. In Experiment 1 (N = 31), participants discriminated four visual targets accompanied by five auditory distractors. In Experiment 2 (N = 32), participants discriminated four auditory targets accompanied by five visual distractors. We replicated the generalization of distractor-based retrieval – that is, the distractor-based retrieval effect decreased with increasing distractor dissimilarity. These results show not only that generalization in distractor-based retrieval occurs in multimodal feature processing, but also that these processes can occur for distractors perceived in a different modality from that of the target.

Perception ◽  
10.1068/p6362 ◽  
2009 ◽  
Vol 38 (8) ◽  
pp. 1144-1151 ◽  
Author(s):  
Carmelo Mario Vicario ◽  
Gaetano Rappo ◽  
Anna Maria Pepi ◽  
Massimiliano Oliveri

In tasks requiring a comparison of the duration of a reference and a test visual cue, the spatial position of the test cue is likely to be implicitly coded, producing a form of congruency effect or introducing a response bias according to the environmental scale or its vectorial reference. The precise mechanism generating these perceptual shifts in subjective duration is not understood, although several studies suggest that spatial attentional factors may play a critical role. Here we use a duration comparison task within and across sensory modalities to examine whether temporal performance is also modulated when people are exposed to spatial distractors in a different sensory modality. Different groups of healthy participants performed duration comparison tasks in separate sessions: a time comparison task of visual stimuli during exposure to spatially presented auditory distractors, and a time comparison task of auditory stimuli during exposure to spatially presented visual distractors. We found that the duration of visual stimuli was biased depending on the spatial position of auditory distractors: Observers underestimated the duration of stimuli presented in the left spatial field, while showing a trend toward overestimating the duration of stimuli presented in the right spatial field. In contrast, timing of auditory stimuli was unaffected by exposure to visual distractors. These results support the existence of multisensory interactions between space and time, showing that, in cross-modal paradigms, the presence of auditory distractors can modify visuo-temporal perception but not vice versa. This asymmetry is discussed in terms of sensory–perceptual differences between the two systems.


1999 ◽  
Vol 22 (4) ◽  
pp. 677-678 ◽  
Author(s):  
H. Colonius ◽  
P. Arndt

The Findlay-Walker model does not consider saccades generated by auditory targets or nontargets (distractors), or by bimodal stimulation. Empirical results suggest that the effects of auditory stimulation cannot easily be incorporated into the model, in either the WHEN or the WHERE system. A two-stage model by Colonius and Arndt gives a quantitative account of the facilitative effects of auditory distractors on saccadic latencies toward visual targets.


2011 ◽  
Vol 23 (5) ◽  
pp. 1113-1124 ◽  
Author(s):  
Nathan A. Parks ◽  
Matthew R. Hilimire ◽  
Paul M. Corballis

The perceptual load theory of attention posits that attentional selection occurs early in processing when a task is perceptually demanding but occurs late in processing otherwise. We used a frequency-tagged steady-state evoked potential paradigm to investigate the modality specificity of perceptual load-induced distractor filtering and the nature of neural-competitive interactions between task and distractor stimuli. EEG data were recorded while participants monitored a stream of stimuli occurring in rapid serial visual presentation (RSVP) for the appearance of previously assigned targets. Perceptual load was manipulated by assigning targets that were identifiable by color alone (low load) or by the conjunction of color and orientation (high load). The RSVP task was performed alone and in the presence of task-irrelevant visual and auditory distractors. The RSVP stimuli, visual distractors, and auditory distractors were “tagged” by modulating each at a unique frequency (2.5, 8.5, and 40.0 Hz, respectively), which allowed each to be analyzed separately in the frequency domain. We report three important findings regarding the neural mechanisms of perceptual load. First, we replicated previous findings of within-modality distractor filtering and demonstrated a reduction in visual distractor signals with high perceptual load. Second, auditory steady-state distractor signals were unaffected by manipulations of visual perceptual load, consistent with the idea that perceptual load-induced distractor filtering is modality specific. Third, analysis of task-related signals revealed that visual distractors competed with task stimuli for representation and that increased perceptual load appeared to resolve this competition in favor of the task stimulus.
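The frequency-tagging logic described above — modulating each stimulus stream at a unique rate so that each can be read out separately in the frequency domain — can be illustrated with a minimal sketch. This is hypothetical demonstration code, not the authors' analysis pipeline; the sampling rate, recording duration, and source amplitudes are arbitrary assumptions, while the tag frequencies (2.5, 8.5, and 40.0 Hz) are taken from the abstract.

```python
import numpy as np

# Assumed recording parameters (not from the study).
fs = 1000                       # sampling rate in Hz
t = np.arange(0, 10, 1 / fs)    # 10 s of data -> 0.1 Hz frequency resolution
tags = [2.5, 8.5, 40.0]         # RSVP, visual distractor, auditory distractor
amps = [3.0, 2.0, 1.0]          # arbitrary source amplitudes

# Simulate one "EEG" channel: the three tagged sources sum linearly,
# plus broadband sensor noise.
signal = sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(tags, amps))
signal = signal + np.random.default_rng(0).normal(0.0, 0.5, t.size)

# Amplitude spectrum: |rfft| scaled by 2/N recovers sinusoid amplitudes.
spectrum = np.abs(np.fft.rfft(signal)) * 2 / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Each tagged stream is read out at its own frequency bin, untangled
# from the others even though all three overlap in the raw signal.
recovered = {f: spectrum[np.argmin(np.abs(freqs - f))] for f in tags}
```

Because the tag frequencies fall on exact FFT bins for this duration, each recovered amplitude closely matches its source amplitude, which is what makes per-stream comparisons across load conditions possible.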


2003 ◽  
Vol 17 (3) ◽  
pp. 113-123 ◽  
Author(s):  
Jukka M. Leppänen ◽  
Mirja Tenhunen ◽  
Jari K. Hietanen

Abstract Several studies have shown faster choice-reaction times to positive than to negative facial expressions. The present study examined whether this effect is exclusively due to faster cognitive processing of positive stimuli (i.e., processes leading up to, and including, response selection), or whether it also involves faster motor execution of the selected response. In two experiments, response selection (onset of the lateralized readiness potential, LRP) and response execution (LRP onset-response onset) times for positive (happy) and negative (disgusted/angry) faces were examined. Shorter response selection times for positive than for negative faces were found in both experiments but there was no difference in response execution times. Together, these results suggest that the happy-face advantage occurs primarily at premotoric processing stages. Implications that the happy-face advantage may reflect an interaction between emotional and cognitive factors are discussed.


Author(s):  
Sander Martens ◽  
Addie Johnson ◽  
Martje Bolle ◽  
Jelmer Borst

The human mind is severely limited in processing concurrent information at a conscious level of awareness. These temporal restrictions are clearly reflected in the attentional blink (AB), a deficit in reporting the second of two targets when it occurs 200–500 ms after the first. However, we recently reported that some individuals do not show a visual AB, and presented psychophysiological evidence that target processing differs between “blinkers” and “nonblinkers”. Here, we present evidence that visual nonblinkers do show an auditory AB, which suggests that a major source of attentional restriction as reflected in the AB is likely to be modality-specific. In Experiment 3, we show that when the difficulty in identifying visual targets is increased, nonblinkers continue to show little or no visual AB, suggesting that the presence of an AB in the auditory but not in the visual modality is not due to a difference in task difficulty.

