Early, but no late ERP signature of auditory awareness in cross-modal distractor-induced deafness

2021 · Author(s): Lea Kern, Michael Niedeggen

Previous research showed that dual-task processes such as the attentional blink do not always transfer from unimodal to cross-modal settings. Here we ask whether such a transfer holds for a distractor-induced impairment of target detection, which has been established in vision (distractor-induced blindness, DIB) and was recently observed in the auditory modality (distractor-induced deafness, DID). The current study aimed to replicate the phenomenon in a cross-modal setup: participants had to detect an auditory target indicated by a visual cue while ignoring task-irrelevant auditory distractors appearing before the cue. Behavioral data confirmed a cross-modal distractor-induced deafness: target detection was significantly reduced when multiple distractors preceded the target. Event-related brain potentials (ERPs) were used to identify the process crucial for target detection. ERPs revealed that successful target report was indexed by a larger frontal negativity around 200 ms, the same signature of target awareness previously observed in the unimodal auditory task. In contrast to unimodal findings, P3 amplitude was not enhanced for upcoming hits. Our results add to recent evidence that an early frontal attentional process is linked to auditory awareness, whereas the P3 is apparently not a consistent indicator of target access.
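
To make the hit/miss contrast concrete, here is a minimal Python sketch of the kind of windowed mean-amplitude analysis such a comparison involves. Everything in it (sampling rate, epoch layout, the 150-250 ms window, the simulated data) is an illustrative assumption, not the authors' pipeline.

```python
# Sketch: contrasting frontal ERP amplitude (~200 ms) for detected vs. missed
# targets. Sampling rate, epoch layout, window, and effect size are assumed.
import numpy as np

FS = 500                      # sampling rate in Hz (assumed)
T0 = -0.2                     # epoch start relative to target onset, in s

def mean_amplitude(epochs: np.ndarray, tmin: float, tmax: float) -> np.ndarray:
    """Mean amplitude per epoch within [tmin, tmax]; epochs shaped (n, samples)."""
    i0 = int(round((tmin - T0) * FS))
    i1 = int(round((tmax - T0) * FS))
    return epochs[:, i0:i1].mean(axis=1)

rng = np.random.default_rng(0)
# Simulated frontal-channel epochs (microvolts): hits carry an extra
# negativity around 200 ms; misses do not.
times = T0 + np.arange(600) / FS
negativity = -2.0 * np.exp(-((times - 0.2) ** 2) / (2 * 0.03 ** 2))
hits = rng.normal(0, 1, (40, 600)) + negativity
misses = rng.normal(0, 1, (40, 600))

# Contrast mean amplitude in a 150-250 ms window (window choice is an assumption).
amp_hit = mean_amplitude(hits, 0.15, 0.25)
amp_miss = mean_amplitude(misses, 0.15, 0.25)
print(f"hit: {amp_hit.mean():.2f} µV, miss: {amp_miss.mean():.2f} µV")
```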

2011 · Vol 23 (5) · pp. 1136-1147 · Author(s): Johanna Rimmele, Hajnal Jolsvai, Elyse Sussman

Mechanisms of implicit spatial and temporal orienting were investigated by using a moving auditory stimulus. Expectations were set up implicitly, using the information inherent in the movement of a sound, directing attention to a specific moment in time with respect to a specific location. There were four conditions of expectation: temporal and spatial expectation; temporal expectation only; spatial expectation only; and no expectation. Event-related brain potentials were recorded while participants performed a go/no-go task in which they anticipated the reappearance of a target tone through a band of white noise. Results showed that (1) temporal expectations alone speeded reaction time and increased response accuracy; (2) implicit temporal expectations alone enhanced target detection at early processing stages, prior to the motor response, reflected both at stages of perceptual analysis indexed by the P1 and N1 components and at task-related stages indexed by the N2; and (3) spatial expectations had an effect at later, response-related processing stages, indexed by the P3 component, but only in combination with temporal expectations. Thus, the results, in addition to indicating a primary role for temporal orienting in audition, suggest that multiple mechanisms of attention interact in different phases of auditory target detection. Our results are consistent with the view from vision research that spatial and temporal attentional control is based on the activity of partly overlapping, and partly functionally specialized, neural networks.
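
As a rough illustration of the 2 x 2 expectation design, the sketch below summarizes simulated go-trial reaction times and accuracy per condition; the cell means are invented to mirror the reported pattern (temporal expectation speeds responses), not taken from the study.

```python
# Sketch: per-condition summary for a temporal x spatial expectation design.
# All cell means and trial counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
conditions = {
    ("temporal", "spatial"): dict(rt_mu=420, acc=0.96),
    ("temporal", "none"):    dict(rt_mu=430, acc=0.95),  # temporal alone still fast
    ("none", "spatial"):     dict(rt_mu=480, acc=0.90),
    ("none", "none"):        dict(rt_mu=485, acc=0.89),
}

for (temp, spat), p in conditions.items():
    rts = rng.normal(p["rt_mu"], 40, 100)      # go-trial reaction times (ms)
    correct = rng.random(100) < p["acc"]       # simulated response accuracy
    print(f"temporal={temp:8s} spatial={spat:8s} "
          f"RT={rts.mean():5.1f} ms  acc={correct.mean():.2f}")
```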


2020 · Vol 16 (4) · pp. 20190928 · Author(s): Ella Z. Lattenkamp, Sonja C. Vernes, Lutz Wiegrebe

Vocal production learning (VPL), the ability to modify vocalizations through the imitation of sounds, is a rare trait in the animal kingdom. While humans are exceptional vocal learners, few other mammalian species share this trait. Owing to their singular ecology and lifestyle, bats are highly specialized for the precise emission and reception of acoustic signals. This specialization makes them ideal candidates for the study of vocal learning, and several bat species have previously shown evidence supportive of VPL. Here we use a sophisticated automated set-up and a contingency training paradigm to explore the vocal learning capacity of pale spear-nosed bats. We show that these bats are capable of directionally changing the fundamental frequency of their calls toward an auditory target. With this study, we further highlight the importance of bats for the study of vocal learning and provide evidence for the VPL capacity of the pale spear-nosed bat.
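
A contingency-training rule of the general kind described above can be sketched as follows: estimate each call's fundamental frequency and reward the animal only when that frequency has moved from baseline toward the target. The autocorrelation-based F0 estimator, sampling rate, and frequency bounds are all assumptions made for illustration, not the study's apparatus.

```python
# Sketch of a reward contingency on call fundamental frequency (F0).
# FS, the F0 estimator, and all bounds are illustrative assumptions.
import numpy as np

FS = 250_000  # sampling rate for bat vocalizations (assumed)

def estimate_f0(call: np.ndarray, fmin=5_000, fmax=40_000) -> float:
    """Crude autocorrelation-based F0 estimate within [fmin, fmax] Hz."""
    call = call - call.mean()
    ac = np.correlate(call, call, mode="full")[len(call) - 1:]
    lag_min, lag_max = int(FS / fmax), int(FS / fmin)
    lag = lag_min + np.argmax(ac[lag_min:lag_max])
    return FS / lag

def reward(call: np.ndarray, baseline_f0: float, target_f0: float) -> bool:
    """Reward when the call's F0 moved from baseline toward the target."""
    f0 = estimate_f0(call)
    return abs(f0 - target_f0) < abs(baseline_f0 - target_f0)

# Example: a synthetic 20 kHz call, baseline 15 kHz, target 25 kHz -> rewarded.
t = np.arange(int(0.01 * FS)) / FS
call = np.sin(2 * np.pi * 20_000 * t)
print(reward(call, baseline_f0=15_000, target_f0=25_000))
```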


2019 · Vol 14 (7) · pp. 727-735 · Author(s): Annett Schirmer, Maria Wijaya, Esther Wu, Trevor B Penney

This pre-registered event-related potential study explored how vocal emotions shape visual perception as a function of attention and listener sex. Visual task displays occurred in silence or with a neutral or an angry voice. Voices were task-irrelevant in a single-task block, but had to be categorized by speaker sex in a dual-task block. In the single task, angry voices increased the occipital N2 component relative to neutral voices in women, but not men. In the dual task, angry voices relative to neutral voices increased occipital N1 and N2 components, as well as accuracy, in women and marginally decreased accuracy in men. Thus, in women, vocal anger produced a strong, multifaceted visual enhancement comprising attention-dependent and attention-independent processes, whereas in men, it produced a small, behavior-focused visual processing impairment that was strictly attention-dependent. In sum, these data indicate that attention and listener sex critically modulate whether and how vocal emotions shape visual perception.
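
The angry-minus-neutral comparisons reported here rest on difference waves computed per cell of the design. Below is a minimal sketch of that computation over simulated occipital waveforms; the component windows, array shapes, and the injected sex-specific effect are assumptions, not the study's parameters.

```python
# Sketch: angry-minus-neutral difference waves per listener sex, with mean
# amplitude read out in assumed N1 and N2 windows. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
FS, T0, N_SAMP = 250, -0.1, 200          # 250 Hz, epochs from -100 ms (assumed)

def component_mean(wave: np.ndarray, tmin: float, tmax: float) -> float:
    i0, i1 = (int(round((t - T0) * FS)) for t in (tmin, tmax))
    return wave[i0:i1].mean()

# Simulated grand-average occipital waveforms per (sex, voice) cell.
waves = {(sex, voice): rng.normal(0, 0.3, N_SAMP)
         for sex in ("women", "men") for voice in ("angry", "neutral")}

# Inject an N2-like angry-minus-neutral negativity for women only (assumed effect).
times = T0 + np.arange(N_SAMP) / FS
waves[("women", "angry")] -= 1.5 * np.exp(-((times - 0.25) ** 2) / (2 * 0.04 ** 2))

for sex in ("women", "men"):
    diff = waves[(sex, "angry")] - waves[(sex, "neutral")]
    n1 = component_mean(diff, 0.08, 0.12)   # N1 window (assumed)
    n2 = component_mean(diff, 0.20, 0.30)   # N2 window (assumed)
    print(f"{sex}: N1 diff={n1:+.2f} µV, N2 diff={n2:+.2f} µV")
```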


2018 · Vol 13 (2) · pp. 186-214 · Author(s): Anna Jessen, João Veríssimo, Harald Clahsen

Speaking a late-learned second language (L2) is supposed to yield more variable and less consistent output than speaking one’s first language (L1), particularly with respect to reliably adhering to grammatical morphology. The current study investigates both internal processes involved in encoding morphologically complex words – by recording event-related brain potentials (ERPs) during participants’ silent productions – and the corresponding overt output. We specifically examined compounds with plural or singular modifiers in English. Thirty-one advanced L2 speakers of English (L1: German) were compared to a control group of 20 L1 English speakers from an earlier study. We found an enhanced (right-frontal) negativity during (silent) morphological encoding for compounds produced from regular plural forms relative to compounds formed from irregular plurals, replicating the ERP effect obtained for the L1 group. The L2 speakers’ overt productions, however, were significantly less consistent than those of the L1 speakers on the same task. We suggest that L2 speakers employ the same mechanisms for morphological encoding as L1 speakers, but with less reliance on grammatical constraints than L1 speakers.
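
One way to quantify the consistency of overt productions is to score each compound against the grammatical constraint that regular plurals are avoided inside English compounds (e.g., "rat eater", not "rats eater", while irregular "mice eater" is acceptable). The sketch below illustrates such scoring on invented responses; it is not the authors' coding scheme.

```python
# Sketch: scoring compound productions against the regular-plural constraint.
# Response lists and the scoring rule's wording are invented for illustration.
responses_l1 = ["rat eater", "rat eater", "mouse eater", "mice eater", "rat eater"]
responses_l2 = ["rat eater", "rats eater", "mice eater", "rat eater", "rats eater"]

def consistency(responses, banned_modifier="rats"):
    """Proportion of productions that avoid the regular plural inside the compound."""
    ok = [not r.startswith(banned_modifier + " ") for r in responses]
    return sum(ok) / len(ok)

print(f"L1: {consistency(responses_l1):.2f}, L2: {consistency(responses_l2):.2f}")
```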


2019 · Author(s): Violet Aurora Brown, Julia Feld Strand

It is widely accepted that seeing a talker improves a listener’s ability to understand speech in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still impose greater cognitive load than processing speech in the auditory modality alone. Using a dual-task paradigm, we show that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone; indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that although these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and they add to the growing body of research suggesting that various measures of effort may not tap into the same underlying construct (Strand et al., 2018).
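
In a dual-task effort paradigm of this kind, secondary-task reaction time serves as the effort index, compared across modality and noise level. The sketch below shows that comparison on simulated data; the cell means merely mimic the reported pattern (an audiovisual cost in easy noise that vanishes in hard noise) and are not the study's values.

```python
# Sketch: secondary-task RT as an effort index in a modality x noise design.
# All means, spreads, and trial counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
cells = {
    ("audiovisual", "easy"): 520,   # slower secondary RT = more effort
    ("audio-only",  "easy"): 480,
    ("audiovisual", "hard"): 560,
    ("audio-only",  "hard"): 558,   # effort difference vanishes in hard noise
}

for (modality, noise), mu in cells.items():
    rt = rng.normal(mu, 50, 80)     # secondary-task RTs in ms (simulated)
    print(f"{modality:11s} {noise:4s}  mean RT = {rt.mean():5.1f} ms")
```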


2020 · Vol 30 (7) · pp. 4220-4237 · Author(s): Thomas Hörberg, Maria Larsson, Ingrid Ekström, Camilla Sandöy, Peter Lundén, ...

Visual stimuli often dominate nonvisual stimuli during multisensory perception. Evidence suggests higher cognitive processes prioritize visual over nonvisual stimuli during divided attention. Visual stimuli should thus be disproportionately distracting when processing incongruent cross-sensory stimulus pairs. We tested this assumption by comparing visual processing with olfaction, a “primitive” sensory channel that detects potentially hazardous chemicals by alerting attention. Behavioral and event-related brain potentials (ERPs) were assessed in a bimodal object categorization task with congruent or incongruent odor–picture pairings and a delayed auditory target that indicated whether olfactory or visual cues should be categorized. For congruent pairings, accuracy was higher for visual compared to olfactory decisions. However, for incongruent pairings, reaction times (RTs) were faster for olfactory decisions. Behavioral results suggested that incongruent odors interfered more with visual decisions, thereby providing evidence for an “olfactory dominance” effect. Categorization of incongruent pairings engendered a late “slow wave” ERP effect. Importantly, this effect had a later amplitude peak and longer latency during visual decisions, likely reflecting additional categorization effort for visual stimuli in the presence of incongruent odors. In sum, contrary to what might be inferred from theories of “visual dominance,” incongruent odors may in fact uniquely attract mental processing resources during perceptual incongruence.
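
The interference logic can be summarized as the reaction-time cost of incongruent relative to congruent pairings, computed separately for visual and olfactory decisions. The following sketch does this on simulated data whose asymmetry mirrors the reported olfactory-dominance pattern; no values are from the study.

```python
# Sketch: congruency cost (incongruent minus congruent RT) per decision
# modality. Cell means and trial counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(4)
rt_ms = {
    ("visual",    "congruent"):   650,
    ("visual",    "incongruent"): 780,   # odors interfere strongly with vision
    ("olfactory", "congruent"):   700,
    ("olfactory", "incongruent"): 760,   # pictures interfere less with olfaction
}

for modality in ("visual", "olfactory"):
    cong = rng.normal(rt_ms[(modality, "congruent")], 60, 60).mean()
    incong = rng.normal(rt_ms[(modality, "incongruent")], 60, 60).mean()
    print(f"{modality:9s} interference = {incong - cong:5.1f} ms")
```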

