Auditory Target Detection Is Affected by Implicit Temporal and Spatial Expectations

2011, Vol 23 (5), pp. 1136–1147
Author(s): Johanna Rimmele, Hajnal Jolsvai, Elyse Sussman

Mechanisms of implicit spatial and temporal orienting were investigated using a moving auditory stimulus. Expectations were set up implicitly, using the information inherent in the movement of a sound, directing attention to a specific moment in time with respect to a specific location. There were four conditions of expectation: temporal and spatial expectation; temporal expectation only; spatial expectation only; and no expectation. Event-related brain potentials were recorded while participants performed a go/no-go task, set up by anticipation of the reappearance of a target tone through a band of white noise. Results showed that (1) temporal expectations alone speeded reaction time and increased response accuracy; (2) implicit temporal expectations independently enhanced target detection at early processing stages, prior to the motor response, reflected at stages of perceptual analysis indexed by the P1 and N1 components as well as at task-related stages indexed by the N2; and (3) spatial expectations affected later, response-related processing stages, but only in combination with temporal expectations, as indexed by the P3 component. Thus, the results, in addition to indicating a primary role for temporal orienting in audition, suggest that multiple mechanisms of attention interact in different phases of auditory target detection. Our results are consistent with the view from vision research that spatial and temporal attentional control is based on the activity of partly overlapping and partly functionally specialized neural networks.

2021
Author(s): Lea Kern, Michael Niedeggen

Previous research showed that dual-task processes such as the attentional blink are not always transferable from unimodal to cross-modal settings. Here we ask whether such a transfer holds for a distractor-induced impairment of target detection, which has been established in vision (distractor-induced blindness, DIB) and was recently observed in the auditory modality (distractor-induced deafness, DID). The current study aimed to replicate the phenomenon in a cross-modal setup. Participants had to detect an auditory target indicated by a visual cue, while ignoring task-irrelevant auditory distractors appearing before the cue. Behavioral data confirmed a cross-modal distractor-induced deafness: target detection was significantly reduced if multiple distractors preceded the target. Event-related brain potentials (ERPs) were used to identify the process crucial for target detection. ERPs revealed that successful target report was indicated by a larger frontal negativity around 200 ms. The same signature of target awareness has previously been observed in the auditory modality. In contrast to unimodal findings, P3 amplitude was not enhanced for upcoming hits. Our results add to recent evidence that an early frontal attentional process is linked to auditory awareness, whereas the P3 is apparently not a consistent indicator of target access.


2020, Vol 16 (4), pp. 20190928
Author(s): Ella Z. Lattenkamp, Sonja C. Vernes, Lutz Wiegrebe

Vocal production learning (VPL), or the ability to modify vocalizations through the imitation of sounds, is a rare trait in the animal kingdom. While humans are exceptional vocal learners, few other mammalian species share this trait. Owing to their singular ecology and lifestyle, bats are highly specialized for the precise emission and reception of acoustic signals. This specialization makes them ideal candidates for the study of vocal learning, and several bat species have previously shown evidence supportive of vocal learning. Here we use a sophisticated automated setup and a contingency training paradigm to explore the vocal learning capacity of pale spear-nosed bats. We show that these bats are capable of directionally changing the fundamental frequency of their calls toward an auditory target. With this study, we further highlight the importance of bats for the study of vocal learning and provide evidence for the VPL capacity of the pale spear-nosed bat.


2019
Author(s): Ronja Demel, Michael Waldmann, Annekathrin Schacht

Abstract
The influence of emotion on moral judgments has become increasingly prominent in recent years. While explicit normative measures are widely used to investigate this relationship, event-related potentials (ERPs) offer the advantage of a preconscious method for visualizing the modulation of moral judgments. Based on Gray and Wegner's (2009) Dimensional Moral Model, the present study investigated whether the processing of neutral faces is modulated by moral context information. We hypothesized that neutral faces gain emotional valence when presented in a moral context and thus elicit ERP responses comparable to those established for the processing of emotional faces. Participants (N = 26, 13 female) were tested with regard to their implicit (ERPs) and explicit (morality rating) responses to neutral faces, shown in either a morally positive, negative, or neutral context. Higher ERP amplitudes in early (P100, N170) and later (EPN, LPC) processing stages were expected for harmful/helpful scenarios compared to neutral scenarios. Agents and patients were expected to differ for moral compared to neutral scenarios. In the explicit ratings, neutral scenarios were expected to differ from moral scenarios. In ERPs, we found indications of an early modulation by moral valence (harmful/helpful) and an interaction of agency and moral valence at 80–120 ms. Later time windows showed no significant differences. Morally positive and negative scenarios were rated as significantly different from neutral scenarios. Overall, the results indicate that the relationship between emotion and moral judgments can be observed at a preconscious neural level at an early processing stage as well as in explicit judgments.


2016
Author(s): Lieke Melsen, Adriaan Teuling, Paul Torfs, Massimiliano Zappa, Naoki Mizukami, ...

Abstract. The transfer of parameter sets across different temporal and spatial resolutions is common practice in many large-domain hydrological modelling studies. The degree to which parameters are transferable across temporal and spatial resolutions is an indicator of how well spatial and temporal variability are represented in the models: a large degree of transferability may well indicate a poor representation of such variability in the employed models. To investigate parameter transferability across resolution in time and space, we set up a study in which the Variable Infiltration Capacity (VIC) model for the Thur basin in Switzerland was run at four different spatial resolutions (1×1 km, 5×5 km, 10×10 km, lumped) and evaluated at three relevant temporal resolutions (hour, day, month), with both uniform and distributed forcing. The model was run 3,150 times using a Hierarchical Latin Hypercube Sample, and the best 1 % of the runs was selected as behavioural. The overlap in behavioural sets for different spatial and temporal resolutions was used as an indicator of parameter transferability. A key result of this study is that the overlap in parameter sets for different spatial resolutions was much larger than for different temporal resolutions, even when the forcing was applied in a distributed fashion. This result suggests that it is easier to transfer parameters across different spatial resolutions than across different temporal resolutions. However, it also indicates a substantial underestimation of the spatial variability represented in the hydrological simulations, suggesting that the high spatial transferability may occur because the current generation of large-domain models has an inadequate representation of spatial variability and hydrologic connectivity. The results presented in this paper provide a strong motivation to further investigate, and substantially improve, the representation of spatial and temporal variability in large-domain hydrological models.
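The transferability metric described above can be sketched in a few lines: draw a (here, plain, not hierarchical) Latin hypercube sample of parameter sets, score each run per resolution, keep the best 1 % as behavioural, and compare the behavioural index sets. This is an illustrative sketch only; the scoring function, the Jaccard-style overlap measure, and all names below are assumptions, not the study's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def latin_hypercube(n_samples, n_params, rng):
    """Plain Latin hypercube sample on the unit cube (illustrative only;
    the study used a *Hierarchical* Latin Hypercube Sample)."""
    # One stratified draw per parameter, independently shuffled per column.
    cut = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
    return np.column_stack([rng.permutation(cut) for _ in range(n_params)])

def behavioural_indices(scores, fraction=0.01):
    """Indices of the best `fraction` of runs (higher score = better fit)."""
    k = max(1, int(len(scores) * fraction))
    return set(np.argsort(scores)[-k:])

def overlap(set_a, set_b):
    """Jaccard-style overlap between two behavioural sets (an assumed
    metric; the paper does not spell out its exact overlap formula here)."""
    return len(set_a & set_b) / len(set_a | set_b)

# Hypothetical example: 3,150 parameter sets scored at two temporal
# resolutions with a stand-in objective function.
params = latin_hypercube(3150, 5, rng)
score_hourly = -((params - 0.5) ** 2).sum(axis=1)
score_daily = score_hourly + rng.normal(0.0, 0.01, 3150)

a = behavioural_indices(score_hourly)
b = behavioural_indices(score_daily)
print(f"behavioural runs per resolution: {len(a)}, overlap: {overlap(a, b):.2f}")
```

A high overlap between two resolutions would then be read, as in the abstract, as high parameter transferability between them.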


2020, Vol 30 (7), pp. 4220–4237
Author(s): Thomas Hörberg, Maria Larsson, Ingrid Ekström, Camilla Sandöy, Peter Lundén, ...

Abstract
Visual stimuli often dominate nonvisual stimuli during multisensory perception. Evidence suggests higher cognitive processes prioritize visual over nonvisual stimuli during divided attention. Visual stimuli should thus be disproportionally distracting when processing incongruent cross-sensory stimulus pairs. We tested this assumption by comparing visual processing with olfaction, a "primitive" sensory channel that detects potentially hazardous chemicals by alerting attention. Behavioral and event-related brain potentials (ERPs) were assessed in a bimodal object categorization task with congruent or incongruent odor–picture pairings and a delayed auditory target that indicated whether olfactory or visual cues should be categorized. For congruent pairings, accuracy was higher for visual compared to olfactory decisions. However, for incongruent pairings, reaction times (RTs) were faster for olfactory decisions. Behavioral results suggested that incongruent odors interfered more with visual decisions, thereby providing evidence for an "olfactory dominance" effect. Categorization of incongruent pairings engendered a late "slow wave" ERP effect. Importantly, this effect had a later amplitude peak and longer latency during visual decisions, likely reflecting additional categorization effort for visual stimuli in the presence of incongruent odors. In sum, contrary to what might be inferred from theories of "visual dominance," incongruent odors may in fact uniquely attract mental processing resources during perceptual incongruence.
