Causal Uncertainty and Contextual Cues in the Recognition of Environmental Sounds

1988 ◽  
Vol 32 (4) ◽  
pp. 247-251
Author(s):  
R. Timothy Mullins

Previous research has supported the hypothesis that the recognition of environmental sounds is complicated by uncertainty arising from the number of potential causes of that sound. In natural settings, contextual cues often help to specify the source of ambiguous sounds. This raises the question of whether contextual cues can override auditory information to establish causal certainty about otherwise ambiguous environmental sounds. A study was conducted to examine this possibility. The results showed that contextual cues could have powerful effects on judgements of the causal event underlying auditory stimuli. This result could have implications for tasks that depend on the discrimination of auditory events. In particular, if a discrimination between two auditory events is critical, the effects of auditory context suggest that two or more possible alternatives might be indistinguishable in context and should be isolated for purposes of contrast.

1990 ◽  
Vol 13 (2) ◽  
pp. 201-233 ◽  
Author(s):  
Risto Näätänen

Abstract This article examines the role of attention and automaticity in auditory processing as revealed by event-related potential (ERP) research. An ERP component called the mismatch negativity, generated by the brain's automatic response to changes in repetitive auditory input, reveals that physical features of auditory stimuli are fully processed whether or not they are attended. It also suggests that there exist precise neuronal representations of the physical features of recent auditory stimuli, perhaps the traces underlying acoustic sensory (“echoic”) memory. A mechanism of passive attention switching in response to changes in repetitive input is also implicated. Conscious perception of discrete acoustic stimuli might be mediated by some of the mechanisms underlying another ERP component (N1), one sensitive to stimulus onset and offset. Frequent passive attentional shifts might account for the effect cognitive psychologists describe as “the breakthrough of the unattended” (Broadbent 1982), that is, that even unattended stimuli may be semantically processed, without assuming automatic semantic processing or late selection in selective attention. The processing negativity supports the early-selection theory and may arise from a mechanism for selectively attending to stimuli defined by certain features. This stimulus selection occurs in the form of a matching process in which each input is compared with the “attentional trace,” a voluntarily maintained representation of the task-relevant features of the stimulus to be attended. The attentional mechanism described might underlie the stimulus-set mode of attention proposed by Broadbent. Finally, a model of automatic and attentional processing in audition is proposed that is based mainly on the aforementioned ERP components and some other physiological measures.
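
As an illustration of how the mismatch negativity is conventionally quantified (this is not code from the article), the sketch below computes the deviant-minus-standard difference wave from oddball-paradigm epochs and its mean amplitude in a post-stimulus window; the array shapes, sampling rate, and analysis window are assumptions.

```python
# Minimal, illustrative MMN computation: the mismatch negativity is usually
# measured as the difference wave (deviant ERP minus standard ERP).
import numpy as np

def difference_wave(standard_epochs, deviant_epochs):
    """Average each condition and return deviant-minus-standard.

    Each input: array of shape (n_trials, n_samples) for one electrode (e.g. Fz).
    """
    erp_standard = standard_epochs.mean(axis=0)
    erp_deviant = deviant_epochs.mean(axis=0)
    return erp_deviant - erp_standard

def mmn_amplitude(diff_wave, fs, window=(0.10, 0.20)):
    """Mean amplitude of the difference wave in a post-stimulus window
    (typically ~100-200 ms, where the MMN peaks for simple feature changes)."""
    start, stop = (int(t * fs) for t in window)
    return diff_wave[start:stop].mean()
```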


2020 ◽  
Vol 57 (4) ◽  
pp. 379-405
Author(s):  
Lindsey A Wilhelm

Abstract Older adults commonly experience hearing loss that negatively affects quality of life and creates barriers to effective therapeutic interactions as well as music listening. Music therapists have the potential to address some needs of older adults, but the effectiveness of music interventions depends on the perception of spoken and musical stimuli. Nonauditory information, such as contextual cues (e.g., keywords, a picture related to the song) and visual cues (e.g., a clear view of the singer’s face), can improve speech perception. The purpose of this study was to examine the benefit of contextual and visual cues on sung word recognition in the presence of guitar accompaniment. The researcher tested 24 community-dwelling older adult hearing aid (HA) users, recruited through a university HA clinic and laboratory, under three study conditions: (a) auditory stimuli only, (b) auditory stimuli with contextual cues, and (c) auditory stimuli with visual cues. Both visual and contextual nonauditory cues benefited participants’ sung word recognition, and visual cues provided greater benefit than contextual cues; participants’ music background and training were predictive of success without nonauditory cues. Based on the results of this study, it is recommended that music therapists increase the accessibility of music interventions reliant upon lyric recognition by incorporating clear visual and contextual cues.


2015 ◽  
Vol 3 (1-2) ◽  
pp. 88-101 ◽  
Author(s):  
Kathleen M. Einarson ◽  
Laurel J. Trainor

Recent work examined five-year-old children’s perceptual sensitivity to musical beat alignment. In this work, children watched pairs of videos of puppets drumming to music with simple or complex metre, where one puppet’s drumming sounds (and movements) were synchronized with the beat of the music and the other drummed with incorrect tempo or phase. The videos were used to maintain children’s interest in the task. Five-year-olds were better able to detect beat misalignments in simple than complex metre music. However, adults can perform poorly when attempting to detect misalignment of sound and movement in audiovisual tasks, so it is possible that the moving stimuli actually hindered children’s performance. Here we compared children’s sensitivity to beat misalignment in conditions with dynamic visual movement versus still (static) visual images. Eighty-four five-year-old children performed either the same task as described above or a task that employed identical auditory stimuli accompanied by a motionless picture of the puppet with the drum. There was a significant main effect of metre type, replicating the finding that five-year-olds are better able to detect beat misalignment in simple metre music. There was no main effect of visual condition. These results suggest that, given identical auditory information, children’s ability to judge beat misalignment in this task is not affected by the presence or absence of dynamic visual stimuli. We conclude that at five years of age, children can tell if drumming is aligned to the musical beat when the music has simple metric structure.


Author(s):  
Erkin Asutay ◽  
Daniel Västfjäll

The focus of Erkin Asutay and Daniel Västfjäll’s chapter is the relationship between sound and emotion. Evidence from behavioral and neuroimaging studies is presented that documents how sound can evoke emotions and how emotional processes affect sound perception. This leads to a discussion of different forms of emotional responses to auditory stimuli, such as responses to vocal signals, responses to environmental sounds, and responses to music. The authors view the auditory system as an adaptive network that governs both how auditory stimuli influence emotional reactions and how the affective significance of sound influences auditory attention. In conclusion, they argue that affective experience is integral to auditory perception.


2021 ◽  
Vol 39 (3) ◽  
pp. 315-327
Author(s):  
Marco Brambilla ◽  
Matteo Masi ◽  
Simone Mattavelli ◽  
Marco Biella

Face processing has mainly been investigated by presenting facial expressions without any contextual information. However, in everyday interactions with others, the sight of a face is often accompanied by contextual cues that are processed either visually or under different sensory modalities. Here, we tested whether the perceived trustworthiness of a face is influenced by the auditory context in which that face is embedded. In Experiment 1, participants evaluated trustworthiness from faces that were surrounded by either threatening or non-threatening auditory contexts. Results showed that faces were judged more untrustworthy when accompanied by threatening auditory information. Experiment 2 replicated the effect in a design that disentangled the effects of threatening contexts from negative contexts in general. Thus, perceiving facial trustworthiness involves a cross-modal integration of the face and the level of threat posed by the surrounding context.


2007 ◽  
Vol 98 (4) ◽  
pp. 2399-2413 ◽  
Author(s):  
Vivian M. Ciaramitaro ◽  
Giedrius T. Buračas ◽  
Geoffrey M. Boynton

Attending to a visual or auditory stimulus often requires irrelevant information to be filtered out, both within the modality attended and in other modalities. For example, attentively listening to a phone conversation can diminish our ability to detect visual events. We used functional magnetic resonance imaging (fMRI) to examine brain responses to visual and auditory stimuli while subjects attended visual or auditory information. Although early cortical areas are traditionally considered unimodal, we found that brain responses to the same ignored information depended on the modality attended. In early visual area V1, responses to ignored visual stimuli were weaker when attending to another visual stimulus, compared with attending to an auditory stimulus. The opposite was true in more central visual area MT+, where responses to ignored visual stimuli were weaker when attending to an auditory stimulus. Furthermore, fMRI responses to the same ignored visual information depended on the location of the auditory stimulus, with stronger responses when the attended auditory stimulus shared the same side of space as the ignored visual stimulus. In early auditory cortex, responses to ignored auditory stimuli were weaker when attending to a visual stimulus. A simple parameterization of our data can describe the effects of redirecting attention across space within the same modality (spatial attention) or across modalities (cross-modal attention), and the influence of spatial attention across modalities (cross-modal spatial attention). Our results suggest that the representation of unattended information depends on whether attention is directed to another stimulus in the same modality or the same region of space.


2011 ◽  
Vol 23 (10) ◽  
pp. 3121-3131 ◽  
Author(s):  
M. Visser ◽  
M. A. Lambon Ralph

Studies of semantic dementia and repetitive TMS have suggested that the bilateral anterior temporal lobes (ATLs) underpin a modality-invariant representational hub within the semantic system. However, it is not clear whether all ATL subregions contribute in the same way. We utilized distortion-corrected fMRI to investigate the pattern of activation in the left and right ATL when participants performed a semantic decision task on auditory words, environmental sounds, or pictures. The results showed that the ATL is not functionally homogeneous but instead shows graded specialization. Both left and right ventral ATL (vATL) responded to all modalities, in keeping with the notion that this region underpins multimodal semantic processing. In addition, there were graded differences across the hemispheres. Semantic processing of both picture and environmental sound stimuli was associated with equivalent bilateral vATL activation, whereas auditory words generated greater activation in the left than the right vATL. This graded specialization for auditory stimuli appears to reflect input from the left superior ATL, which responded solely to semantic decisions made on spoken words and environmental sounds, suggesting that this region is specialized for auditory stimuli. A final noteworthy result was that these regions were activated by domain-level decisions on singly presented stimuli, which appears to be incompatible with the hypotheses that the ATL is dedicated to (a) the representation of specific entities or (b) combinatorial semantic processes.


Foods ◽  
2019 ◽  
Vol 8 (4) ◽  
pp. 124 ◽  
Author(s):  
Lin ◽  
Hamid ◽  
Shepherd ◽  
Kantono ◽  
Spence

Recently, it has been shown that various auditory stimuli modulate flavour perception. The present study attempts to understand the effects of environmental sounds (park, food court, fast food restaurant, café, and bar sounds) on the perception of chocolate gelato (specifically, sweet, bitter, milky, creamy, cocoa, roasted, and vanilla notes) using the Temporal Check-All-That-Apply (TCATA) method. Additionally, affective ratings of the auditory stimuli were obtained using the Self-Assessment Manikin (SAM) in terms of their valence, arousal, and dominance. In total, 58 panellists rated the sounds and chocolate gelato in a sensory laboratory. The results revealed that bitterness, roasted, and cocoa notes were more evident when the bar, fast food restaurant, and food court sounds were played. Meanwhile, sweetness was cited more in the early mastication period when listening to the park and café sounds. The park sound was rated significantly higher in valence, while the bar sound was rated significantly higher in arousal. Dominance was significantly higher for the fast food restaurant, food court, and bar sound conditions. Intriguingly, the valence evoked by the pleasant park sound was positively correlated with the sweetness of the gelato, while the arousal associated with the bar sound was positively correlated with the bitterness, roasted, and cocoa attributes. Taken together, these results demonstrate that people’s perception of the flavour of gelato varied with the different real-world sounds used in this study.
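
For readers unfamiliar with TCATA data, the following hypothetical sketch (not the authors' analysis code) shows how citation-proportion curves are typically derived: for each attribute, the proportion of panellists who have it checked at each moment of the evaluation. The data layout, attribute list, and helper names are assumptions based only on the abstract.

```python
# Illustrative TCATA summary: citation proportions per attribute over time.
import numpy as np

ATTRIBUTES = ["sweet", "bitter", "milky", "creamy", "cocoa", "roasted", "vanilla"]

def citation_proportions(selections):
    """Proportion of panellists citing each attribute at each time point.

    selections: boolean array of shape (n_panellists, n_attributes, n_timepoints),
    True where a panellist had that attribute checked at that moment.
    """
    return selections.mean(axis=0)  # shape: (n_attributes, n_timepoints)

def early_window_mean(proportions, attribute, n_early, attributes=ATTRIBUTES):
    """Mean citation proportion for one attribute over the first n_early
    time points (e.g., the early mastication period)."""
    idx = attributes.index(attribute)
    return proportions[idx, :n_early].mean()
```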


2007 ◽  
Vol 38 (4) ◽  
pp. 193-202 ◽  
Author(s):  
Masaru Kawamata ◽  
Eiji Kirino ◽  
Reiichi Inoue ◽  
Heii Arai

The goal of this study was to explore the generation mechanism of the frontal midline theta rhythm (Fm theta) using event-related desynchronization/synchronization (ERD/ERS) analysis in relation to task-irrelevant external stimuli. A dual paradigm was employed: a videogame combined with the simultaneous presentation of passive auditory oddball stimuli. We analyzed ERD/ERS using both the Fast Fourier Transform (FFT) and the wavelet transform (WT). In the FFT data, during periods when Fm theta appeared, clear ERD of the theta band was observed at Fz and Cz, and this ERD was much more prominent when Fm theta was present than when it was absent. In the WT data, ERD was likewise observed, but in this case it was preceded by ERS during periods both with and without Fm theta. Furthermore, the WT analysis indicated that ERD was followed by ERS during the periods without Fm theta, whereas no such ERS following ERD was seen during Fm theta. In our study, Fm theta was desynchronized by auditory stimuli that were independent of the videogame task used to evoke it. The ERD of Fm theta might reflect a “positive suppression” mechanism that processes external auditory stimuli automatically and prevents attentional resources from being unnecessarily allocated to those stimuli. Another possibility is that the Fm theta induced by our dual paradigm reflects information processing shaped by the multi-item working memory demands of playing the videogame together with the simultaneous auditory processing that relies on a memory trace. The ERS in the WT data without Fm theta might indicate further processing of the auditory information, free from the “positive suppression” control reflected by Fm theta.
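
As a rough illustration of FFT-based ERD/ERS analysis (not the authors' pipeline), the sketch below computes theta-band power per epoch and expresses the post-stimulus change as a percentage of a pre-stimulus baseline, with negative values indicating desynchronization (ERD) and positive values synchronization (ERS); the sampling rate, band limits, and epoch layout are assumptions.

```python
# Illustrative ERD/ERS computation for one EEG channel (e.g., Fz or Cz).
import numpy as np

FS = 500            # sampling rate in Hz (assumed)
THETA = (4.0, 8.0)  # theta band in Hz

def band_power(epochs, fs, band):
    """Mean spectral power in `band` for each epoch, estimated with the FFT.

    epochs: array of shape (n_epochs, n_samples) for a single channel.
    """
    freqs = np.fft.rfftfreq(epochs.shape[-1], d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(epochs, axis=-1)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[:, mask].mean(axis=-1)

def erd_ers_percent(baseline_epochs, post_stimulus_epochs, fs=FS, band=THETA):
    """Percent change of band power relative to the pre-stimulus baseline:
    negative values indicate ERD, positive values indicate ERS."""
    baseline = band_power(baseline_epochs, fs, band).mean()
    post = band_power(post_stimulus_epochs, fs, band).mean()
    return 100.0 * (post - baseline) / baseline
```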

