Audition

Author(s):  
Leslie S. Smith

Audition is the ability to sense and interpret pressure waves within a range of frequencies. The auditory system tries to solve the what and where tasks: what is the sound source (interpretation), and where is it (localization)? Auditory environments vary in the number and location of sound sources, in their level, and in the degree of reverberation, yet biological systems have robust techniques that work over a large range of conditions. We briefly review the auditory system, including the major components of the auditory brainstem and midbrain, attempting to connect its structure with the problems to be solved: locating some sounds, and interpreting important ones. Systems using knowledge of animal auditory processing are discussed, including both CPU-based and neuromorphic approaches, starting from the auditory filterbank and including silicon cochleae; feature-based (auditory landmark) systems are also considered. The level of performance of animal auditory systems has not yet been achieved by artificial ones, and we discuss ways forward.
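
The auditory filterbank mentioned above is commonly approximated in CPU-based systems by a gammatone filterbank with channels spaced on the ERB-rate scale. The following is a minimal illustrative sketch of that standard technique, not the specific system reviewed here; the constants are the usual Glasberg and Moore ERB parameters.

```python
import numpy as np

def erb_space(low_hz, high_hz, n):
    """Center frequencies equally spaced on the ERB-rate scale
    (Glasberg & Moore constants: ear_q = 9.26449, min_bw = 24.7)."""
    ear_q, min_bw = 9.26449, 24.7
    lo = ear_q * min_bw
    return -lo + np.exp(
        np.linspace(np.log(low_hz + lo), np.log(high_hz + lo), n)
    )

def gammatone_ir(fc, fs, duration=0.025, order=4):
    """Peak-normalized impulse response of a gammatone filter at fc."""
    t = np.arange(int(duration * fs)) / fs
    erb = 24.7 + fc / 9.26449
    b = 1.019 * erb                      # bandwidth parameter
    ir = t ** (order - 1) * np.exp(-2 * np.pi * b * t) \
        * np.cos(2 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def filterbank(signal, fs, n_channels=16, low=100.0, high=4000.0):
    """Return an (n_channels, len(signal)) array of band-passed outputs,
    one row per cochlear 'place'."""
    return np.stack([
        np.convolve(signal, gammatone_ir(fc, fs), mode="same")
        for fc in erb_space(low, high, n_channels)
    ])
```

Each output row crudely approximates the motion of one place on the basilar membrane; a 1 kHz tone, for example, produces most energy in the channel whose center frequency lies nearest 1 kHz.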

1992
Vol 336 (1278)
pp. 295-306

The past 30 years have seen a remarkable development in our understanding of how the auditory system - particularly the peripheral system - processes complex sounds. Perhaps the most significant advance has been in our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of our normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electro-mechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear’s overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea (hypoxia, disease, drugs, noise overexposure, mechanical disturbance), and this deterioration is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres in response to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the ‘place’ and ‘time’ coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear’s remarkably wide dynamic range is briefly addressed.
In the auditory brainstem, particularly the dorsal cochlear nucleus, are inhibitory mechanisms responsible for enhancing the spectral and temporal contrasts in complex sounds. These mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.


2015
Vol 32 (5)
pp. 445-459
Author(s):  
Kyung Myun Lee ◽  
Erika Skoe ◽  
Nina Kraus ◽  
Richard Ashley

Acoustic periodicity is an important factor for discriminating consonant from dissonant intervals. While previous studies have found that the periodicity of musical intervals is temporally encoded by neural phase locking throughout the auditory system, how the nonlinearities of the auditory pathway influence the encoding of periodicity, and how this effect relates to sensory consonance, have been underexplored. By measuring human auditory brainstem responses (ABRs) to four diotically presented musical intervals with increasing degrees of dissonance, this study seeks to explicate how the subcortical auditory system transforms the neural representation of acoustic periodicity for consonant versus dissonant intervals. ABRs faithfully reflect neural activity in the brainstem synchronized to the stimulus while also capturing nonlinear aspects of auditory processing. Results show that for the most dissonant interval, which has a less periodic stimulus waveform than the most consonant interval, the aperiodicity of the stimulus is intensified in the subcortical response. The decreased periodicity of dissonant intervals is associated with a larger number of nonlinearities (i.e., distortion products) in the response spectrum. Our findings suggest that the auditory system transforms the periodicity of dissonant intervals, making consonant and dissonant intervals more distinct in the neural code than if they were processed by a linear auditory system.
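
The stimulus-side distinction the study builds on - consonant intervals have more periodic waveforms than dissonant ones - can be illustrated with a toy autocorrelation measure. This is an illustrative sketch only, not the ABR analysis pipeline of the study: a perfect fifth (3:2 frequency ratio) yields a strongly periodic two-tone sum, while a tritone (irrational ratio) does not.

```python
import numpy as np

def periodicity(x, fs, fmin=50.0, fmax=1000.0):
    """Height of the largest normalized autocorrelation peak over
    lags in [1/fmax, 1/fmin] s; close to 1 for periodic waveforms."""
    x = x - x.mean()
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]  # lags 0..N-1
    ac /= ac[0]                                        # normalize at lag 0
    return float(ac[int(fs / fmax): int(fs / fmin)].max())

def dyad(f0, ratio, fs=16000, dur=0.5):
    """Two simultaneous pure tones forming a musical interval."""
    t = np.arange(int(dur * fs)) / fs
    return np.sin(2 * np.pi * f0 * t) + np.sin(2 * np.pi * f0 * ratio * t)

fifth   = periodicity(dyad(220.0, 3 / 2), 16000)     # consonant interval
tritone = periodicity(dyad(220.0, 2 ** 0.5), 16000)  # dissonant interval
```

For the fifth, the waveform repeats exactly at the common fundamental (110 Hz here), so the autocorrelation peak is close to 1; for the tritone no short common period exists and the peak is visibly lower.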


PeerJ
2020
Vol 8
pp. e9363
Author(s):  
Priscilla Logerot ◽  
Paul F. Smith ◽  
Martin Wild ◽  
M. Fabiana Kubke

In birds the auditory system plays a key role in providing the sensory input used to discriminate between conspecific and heterospecific vocal signals. In those species that are known to learn their vocalizations, for example songbirds, it is generally considered that this ability arises and is manifest in the forebrain, although there is no a priori reason why brainstem components of the auditory system could not also play an important part. To test this assumption, we used groups of normally reared and cross-fostered zebra finches that had previously been shown in behavioural experiments to reduce their preference for conspecific songs following cross-fostering experience with Bengalese finches, a related species with a distinctly different song. The question we asked, therefore, is whether this experiential change also changes the bias in favour of conspecific song displayed by auditory midbrain units of normally reared zebra finches. By recording the responses of single units in MLd to a variety of zebra finch and Bengalese finch songs in both normally reared and cross-fostered zebra finches, we provide a positive answer to this question. That is, the difference in response to conspecific and heterospecific songs seen in normally reared zebra finches is reduced following cross-fostering. In birds the virtual absence of mammalian-like cortical projections upon auditory brainstem nuclei argues against the interpretation that the changes in MLd units observed in the present experiments result from top-down influences on sensory processing. Instead, it appears that MLd units can be influenced significantly by sensory inputs arising directly from a change in auditory experience during development.


2021
pp. 1-7
Author(s):  
Kristina Anton ◽  
Arne Ernst ◽  
Dietmar Basta

BACKGROUND: During walking, postural stability is controlled by visual, vestibular and proprioceptive input. The auditory system uses acoustic input to localize sound sources. For some static balance conditions, an auditory influence on posture has already been demonstrated, but little is known about the impact of auditory input on balance in dynamic conditions. OBJECTIVE: This study investigates postural stability during walking tasks in silence and in sound-on conditions, to better understand the impact of auditory input on balance during movement. METHODS: Thirty participants performed four tasks: walking (eyes open), tandem steps, walking while turning the head, and walking over barriers. During each task, the acoustic condition alternated between silence and noise presented through an earth-fixed loudspeaker located at the end of the walking path. Body sway velocity was recorded close to the body’s center of gravity. RESULTS: Body sway velocity decreased significantly for walking (eyes open), tandem steps and walking over barriers when noise was presented. The auditory stimulus did not affect sway velocity while walking with the head turning. Posture probably improved because the sound source could be localized while walking with the head facing forward, whereas localization was impaired when the head was turned. CONCLUSIONS: The ability to localize a fixed sound source through the auditory system has a significant but limited impact on posture while walking.


Author(s):  
Felix Felmy

Parallel processing streams guide ascending auditory information through the processing hierarchy of the auditory brainstem. Many of these processing streams converge in the lateral lemniscus, the fiber bundle that connects the cochlear nuclei and superior olivary complex with the inferior colliculus. The neuronal populations within the lateral lemniscus can be segregated according to their gross structure-function relationships into three distinct nuclei, termed the ventral, intermediate, and dorsal nuclei according to their position within the lemniscal fiber bundle. The complexity of their input patterns increases in ascending order. The three nuclei employ different neurotransmitters and exhibit distinct synaptic and biophysical features, yet all three display considerable heterogeneity. Functionally, the ventral nucleus of the lateral lemniscus has been hypothesized to reduce spectral splatter by generating rapid, temporally precise feedforward onset inhibition in the inferior colliculus. In the intermediate nucleus of the lateral lemniscus, cross-frequency integration has been observed. The hallmark of the dorsal nucleus of the lateral lemniscus is the generation of a long-lasting inhibition in its contralateral counterpart and in the inferior colliculus. This inhibition is proposed to suppress sound sources during reverberation and could act as a temporal filter capable of removing spurious interaural time differences. While great advances have been made in understanding the role these nuclei play in auditory processing, the functional diversity of individual neuronal responses within each nucleus remains largely unresolved.


2020
Author(s):  
Timo Oess ◽  
Heiko Neumann ◽  
Marc O. Ernst

Early studies have shown that the localization of a sound source in the vertical plane can be accomplished with only a single ear, and is thus assumed to be based on monaural spectral cues. Such cues consist of notches and peaks in the perceived spectrum which vary systematically with the elevation of the sound source. This poses several problems for the auditory system, among them extracting the relevant, direction-dependent cues. Interestingly, binaural information from both ears is already available at the stage of elevation estimation, and it seems reasonable for the auditory system to take advantage of it, especially since such binaural integration can improve localization performance dramatically, as we demonstrate with a computational model of binaural signal integration for sound source localization in the vertical plane. In line with previous findings on vertical localization, modeling results show that the auditory system can perform monaural as well as binaural sound source localization given a single, learned map of binaural signals. Binaural localization is far more accurate than monaural localization; however, monaural localization performance is restored when prior information about the perceived sound is integrated. Thus, we propose that elevation estimation of sound sources is facilitated by an early binaural signal integration and can incorporate sound-type-specific prior information for higher accuracy.
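
Why pooling two ears helps can be shown with a toy template-matching model. This is a hedged sketch, not the paper's model: the single-notch "HRTF" spectra, the notch-vs-elevation mapping, and the noise level are all illustrative assumptions, and the prior-information mechanism is omitted. The point demonstrated is only that summing the evidence from two independently noisy ears against one learned map reduces elevation error.

```python
import numpy as np

rng = np.random.default_rng(0)

ELEVS = np.linspace(-30.0, 60.0, 13)   # candidate elevations (deg)
FREQS = np.linspace(4.0, 16.0, 64)     # spectral bins (kHz), pinna-cue range

def template(elev):
    """Synthetic pinna spectrum (dB): a single notch whose centre
    frequency rises with elevation (an illustrative assumption)."""
    notch = 6.0 + 8.0 * (elev + 30.0) / 90.0       # 6-14 kHz
    return -12.0 * np.exp(-0.5 * (FREQS - notch) ** 2)

MAP = np.stack([template(e) for e in ELEVS])       # the single learned map

def estimate(true_elev, n_ears, noise_db=8.0):
    """Match noisy observed spectra to the map; with two ears the
    (Gaussian) log-likelihoods are summed before picking the best."""
    score = np.zeros(len(ELEVS))
    for _ in range(n_ears):
        obs = template(true_elev) + rng.normal(0.0, noise_db, len(FREQS))
        score -= ((MAP - obs) ** 2).sum(axis=1)    # log-likelihood up to scale
    return ELEVS[int(np.argmax(score))]

def rms_error(n_ears, trials=400):
    errs = [estimate(e, n_ears) - e for e in rng.choice(ELEVS, size=trials)]
    return float(np.sqrt(np.mean(np.square(errs))))
```

Running `rms_error(2)` against `rms_error(1)` shows the binaural estimate is consistently more accurate, mirroring the qualitative result of the abstract.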


2019
Vol 11 (1)
pp. 48
Author(s):  
Jay R. Lucker ◽  
Alex Doman

Background: Some children cannot tolerate sounds, so their systems “shut down” and stop taking in what they hear, or they fight not to listen or run away from listening situations. Research has demonstrated that the underlying problem is not with the children’s auditory systems, but with the connections between the auditory system (listening) and the emotional system, leading the children to be oversensitive to sound and to respond with negative emotional reactions when listening [1, 2]. One treatment found effective in helping children with hypersensitive hearing is the use of specially recorded and acoustically modified music and sound, such as found in The Listening Program® (TLP) [3]. Following a regimen of daily listening to this music, research has demonstrated significant improvements in listening (called auditory processing) and educational performance, as noted by greater focusing and listening in the classroom, improvements in educational performance on standardized measures, and greater participation in educational activities [4, 5]. Objective: The purpose of this paper is to discuss TLP, describing some of the acoustic methods used to enhance the sound to make it therapeutic for listening. Methods: The specific music chosen, and why that music is used, is discussed. An overview of the material and equipment used in TLP training is presented. To demonstrate the effectiveness of TLP training, research completed on children who went through such training is also presented. Results: Review of the research on the effectiveness of TLP demonstrates that, with the use of this specially recorded music, significant improvements can be found in children’s listening, auditory processing, and educational abilities.


1999
Vol 58 (3)
pp. 170-179
Author(s):  
Barbara S. Muller ◽  
Pierre Bovet

Twelve blindfolded subjects localized two different pure tones, played in random order from eight sound sources in the horizontal plane. Subjects either could or could not use information supplied by their pinnae (external ears) and by head movements. We found that the pinnae, as well as head movements, had a marked influence on auditory localization performance with this type of sound. The effects of pinnae and head movements appeared to be additive: the absence of either factor produced the same loss of localization accuracy, and even much the same error pattern. Head movement analysis showed that subjects turned their faces towards the emitting sound source, except for sources exactly in front or exactly behind, which were identified by turning the head to both sides. Head movement amplitude increased smoothly as the sound source moved from the anterior to the posterior quadrant.


Author(s):  
Laura Hurley

The inferior colliculus (IC) receives prominent projections from centralized neuromodulatory systems. These systems include extra-auditory clusters of cholinergic, dopaminergic, noradrenergic, and serotonergic neurons. Although these modulatory sites are not explicitly part of the auditory system, they receive projections from primary auditory regions and are responsive to acoustic stimuli. This bidirectional influence suggests the existence of auditory-modulatory feedback loops. A characteristic of neuromodulatory centers is that they integrate inputs from anatomically widespread and functionally diverse sets of brain regions. This connectivity gives neuromodulatory systems the potential to import information into the auditory system on situational variables that accompany acoustic stimuli, such as context, internal state, or experience. Once released, neuromodulators functionally reconfigure auditory circuitry through a variety of receptors expressed by auditory neurons. In addition to shaping ascending auditory information, neuromodulation within the IC influences behaviors that arise subcortically, such as prepulse inhibition of the startle response. Neuromodulatory systems therefore provide a route for integrative behavioral information to access auditory processing from its earliest levels.


Sensors
2021
Vol 21 (2)
pp. 532
Author(s):  
Henglin Pu ◽  
Chao Cai ◽  
Menglan Hu ◽  
Tianping Deng ◽  
Rong Zheng ◽  
...  

Multiple blind sound source localization is the key technology for a myriad of applications such as robotic navigation and indoor localization. However, existing solutions can only locate a few sound sources simultaneously because of the limit imposed by the number of microphones in an array. To this end, this paper proposes a novel multiple blind sound source localization algorithm using Source seParation and BeamForming (SPBF). Our algorithm overcomes the limitations of existing solutions and can locate more blind sources than there are microphones in the array. Specifically, we propose a novel microphone layout that enables effective separation of multiple sources while still preserving their arrival-time information. We then perform source localization via beamforming on each demixed source. This design minimizes mutual interference between sound sources, thereby enabling finer angle-of-arrival (AoA) estimation. To further enhance localization performance, we design a new spectral weighting function that improves the signal-to-noise ratio, allowing a relatively narrow beam and thus finer AoA estimation. Simulation experiments under typical indoor conditions demonstrate a maximum localization error of only 4° even with up to 14 sources.
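
The beamforming stage can be illustrated in isolation with a classic delay-and-sum scan over candidate angles. This is a simplified sketch under stated assumptions - a linear four-microphone array and an ideal far-field tone - not the SPBF microphone layout or spectral weighting function of the paper.

```python
import numpy as np

C, FS = 343.0, 16000                     # speed of sound (m/s), sample rate
MIC_X = np.array([0.0, 0.1, 0.2, 0.3])   # linear array positions (m)

def simulate(angle_deg, f0=800.0, dur=0.1):
    """Far-field tone from angle_deg: microphone i hears it delayed
    by tau_i = x_i * cos(angle) / c relative to the reference mic."""
    t = np.arange(int(dur * FS)) / FS
    taus = MIC_X * np.cos(np.radians(angle_deg)) / C
    return np.stack([np.sin(2 * np.pi * f0 * (t - tau)) for tau in taus])

def aoa_delay_and_sum(signals):
    """Scan steering angles 0..180 deg; the steering whose delays
    best align the channels sums most coherently (max output power)."""
    best_angle, best_power = 0, -1.0
    for a in range(181):
        taus = MIC_X * np.cos(np.radians(a)) / C
        shifts = np.round(taus * FS).astype(int)
        acc = sum(np.roll(sig, -s) for sig, s in zip(signals, shifts))
        p = float((acc ** 2).sum())
        if p > best_power:
            best_angle, best_power = a, p
    return best_angle
```

With integer-sample steering delays the angular resolution is coarse near broadside, but the estimate still lands within a few degrees of the true direction; applying this per demixed source, as SPBF does, keeps the other sources from biasing each scan.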

