Use of Acoustically Modified Music to Reduce Auditory Hypersensitivity in Children

2019 ◽  
Vol 11 (1) ◽  
pp. 48
Author(s):  
Jay R. Lucker ◽  
Alex Doman

Background: Some children cannot tolerate sounds, so their systems “shut down” and stop taking in what they hear, or they fight not to listen or run away from listening situations. Research has demonstrated that the underlying problem is not with the children’s auditory systems but with the connections between the auditory system (listening) and the emotional system, leading the children to be oversensitive to sound and to respond with negative emotional reactions when listening.1,2 One treatment found effective in helping children with hypersensitive hearing is the use of specially recorded and acoustically modified music and sound, such as that found in The Listening Program® (TLP).3 Following a regimen of daily listening to this music, research has demonstrated significant improvements in listening (auditory processing) and educational performance, as indicated by greater focus and listening in the classroom, gains on standardized educational measures, and greater participation in educational activities.4,5 Objective: The purpose of this paper is to discuss TLP, describing some of the acoustic methods used to enhance the sound and make it therapeutic for listening. Methods: The specific music chosen, and why that music is used, is discussed. An overview of the materials and equipment used in TLP training is presented. To demonstrate the effectiveness of TLP training, research completed on children who went through such training is also presented. Results: Review of the research on the effectiveness of TLP demonstrates that, with the use of this specially recorded music, significant improvements can be found in children’s listening, auditory processing, and educational abilities.

Author(s):  
Leslie S. Smith

Audition is the ability to sense and interpret pressure waves within a range of frequencies. The system tries to solve the what and where tasks: what is the sound source (interpretation), and where is it (location)? Auditory environments vary in the number and location of sound sources, in their level, and in the degree of reverberation, yet biological systems have robust techniques that work over a large range of conditions. We briefly review the auditory system, including the major components of the auditory brainstem and midbrain, attempting to connect its structure with the problems to be solved: locating some sounds, and interpreting important ones. Systems that use knowledge of animal auditory processing are discussed, including both CPU-based and neuromorphic approaches, starting from the auditory filterbank and including silicon cochleae; feature-based (auditory landmark) systems are also considered. The level of performance associated with animal auditory systems has not been achieved, and we discuss ways forward.
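
To make the filterbank front end concrete, here is a minimal Python sketch of a gammatone-style auditory filterbank of the kind such systems typically start from. The ERB bandwidth estimate is the standard Glasberg and Moore formula; the channel count, centre frequencies, and toy input are illustrative choices and are not taken from the article.

```python
# Sketch of a gammatone-style auditory filterbank (illustrative, not from the paper).
import numpy as np

def erb(fc):
    """Equivalent rectangular bandwidth (Hz) at centre frequency fc (Hz)."""
    return 24.7 * (4.37 * fc / 1000.0 + 1.0)

def gammatone_ir(fc, fs, duration=0.05, order=4):
    """Impulse response of a 4th-order gammatone filter centred at fc."""
    t = np.arange(0, duration, 1.0 / fs)
    b = 1.019 * erb(fc)  # bandwidth parameter
    ir = t ** (order - 1) * np.exp(-2 * np.pi * b * t) * np.cos(2 * np.pi * fc * t)
    return ir / np.max(np.abs(ir))

def filterbank(signal, fs, centre_freqs):
    """Convolve the signal with each channel's impulse response."""
    return np.array([np.convolve(signal, gammatone_ir(fc, fs), mode="same")
                     for fc in centre_freqs])

fs = 16000
t = np.arange(0, 0.5, 1.0 / fs)
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 1200 * t)  # toy two-tone input
cfs = np.geomspace(100, 4000, 16)     # 16 channels, log-spaced centre frequencies
channels = filterbank(x, fs, cfs)     # (16, len(x)) array of channel outputs
print(channels.shape)
```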


Author(s):  
Laura Hurley

The inferior colliculus (IC) receives prominent projections from centralized neuromodulatory systems. These systems include extra-auditory clusters of cholinergic, dopaminergic, noradrenergic, and serotonergic neurons. Although these modulatory sites are not explicitly part of the auditory system, they receive projections from primary auditory regions and are responsive to acoustic stimuli. This bidirectional influence suggests the existence of auditory-modulatory feedback loops. A characteristic of neuromodulatory centers is that they integrate inputs from anatomically widespread and functionally diverse sets of brain regions. This connectivity gives neuromodulatory systems the potential to import information into the auditory system on situational variables that accompany acoustic stimuli, such as context, internal state, or experience. Once released, neuromodulators functionally reconfigure auditory circuitry through a variety of receptors expressed by auditory neurons. In addition to shaping ascending auditory information, neuromodulation within the IC influences behaviors that arise subcortically, such as prepulse inhibition of the startle response. Neuromodulatory systems therefore provide a route for integrative behavioral information to access auditory processing from its earliest levels.


1992 ◽  
Vol 336 (1278) ◽  
pp. 295-306 ◽  

The past 30 years have seen a remarkable development in our understanding of how the auditory system, particularly the peripheral system, processes complex sounds. Perhaps the most significant advance has been our understanding of the mechanisms underlying auditory frequency selectivity and their importance for normal and impaired auditory processing. Physiologically vulnerable cochlear filtering can account for many aspects of our normal and impaired psychophysical frequency selectivity, with important consequences for the perception of complex sounds. For normal hearing, remarkable mechanisms in the organ of Corti, involving enhancement of mechanical tuning (in mammals probably by feedback of electro-mechanically generated energy from the hair cells), produce exquisite tuning, reflected in the tuning properties of cochlear nerve fibres. Recent comparisons of physiological (cochlear nerve) and psychophysical frequency selectivity in the same species indicate that the ear’s overall frequency selectivity can be accounted for by this cochlear filtering, at least in bandwidth terms. Because this cochlear filtering is physiologically vulnerable, it deteriorates in deleterious conditions of the cochlea (hypoxia, disease, drugs, noise overexposure, mechanical disturbance), and this deterioration is reflected in impaired psychophysical frequency selectivity. This is a fundamental feature of sensorineural hearing loss of cochlear origin, and is of diagnostic value. This cochlear filtering, particularly as reflected in the temporal patterns of cochlear fibres to complex sounds, is remarkably robust over a wide range of stimulus levels. Furthermore, cochlear filtering properties are a prime determinant of the ‘place’ and ‘time’ coding of frequency at the cochlear nerve level, both of which appear to be involved in pitch perception. The problem of how the place and time coding of complex sounds is effected over the ear’s remarkably wide dynamic range is briefly addressed. In the auditory brainstem, particularly the dorsal cochlear nucleus, inhibitory mechanisms are responsible for enhancing the spectral and temporal contrasts in complex sounds; these mechanisms are now being dissected neuropharmacologically. At the cortical level, mechanisms are evident that are capable of abstracting biologically relevant features of complex sounds. Fundamental studies of how the auditory system encodes and processes complex sounds are vital to promising recent applications in the diagnosis and rehabilitation of the hearing impaired.
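
As a concrete reference point for the comparison “in bandwidth terms”, the short sketch below evaluates the widely used Glasberg and Moore ERB estimate of auditory-filter bandwidth at a few centre frequencies. The formula is standard in the psychoacoustic literature, but it is offered here only as an illustration, not as the specific measure used in this review.

```python
# ERB auditory-filter bandwidth estimates (Glasberg & Moore formula; illustrative only).
def erb_hz(fc_hz):
    """Equivalent rectangular bandwidth (Hz) at centre frequency fc_hz (Hz)."""
    return 24.7 * (4.37 * fc_hz / 1000.0 + 1.0)

for fc in (250, 500, 1000, 2000, 4000, 8000):
    bw = erb_hz(fc)
    # Absolute bandwidth grows with centre frequency; relative sharpness (Q_ERB)
    # rises more slowly, toward roughly 9 at high frequencies.
    print(f"fc = {fc:5d} Hz   ERB = {bw:7.1f} Hz   Q_ERB = {fc / bw:4.1f}")
```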


2019 ◽  
Author(s):  
Jérémy Giroud ◽  
Agnès Trébuchon ◽  
Daniele Schön ◽  
Patrick Marquis ◽  
Catherine Liegeois-Chauvel ◽  
...  

Speech perception is mediated by both left and right auditory cortices, but with differential sensitivity to specific acoustic information contained in the speech signal. A detailed description of this functional asymmetry is missing, and the underlying models are widely debated. We analyzed cortical responses from 96 epilepsy patients with electrode implantation in left or right primary, secondary, and/or association auditory cortex. We presented short acoustic transients to reveal the stereotyped spectro-spatial oscillatory response profile of the auditory cortical hierarchy. We show remarkably similar bimodal spectral response profiles in left and right primary and secondary regions, with preferred processing modes in the theta (∼4-8 Hz) and low gamma (∼25-50 Hz) ranges. These results highlight that the human auditory system employs a two-timescale processing mode. Beyond these first cortical levels of auditory processing, a hemispheric asymmetry emerged, with delta and beta band (∼3/15 Hz) responsivity prevailing in the right hemisphere and theta and gamma band (∼6/40 Hz) activity in the left. These intracranial data provide a more fine-grained and nuanced characterization of cortical auditory processing in the two hemispheres, shedding light on the neural dynamics that potentially shape auditory and speech processing at different levels of the cortical hierarchy.

Author summary: Speech processing is now known to be distributed across the two hemispheres, but the origin and function of lateralization continue to be vigorously debated. The asymmetric sampling in time (AST) hypothesis predicts that (1) the auditory system employs a two-timescale processing mode, (2) this mode is present in both hemispheres but with a different ratio of fast and slow timescales, and (3) the asymmetry emerges outside of primary cortical regions. Capitalizing on intracranial data from 96 epilepsy patients, we validated each of these predictions and provide a precise estimate of the processing timescales. In particular, we reveal that asymmetric sampling in associative areas rests on distinct two-timescale processing modes. Overall, our results shed light on the neurofunctional architecture of cortical auditory processing.
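
For readers unfamiliar with band-limited responsivity measures, the following sketch shows one conventional way to estimate theta (4-8 Hz) and low-gamma (25-50 Hz) power from a single recording channel. The band-pass-plus-Hilbert-envelope approach, the sampling rate, and the toy signal are assumptions chosen for illustration; this is not the authors' analysis pipeline.

```python
# Band-limited power from a single (simulated) intracranial channel; illustrative only.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def band_power(x, fs, low, high, order=4):
    """Mean squared Hilbert envelope of x after band-pass filtering to [low, high] Hz."""
    nyq = fs / 2.0
    sos = butter(order, [low / nyq, high / nyq], btype="bandpass", output="sos")
    filtered = sosfiltfilt(sos, x)
    envelope = np.abs(hilbert(filtered))  # analytic amplitude
    return float(np.mean(envelope ** 2))

fs = 1000                                 # assumed sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)
# Toy signal: a 6 Hz rhythm plus a 40 Hz rhythm plus noise.
x = (np.sin(2 * np.pi * 6 * t)
     + 0.5 * np.sin(2 * np.pi * 40 * t)
     + 0.2 * np.random.randn(t.size))

print("theta (4-8 Hz) power:      ", band_power(x, fs, 4, 8))
print("low gamma (25-50 Hz) power:", band_power(x, fs, 25, 50))
```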


1990 ◽  
Vol 1 (1) ◽  
pp. 31-37
Author(s):  
John Risey ◽  
Wayne Briner

This paper reports a hitherto undescribed relationship between vertigo of central origin and dyscalculia. Subjects with vertigo skipped and displaced decades when counting backwards by twos; the error was not recognized when the task was presented visually. The subjects also displayed decrements in their ability to do mental arithmetic and in central auditory processing. The results are discussed in light of the relationship between the central vestibular/auditory system and structures involved in higher cognitive function. The relationship between balance disorders and learning disabilities in children is also examined.


2015 ◽  
Vol 32 (5) ◽  
pp. 445-459 ◽  
Author(s):  
Kyung Myun Lee ◽  
Erika Skoe ◽  
Nina Kraus ◽  
Richard Ashley

Acoustic periodicity is an important factor in discriminating consonant from dissonant intervals. While previous studies have found that the periodicity of musical intervals is temporally encoded by neural phase locking throughout the auditory system, how the nonlinearities of the auditory pathway influence the encoding of periodicity, and how this effect relates to sensory consonance, have been underexplored. By measuring human auditory brainstem responses (ABRs) to four diotically presented musical intervals with increasing degrees of dissonance, this study seeks to explicate how the subcortical auditory system transforms the neural representation of acoustic periodicity for consonant versus dissonant intervals. ABRs faithfully reflect neural activity in the brainstem synchronized to the stimulus while also capturing nonlinear aspects of auditory processing. Results show that for the most dissonant interval, which has a less periodic stimulus waveform than the most consonant interval, the aperiodicity of the stimulus is intensified in the subcortical response. The decreased periodicity of dissonant intervals is related to a larger number of nonlinearities (i.e., distortion products) in the response spectrum. Our findings suggest that the auditory system transforms the periodicity of dissonant intervals so that consonant and dissonant intervals become more distinct in the neural code than they would be if processed by a linear auditory system.
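
A rough way to see why a dissonant interval has a less periodic stimulus waveform than a consonant one is to compare the peak of the normalized autocorrelation for the two chords. The sketch below does this with synthetic four-harmonic complex tones forming a perfect fifth (3:2) versus an equal-tempered minor second; the stimulus construction and the periodicity index are illustrative assumptions, not the stimuli or analysis used in this study.

```python
# Crude periodicity comparison for a consonant vs. a dissonant interval (illustrative only).
import numpy as np

fs = 16000
t = np.arange(0, 0.3, 1.0 / fs)

def complex_tone(f0, n_harmonics=4):
    """Equal-amplitude sum of the first n_harmonics harmonics of f0."""
    return sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, n_harmonics + 1))

def periodicity_index(x, fs, min_lag_ms=2.0, max_lag_ms=20.0):
    """Peak of the normalized autocorrelation over lags in the pitch range."""
    r = np.correlate(x, x, mode="full")[x.size - 1:]  # non-negative lags
    r = r / r[0]
    lo = int(fs * min_lag_ms / 1000)
    hi = int(fs * max_lag_ms / 1000)
    return r[lo:hi].max()

fifth = complex_tone(220.0) + complex_tone(330.0)                         # consonant, 3:2
minor_second = complex_tone(220.0) + complex_tone(220.0 * 2 ** (1 / 12))  # dissonant

print("perfect fifth periodicity index:", round(periodicity_index(fifth, fs), 3))
print("minor second periodicity index: ", round(periodicity_index(minor_second, fs), 3))
```

The consonant chord repeats at the period of the shared 110 Hz fundamental, so its index sits near 1; the irrational frequency ratio of the minor second yields a noticeably lower peak.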


2013 ◽  
Vol PP (99) ◽  
pp. 1-18 ◽  

In recent years, a number of feature extraction procedures for automatic speech recognition (ASR) systems have been based on models of human auditory processing, and one often hears arguments in favor of incorporating knowledge of human auditory perception and cognition into machines for ASR. This paper takes the reverse route, and argues that the engineering techniques for automatic recognition of speech that are already in widespread use are often consistent with some well-known properties of the human auditory system.
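
One familiar example of such consistency is the mel frequency warping used in standard MFCC front ends, which allocates spectral resolution roughly the way the cochlea does: finely at low frequencies and coarsely at high ones. The short sketch below uses the common 2595·log10(1 + f/700) form of the mel scale; it is a generic illustration, not code from the paper.

```python
# Mel-scale filter spacing as used in typical MFCC front ends (generic illustration).
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

# Ten filter centres spaced uniformly in mel between 0 and 8 kHz.
mel_points = np.linspace(hz_to_mel(0.0), hz_to_mel(8000.0), 10)
centres_hz = mel_to_hz(mel_points)
print(np.round(centres_hz))  # spacing in Hz widens with frequency
```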


2021 ◽  
Author(s):  
Luis M. Rivera-Perez ◽  
Julia T. Kwapiszewski ◽  
Michael T. Roberts

The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input from the pontomesencephalic tegmentum. Activation of nicotinic acetylcholine receptors (nAChRs) in the IC can alter acoustic processing and enhance auditory task performance. However, how nAChRs affect the excitability of specific classes of IC neurons remains unknown. Recently, we identified vasoactive intestinal peptide (VIP) neurons as a distinct class of glutamatergic principal neurons in the IC. Here, in experiments using male and female mice, we show that cholinergic terminals are routinely located adjacent to the somas and dendrites of VIP neurons. Using whole-cell electrophysiology in brain slices, we found that acetylcholine drives surprisingly strong and long-lasting excitation and inward currents in VIP neurons. This excitation was unaffected by the muscarinic receptor antagonist atropine. Application of nAChR antagonists revealed that acetylcholine excites VIP neurons mainly via activation of α3β4* nAChRs, a nAChR subtype that is rare in the brain. Furthermore, we show that cholinergic excitation is intrinsic to VIP neurons and does not require activation of presynaptic inputs. Lastly, we found that low frequency trains of acetylcholine puffs elicited temporal summation in VIP neurons, suggesting that in vivo-like patterns of cholinergic input can reshape activity for prolonged periods. These results reveal the first cellular mechanisms of nAChR regulation in the IC, identify a functional role for α3β4* nAChRs in the auditory system, and suggest that cholinergic input can potently influence auditory processing by increasing excitability in VIP neurons and their postsynaptic targets.

Key points summary:
The inferior colliculus (IC), the midbrain hub of the central auditory system, receives extensive cholinergic input and expresses a variety of nicotinic acetylcholine receptor (nAChR) subunits.
In vivo activation of nAChRs alters the input-output functions of IC neurons and influences performance in auditory tasks. However, how nAChR activation affects the excitability of specific IC neuron classes remains unknown.
Here we show in mice that cholinergic terminals are located adjacent to the somas and dendrites of VIP neurons, a class of IC principal neurons.
We find that acetylcholine elicits surprisingly strong, long-lasting excitation of VIP neurons, mediated mainly through activation of α3β4* nAChRs, a subtype that is rare in the brain.
Our data identify a role for α3β4* nAChRs in the central auditory pathway and reveal a mechanism by which cholinergic input can influence auditory processing in the IC and the postsynaptic targets of VIP neurons.
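
As a back-of-the-envelope illustration of temporal summation, the toy simulation below adds up slowly decaying responses to a low-frequency train of puffs, so that later responses ride on top of earlier ones rather than returning to baseline. The exponential response shape, time constant, and amplitudes are invented for illustration and are not measurements from this study.

```python
# Toy model of temporal summation to a 1 Hz train of puffs (illustrative parameters only).
import numpy as np

fs = 1000.0                       # samples per second
t = np.arange(0, 10.0, 1.0 / fs)  # 10 s of simulated time
tau = 1.5                         # assumed decay time constant (s)
amp = 1.0                         # assumed peak response per puff (arbitrary units)

puff_times = np.arange(0.5, 5.0, 1.0)  # five puffs at 1 Hz

response = np.zeros_like(t)
for t_puff in puff_times:
    dt = t - t_puff
    # Each puff adds an exponentially decaying response starting at its onset.
    response += np.where(dt >= 0, amp * np.exp(-dt / tau), 0.0)

print("response just after 1st puff:", round(response[int(0.51 * fs)], 2))
print("response just after 5th puff:", round(response[int(4.51 * fs)], 2))
```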

