auditory periphery
Recently Published Documents

TOTAL DOCUMENTS: 152 (FIVE YEARS: 17)
H-INDEX: 31 (FIVE YEARS: 1)

2021 ◽ Vol 15 ◽ Author(s): Jennifer L. Thornton, Kelsey L. Anbuhl, Daniel J. Tollin

Temporary conductive hearing loss (CHL) can lead to hearing impairments that persist beyond resolution of the CHL. In particular, unilateral CHL leads to deficits in auditory skills that rely on binaural input (e.g., spatial hearing). Here, we asked whether single neurons in the auditory midbrain, which integrate acoustic inputs from the two ears, are altered by a temporary CHL. We introduced 6 weeks of unilateral CHL in young adult chinchillas via a foam earplug. Following earplug removal and restoration of peripheral input, single-unit recordings from inferior colliculus (ICC) neurons revealed that the CHL had decreased the efficacy of inhibitory input to the ICC contralateral to the earplug and increased inhibitory input to the ICC ipsilateral to the earplug, effectively creating a higher proportion of monaurally responsive than binaurally responsive neurons. Moreover, this resulted in a ∼10 dB shift in the coding of a binaural sound-location cue (interaural level difference, ILD) in ICC neurons relative to controls. The direction of the shift was consistent with compensation for the ILDs altered by the CHL. ICC neuron responses carried ∼37% less information about ILDs after CHL than control neurons did. Cochlear evoked responses confirmed that the CHL did not damage the auditory periphery. We find that a temporary CHL altered auditory midbrain neurons by shifting binaural responses to ILD acoustic cues, suggesting a compensatory form of plasticity arising by at least the level of the auditory midbrain, the ICC.
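
A minimal sketch of how such a shift in ILD coding might be quantified (synthetic data; not the authors' analysis code): fit a sigmoid to a neuron's rate-vs-ILD curve before and after CHL and compare the fitted midpoints.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(ild, rate_max, slope, midpoint):
    """Spike rate as a logistic function of interaural level difference (dB)."""
    return rate_max / (1.0 + np.exp(-slope * (ild - midpoint)))

ilds = np.linspace(-30, 30, 13)  # tested ILDs in dB
rng = np.random.default_rng(0)
# Synthetic rate-ILD curves; the post-CHL curve is shifted by ~10 dB
control = sigmoid(ilds, 50, 0.3, 0) + rng.normal(0, 2, ilds.size)
post_chl = sigmoid(ilds, 50, 0.3, 10) + rng.normal(0, 2, ilds.size)

p_ctrl, _ = curve_fit(sigmoid, ilds, control, p0=[50, 0.3, 0])
p_chl, _ = curve_fit(sigmoid, ilds, post_chl, p0=[50, 0.3, 0])
print(f"ILD midpoint shift: {p_chl[2] - p_ctrl[2]:.1f} dB")
```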


2021 ◽ Vol 17 (3) ◽ pp. 269-277 ◽ Author(s): Sungmin Lee

Despite the significant contributions of hearing assistive devices, medications, and surgery to restoring the auditory periphery, many people with hearing loss still struggle to understand speech. This has led many studies of speech perception to turn to central auditory functions, examining the associated brain activity with macroscopic recording tools such as electroencephalography (EEG). Until a few years ago, however, brain scientists attempting to investigate speech perception mechanisms with EEG faced a basic limitation: auditory evoked potentials had to be elicited with short speech segments, which are too brief to be considered speech. Today, advances in neural engineering and a better understanding of neural mechanisms allow such studies to use running streams of continuous speech, expanding the scope of EEG research to the comprehension of more realistic speech. The purpose of this study is to review the literature on neural tracking of the speech envelope and to discuss it from the perspective of audiology. This review consists of seven sections: introduction, neural tracking theories, neural tracking measures, signal processing and analysis, a literature review of neural tracking in relation to hearing loss, applications of neural tracking to audiology, and conclusion. We note that neural tracking has the potential to be used in clinical settings to objectively evaluate speech comprehension in people with hearing loss.
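
Neural tracking of the speech envelope is commonly quantified with a linear temporal response function (TRF) estimated by time-lagged ridge regression between the envelope and the EEG. The sketch below uses synthetic data; the lag range and regularization value are illustrative assumptions, not settings from the review.

```python
import numpy as np

fs = 64                                  # Hz; downsampled EEG/envelope rate
n = fs * 60                              # one minute of data
rng = np.random.default_rng(1)
envelope = np.abs(rng.normal(size=n))    # stand-in for a speech envelope
true_trf = np.exp(-np.arange(16) / 4.0)  # assumed ~250 ms response kernel
eeg = np.convolve(envelope, true_trf)[:n] + rng.normal(0, 1, n)

lags = 16                                # model lags spanning 0-250 ms
X = np.column_stack([np.roll(envelope, k) for k in range(lags)])
X[:lags] = 0                             # discard samples wrapped by np.roll
lam = 1.0                                # ridge regularization strength
trf = np.linalg.solve(X.T @ X + lam * np.eye(lags), X.T @ eeg)

pred = X @ trf                           # EEG predicted from the envelope
r = np.corrcoef(pred, eeg)[0, 1]         # "neural tracking" strength
print(f"envelope-tracking correlation: r = {r:.2f}")
```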


2021 ◽ Author(s): Francis Xavier Smith, Bob McMurray

Listeners often process speech in adverse conditions. One challenge is spectral degradation, in which information is missing from the signal. Lexical competition dynamics change when processing degraded speech, but it is unclear why and how these changes occur. We ask whether these changes are driven solely by the quality of the input from the auditory periphery, or whether they are modulated by cognitive mechanisms. Across two experiments, we used the visual world paradigm to investigate changes in lexical processing. Listeners heard noise-vocoded speech at different levels of degradation (4- or 15-channel vocoding) and matched the auditory input to pictures of a target word and its phonological competitors. In Experiment 1, levels of vocoding were either blocked consistently or randomly interleaved from trial to trial. Listeners in the blocked condition showed more differentiation between the two levels of vocoding, suggesting that some form of learning adapts processing to the varying levels of uncertainty in the input. Exploratory analyses suggested that there is a cost to switching processing modes when less intelligible speech is processed. In Experiment 2, levels of vocoding were always randomly interleaved, and a visual cue informed listeners of the difficulty of the upcoming speech. This was enough to attenuate the effects of interleaving as well as the switch cost. These experiments support a role for central processing in dealing with degraded speech: listeners may actively form expectations about the level of degradation they will encounter and alter the dynamics of lexical access accordingly.
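
Noise vocoding itself is straightforward to sketch: split the speech into log-spaced frequency bands, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise. The band edges and filter order below are generic assumptions, not the authors' exact settings.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, lo=100.0, hi=7000.0):
    """Replace spectral detail with band-limited noise, keeping band envelopes."""
    edges = np.geomspace(lo, hi, n_channels + 1)   # log-spaced band edges
    noise = np.random.default_rng(0).normal(size=speech.size)
    out = np.zeros_like(speech)
    for low, high in zip(edges[:-1], edges[1:]):
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)
        env = np.abs(hilbert(band))                # band envelope
        out += env * sosfiltfilt(sos, noise)       # envelope-modulated noise
    return out

fs = 16000
t = np.arange(fs) / fs
toy_speech = np.sin(2 * np.pi * 300 * t) * (1 + np.sin(2 * np.pi * 3 * t))
vocoded_4ch = noise_vocode(toy_speech, fs, n_channels=4)
```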


2020 ◽ Vol 117 (45) ◽ pp. 28442-28451 ◽ Author(s): Monzilur Rahman, Ben D. B. Willmore, Andrew J. King, Nicol S. Harper

Sounds are processed by the ear and central auditory pathway. These processing steps are biologically complex, and many aspects of the transformation from sound waveforms to cortical response remain unclear. To understand this transformation, we combined models of the auditory periphery with various encoding models to predict auditory cortical responses to natural sounds. The cochlear models ranged from detailed biophysical simulations of the cochlea and auditory nerve to simple spectrogram-like approximations of the information processing in these structures. For three different stimulus sets, we tested the capacity of these models to predict the time course of single-unit neural responses recorded in ferret primary auditory cortex. We found that simple models based on a log-spaced spectrogram with approximately logarithmic compression perform similarly to the best-performing biophysically detailed models of the auditory periphery, and more consistently well over diverse natural and synthetic sounds. Furthermore, we demonstrated that including approximations of the three categories of auditory nerve fiber in these simple models can substantially improve prediction, particularly when combined with a network encoding model. Our findings imply that the properties of the auditory periphery and central pathway may together result in a simpler than expected functional transformation from ear to cortex. Thus, much of the detailed biological complexity seen in the auditory periphery does not appear to be important for understanding the cortical representation of sound.
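
The simple model class described here (a log-spaced spectrogram with roughly logarithmic compression) can be approximated in a few lines. This is a generic sketch, not the paper's implementation; the FFT, hop, and band parameters are illustrative.

```python
import numpy as np

def log_spectrogram(sound, fs, n_fft=512, hop=128, n_bands=32,
                    fmin=200.0, fmax=8000.0):
    # Short-time Fourier magnitude
    win = np.hanning(n_fft)
    frames = np.asarray([sound[i:i + n_fft] * win
                         for i in range(0, sound.size - n_fft, hop)])
    mag = np.abs(np.fft.rfft(frames, axis=1))         # (time, freq)
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)

    # Pool FFT bins into log-spaced frequency bands
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    bands = np.stack([mag[:, (freqs >= lo) & (freqs < hi)].sum(axis=1)
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)

    return np.log(bands + 1e-6)                       # ~logarithmic compression

fs = 16000
sound = np.random.default_rng(2).normal(size=fs)      # 1 s of noise as input
cochleagram = log_spectrogram(sound, fs)
print(cochleagram.shape)                              # (time frames, 32 bands)
```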


2020 ◽ Vol 10 (10) ◽ pp. 697 ◽ Author(s): Agnieszka J. Szczepek, Tatyana Dudnik, Betül Karayay, Valentina Sergeeva, Heidi Olze, ...

Mast cells (MCs) are densely granulated cells of myeloid origin and are part of the immune and neuroimmune systems. MCs have been detected in the endolymphatic sac of the inner ear and are suggested to regulate allergic hydrops. However, their existence in the cochlea has never been documented. In this work, we show that MCs are present in the cochleae of C57BL/6 mice and Wistar rats, where they localize to the modiolus, spiral ligament, and stria vascularis. The identity of the MCs was confirmed in cochlear cryosections and flat preparations using avidin and antibodies against c-Kit/CD117, chymase, tryptase, and FcεRIα. The number of MCs decreased significantly during postnatal development, with only a few MCs remaining in flat preparations of the rat cochlea. In addition, exposure to 40 µM cisplatin for 24 h led to a significant reduction in cochlear MCs. The presence of MCs in the cochlea may shed new light on the postnatal maturation of the auditory periphery and on their possible involvement in the ototoxicity of cisplatin. The data presented here extend current knowledge about the physiology and pathology of the auditory periphery; future functional studies should expand this new basic knowledge and translate it to the clinic.


2020 ◽ Author(s): Violet Aurora Brown, Naseem Dillman-Hasso, Zhaobin Li, Lucia Ray, Ellen Mamantov, ...

The linguistic similarity hypothesis states that it is more difficult to segregate target and masker speech when they are linguistically similar (Brouwer et al., 2012). This may be the result of energetic masking (interference at the auditory periphery) and/or informational masking (cognitive interference). To provide a rigorous test of the hypothesis and to investigate how informational masking interferes with speech identification in the absence of energetic masking, we presented target speech visually and masking babble auditorily. Participants completed an English lipreading task in silence, in speech-shaped noise, and in semantically anomalous English, semantically meaningful English, Dutch, and Mandarin two-talker babble. Results showed that speech maskers interfere with lipreading more than stationary noise does, and that maskers in the same language as the target speech provide more interference than different-language maskers. However, we found no evidence that a masker similar to the English target speech (Dutch) provides more masking than a less similar one (Mandarin). These results provide some cross-modal support for the linguistic similarity hypothesis, but they suggest that the theory should be further specified to address the conditions under which languages that differ in their similarity to the target speech should provide different levels of masking.
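
One standard way to construct a speech-shaped noise masker like the one used here is to randomize the phases of a speech recording's spectrum while preserving its magnitude, yielding noise with the same long-term spectrum as speech. This is a generic sketch, not necessarily the authors' exact procedure.

```python
import numpy as np

def speech_shaped_noise(speech, seed=0):
    """Noise with the same long-term magnitude spectrum as `speech`."""
    rng = np.random.default_rng(seed)
    mags = np.abs(np.fft.rfft(speech))             # speech magnitude spectrum
    phases = rng.uniform(0, 2 * np.pi, mags.size)  # random phases
    noise = np.fft.irfft(mags * np.exp(1j * phases), n=speech.size)
    return noise / np.max(np.abs(noise))           # peak-normalize

fs = 16000
t = np.arange(2 * fs) / fs
toy_speech = np.sin(2 * np.pi * 220 * t) * np.sin(2 * np.pi * 2 * t)
ssn = speech_shaped_noise(toy_speech)
```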


2020 ◽ Author(s): David LK Murphy, Cynthia D King, Stephanie N Schlebusch, Christopher A Shera, Jennifer M Groh

Eye movements alter the relationship between the visual and auditory spatial scenes. Signals related to eye movements affect the brain's auditory pathways from the ear through auditory cortex and beyond, but how these signals might contribute to computing the locations of sounds with respect to the visual scene is poorly understood. Here, we evaluated the information contained in the signals observed at the earliest processing stage: eye movement-related eardrum oscillations (EMREOs). We report that human EMREOs carry information about both horizontal and vertical eye displacement as well as initial/final eye position. We conclude that all of the information necessary to contribute to a suitable coordinate transformation of auditory spatial cues into a common reference frame with visual information is present in this signal. We hypothesize that the underlying mechanism causing EMREOs could impose a transfer function on any incoming sound, permitting subsequent processing stages to compute the positions of sounds in relation to the visual scene.
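
As an illustration of the kind of analysis that can reveal such information, one can regress the ear-canal microphone signal at each time point on horizontal and vertical saccade displacement across trials. The data and waveforms below are synthetic and purely hypothetical, not the authors' recordings or code.

```python
import numpy as np

rng = np.random.default_rng(3)
n_saccades, n_time = 500, 100
dh = rng.uniform(-20, 20, n_saccades)   # horizontal displacement (deg)
dv = rng.uniform(-10, 10, n_saccades)   # vertical displacement (deg)

# Assumed ground truth: mic signal = h-kernel*dh + v-kernel*dv + noise
t = np.linspace(0, 0.1, n_time)
kh = np.sin(2 * np.pi * 30 * t) * np.exp(-t / 0.03)
kv = np.cos(2 * np.pi * 30 * t) * np.exp(-t / 0.03) * 0.5
mic = np.outer(dh, kh) + np.outer(dv, kv) + rng.normal(0, 0.5, (n_saccades, n_time))

# Per-time-point regression recovers both kernels, i.e. the signal carries
# information about both components of the eye movement.
X = np.column_stack([dh, dv])
coef, *_ = np.linalg.lstsq(X, mic, rcond=None)   # shape (2, n_time)
print(np.corrcoef(coef[0], kh)[0, 1], np.corrcoef(coef[1], kv)[0, 1])
```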


2020 ◽ Author(s): Heivet Hernandez-Perez, Jason Mikiel-Hunter, David McAlpine, Sumitrajit Dhar, Sriram Boothalingam, ...

Navigating “cocktail party” situations by enhancing foreground sounds over irrelevant background information is typically considered from a cortico-centric perspective. However, subcortical circuits, such as the medial olivocochlear reflex (MOCR) that modulates inner-ear activity itself, have ample opportunity to extract salient features from the auditory scene prior to any cortical processing. To understand the contribution of auditory subcortical nuclei and the cochleae, physiological recordings were made along the auditory pathway while listeners differentiated non(sense)-words from words. Both naturally spoken speech and intrinsically noisy, vocoded speech (filtering that mimics processing by a cochlear implant) significantly activated the MOCR, whereas listening to speech in background noise instead engaged midbrain and cortical resources. An auditory periphery model reproduced these speech-degradation-specific effects, providing a rationale for goal-directed gating of the MOCR to enhance the representation of speech features in the auditory nerve. These results highlight two strategies coexisting in the auditory system to accommodate categorically different speech degradations.

