auditory responses
Recently Published Documents

Total documents: 331 (last five years: 61)
H-index: 42 (last five years: 3)

Author(s): Hamed Fanaei, Akram Pourbakht, Sadegh Jafarzadeh

Background and Aim: Ischemic injury is a major cause of hearing loss, and oxidative stress is an important component of ischemic injury. The goal of this study was to evaluate the effect of cochlear oxidative stress on auditory responses in male rats. Methods: Cochlear oxidative stress was induced by bilateral carotid artery occlusion for 20 minutes. The rats were evaluated for the biochemical inflammatory factors tumor necrosis factor-α (TNF-α) and C-reactive protein (CRP) on the day before surgery and on days 1, 4, and 7 after surgery. The auditory brainstem response (ABR) and electrocochleography (ECochG) were evaluated on the day before surgery and on days 14, 21, and 28 after surgery. Results: TNF-α and CRP concentrations increased one day after ischemia and subsequently decreased by day 7. The click- and tone-burst-evoked ABRs showed elevated thresholds on days 14, 21, and 28, with the highest thresholds recorded on day 14. The ECochG results were abnormal in 55%, 70%, and 45% of cases on days 14, 21, and 28, respectively. Conclusion: Cochlear oxidative stress affects hearing sensitivity: the ABR shows elevated thresholds, and abnormal ECochG findings were common.
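The threshold comparison reported above (elevation peaking on day 14 and partly recovering by day 28) can be tabulated in a few lines. This is a minimal sketch with invented dB values, not the study's data; the baseline and post-surgery numbers are assumptions for illustration only.

```python
# Hypothetical illustration: tabulating ABR threshold shifts relative to a
# pre-surgery baseline. All dB values below are invented for demonstration.

def threshold_shifts(baseline_db, post_db_by_day):
    """Return the threshold elevation (dB) for each post-surgery day."""
    return {day: post - baseline_db for day, post in post_db_by_day.items()}

# Invented example values (dB): peak elevation on day 14, partial recovery after.
shifts = threshold_shifts(25, {14: 55, 21: 45, 28: 40})
peak_day = max(shifts, key=shifts.get)  # day with the largest elevation
```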


2021, Vol 11 (4), pp. 691-705
Author(s): Andy J. Beynon, Bart M. Luijten, Emmanuel A. M. Mylanus

Electrically evoked auditory potentials have been used to predict auditory thresholds in patients with a cochlear implant (CI). However, with the exception of electrically evoked compound action potentials (eCAPs), conventional extracorporeal EEG recording devices are still needed. Until now, built-in (intracorporeal) back-telemetry options have been limited to eCAPs; intracorporeal recording of auditory responses beyond the cochlea is still lacking. This study describes the feasibility of obtaining longer-latency cortical responses by concatenating interleaved short recording time windows of the kind used for eCAP recordings. Extracochlear reference electrodes were dedicated to recording cortical responses, while intracochlear electrodes were used for stimulation, enabling intracorporeal telemetry (i.e., without an EEG device) to assess higher cortical processing in CI recipients. Simultaneous extracorporeal and intracorporeal recordings showed that it is feasible to obtain intracorporeal slow vertex potentials with a CI similar to those obtained by conventional extracorporeal EEG recordings. Our data demonstrate a proof of concept of closed-loop intracorporeal auditory cortical response telemetry (ICT) with a cochlear implant device. This research breaks new ground for next-generation CI devices that assess higher cortical neural processing based on acute or continuous EEG telemetry, enabling individualized, automatic, and/or adaptive CI fitting with only a CI.
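The core idea above, stitching many short eCAP-style recording windows into one long epoch so that slow cortical potentials fit inside it, can be sketched in a few lines. The window lengths and sample values here are assumptions for illustration, not the device's actual telemetry format.

```python
# A sketch of the windowing idea described above: many short, interleaved
# recording windows are concatenated into one long epoch so that slower
# (longer-latency) cortical responses can be recovered from hardware that
# only supports short capture windows. Values are invented for illustration.

def concatenate_windows(windows):
    """Stitch consecutive short recording windows into one long epoch."""
    epoch = []
    for w in windows:
        epoch.extend(w)
    return epoch

# Four invented 4-sample windows -> one continuous 16-sample epoch.
windows = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
epoch = concatenate_windows(windows)
```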


2021, Vol 23
Author(s): Clara Martucci

Sound shapes space. However, the architectural training of designers usually prioritizes the visual aspects of a building or urban space without considering the sonic environment and the auditory responses of the humans who engage with or occupy the built environment. The concept of the “soundscape” brings together the visual and sonic environments, allowing designers to develop more nuanced, responsive, and effective spaces (Southworth, 1967, pp. 6-8). Acousticians define soundscape as “a person’s perceptual construct of the acoustic environment of that place” (Kang & Schulte-Fortkamp, 2017, p. 5). People’s interpretation of auditory sensations can lead to either positive or negative feelings about a specific place. Because urban spaces contain both a great number of sound sources and a high number of people occupying and moving through them, urban sonic environments and soundscapes are complex, layered, and dense. This research evaluates the sonic qualities of urban spaces to provide designers with a means by which these complex environments can be better understood, analyzed, and created. It draws on an expanding body of research in architectural acoustics and on direct observation of cities in the United States and Italy conducted during the COVID-19 pandemic. Rather than relying solely on numeric calculations, this work probes the notion of the “perceptual construct,” seeking to make these constructs visible. Drawings and photographs from different cities are used to study the form of the city through urban edges and the emerging concept of green acoustics. The work provides a way of creating a new architecture of public space through the lens of the sonic environment.


2021, Vol 15
Author(s): Jiaqiu Sun, Ziqing Wang, Xing Tian

How different sensory modalities interact to shape perception is a fundamental question in cognitive neuroscience. Previous studies of audiovisual interaction have focused on abstract levels such as categorical representation (e.g., the McGurk effect); it is unclear whether cross-modal modulation extends to low-level perceptual attributes. This study used moving manual gestures to test whether and how loudness perception can be modulated by visual-motion information. Specifically, we implemented a novel paradigm in which participants compared the loudness of two consecutive sounds whose intensities differed by an amount near the just-noticeable difference (JND), with manual gestures presented concurrently with the second sound. In two behavioral experiments and two EEG experiments, we tested the hypothesis that the visual-motor information in gestures would modulate loudness perception. Behavioral results showed that the gestural information biased loudness judgments. More importantly, the EEG results demonstrated that early auditory responses around 100 ms after sound onset (N100) were modulated by the gestures. These consistent results across the four experiments suggest that visual-motor processing can integrate with auditory processing at an early perceptual stage to shape the perception of a low-level attribute such as loudness, at least under challenging listening conditions.
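The two-interval comparison paradigm above can be simulated with a standard signal-plus-noise decision model: the intensity difference sits near the JND, and a concurrent gesture adds a bias to the internal loudness estimate. The bias magnitude, noise level, and decision rule below are all assumptions for illustration, not the study's model.

```python
# A minimal simulation sketch of the loudness-comparison paradigm, assuming
# a Gaussian internal-noise model and an additive gesture bias (both invented).
import random

def judge_louder(delta_db, gesture_bias_db=0.0, noise_sd=0.5, rng=None):
    """Return True if the second sound is judged louder than the first."""
    rng = rng or random.Random(0)
    internal = delta_db + gesture_bias_db + rng.gauss(0.0, noise_sd)
    return internal > 0.0

def prop_louder(delta_db, gesture_bias_db, n=2000, seed=1):
    """Proportion of 'second sound louder' judgments over n simulated trials."""
    rng = random.Random(seed)
    hits = sum(judge_louder(delta_db, gesture_bias_db, rng=rng) for _ in range(n))
    return hits / n

# With physically identical sounds (delta = 0), an "increasing" gesture bias
# should push judgments toward "louder" relative to the no-gesture baseline.
p_neutral = prop_louder(0.0, 0.0)
p_up = prop_louder(0.0, 0.3)
```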


2021, Vol 15
Author(s): Arianna Gentile Polese, Sunny Nigam, Laura M. Hurley

Neuromodulatory systems may provide information on social context to auditory brain regions, but relatively few studies have assessed the effects of neuromodulation on auditory responses to acoustic social signals. To address this issue, we measured the influence of the serotonergic system on the responses of neurons in a mouse auditory midbrain nucleus, the inferior colliculus (IC), to vocal signals. Broadband vocalizations (BBVs) are human-audible signals produced by mice in distress as well as by female mice in opposite-sex interactions. The production of BBVs is context-dependent in that they are produced both at early stages of interactions as females physically reject males and at later stages as males mount females. Serotonin in the IC of males corresponds to these events, and is elevated more in males that experience less female rejection. We measured the responses of single IC neurons to five recorded examples of BBVs in anesthetized mice. We then locally activated the 5-HT1A receptor through iontophoretic application of 8-OH-DPAT. IC neurons showed little selectivity for different BBVs, but spike trains were characterized by local regions of high spike probability, which we called “response features.” Response features varied across neurons and also across calls for individual neurons, ranging from 1 to 7 response features for responses of single neurons to single calls. 8-OH-DPAT suppressed spikes and also reduced the numbers of response features. The weakest response features were the most likely to disappear, suggestive of an “iceberg”-like effect in which activation of the 5-HT1A receptor suppressed weakly suprathreshold response features below the spiking threshold. Because serotonin in the IC is more likely to be elevated for mounting-associated BBVs than for rejection-associated BBVs, these effects of the 5-HT1A receptor could contribute to the differential auditory processing of BBVs in different behavioral subcontexts.
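The "response features" described above, local regions of high spike probability across trials, suggest a simple extraction recipe: compute per-bin spike probability over repeated presentations, then count contiguous runs of bins exceeding a threshold. The bin structure and the 0.5 threshold below are assumptions for illustration, not the study's analysis parameters.

```python
# A sketch of one way to extract "response features" from trial-aligned spike
# data: per-bin spike probability across trials, then contiguous runs of
# supra-threshold bins. Threshold and binning are invented assumptions.

def response_features(trials, threshold=0.5):
    """trials: equal-length 0/1 lists (one spike indicator per time bin).
    Returns (per-bin spike probabilities, count of contiguous runs >= threshold)."""
    n_trials = len(trials)
    n_bins = len(trials[0])
    probs = [sum(t[i] for t in trials) / n_trials for i in range(n_bins)]
    features = 0
    in_run = False
    for p in probs:
        if p >= threshold and not in_run:
            features += 1          # a new supra-threshold region begins
            in_run = True
        elif p < threshold:
            in_run = False
    return probs, features

# Three invented trials, eight bins: two reliable high-probability regions.
trials = [
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 0, 0, 0, 1, 1, 0, 0],
    [1, 1, 0, 0, 0, 1, 0, 1],
]
probs, n_features = response_features(trials)
```

Under this sketch, a suppressive manipulation such as 5-HT1A activation would lower the per-bin probabilities, and weakly supra-threshold regions would drop out first, the "iceberg" effect the abstract describes.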


2021, Vol 2
Author(s): Hiroyuki Sakai, Sayako Ueda, Kenichi Ueno, Takatsune Kumada

Sensory skills can be augmented through training and technological support. This process is underpinned by neural plasticity in the brain. We previously demonstrated that auditory-based sensory augmentation can be used to assist self-localization during locomotion. However, the neural mechanisms underlying this phenomenon remain unclear. Here, by using functional magnetic resonance imaging, we aimed to identify the neuroplastic reorganization induced by sensory augmentation training for self-localization during locomotion. We compared activation in response to auditory cues for self-localization before, the day after, and 1 month after 8 days of sensory augmentation training in a simulated driving environment. Self-localization accuracy improved after sensory augmentation training, compared with the control (normal driving) condition; importantly, sensory augmentation training resulted in auditory responses not only in temporal auditory areas but also in higher-order somatosensory areas extending to the supramarginal gyrus and the parietal operculum. This sensory reorganization had disappeared by 1 month after the end of the training. These results suggest that the use of auditory cues for self-localization during locomotion relies on multimodality in higher-order somatosensory areas, despite substantial evidence that information for self-localization during driving is estimated from visual cues on the proximal part of the road. Our findings imply that the involvement of higher-order somatosensory, rather than visual, areas is crucial for acquiring augmented sensory skills for self-localization during locomotion.


Author(s): Dr. Bhagya V, Dr. Manjushree P, Dr. Guruprasad D

Jaundice is a common finding in neonates, affecting 70% of term and 80% of preterm neonates during the first week of life. The objective of this study was to evaluate auditory brainstem responses in infants with hyperbilirubinemia and to determine whether latencies of waves I and V increase significantly with rising bilirubin levels. In the present study, 53 infants with hyperbilirubinemia (>11 mg%) and no other risk factors (such as prematurity, low birth weight, or birth asphyxia), along with age- and sex-matched controls, who visited the pediatric OPD of Bapuji Child Health Centre, were evaluated using an RMS EMG EP MARK-II machine. Latencies of waves I and V and the inter-peak latency I-V were recorded. The latency of wave V and the IPL I-V were slightly increased compared with normal controls. Elevated thresholds leading to hearing impairment in the affected infants, and complete deafness in cases where none of the waves could be recorded, indicate that hyperbilirubinemia is a risk factor for deafness. Since hyperbilirubinemia is a risk factor for hearing impairment, hearing screening by BERA at the earliest opportunity, with follow-up, will help initiate rehabilitation while the brain is still sensitive to the development of speech and language.
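The inter-peak latency (IPL I-V) used above is simply the difference between the wave V and wave I latencies; a prolonged IPL points to retrocochlear (brainstem) involvement. The millisecond values in this sketch are invented for illustration, not the study's measurements.

```python
# Hypothetical illustration of the IPL I-V computation described above.
# The latency values (ms) are invented, not study data.

def ipl_I_V(wave_I_ms, wave_V_ms):
    """Inter-peak latency I-V: wave V latency minus wave I latency (ms)."""
    return wave_V_ms - wave_I_ms

control = ipl_I_V(1.6, 5.6)    # invented control latencies
jaundiced = ipl_I_V(1.7, 6.1)  # invented hyperbilirubinemia latencies
prolonged = jaundiced > control
```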


2021, Vol 12 (1)
Author(s): Michael Lohse, Johannes C. Dahmen, Victoria M. Bajo, Andrew J. King

Integration of information across the senses is critical for perception and is a common property of neurons in the cerebral cortex, where it is thought to arise primarily from corticocortical connections. Much less is known about the role of subcortical circuits in shaping the multisensory properties of cortical neurons. We show that stimulation of the whiskers causes widespread suppression of sound-evoked activity in mouse primary auditory cortex (A1). This suppression depends on the primary somatosensory cortex (S1), and is implemented through a descending circuit that links S1, via the auditory midbrain, with thalamic neurons that project to A1. Furthermore, a direct pathway from S1 has a facilitatory effect on auditory responses in higher-order thalamic nuclei that project to other brain areas. Crossmodal corticofugal projections to the auditory midbrain and thalamus therefore play a pivotal role in integrating multisensory signals and in enabling communication between different sensory cortical areas.
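Crossmodal suppression of the kind described above is commonly quantified with a normalized contrast between the unimodal and bimodal firing rates. The index definition and the firing rates below are assumptions for illustration; the paper's own metrics may differ.

```python
# A minimal sketch of quantifying crossmodal suppression: a normalized index
# comparing sound-evoked firing with and without concurrent whisker
# stimulation. Index convention and rates are invented for illustration.

def suppression_index(rate_sound, rate_sound_plus_whisker):
    """(S - SW) / (S + SW): 0 = no change, positive = suppression."""
    total = rate_sound + rate_sound_plus_whisker
    if total == 0:
        return 0.0  # no spikes in either condition
    return (rate_sound - rate_sound_plus_whisker) / total

# Invented firing rates (spikes/s) for one A1 neuron: halved by whisker input.
si = suppression_index(20.0, 10.0)
```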


2021
Author(s): Peter Lush, Zoltan Dienes, Anil Seth, Ryan Bradley Scott

Up to 40% of people report visually evoked auditory responses (vEARs), for example, ‘hearing’ sounds in response to watching silent videos. We investigate the degree to which vEAR experiences may arise from phenomenological control, i.e., from the way people can control their experience to meet expectancies arising from imaginative suggestion. In experimental situations, expectancies arise from demand characteristics (cues which communicate beliefs about experimental aims to participants). Trait phenomenological control has been shown to substantially predict experimental measures of changes in ‘embodiment’ experience in which demand characteristics are not controlled (e.g., mirror touch and pain, and experiences of ownership of a fake hand). Here we report a substantial relationship between scores on the Phenomenological Control Scale (PCS; a test of direct imaginative suggestion) and vEAR scores (reports of auditory experience for silent videos), indicating that vEAR experience may be an implicit imaginative suggestion effect. This study demonstrates that relationships between trait phenomenological control and subjective reports about experience are not limited to embodiment and may confound a wide range of measures in psychological science.
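The trait-correlation analysis described above amounts to computing an association between two per-participant scores. This sketch computes Pearson's r on invented PCS and vEAR values; the data and the choice of Pearson's r (rather than the study's actual statistics) are assumptions for illustration.

```python
# A sketch of a trait-correlation analysis: Pearson's r between hypothetical
# Phenomenological Control Scale (PCS) scores and vEAR ratings.
# All data below are invented for illustration.
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented scores: higher trait phenomenological control, higher vEAR ratings.
pcs = [1.0, 2.0, 2.5, 3.5, 4.0, 5.0]
vear = [0.5, 1.5, 1.0, 2.5, 3.0, 3.5]
r = pearson_r(pcs, vear)
```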


2021
Author(s): Takafumi Iizuka, Chihiro Mori, Kazuo Okanoya

Songbirds use auditory feedback to maintain their own songs. Juveniles also memorize a tutor song and use that memory as a template to develop their own songs through auditory feedback. A recent electrophysiological study revealed that HVC neurons respond to playback of the bird’s own song (BOS) only in low-arousal, sleeping, or anesthetized conditions. One outstanding question is how this auditory suppression occurs in the brain. Here, we determined how arousal affects auditory responses simultaneously across the whole brain and over the song neural circuit in Bengalese finches, using the immediate early gene egr-1 as a marker of neural activity. Our results showed that auditory responses in the low-arousal state were less susceptible to gating, which was also confirmed by gene expression, and that the suppression may be weaker than that observed in previous zebra finch studies, perhaps because the Bengalese finch is a domesticated species. In addition, our results suggest that information may flow from the MLd.I of the midbrain to higher auditory regions. Altogether, this study presents a new attempt to explore the auditory suppression network by investigating the whole brain simultaneously using molecular biology methods.

