Neural activity in human brainstem adapts to sound-level statistics

2021
Author(s): Björn Herrmann, Sonia Yasmin, Kurdo Araz, David W. Purcell, Ingrid S. Johnsrude

Optimal perception requires adaptation to sounds in the environment. Adaptation involves representing the acoustic stimulation history in neural response patterns, for example, by altering response magnitude or latency as sound-level statistics change. Neurons in the auditory brainstem of rodents are sensitive to acoustic stimulation history and sound-level statistics, but the degree to which the human brainstem exhibits such neural adaptation is unclear. In six electroencephalography experiments with over 125 participants, we demonstrate that acoustic stimuli within a time window of at least 40 ms are represented in the response latency of the human brainstem. We further show that human brainstem responses adapt to sound-level statistical information, but that neural sensitivity to sound-level statistics is less reliable when acoustic stimuli need to be integrated over periods of ~40 ms. Our results provide evidence of adaptation to sound-level statistics in the human brainstem and of the timescale over which sound-level statistics affect neural responses to sound. The research delivers an important link to studies on neural adaptation in non-human animals.

2021 ◽ Vol 11 (1)
Author(s): Björn Herrmann, Sonia Yasmin, Kurdo Araz, David W. Purcell, Ingrid S. Johnsrude

Optimal perception requires adaptation to sounds in the environment. Adaptation involves representing the acoustic stimulation history in neural response patterns, for example, by altering response magnitude or latency as the sound-level context changes. Neurons in the auditory brainstem of rodents are sensitive to acoustic stimulation history and sound-level context (often referred to as sensitivity to stimulus statistics), but the degree to which the human brainstem exhibits such neural adaptation is unclear. In six electroencephalography experiments with over 125 participants, we demonstrate that the response latency of the human brainstem is sensitive to the history of acoustic stimulation over a few tens of milliseconds. We further show that human brainstem responses adapt to sound-level context over at least the last 44 ms, but that neural sensitivity to sound-level context decreases as the time window over which acoustic stimuli must be integrated becomes wider. Our study thus provides evidence of adaptation to sound-level context in the human brainstem and of the timescale over which sound-level information affects neural responses to sound. The research delivers an important link to studies on neural adaptation in non-human animals.
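
The 44 ms integration window and the latency changes described above can be pictured with a toy simulation. The sketch below is not the authors' model; the per-millisecond level sequence, the level range, the baseline latency, and the linear level-to-latency rule are all assumptions chosen purely for illustration of the idea that a louder recent sound-level context shortens the simulated response latency.

```python
import numpy as np

# Toy illustration (not the authors' model): predict a brainstem-like response
# latency from the mean sound level over the preceding integration window.
rng = np.random.default_rng(0)

window_ms = 44                                # integration window from the abstract
levels_db = rng.uniform(40, 80, size=5000)    # hypothetical per-millisecond levels (dB SPL)

def recent_mean_level(levels, t, window):
    """Mean level over the `window` samples preceding time index t (1 ms steps assumed)."""
    start = max(0, t - window)
    return levels[start:t].mean()

def toy_latency(mean_level_db, base_ms=7.0, slope_ms_per_db=0.02):
    """Hypothetical linear rule: a louder recent context yields a shorter latency."""
    return base_ms - slope_ms_per_db * (mean_level_db - 60.0)

t = 2500                                      # arbitrary probe time (ms)
context = recent_mean_level(levels_db, t, window_ms)
print(f"mean level over the last {window_ms} ms: {context:.1f} dB")
print(f"simulated response latency: {toy_latency(context):.2f} ms")
```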


2008 ◽ Vol 28 (25) ◽ pp. 6430-6438
Author(s): I. Dean, B. L. Robinson, N. S. Harper, D. McAlpine

2004 ◽ Vol 124 (0) ◽ pp. 43-49
Author(s): Shu-Ping Cai, Tadashi Doi, Shen Jing, Toshihiko Kaneko, Shi-Ming Yang, et al.

2004 ◽ Vol 91 (1) ◽ pp. 136-151
Author(s): Sarah M. N. Woolley, John H. Casseday

The avian nucleus mesencephalicus lateralis, pars dorsalis (MLd) is the auditory midbrain nucleus in which multiple parallel inputs from the lower brainstem converge and through which most auditory information passes to reach the forebrain. Auditory processing in the MLd has not been investigated in songbirds. We studied the tuning properties of single MLd neurons in adult male zebra finches. Pure tones were used to examine tonotopy, temporal response patterns, frequency coding, intensity coding, spike latencies, and duration tuning. Most neurons had no spontaneous activity. The tonotopy of MLd is like that of other birds and mammals: characteristic frequencies (CFs) increase in a dorsal-to-ventral direction. Four major response patterns were found: 1) onset (49% of cells); 2) primary-like (20%); 3) sustained (19%); and 4) primary-like with notch (12%). CFs ranged between 0.9 and 6.1 kHz, matching the zebra finch hearing range and the power spectrum of song. Tuning curves were generally V-shaped, but complex curves, with multiple peaks or noncontiguous excitatory regions, were observed in 22% of cells. Rate-level functions indicated that 51% of non-onset cells showed monotonic relationships between spike rate and sound level; other cells showed low-saturation or nonmonotonic responses. Spike latencies, measured at CF, ranged from 4 to 40 ms and generally decreased with increasing sound pressure level (SPL), although paradoxical latency shifts were observed in 16% of units. For onset cells, changes in SPL produced smaller latency changes than for cells showing other response types. These results suggest that auditory midbrain neurons may be particularly suited for processing temporally complex signals with a high degree of precision.
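
The monotonic versus nonmonotonic distinction for rate-level functions mentioned above can be made concrete with a small classifier. This is only a sketch: the 20% tolerance criterion and the example firing rates are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

# Illustrative only: classify a rate-level function as monotonic or
# nonmonotonic by how far the firing rate falls off past its peak.
def classify_rate_level(levels_db, spike_rates, tolerance=0.2):
    """Label a rate-level function; `tolerance` is the allowed fractional drop after the peak."""
    rates = np.asarray(spike_rates, dtype=float)
    peak = rates.max()
    rate_at_max_level = rates[np.argmax(levels_db)]
    drop = (peak - rate_at_max_level) / peak if peak > 0 else 0.0
    return "monotonic" if drop <= tolerance else "nonmonotonic"

levels = [20, 30, 40, 50, 60, 70, 80]              # dB SPL
monotonic_cell = [2, 5, 12, 25, 40, 52, 55]        # spikes/s, keeps rising with level
nonmonotonic_cell = [2, 10, 35, 50, 30, 15, 8]     # spikes/s, falls well below its peak

print(classify_rate_level(levels, monotonic_cell))     # -> monotonic
print(classify_rate_level(levels, nonmonotonic_cell))  # -> nonmonotonic
```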


2009 ◽ Vol 29 (44) ◽ pp. 13797-13808
Author(s): B. Wen, G. I. Wang, I. Dean, B. Delgutte

2020 ◽ Vol 124 (4) ◽ pp. 1165-1182
Author(s): Hariprakash Haragopal, Ryan Dorkoski, Austin R. Pollard, Gareth A. Whaley, Timothy R. Wohl, et al.

Sensorineural hearing loss compromises perceptual abilities that arise from hearing with two ears, yet its effects on binaural aspects of neural responses are largely unknown. We found that, following severe hearing loss caused by acoustic trauma, auditory midbrain neurons specifically lost the ability to encode differences in the arrival time of a broadband noise stimulus at the two ears, whereas the encoding of sound-level differences between the two ears remained uncompromised.
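
For readers unfamiliar with the two binaural cues contrasted above, the sketch below estimates an interaural time difference (ITD) from the cross-correlation peak of left/right signals and an interaural level difference (ILD) from their RMS levels. The signals, sample rate, delay, and attenuation are invented for illustration and are not the study's stimuli.

```python
import numpy as np

# Minimal sketch with invented signals (not the study's stimuli): estimate the
# interaural time difference (ITD) via cross-correlation and the interaural
# level difference (ILD) from RMS levels of left/right broadband noise.
rng = np.random.default_rng(1)
fs = 48000                                   # sample rate in Hz (assumed)

noise = rng.standard_normal(fs // 10)        # 100 ms of broadband noise
delay = 24                                   # 0.5 ms delay imposed on the right ear
left = noise
right = 0.7 * np.concatenate([np.zeros(delay), noise[:-delay]])

# ITD: lag of the cross-correlation peak (positive = right ear lags), in microseconds
lags = np.arange(-len(left) + 1, len(left))
xcorr = np.correlate(right, left, mode="full")
itd_us = lags[np.argmax(xcorr)] / fs * 1e6

# ILD: RMS level difference between the ears, in dB
ild_db = 20 * np.log10(np.sqrt(np.mean(right**2)) / np.sqrt(np.mean(left**2)))

print(f"estimated ITD: {itd_us:.0f} microseconds, ILD: {ild_db:.1f} dB")
```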

