Inhibitory gating of coincidence-dependent sensory binding in secondary auditory cortex

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Amber M. Kline ◽  
Destinee A. Aponte ◽  
Hiroaki Tsukano ◽  
Andrea Giovannucci ◽  
Hiroyuki K. Kato

Abstract: Integration of multi-frequency sounds into a unified perceptual object is critical for recognizing syllables in speech. This “feature binding” relies on the precise synchrony of each component’s onset timing, but little is known regarding its neural correlates. We find that multi-frequency sounds prevalent in vocalizations, specifically harmonics, preferentially activate the mouse secondary auditory cortex (A2), whose response deteriorates with shifts in component onset timings. The temporal window for harmonics integration in A2 was broadened by inactivation of somatostatin-expressing interneurons (SOM cells), but not parvalbumin-expressing interneurons (PV cells). Importantly, A2 has functionally connected subnetworks of neurons preferentially encoding harmonic over inharmonic sounds. These subnetworks are stable across days and exist prior to experimental harmonics exposure, suggesting their formation during development. Furthermore, A2 inactivation impairs performance in a discrimination task for coincident harmonics. Together, we propose A2 as a locus for multi-frequency integration, which may form the circuit basis for vocal processing.
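To make the stimulus design concrete, below is a minimal sketch of a harmonic complex in which one component’s onset can be shifted relative to the others, the manipulation whose effect on A2 responses is described above. This is not the authors’ code; the sample rate, fundamental frequency, duration, and shift values are illustrative assumptions.

```python
# A minimal sketch (not the authors' stimulus code) of a harmonic complex
# in which one component's onset is shifted in time.
import numpy as np

FS = 192_000  # Hz; assumed sample rate suitable for mouse-audible frequencies

def harmonic_complex(f0, n_harmonics, dur, onset_shift_ms=0.0, shifted_k=1):
    """Sum of the first n_harmonics of f0; one component's onset is delayed."""
    t = np.arange(int(dur * FS)) / FS
    sound = np.zeros_like(t)
    for k in range(1, n_harmonics + 1):
        tone = np.sin(2 * np.pi * k * f0 * t)
        if k == shifted_k and onset_shift_ms > 0:
            delay = int(onset_shift_ms / 1000 * FS)
            tone = np.concatenate([np.zeros(delay), tone[:len(t) - delay]])
        sound += tone
    return sound / n_harmonics  # crude amplitude normalization

# coincident onsets vs. a 20-ms onset shift of the fundamental
coincident = harmonic_complex(f0=10_000, n_harmonics=4, dur=0.1)
asynchronous = harmonic_complex(f0=10_000, n_harmonics=4, dur=0.1,
                                onset_shift_ms=20.0)
```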


Author(s):  
Lasse Pelzer ◽  
Christoph Naefgen ◽  
Robert Gaschler ◽  
Hilde Haider

Abstract: Dual-task costs might result from confusions at the task-set level when the two tasks are not represented as distinct task-sets but are instead integrated into a single task-set. This suggests that events in the two tasks are stored and retrieved together as an integrated memory episode. In a series of three experiments, we tested for such integrated task processing and whether it can be modulated by regularities between the stimuli of the two tasks (across-task contingencies) or by sequential regularities within one of the tasks (within-task contingencies). Building on the experimental approach of feature binding in action control, we tested whether participants in a dual-tasking experiment would show partial-repetition costs: they should be slower when only the stimulus in one of the two tasks is repeated from Trial n − 1 to Trial n than when the stimuli in both tasks repeat. In all three experiments, participants processed a visual-manual and an auditory-vocal tone-discrimination task, which were always presented concurrently. In Experiment 1, we show that retrieval of Trial n − 1 episodes is stable across practice if the stimulus material is drawn randomly. Across-task contingencies (Experiment 2) and sequential regularities within a task (Experiment 3) can compete with n − 1-based retrieval, leading to a reduction of partial-repetition costs with practice. Overall, the results suggest that participants do not separate the processing of the two tasks; yet within-task contingencies might reduce integrated task processing.
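As an illustration of how the partial-repetition-cost measure is defined (mean RT when exactly one task’s stimulus repeats from Trial n − 1, minus mean RT when both repeat), here is a minimal sketch. Variable names and the example data are hypothetical; this is not the authors’ analysis code.

```python
# A minimal sketch of scoring partial-repetition costs from a trial sequence.
import numpy as np

def partial_repetition_cost(visual_stim, auditory_stim, rt):
    """Mean RT when exactly one task's stimulus repeats from trial n-1,
    minus mean RT when both repeat (positive values indicate costs)."""
    v, a, rt = (np.asarray(x) for x in (visual_stim, auditory_stim, rt))
    rep_v = v[1:] == v[:-1]      # visual stimulus repeats from trial n-1
    rep_a = a[1:] == a[:-1]      # auditory stimulus repeats from trial n-1
    rt_n = rt[1:]                # RTs on trial n
    partial = rep_v ^ rep_a      # exactly one stimulus repeats
    full = rep_v & rep_a         # both stimuli repeat
    return rt_n[partial].mean() - rt_n[full].mean()

# example: per-trial stimulus codes and (simulated) reaction times in ms
prc = partial_repetition_cost([1, 1, 2, 2, 1],
                              ["hi", "hi", "hi", "lo", "lo"],
                              [612, 580, 655, 640, 633])
```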


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Taishi Hosaka ◽  
Marino Kimura ◽  
Yuko Yotsumoto

Abstract: We have a keen sensitivity when it comes to the perception of our own voices. We can detect not only the differences between ourselves and others, but also slight modifications of our own voices. Here, we examined the neural correlates underlying such sensitive perception of one’s own voice. In the experiments, we modified the subjects’ own voices using five types of filters. The subjects rated the similarity of the presented voices to their own. We compared BOLD (Blood Oxygen Level Dependent) signals between the voices that subjects rated as least similar to their own voice and those they rated as most similar. The contrast revealed that the bilateral superior temporal gyrus was more strongly activated while subjects listened to the voice least similar to their own and less activated while they listened to the voice most similar to their own. Our results suggest that the superior temporal gyrus is involved in neural sharpening for one’s own voice. The weaker activation elicited by voices similar to one’s own indicates that these areas respond not only to the differences between self and others, but also to the finer details of one’s own voice.


2000 ◽  
Vol 83 (5) ◽  
pp. 2708-2722 ◽  
Author(s):  
Jos J. Eggermont

Neural synchrony within and between auditory cortical fields is evaluated with respect to its potential role in feature binding and in the coding of tone and noise sound pressure level. Simultaneous recordings were made in 24 cats with either two electrodes in primary auditory cortex (AI) and one in the anterior auditory field (AAF), or one electrode each in AI, AAF, and secondary auditory cortex. Cross-correlograms (CCHs) with 1-ms binwidth were calculated for tone pips, noise bursts, and silence (i.e., poststimulus) as a function of intensity level. Across stimuli and intensity levels, 62% of the 1,868 pairs calculated for each condition showed significant stimulus-onset CCHs, and 58% showed significant poststimulus CCHs. The cross-correlation coefficient at stimulus onset was higher for single-electrode pairs than for dual-electrode pairs, and higher for noise bursts than for tone pips. The onset correlation for single-electrode pairs was only marginally larger than the poststimulus correlation. For pairs from electrodes across area boundaries, the onset correlations were a factor of 3–4 higher than the poststimulus correlations. The within-AI dual-electrode peak correlation was higher than that across areas, especially under spontaneous conditions. Correlation strengths for between-area pairs were independent of the difference in characteristic frequency (CF), thereby providing a mechanism for feature binding of broadband sounds. For noise-burst stimulation, the onset correlation for between-area pairs was independent of stimulus intensity regardless of the difference in CF. In contrast, for tone-pip stimulation, a significant dependence of the peak correlation strength on intensity level was found for pairs involving AI and/or AAF with a CF difference of less than one octave. Across all areas, neither driven rate, between-area peak correlation strength, nor a combination of the two predicted stimulus intensity. However, between-area peak correlation strength performed better than firing rate at deciding whether a stimulus was present or absent.
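For readers unfamiliar with the measure, here is a minimal sketch of a 1-ms-bin cross-correlogram between two spike trains, the synchrony statistic used in this study. This is not the original analysis code; the lag window is an assumption, and the significance testing (e.g., against a shift predictor) reported in the paper is omitted.

```python
# A minimal sketch of a 1-ms-bin cross-correlogram (CCH) between two
# spike trains; spike times are given in milliseconds.
import numpy as np

def cross_correlogram(spikes_a, spikes_b, max_lag_ms=50.0, bin_ms=1.0):
    """Histogram of spike-time differences (b minus a) within +/- max_lag_ms."""
    spikes_a = np.asarray(spikes_a, dtype=float)
    spikes_b = np.asarray(spikes_b, dtype=float)
    edges = np.arange(-max_lag_ms, max_lag_ms + bin_ms, bin_ms)
    counts = np.zeros(len(edges) - 1)
    for t in spikes_a:
        diffs = spikes_b - t
        in_window = diffs[(diffs >= -max_lag_ms) & (diffs < max_lag_ms)]
        counts += np.histogram(in_window, bins=edges)[0]
    lags = edges[:-1] + bin_ms / 2   # bin centers in ms
    return lags, counts              # a peak near 0 ms indicates synchrony
```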


Neuroreport ◽  
2014 ◽  
Vol 25 (18) ◽  
pp. 1418-1423 ◽  
Author(s):  
Juan M. Gutiérrez-Garralda ◽  
Carlos R. Hernandez-Castillo ◽  
Fernando A. Barrios ◽  
Erick H. Pasaye ◽  
Juan Fernandez-Ruiz

Author(s):  
Stefan Koelsch

During listening, acoustic features of sounds are extracted in the auditory system (in the auditory brainstem, thalamus, and auditory cortex). To establish auditory percepts of melodies and rhythms (i.e., to establish auditory “Gestalten” and auditory objects), sound information is buffered and processed in auditory sensory memory. Musical structure is then processed based on acoustic similarities and rhythmic organization. In addition, musical structure is processed according to (implicit) knowledge about the musical regularities underlying scales, melodic and harmonic progressions, and so on. These structures involve both local and (hierarchically organized) nonlocal dependencies. This chapter reviews the neural correlates of these processes, with regard to both brain-electric responses to sounds and the neuroanatomical architecture of music perception.

