A1 neurons
Recently Published Documents

TOTAL DOCUMENTS: 26 (five years: 4)
H-INDEX: 14 (five years: 1)

2020 ◽ Vol 124 (6) ◽ pp. 1706-1726
Author(s): Jeffrey S. Johnson ◽ Mamiko Niwa ◽ Kevin N. O’Connor ◽ Mitchell L. Sutter

ML neurons synchronized less than A1 neurons, consistent with a hierarchical temporal-to-rate transformation. Both A1 and ML contained a class of modulation transfer functions previously unreported in cortex, with a peak at low modulation frequencies (MFs), a trough at middle MFs, and responses at high MFs similar to those to unmodulated noise. The results support a hierarchical shift toward a two-pool opponent code, in which the subtraction of neural activity between two populations of oppositely tuned neurons encodes AM.
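The two-pool opponent readout described above can be pictured with a minimal numerical sketch. The code below is not from the paper; it assumes two hypothetical populations with opposite monotonic rate tuning to modulation depth (invented baseline rates and gains) and decodes depth from the difference of their mean rates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modulation depths (0 = unmodulated noise, 1 = fully modulated).
depths = np.linspace(0.0, 1.0, 11)

# Two opposing pools: one increases and one decreases its firing rate with depth.
# Baseline rates and gains are illustrative numbers, not fitted to data.
def pool_rates(depth, n_trials=50):
    increasing = rng.poisson(lam=20.0 + 30.0 * depth, size=n_trials)   # spikes/s
    decreasing = rng.poisson(lam=35.0 - 25.0 * depth, size=n_trials)
    return increasing, decreasing

for d in depths:
    inc, dec = pool_rates(d)
    # Opponent signal: subtract the mean activity of the two pools.
    opponent = inc.mean() - dec.mean()
    print(f"depth {d:4.2f}  opponent signal {opponent:6.2f} spikes/s")
```

The opponent signal grows monotonically with depth, so it could serve as a rate code even when neither pool alone synchronizes to the modulation; this illustrates, rather than reproduces, the coding scheme proposed above.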


2020
Author(s): Huan-huan Zeng ◽ Jun-feng Huang ◽ Zhiming Shen ◽ Neng Gong ◽ Yun-qing Wen ◽ ...

Abstract
Vocal communication is crucial for animals' survival, but the underlying neural mechanism remains largely unclear. Using calcium imaging of large neuronal populations in the primary auditory cortex (A1) of head-fixed awake marmosets, we found specific ensembles of A1 neurons that responded selectively to distinct monosyllables or disyllables in natural marmoset calls. These selective responses were stable over one week of recording, and disyllable-selective cells completely lost their selective responses after anesthesia. No selective response was found for novel disyllables constructed by reversing the sequence of the constituent monosyllables or by extending the interval between them beyond ~1 second. These findings indicate that neuronal selectivity to natural calls exists in A1 and pave the way for studying the circuit mechanisms underlying vocal communication in awake non-human primates.
One Sentence Summary: Primary auditory cortex neurons in awake marmosets can encode the sequence and interval of syllables in natural calls.
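As an informal illustration of how call selectivity of the kind reported above might be quantified from imaging data, the sketch below computes a simple selectivity index from trial-averaged response amplitudes. The array shapes, call labels, threshold, and the particular index are assumptions for illustration, not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: trial-averaged response amplitude (e.g., mean dF/F) of each
# neuron to each call type; shape (n_neurons, n_call_types).
n_neurons, n_calls = 200, 4
responses = rng.gamma(shape=2.0, scale=0.05, size=(n_neurons, n_calls))

# Make a subset of cells respond preferentially to one call type.
selective = rng.choice(n_neurons, size=40, replace=False)
responses[selective, rng.integers(0, n_calls, size=40)] += 0.5

# Selectivity index: (best - mean of others) / (best + mean of others).
best = responses.max(axis=1)
others = (responses.sum(axis=1) - best) / (n_calls - 1)
si = (best - others) / (best + others)

print(f"{(si > 0.5).sum()} of {n_neurons} cells exceed an SI threshold of 0.5")
```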


2020 ◽ Vol 30 (5) ◽ pp. 3130-3147
Author(s): Jonathan Y Shih ◽ Kexin Yuan ◽ Craig A Atencio ◽ Christoph E Schreiner

Abstract
Classic spectrotemporal receptive fields (STRFs) for auditory neurons are usually expressed as a single linear filter representing a single encoded stimulus feature. Multifilter STRF models represent the stimulus-response relationship of primary auditory cortex (A1) neurons more accurately because they can capture multiple stimulus features. To determine whether multifilter processing is unique to A1, we compared the utility of single-filter versus multifilter STRF models in the medial geniculate body (MGB), anterior auditory field (AAF), and A1 of ketamine-anesthetized cats. We estimated STRFs using both spike-triggered average (STA) and maximally informative dimension (MID) methods. Comparison of the basic filter properties of the first maximally informative dimension (MID1) and the second maximally informative dimension (MID2) in the 3 stations revealed broader spectral integration of MID2s in MGBv and A1 as opposed to AAF. MID2 peak latency was substantially longer than for STAs and MID1s in all 3 stations. The 2-filter MID model captured more information and yielded better predictions in many neurons from all 3 areas, but disproportionately more so in AAF and A1 compared with MGBv. Significantly, information-enhancing cooperation between the 2 MIDs was largely restricted to A1 neurons. This demonstrates significant differences in how these 3 forebrain stations process auditory information, as expressed in effective and synergistic multifilter processing.
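For readers unfamiliar with the single-filter baseline mentioned here, the spike-triggered average can be sketched in a few lines. The stimulus spectrogram, bin counts, filter shape, and spike-generation model below are synthetic stand-ins, not data or code from the study; recovering multiple filters would require information-based methods (e.g., MID) beyond this sketch.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic ripple-like stimulus: spectrogram with n_freq channels and n_t time bins.
n_freq, n_t, n_lags = 30, 20000, 20
stim = rng.standard_normal((n_freq, n_t))

# Ground-truth linear filter (Gabor-like patch) used only to simulate spikes.
f, t = np.meshgrid(np.linspace(-1, 1, n_freq), np.linspace(0, 1, n_lags), indexing="ij")
true_filter = np.exp(-(f**2) / 0.2 - ((t - 0.3) ** 2) / 0.02) * np.cos(8 * t)

# Simulate a Poisson-like neuron driven by the half-rectified filtered stimulus.
drive = np.array([
    np.sum(true_filter * stim[:, i - n_lags:i]) for i in range(n_lags, n_t)
])
spikes = rng.poisson(0.1 * np.maximum(drive, 0.0))

# Spike-triggered average: spike-count-weighted mean of the preceding stimulus segments.
sta = np.zeros((n_freq, n_lags))
for i, count in enumerate(spikes, start=n_lags):
    if count:
        sta += count * stim[:, i - n_lags:i]
sta /= max(spikes.sum(), 1)

corr = np.corrcoef(sta.ravel(), true_filter.ravel())[0, 1]
print(f"correlation between STA and the true filter: {corr:.2f}")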


2019 ◽ Vol 116 (8) ◽ pp. 3239-3244
Author(s): Huan-huan Zeng ◽ Jun-feng Huang ◽ Ming Chen ◽ Yun-qing Wen ◽ Zhi-ming Shen ◽ ...

The marmoset has emerged as a useful nonhuman primate species for studying brain structure and function. Previous studies of the mouse primary auditory cortex (A1) showed that neurons with different frequency-tuning preferences are intermixed within local cortical regions, despite a large-scale tonotopic organization. Here we found that the frequency-tuning properties of marmoset A1 neurons are highly uniform within local cortical regions. We first defined the tonotopic map of A1 using intrinsic optical imaging and then used in vivo two-photon calcium imaging of large neuronal populations to examine tonotopic preference at the single-cell level. Tuning preferences of layer 2/3 neurons were highly homogeneous over hundreds of micrometers in both the horizontal and vertical directions. Thus, marmoset A1 neurons are distributed in a tonotopic manner at both macroscopic and microscopic levels. Such organization is likely to be important for the organization of auditory circuits in the primate brain.
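One way to picture the local homogeneity result is to compare best-frequency differences between cell pairs as a function of their cortical distance. The sketch below does this on simulated cell positions and best frequencies; the smooth tonotopic gradient, field-of-view size, scatter level, and distance bins are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical imaged field of view: cell positions in micrometers.
n_cells = 300
xy = rng.uniform(0, 500, size=(n_cells, 2))

# Assume a smooth tonotopic gradient along x: best frequency (BF) in octaves,
# plus a small amount of local scatter (the "homogeneous" scenario).
bf = xy[:, 0] / 250.0 + rng.normal(0, 0.1, n_cells)   # ~2 octaves across 500 um

# Pairwise cortical distances and absolute BF differences.
diff_xy = xy[:, None, :] - xy[None, :, :]
dist = np.sqrt((diff_xy ** 2).sum(-1))
dbf = np.abs(bf[:, None] - bf[None, :])

# Median BF difference for nearby versus distant pairs.
near = dbf[(dist > 0) & (dist < 50)]
far = dbf[dist > 300]
print(f"median |dBF| (<50 um):  {np.median(near):.2f} octaves")
print(f"median |dBF| (>300 um): {np.median(far):.2f} octaves")
```

Under this homogeneous scenario, nearby pairs differ by only a fraction of an octave while distant pairs follow the large-scale gradient; a salt-and-pepper map would show large differences at both distances.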


2018
Author(s): Huan-huan Zeng ◽ Jun-feng Huang ◽ Ming Chen ◽ Yun-qing Wen ◽ Zhi-ming Shen ◽ ...

Abstract
The marmoset has emerged as a useful non-human primate species for studying brain structure and function. Previous studies of the mouse primary auditory cortex (A1) showed that neurons with different frequency-tuning preferences are intermixed within local cortical regions, despite a large-scale tonotopic organization. Here we found that the frequency-tuning properties of marmoset A1 neurons are highly uniform within local cortical regions. We first defined the tonotopic map of A1 using intrinsic optical imaging, and then used in vivo two-photon calcium imaging of large neuronal populations to examine tonotopic preference at the single-cell level. Tuning preferences of layer 2/3 neurons were highly homogeneous over hundreds of micrometers in both the horizontal and vertical directions. Thus, marmoset A1 neurons are distributed in a tonotopic manner at both macroscopic and microscopic levels. Such organization is likely to be important for the organization of auditory circuits in the primate brain.


2015 ◽ Vol 113 (1) ◽ pp. 307-327
Author(s): Mamiko Niwa ◽ Kevin N. O'Connor ◽ Elizabeth Engall ◽ Jeffrey S. Johnson ◽ M. L. Sutter

We recorded from middle lateral belt (ML) and primary (A1) auditory cortical neurons while animals discriminated amplitude-modulated (AM) sounds and also while they sat passively. Engagement in AM discrimination improved ML and A1 neurons' ability to discriminate AM with both firing rate and phase-locking; however, task engagement affected neural AM discrimination differently in the two fields. The results suggest that these two areas use different AM coding schemes: a "single mode" in A1 that relies on increased activity for AM relative to unmodulated sounds, and a "dual-polar mode" in ML that uses both increases and decreases in neural activity to encode modulation. In the dual-polar ML code, nonsynchronized responses might play a special role. The results are consistent with findings in the primary and secondary somatosensory cortices during discrimination of vibrotactile modulation frequency, implicating a common scheme in the hierarchical processing of temporal information across modalities. The time course of activity differences between behaving and passive conditions was also distinct in A1 and ML and may have implications for auditory attention. At modulation depths ≥16% (approximately the behavioral threshold), the engagement-related improvement of A1 neurons in distinguishing AM from unmodulated noise is relatively constant, or grows slightly, with increasing modulation depth. In ML, the improvement during engagement is most pronounced near threshold and disappears at highly suprathreshold depths. This ML effect is evident later in the stimulus, and mainly in nonsynchronized responses. This suggests that attention-related increases in activity are stronger or longer-lasting for more difficult stimuli in ML.
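The contrast between a "single mode" and a "dual-polar mode" rate code can be made concrete with a small ROC-style sketch: a detector that only counts rate increases misses neurons that signal AM by suppression, whereas a two-sided readout uses both directions. The firing-rate distributions below are invented for illustration, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

def auroc(a, b):
    """Probability that a random draw from a exceeds one from b (area under ROC)."""
    a, b = np.asarray(a), np.asarray(b)
    greater = (a[:, None] > b[None, :]).mean()
    ties = (a[:, None] == b[None, :]).mean()
    return greater + 0.5 * ties

n_trials = 200
unmod = rng.poisson(20, n_trials)          # rate to unmodulated noise
enhanced = rng.poisson(28, n_trials)       # cell that raises its rate for AM
suppressed = rng.poisson(13, n_trials)     # cell that lowers its rate for AM

for name, am in [("enhanced", enhanced), ("suppressed", suppressed)]:
    one_sided = auroc(am, unmod)                   # "single mode": increases only
    two_sided = max(one_sided, 1.0 - one_sided)    # "dual-polar": either direction
    print(f"{name:10s}  increase-only AUC {one_sided:.2f}   two-sided AUC {two_sided:.2f}")
```

The suppressed cell looks uninformative to the increase-only readout but highly informative to the two-sided one, which is the essence of the dual-polar scheme attributed to ML.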


2012 ◽ Vol 108 (5) ◽ pp. 1366-1380
Author(s): Stefan Klampfl ◽ Stephen V. David ◽ Pingbo Yin ◽ Shihab A. Shamma ◽ Wolfgang Maass

To process the rich temporal structure of their acoustic environment, organisms have to integrate information over time into an appropriate neural response. Previous studies have addressed how the responses of auditory neurons to a current sound are modulated as a function of the immediate stimulation history, but a quantitative analysis of this important computational process has been missing. In this study, we analyzed temporal integration of information in the spike output of 122 single neurons in primary auditory cortex (A1) of four awake ferrets in response to random tone sequences. We quantified the information contained in the responses about both current and preceding sounds in two ways: by directly estimating the mutual information between stimulus and response, and by training linear classifiers to decode information about the stimulus from the neural response. We found that 1) many neurons conveyed a significant amount of information not only about the current tone but also, simultaneously, about the previous tone; 2) the neural response to tone sequences was a nonlinear combination of the responses to the tones in isolation; and 3) nevertheless, much of the information about current and previous tones could be extracted by linear decoders. Furthermore, our analysis of these experimental data shows that information-theoretic methods and standard machine-learning decoders for extracting specific information yield quite similar results.
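A toy version of the decoding analysis described above: train linear readouts to recover the identity of the current tone and of the preceding tone from a simulated neuron's spike count. The response model (a count that mixes current-tone tuning with history-dependent adaptation) is an assumption made for illustration, and the decoder is a plain least-squares linear readout rather than the specific classifiers used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

n_tones, n_trials = 4, 4000
cur = rng.integers(0, n_tones, n_trials)            # current tone on each trial
prev = np.roll(cur, 1)                              # previous tone in the sequence

# Simulated single-neuron response: tuned to the current tone, adapted (reduced)
# when the previous tone was the same, plus Poisson noise. Purely illustrative.
tuning = np.array([5.0, 12.0, 20.0, 9.0])
rate = tuning[cur] * np.where(prev == cur, 0.6, 1.0) + 3.0
counts = rng.poisson(rate).astype(float)

def linear_decode(x, labels, n_classes):
    """One-vs-rest least-squares linear readout; returns decoding accuracy."""
    X = np.column_stack([x, np.ones_like(x)])       # feature = spike count + bias
    Y = np.eye(n_classes)[labels]                   # one-hot targets
    train = slice(0, len(x) // 2)
    test = slice(len(x) // 2, None)
    W, *_ = np.linalg.lstsq(X[train], Y[train], rcond=None)
    pred = np.argmax(X[test] @ W, axis=1)
    return (pred == labels[test]).mean()

print(f"current-tone decoding accuracy : {linear_decode(counts, cur, n_tones):.2f}")
print(f"previous-tone decoding accuracy: {linear_decode(counts, prev, n_tones):.2f}")
print(f"chance level                   : {1 / n_tones:.2f}")
```

With only a scalar spike count as the feature, accuracy for the previous tone is modest; the point is simply that stimulation history can leave a linearly decodable trace in the response.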


2011 ◽ Vol 163 (1-2) ◽ pp. 53
Author(s): K. Palma-Rigo ◽ T.P. Nguyen-Huu ◽ P.J. Davern ◽ J.K. Bassi ◽ J.L. Elghozi ◽ ...

2011 ◽ Vol 105 (2) ◽ pp. 582-600
Author(s): Pingbo Yin ◽ Jeffrey S. Johnson ◽ Kevin N. O'Connor ◽ Mitchell L. Sutter

Conflicting results have led to different views about how temporal modulation is encoded in primary auditory cortex (A1). Some studies find a substantial population of neurons that change firing rate without synchronizing to temporal modulation, whereas other studies fail to see these nonsynchronized neurons. As a result, the roles and scope of synchronized temporal codes and nonsynchronized rate codes in AM processing in A1 remain unresolved. We recorded A1 neurons' responses in awake macaques to sinusoidal AM noise. We find that most (37–78%) neurons synchronize to at least one modulation frequency (MF) without exhibiting nonsynchronized responses. However, we find both exclusively nonsynchronized neurons (7–29%) and "mixed-mode" neurons (13–40%) that synchronize to at least one MF and fire nonsynchronously to at least one other. We introduce new measures of modulation encoding and temporal synchrony that can improve the analysis of how neurons encode temporal modulation. These include comparing AM responses to responses to unmodulated sounds, and a vector strength measure that is suitable for single-trial analysis. Our data support a transformation from a temporally based population code of AM to a rate-based code as information ascends the auditory pathway. The number of mixed-mode neurons found in A1 indicates that this transformation is not yet complete, and A1 neurons may carry multiplexed temporal and rate codes.
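Vector strength, the synchrony measure referred to above, is straightforward to compute. The sketch below shows the classic definition applied to simulated spike times, plus a naive per-trial variant in the spirit of a single-trial measure; the exact phase-projected statistic used by the authors is not reproduced here, and all spike trains and parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(6)

def vector_strength(spike_times, mod_freq):
    """Classic vector strength: resultant length of spike phases at mod_freq."""
    if len(spike_times) == 0:
        return 0.0
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.exp(1j * phases).mean())

# Simulate spikes over a 1-s AM noise burst at a 10-Hz modulation frequency:
# a synchronized neuron fires near one modulation phase on every cycle,
# a nonsynchronized neuron fires uniformly in time.
mod_freq, duration, n_spikes = 10.0, 1.0, 80
sync_times = (rng.integers(0, int(mod_freq * duration), n_spikes)
              + 0.25 + 0.03 * rng.standard_normal(n_spikes)) / mod_freq
nonsync_times = rng.uniform(0, duration, n_spikes)

print(f"synchronized    VS = {vector_strength(sync_times, mod_freq):.2f}")
print(f"nonsynchronized VS = {vector_strength(nonsync_times, mod_freq):.2f}")

# A per-trial variant: compute VS separately on each trial's (fewer) spikes,
# which is noisier but allows trial-by-trial comparisons.
trial_vs = [vector_strength(rng.uniform(0, duration, 8), mod_freq) for _ in range(20)]
print(f"nonsynchronized single-trial VS, mean ± SD: "
      f"{np.mean(trial_vs):.2f} ± {np.std(trial_vs):.2f}")
```

Note that single-trial estimates from few spikes are biased upward even for unsynchronized firing, which is one reason a corrected single-trial statistic is useful.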


2011 ◽ Vol 63 (1) ◽ pp. 167-176
Author(s): Larisa Ilijin ◽ Milena Vlahovic ◽ Marija Mrdakovic ◽ D. Mircic ◽ Dajana Todorovic ◽ ...

Morphometric changes (the size of neurons and their nuclei) in the protocerebral dorsomedial A1′ and dorsolateral L2′ neurosecretory neurons were analyzed in Lymantria dispar larvae after exposure to a strong static magnetic field (SMF, 235 mT) and an extremely low frequency magnetic field (ELF MF, 2 mT). An increase in the size of A1′ neurons and their nuclei was observed after acute exposure to the SMF, whereas a decrease in the size of these neurons and their nuclei was observed after exposure to the ELF MF. The size of L2′ neurons and their nuclei tended to decrease after exposure to both the SMF and the ELF MF. Quantification of protein bands in the Mr range corresponding to the large form of the prothoracicotropic neurohormone indicates that the amount of this protein decreased after exposure to both types of magnetic field.

