Social experience dependent plasticity of mouse song selectivity without that of song components

2021
Author(s):  
Swapna Agarwalla ◽  
Sharba Bandyopadhyay

Syllable sequences in male mouse ultrasonic vocalizations (USVs), or songs, contain structure that can be quantified through predictability, as in birdsong and aspects of speech. The apparent innateness of USVs and their lack of learnability have discounted mouse USVs as a model for speech-like social communication and its deficits. We theoretically extracted informative contextual natural sequences (SN), which female mice preferred. Primary auditory cortex (A1) supragranular neurons show differential selectivity to the same syllables embedded in SN versus random sequences (SR). Excitatory neurons (EXNs) in females increased their selectivity to whole SNs over SRs with the extent of social exposure to a male, while syllable selectivity remained unchanged. Thus single mouse A1 neurons adaptively represent the entire order of acoustic units without altering selectivity to the individual units, a capacity fundamental to speech perception. Additionally, the observed plasticity was replicated with silencing of somatostatin-positive neurons, which had plastic effects opposite to those of EXNs, pointing to possible pathways involved in the perception of sound sequences.
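The sequence "predictability" invoked above is commonly operationalized as the conditional entropy of syllable transitions; lower entropy means a more predictable song. The abstract does not specify the authors' estimator, so the following is only an illustrative sketch using a first-order (bigram) Markov measure, with hypothetical syllable labels:

```python
from collections import Counter
import math

def transition_entropy(seq):
    """Conditional entropy H(next | current) of a syllable sequence, in bits.
    A deterministic sequence scores 0; less predictable sequences score higher."""
    pair_counts = Counter(zip(seq, seq[1:]))  # counts of (current, next) bigrams
    ctx_counts = Counter(seq[:-1])            # counts of each context syllable
    n = len(seq) - 1
    h = 0.0
    for (a, b), c in pair_counts.items():
        p_pair = c / n            # joint probability of the bigram
        p_cond = c / ctx_counts[a]  # probability of b given context a
        h -= p_pair * math.log2(p_cond)
    return h

# A strictly alternating "song" is perfectly predictable...
print(transition_entropy("ABABABAB"))  # 0.0
# ...while a shuffled version of the same syllables is not.
print(transition_entropy("AABABBBA"))
```

A structured natural sequence (SN) would score near zero on this measure, while a random sequence (SR) built from the same syllable inventory would score close to the unconditional entropy of the syllables.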

2009
Vol 5 (5)
pp. 589-592
Author(s):  
K. Hammerschmidt ◽  
K. Radyushkin ◽  
H. Ehrenreich ◽  
J. Fischer

The ultrasonic vocalizations of mice are attracting increasing attention, because they have been recognized as an informative readout in genetically modified strains. In addition, the observation that male mice produce elaborate sequences of ultrasonic vocalizations (‘song’) when exposed to female mice or their scents has sparked a debate as to whether these sounds are—in terms of their structure and function—analogous to bird song. We conducted playback experiments with cycling female mice to explore the function of male mouse songs. Using a place preference design, we show that these vocalizations elicited approach behaviour in females. In contrast, the playback of whistle-like artificial control sounds did not evoke approach responses. Surprisingly, the females also did not respond to pup isolation calls. In addition, female responses did not vary in relation to reproductive cycle, i.e. whether they were in oestrus or not. Furthermore, our data revealed a rapid habituation of subjects to the experimental situation, which stands in stark contrast to other species' responses to courtship vocalizations. Nevertheless, our results clearly demonstrate that male mouse songs elicit females' interest.


2013
Vol 110 (5)
pp. 1087-1096
Author(s):  
Heesoo Kim ◽  
Shaowen Bao

Cortical sensory representation is highly adaptive to the environment, and prevalent or behaviorally important stimuli are often overrepresented. One class of such stimuli is species-specific vocalizations. Rats vocalize in the ultrasonic range >30 kHz, but cortical representation of this frequency range has not been systematically examined. We recorded in vivo cortical electrophysiological responses to ultrasonic pure-tone pips, natural ultrasonic vocalizations, and pitch-shifted vocalizations to assess how rats represent this ethologically relevant frequency range. We find that nearly 40% of the primary auditory cortex (AI) represents an octave-wide band of ultrasonic vocalization frequencies (UVFs; 32–64 kHz) compared with <20% for other octave bands <32 kHz. These UVF neurons respond preferentially and reliably to ultrasonic vocalizations. The UVF overrepresentation matures in the cortex before it develops in the central nucleus of inferior colliculus, suggesting a cortical origin and corticofugal influences. Furthermore, the development of cortical UVF overrepresentation depends on early acoustic experience. These results indicate that natural sensory experience causes large-scale cortical map reorganization and improves representations of species-specific vocalizations.


eLife
2014
Vol 3
Author(s):  
Rajnish P Rao ◽  
Falk Mielke ◽  
Evgeny Bobrov ◽  
Michael Brecht

Social interactions involve multi-modal signaling. Here, we study interacting rats to investigate audio-haptic coordination and multisensory integration in the auditory cortex. We find that facial touch is associated with an increased rate of ultrasonic vocalizations, which are emitted at the whisking rate (∼8 Hz) and preferentially initiated in the retraction phase of whisking. In a small subset of auditory cortex regular-spiking neurons, we observed excitatory and heterogeneous responses to ultrasonic vocalizations. Most fast-spiking neurons showed a stronger response to calls. Interestingly, facial touch-induced inhibition in the primary auditory cortex and off-responses after termination of touch were twofold stronger than responses to vocalizations. Further, touch modulated the responsiveness of auditory cortex neurons to ultrasonic vocalizations. In summary, facial touch during social interactions involves precisely orchestrated calling-whisking patterns. While ultrasonic vocalizations elicited a rather weak population response from the regular spikers, the modulation of neuronal responses by facial touch was remarkably strong.


2020
Author(s):  
Emmanuel Biau ◽  
Danying Wang ◽  
Hyojin Park ◽  
Ole Jensen ◽  
Simon Hanslmayr

Audiovisual speech perception relies, among other things, on our expertise in mapping a speaker’s lip movements onto speech sounds. This multimodal matching is facilitated by salient syllable features that align lip movements and acoustic envelope signals in the 4–8 Hz theta band. Although non-exclusive, the predominance of theta rhythms in speech processing has been firmly established by studies showing that neural oscillations track the acoustic envelope in the primary auditory cortex. Equivalently, theta oscillations in the visual cortex entrain to lip movements, and the auditory cortex is recruited during silent speech perception. These findings suggest that neuronal theta oscillations may play a functional role in organising information flow across visual and auditory sensory areas. We presented silent speech movies while participants performed a pure tone detection task to test whether entrainment to lip movements directs the auditory system and drives behavioural outcomes. We showed that auditory detection varied depending on the ongoing theta phase conveyed by lip movements in the movies. In a complementary experiment presenting the same movies while recording participants’ electro-encephalogram (EEG), we found that silent lip movements entrained neural oscillations in the visual and auditory cortices, with the visual phase leading the auditory phase. These results support the idea that the visual cortex, entrained by lip movements, filtered the sensitivity of the auditory cortex via theta phase synchronisation.
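The "ongoing theta phase" of a lip-movement signal is typically obtained as the instantaneous phase of the analytic signal (the Hilbert transform of a band-limited trace). The study's actual pipeline is not described here, so the sketch below is only a generic, dependency-free illustration using a naive DFT, assuming the input trace has already been bandpass-filtered to the theta range:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for short windows)."""
    n_pts = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
                for n in range(n_pts)) for k in range(n_pts)]

def idft(spectrum):
    """Inverse DFT."""
    n_pts = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * math.pi * k * n / n_pts)
                for k in range(n_pts)) / n_pts for n in range(n_pts)]

def instantaneous_phase(x):
    """Phase of the analytic signal: zero out negative frequencies,
    double positive ones, then invert (the discrete Hilbert construction)."""
    n_pts = len(x)
    spectrum = dft(x)
    gain = [0.0] * n_pts
    gain[0] = 1.0
    if n_pts % 2 == 0:
        gain[n_pts // 2] = 1.0
        for k in range(1, n_pts // 2):
            gain[k] = 2.0
    else:
        for k in range(1, (n_pts + 1) // 2):
            gain[k] = 2.0
    analytic = idft([s * g for s, g in zip(spectrum, gain)])
    return [cmath.phase(a) for a in analytic]

# Example: a pure 4-cycle theta-like oscillation over a 64-sample window.
signal = [math.cos(2 * math.pi * 4 * n / 64) for n in range(64)]
phases = instantaneous_phase(signal)  # phases[n] ≈ 2*pi*4*n/64, wrapped to (-pi, pi]
```

In practice one would use an FFT-based routine (e.g. `scipy.signal.hilbert`) and then bin detection accuracy by the phase at stimulus onset; the construction above shows only the phase-extraction step.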


2019
Author(s):  
Jong Hoon Lee ◽  
Xiaoqin Wang ◽  
Daniel Bendor

In primary auditory cortex, slowly repeated acoustic events are represented temporally by phase-locked activity of single neurons. Single-unit studies in awake marmosets (Callithrix jacchus) have shown that a sub-population of these neurons also monotonically increase or decrease their average discharge rate during stimulus presentation for higher repetition rates. Building on a computational single-neuron model that generates phase-locked responses with stimulus-evoked excitation followed by strong inhibition, we find that stimulus-evoked short-term depression is sufficient to produce synchronized monotonic positive and negative responses to slowly repeated stimuli. By exploring model robustness and comparing it to other models for adaptation to such stimuli, we conclude that short-term depression best explains our observations in single-unit recordings in awake marmosets. Using this model, we emulated how single neurons could encode and decode multiple aspects of an acoustic stimulus with the monotonic positive and negative encoding of a given stimulus feature. Together, our results show that a simple biophysical mechanism in single neurons can allow a more complex encoding and decoding of acoustic stimuli.
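The short-term depression mechanism the abstract credits can be illustrated with a simplified Tsodyks-Markram resource model. This is not the authors' full model (which also includes evoked excitation followed by inhibition), and the parameter values here are hypothetical; it shows only how depression alone yields a per-event response that falls with repetition rate while the summed drive per second still rises:

```python
import math

def steady_state_release(interval, tau_rec=0.5, u=0.5, n_events=200):
    """Per-event synaptic release under periodic stimulation with
    short-term depression (simplified Tsodyks-Markram resource model).
    interval: seconds between events; tau_rec: recovery time constant;
    u: fraction of available resources released per event."""
    r = 1.0  # fraction of synaptic resources currently available
    release = 0.0
    for _ in range(n_events):
        release = u * r  # response evoked by this event
        r -= release     # depletion by the event
        # exponential recovery toward 1.0 during the inter-event interval
        r = 1.0 - (1.0 - r) * math.exp(-interval / tau_rec)
    return release

slow = steady_state_release(1.0)  # 1 Hz stimulation
fast = steady_state_release(0.1)  # 10 Hz stimulation
# Per-event (synchronized) response depresses at the higher rate,
# while total drive per second (release / interval) still grows with rate.
```

A downstream neuron reading out per-event amplitude would show a monotonic negative rate response; one integrating total drive would show a monotonic positive one, matching the two response classes described above.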


2007
Vol 97 (2)
pp. 1726-1737
Author(s):  
M. L. Phan ◽  
G. H. Recanzone

A fundamental task of the auditory system is to process rapidly occurring acoustic stimuli, which are basic components of complex signals such as animal vocalizations and human speech. Although the auditory cortex is known to subserve the perception of acoustic temporal events, relatively little is currently understood about how single neurons respond to such stimuli. We recorded the responses of single neurons in the primary auditory cortex of alert monkeys performing an auditory task. The stimuli consisted of four tone pips with equal duration and interpip interval, with the first and last pip of the sequence being near the characteristic frequency of the neuron under study. We manipulated the rate of presentation, the frequency of the middle two tone pips, and the order in which they were presented. Our results indicate that single cortical neurons are ineffective at responding to the individual tone pips of the sequence for pip durations of <12 ms, but did begin to respond synchronously to each pip of the sequence at 18-ms durations. In addition, roughly 40% of the neurons tested were able to discriminate the order in which the two middle tone pips were presented at durations of ≥24 ms. These data place the primate primary auditory cortex at an early processing stage of temporal rate discrimination.


2004
Vol 91 (1)
pp. 118-135
Author(s):  
Kyle T. Nakamoto ◽  
Jiping Zhang ◽  
Leonard M. Kitzes

The topographical response of a portion of an isofrequency contour in primary cat auditory cortex (AI) to a series of monaural and binaural stimuli was studied. Responses of single neurons to monaural and a matrix of binaural characteristic frequency tones, varying in average binaural level (ABL) and interaural level differences (ILD), were recorded. The topography of responses to monaural and binaural stimuli was appreciably different. Patches of cells that responded monotonically to increments in ABL alternated with patches that responded nonmonotonically to ABL. The patches were between 0.4 and 1 mm in length along an isofrequency contour. Differences were found among monotonic patches and among nonmonotonic patches. Topographically, activated and silent populations of neurons varied with both changes in ILD and changes in ABL, suggesting that the area of responsive units may underlie the coding of sound level and sound location.


1993
Vol 69 (2)
pp. 449-461
Author(s):  
M. N. Semple ◽  
L. M. Kitzes

1. Single-neuron responses were recorded in high-frequency regions of primary auditory cortex (AI) of anesthetized cats. Best-frequency tone pips were presented to each ear independently via sealed stimulus delivery systems, and the sound pressure level (SPL) at each ear was independently manipulated. Each neuron was studied with many dichotic combinations of SPL, chosen to incorporate a broad range of the two synthetic interaural level variables, interaural level difference (ILD) and average binaural level (ABL). This paper illustrates the common forms of binaural SPL selectivity observed in a sample of 204 single neurons located in AI. 2. Most neurons (>90%) were jointly influenced by ILD and ABL. A small proportion of bilaterally excitable (EE) neurons responded to ABL rather independently of ILD. Only one neuron was determined to respond to ILD independently of ABL. 3. Nonmonotonic selectivity for one or both of the binaural level cues was evident in >60% of our sample. Within the most effective range of ILD values, response strength was usually related nonmonotonically both to ILD and ABL. We have described units exhibiting this kind of dual nonmonotonic selectivity for the two binaural variables as being influenced by a Two-Way Intensity Network (TWIN). 4. Each of the response forms identified in an earlier study of the gerbil inferior colliculus was also found in this study of cat auditory cortex. However, the classes were evident in markedly different proportions. In particular, TWIN responses alone accounted for 36.2% of the sample, nearly four times the proportion found in the inferior colliculus in a previous study. 5. Units with similar binaural responses do not necessarily have similar monaural properties. For example, the typically nonmonotonic relation between response strength and ABL was often observed in the absence of a monaurally demonstrable nonmonotonicity.
There is no simple relation between a neuron's classification according to the sign of monaural influence and its response to ILD and ABL. In particular, EE neurons exhibited remarkably diverse binaural properties. 6. Since responses of nearly all AI neurons are influenced jointly by ABL and ILD, we contend that single neurons in primary auditory cortex are not specifically tuned to either cue. ILD and ABL are mathematical expressions relating the SPLs at the two ears to each other (as the difference and average, respectively), and any such combination is expressed most simply as a particular combination of SPL at each ear.

