Temporally selective processing of communication signals by auditory midbrain neurons

2011 ◽  
Vol 105 (4) ◽  
pp. 1620-1632 ◽  
Author(s):  
Taffeta M. Elliott ◽  
Jakob Christensen-Dalsgaard ◽  
Darcy B. Kelley

Perception of the temporal structure of acoustic signals contributes critically to vocal signaling. In the aquatic clawed frog Xenopus laevis, calls differ primarily in the temporal parameter of click rate, which conveys sexual identity and reproductive state. We show here that an ensemble of auditory neurons in the laminar nucleus of the torus semicircularis (TS) of X. laevis specializes in encoding vocalization click rates. We recorded single TS units while pure tones, natural calls, and synthetic clicks were presented directly to the tympanum via a vibration-stimulation probe. Synthesized click rates ranged from 4 to 50 Hz, the rate at which the clicks begin to overlap. Frequency selectivity and temporal processing were characterized using response-intensity curves, temporal-discharge patterns, and autocorrelations of reduplicated responses to click trains. Characteristic frequencies ranged from 140 to 3,250 Hz, with minimum thresholds of −90 dB re 1 mm/s at 500 Hz and −76 dB at 1,100 Hz near the dominant frequency of female clicks. Unlike units in the auditory nerve and dorsal medullary nucleus, most toral units respond selectively to the behaviorally relevant temporal feature of the rate of clicks in calls. The majority of neurons (85%) were selective for click rates, and this selectivity remained unchanged over sound levels 10 to 20 dB above threshold. Selective neurons give phasic, tonic, or adapting responses to tone bursts and click trains. Some algorithms that could compute temporally selective receptive fields are described.
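
A minimal sketch of the click-train autocorrelation analysis named in this abstract: an all-order interval histogram of a spike train whose peak at the click period indicates that the response reduplicates the click rate. The spike data, bin width, and lag window below are hypothetical and not taken from the study.

```python
import numpy as np

def autocorr_periodicity(spike_times, click_rate, bin_s=0.002, max_lag_s=0.3):
    """All-order autocorrelation histogram of a spike train; returns the
    histogram, the lag-bin centers, and the count in the bin at 1/click_rate."""
    t = np.asarray(spike_times)
    lags = (t[None, :] - t[:, None]).ravel()
    lags = lags[(lags > 0) & (lags <= max_lag_s)]
    edges = np.arange(0, max_lag_s + bin_s, bin_s)
    hist, _ = np.histogram(lags, bins=edges)
    centers = edges[:-1] + bin_s / 2
    peak_bin = np.argmin(np.abs(centers - 1.0 / click_rate))
    return hist, centers, hist[peak_bin]

# Hypothetical unit firing ~3 spikes per click of a 1-s, 20-Hz click train.
rng = np.random.default_rng(0)
spikes = np.sort((np.arange(0, 1, 0.05)[:, None]
                  + rng.normal(0.010, 0.002, (20, 3))).ravel())
hist, lag_centers, peak = autocorr_periodicity(spikes, click_rate=20)
print(f"coincidences in the bin at the 50-ms click period: {peak}")
```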

2005 ◽  
Vol 94 (1) ◽  
pp. 314-326 ◽  
Author(s):  
Alexander V. Galazyuk ◽  
Wenyu Lin ◽  
Daniel Llano ◽  
Albert S. Feng

A number of central auditory neurons exhibit paradoxical latency shift (PLS), a response characterized by longer response latencies at higher sound levels. PLS neurons are known to play a role in target ranging for echolocating bats that emit frequency-modulated sounds. We recently reported that early inhibition of a unit's oscillatory discharges is critical for PLS in the inferior colliculus (IC) of little brown bats. The goal of this study was to determine, in echolocating bats and in nonecholocating animals (frogs): 1) the detailed characteristics of PLS and whether PLS depends on sound level, frequency, and duration; and 2) the time course of the inhibition underlying PLS, using a paired-pulse paradigm. We found that 22% of IC neurons in bats and 15% in frogs exhibited periodic discharge patterns in response to tone pulses at high sound levels. The firing periodicity was unit specific and independent of sound level and duration. Other IC neurons (28% in bats; 14% in frogs) exhibited PLS. These PLS neurons shared several response characteristics: 1) PLS was largely independent of sound frequency, and 2) the magnitude of the shift in first-spike latency was either duration dependent or duration tolerant. For PLS neurons, application of bicuculline abolished PLS and unmasked the unit's periodic firing pattern, which served as the building block for PLS. In response to paired sound pulses, PLS neurons exhibited delay-dependent response suppression, confirming that high-threshold leading inhibition was responsible for PLS. The results also revealed the timing of the excitatory and inhibitory inputs underlying PLS and its role in time-domain processing.
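
A minimal sketch of how paradoxical latency shift could be flagged from first-spike latencies measured across sound levels. The latency data, the 2-ms criterion, and the classification labels are hypothetical illustrations, not the study's actual analysis.

```python
import numpy as np

def classify_latency_shift(latencies_ms, criterion_ms=2.0):
    """latencies_ms[level_dB] = first-spike latencies (ms) over trials at that level."""
    levels = sorted(latencies_ms)
    mean_lat = np.array([np.mean(latencies_ms[L]) for L in levels])
    slope = np.polyfit(levels, mean_lat, 1)[0]          # ms per dB
    shift = mean_lat[-1] - mean_lat[0]                   # highest minus lowest level
    if shift >= criterion_ms:
        return "PLS (latency lengthens with level)", slope
    if shift <= -criterion_ms:
        return "conventional (latency shortens with level)", slope
    return "level-tolerant latency", slope

# Hypothetical PLS unit: latency grows ~0.25 ms per dB above threshold.
rng = np.random.default_rng(1)
fake = {L: 12 + 0.25 * (L - 30) + rng.normal(0, 0.3, 10) for L in range(30, 91, 10)}
label, slope = classify_latency_shift(fake)
print(label, f"{slope:.2f} ms/dB")
```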


2010 ◽  
Vol 104 (2) ◽  
pp. 784-798 ◽  
Author(s):  
Noopur Amin ◽  
Patrick Gill ◽  
Frédéric E. Theunissen

We estimated the spectrotemporal receptive fields of neurons in the songbird auditory thalamus, nucleus ovoidalis, and compared the neural representation of complex sounds in the auditory thalamus to those found in the upstream auditory midbrain nucleus, mesencephalicus lateralis dorsalis (MLd), and the downstream auditory pallial region, field L. Our data refute the idea that the primary sensory thalamus acts as a simple relay nucleus: we find that the auditory thalamic receptive fields obtained in response to song are more complex than those found in the midbrain. Moreover, we find that linear tuning diversity and complexity in ovoidalis (Ov) are closer to those found in field L than in MLd. We also find prevalent tuning to intermediate spectral and temporal modulations, a feature that is unique to Ov. Thus even a feed-forward model of the sensory processing chain, in which neural responses in the sensory thalamus show intermediate response properties between those in the sensory periphery and those in the primary sensory cortex, is inadequate to describe the tuning found in Ov. Based on these results, we believe that the auditory thalamic circuitry plays an important role in generating novel, complex representations of specific features found in natural sounds.
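
A minimal sketch of one common way a linear spectrotemporal receptive field can be estimated: ridge-regularized regression of binned spike counts on a time-lagged spectrogram. The simulated data, ridge parameter, and bin counts are hypothetical, and the authors' actual estimation procedure may differ.

```python
import numpy as np

def estimate_strf(spectrogram, spikes, n_lags, ridge=1.0):
    """spectrogram: (n_freq, n_time) array; spikes: (n_time,) spike counts per bin.
    Returns an STRF of shape (n_freq, n_lags) estimated by ridge regression."""
    n_freq, n_time = spectrogram.shape
    X = np.zeros((n_time - n_lags, n_freq * n_lags))
    for i in range(n_lags, n_time):
        # Each row is the spectrogram over the n_lags bins preceding time bin i.
        X[i - n_lags] = spectrogram[:, i - n_lags:i].ravel()
    y = spikes[n_lags:].astype(float)
    X -= X.mean(axis=0)
    y -= y.mean()
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
    return w.reshape(n_freq, n_lags)

# Hypothetical simulation: recover a known 16-band, 10-lag filter.
rng = np.random.default_rng(2)
n_freq, n_time, n_lags = 16, 5000, 10
spec = rng.normal(size=(n_freq, n_time))
true = rng.normal(size=(n_freq, n_lags))
drive = np.array([np.sum(true * spec[:, t - n_lags:t]) for t in range(n_lags, n_time)])
spikes = np.concatenate([np.zeros(n_lags), rng.poisson(np.clip(drive, 0, None))])
strf = estimate_strf(spec, spikes, n_lags)
print("correlation with the true filter:",
      round(np.corrcoef(strf.ravel(), true.ravel())[0, 1], 2))
```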


2021 ◽  
Author(s):  
Nasim Winchester Vahidi

The mechanisms underlying how single auditory neurons and neuron populations encode natural and acoustically complex vocal signals, such as human speech or bird songs, are not well understood. Classical models focus on individual neurons, whose spike rates vary systematically as a function of change in a small number of simple acoustic dimensions. However, neurons in the caudal medial nidopallium (NCM), an auditory forebrain region in songbirds that is analogous to the secondary auditory cortex in mammals, have composite receptive fields (CRFs) that comprise multiple acoustic features tied to both increases and decreases in firing rates. Here, we investigated the anatomical organization and temporal activation patterns of auditory CRFs in European starlings exposed to natural vocal communication signals (songs). We recorded extracellular electrophysiological responses to various bird songs at auditory NCM sites, including both single and multiple neurons, and we then applied a quadratic model to extract large sets of CRF features that were tied to excitatory and suppressive responses at each measurement site. We found that the superset of CRF features yielded spatially and temporally distributed, generalizable representations of a conspecific song. Individual sites responded to acoustically diverse features, as there was no discernable organization of features across anatomically ordered sites. The CRF features at each site yielded broad, temporally distributed responses that spanned the entire duration of many starling songs, which can last for 50 s or more. Based on these results, we estimated that a nearly complete representation of any conspecific song, regardless of length, can be obtained by evaluating populations as small as 100 neurons. We conclude that natural acoustic communication signals drive a distributed yet highly redundant representation across the songbird auditory forebrain, in which adjacent neurons contribute to the encoding of multiple diverse and time-varying spectro-temporal features.
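
A minimal sketch of a generic quadratic (second-order) response model of the kind the abstract invokes, fit here by least squares on a low-dimensional stimulus projection; eigenvectors of the quadratic term with positive and negative eigenvalues stand in for excitatory and suppressive features. The simulated stimulus, dimensionality, and fitting procedure are assumptions and do not reproduce the study's CRF pipeline.

```python
import numpy as np

def fit_quadratic_model(X, r):
    """X: (n_samples, d) stimulus projections; r: (n_samples,) firing rates.
    Returns intercept a, linear weights w (d,), and symmetric matrix Q (d, d)."""
    n, d = X.shape
    iu = np.triu_indices(d)
    quad = np.stack([X[:, i] * X[:, j] for i, j in zip(*iu)], axis=1)
    design = np.hstack([np.ones((n, 1)), X, quad])
    coef, *_ = np.linalg.lstsq(design, r, rcond=None)
    a, w, q = coef[0], coef[1:d + 1], coef[d + 1:]
    Q = np.zeros((d, d))
    Q[iu] = q
    Q = (Q + Q.T) / 2.0            # symmetrize; off-diagonal terms were fit once
    return a, w, Q

# Hypothetical neuron with two excitatory and two suppressive quadratic axes.
rng = np.random.default_rng(3)
X = rng.normal(size=(4000, 6))
Q_true = np.diag([1.0, 0.5, 0.0, 0.0, -0.5, -1.0])
r = 5 + X @ rng.normal(size=6) + np.einsum("ni,ij,nj->n", X, Q_true, X)
a, w, Q = fit_quadratic_model(X, r)
eigvals, eigvecs = np.linalg.eigh(Q)
print("recovered quadratic eigenvalues:", np.round(eigvals, 2))
```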


2021 ◽  
Author(s):  
Isabelle Pia Maiditsch ◽  
Friedrich Ladich

Predation is an important ecological constraint that influences communication in animals. Fish respond to predators by adjusting their visual signalling behaviour, but responses in calling behaviour in the presence of a visually detected predator are largely unknown. We hypothesized that fish would reduce visual and acoustic signalling, including sound levels, and avoid escalating fights in the presence of a predator. To test this, we investigated dyadic contests between female croaking gouramis (Trichopsis vittata, Osphronemidae) in the presence and absence of a predator (Astronotus ocellatus, Cichlidae) in an adjoining tank. Agonistic behaviour in T. vittata consists of lateral (visual) displays, antiparallel circling and the production of croaking sounds, and may escalate to frontal displays. We analysed the number and duration of lateral display bouts; the number, duration, sound pressure level and dominant frequency of croaking sounds; and contest outcomes. The number and duration of lateral displays decreased significantly in predator trials as compared with no-predator trials. The total number of sounds per contest dropped in parallel, but no significant changes were observed in sound characteristics. In the presence of a predator, dyadic contests were decided or terminated during lateral displays and never escalated to frontal displays. The gouramis showed approaching behaviour towards the predator between lateral displays. This is the first study supporting the hypothesis that predators reduce visual and acoustic signalling in a vocal fish. Sound properties, in contrast, did not change. Decreased signalling and the lack of escalating contests reduce the fish's conspicuousness and thus predation threat.
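
A minimal sketch of the two acoustic measures analysed in this study, dominant frequency and sound pressure level, computed from a calibrated recording. The synthetic waveform and the calibration factor (pascals per recording unit) are hypothetical placeholders.

```python
import numpy as np

def dominant_frequency(wave, fs):
    """Frequency (Hz) of the spectral peak of a windowed recording."""
    spectrum = np.abs(np.fft.rfft(wave * np.hanning(len(wave))))
    freqs = np.fft.rfftfreq(len(wave), 1.0 / fs)
    return freqs[np.argmax(spectrum)]

def spl_db(wave, pa_per_unit, p_ref=20e-6):
    """RMS sound pressure level (dB re 20 uPa) of a calibrated recording."""
    pressure = wave * pa_per_unit
    return 20 * np.log10(np.sqrt(np.mean(pressure ** 2)) / p_ref)

fs = 44100
t = np.arange(0, 0.2, 1 / fs)
croak = 0.1 * np.sin(2 * np.pi * 1200 * t)        # synthetic 1.2-kHz stand-in "croak"
print(round(dominant_frequency(croak, fs)), "Hz,",
      round(spl_db(croak, pa_per_unit=1.0), 1), "dB SPL")
```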


2011 ◽  
Vol 106 (2) ◽  
pp. 500-514 ◽  
Author(s):  
Joseph W. Schumacher ◽  
David M. Schneider ◽  
Sarah M. N. Woolley

The majority of sensory physiology experiments have used anesthesia to facilitate the recording of neural activity. Current techniques allow researchers to study sensory function in the context of varying behavioral states. To reconcile results across multiple behavioral and anesthetic states, it is important to consider how and to what extent anesthesia plays a role in shaping neural response properties. The role of anesthesia has been the subject of much debate, but the extent to which sensory coding properties are altered by anesthesia has yet to be fully defined. In this study we asked how urethane, an anesthetic commonly used for avian and mammalian sensory physiology, affects the coding of complex communication vocalizations (songs) and simple artificial stimuli in the songbird auditory midbrain. We measured spontaneous and song-driven spike rates, spectrotemporal receptive fields, and neural discriminability from responses to songs in single auditory midbrain neurons. In the same neurons, we recorded responses to pure tone stimuli ranging in frequency and intensity. Finally, we assessed the effect of urethane on population-level representations of birdsong. Results showed that intrinsic neural excitability is significantly depressed by urethane but that spectral tuning, single neuron discriminability, and population representations of song do not differ significantly between unanesthetized and anesthetized animals.
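
A minimal sketch of one common way single-neuron discriminability of songs can be quantified: leave-one-out nearest-neighbor classification of single-trial responses (binned spike counts, correlation to trial-averaged templates). The simulated responses and the correlation-based classifier are assumptions; the study's exact discriminability measure is not reproduced here.

```python
import numpy as np

def percent_correct(responses):
    """responses: (n_songs, n_trials, n_bins) binned single-trial responses."""
    n_songs, n_trials, _ = responses.shape
    correct = 0
    for s in range(n_songs):
        for k in range(n_trials):
            trial = responses[s, k]
            best, best_r = None, -np.inf
            for s2 in range(n_songs):
                mask = np.ones(n_trials, bool)
                if s2 == s:
                    mask[k] = False                      # leave the test trial out
                template = responses[s2, mask].mean(axis=0)
                r = np.corrcoef(trial, template)[0, 1]
                if r > best_r:
                    best, best_r = s2, r
            correct += (best == s)
    return 100.0 * correct / (n_songs * n_trials)

# Hypothetical data: 8 songs, 10 trials each, 200 time bins.
rng = np.random.default_rng(4)
profiles = rng.poisson(3.0, size=(8, 1, 200))            # song-specific rate profiles
trials = rng.poisson(profiles.repeat(10, axis=1))         # noisy single trials
print(f"{percent_correct(trials):.1f}% correct (chance = 12.5%)")
```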


1999 ◽  
Vol 81 (2) ◽  
pp. 825-834 ◽  
Author(s):  
Iran Salimi ◽  
Thomas Brochier ◽  
Allan M. Smith

Neuronal activity in somatosensory cortex of monkeys using a precision grip. I. Receptive fields and discharge patterns. Three adolescent Macaca fascicularis monkeys weighing between 3.5 and 4 kg were trained to use a precision grip to grasp a metal tab mounted on a low-friction vertical track and to lift and hold it in a 12- to 25-mm position window for 1 s. The surface texture of the metal tab in contact with the fingers and the weight of the object could be varied. The activity of 386 single cells with cutaneous receptive fields contacting the metal tab was recorded in Brodmann's areas 3b, 1, 2, 5, and 7 of the somatosensory cortex. In this first of a series of papers, we describe three types of discharge pattern, the receptive-field properties, and the anatomic distribution of the neurons. The majority of the receptive fields were cutaneous and covered less than one digit, and a χ2 test did not reveal any significant differences in the Brodmann's areas representing the thumb and index finger. Two broad categories of discharge pattern were identified. The first category, dynamic cells, showed a brief increase in activity beginning near grip onset, which quickly subsided despite continued pressure applied to the receptive field. Some of the dynamic neurons responded to both skin indentation and release. The second category, static cells, had higher activity during the stationary holding phase of the task. These static neurons demonstrated varying degrees of sensitivity to the rate of pressure change on the skin. The percentage of dynamic versus static cells was about equal for areas 3b, 2, 5, and 7; only area 1 had a higher proportion of dynamic cells (76%). A third category contained cells with significant pregrip activity and included cortical cells with either dynamic or static discharge patterns. Cells in this category showed activity increases before movement in the absence of receptive-field stimulation, suggesting that, in addition to peripheral cutaneous input, they also receive strong excitation from movement-related regions of the brain.
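
A minimal sketch of how a unit might be labelled "dynamic" or "static" from its firing around grip onset versus the hold phase, together with a chi-square test of the dynamic/static split across cortical areas. The window boundaries, the 2:1 rate criterion, and the contingency counts are hypothetical, not the paper's values.

```python
import numpy as np
from scipy.stats import chi2_contingency

def classify_unit(spike_times, grip_onset, hold_start, hold_end, ratio=2.0):
    """Compare firing rate in a 200-ms onset window with the hold-phase rate."""
    t = np.asarray(spike_times)
    onset_rate = np.sum((t >= grip_onset) & (t < grip_onset + 0.2)) / 0.2
    hold_rate = np.sum((t >= hold_start) & (t < hold_end)) / (hold_end - hold_start)
    if onset_rate > ratio * max(hold_rate, 1e-9):
        return "dynamic"
    if hold_rate > ratio * max(onset_rate, 1e-9):
        return "static"
    return "unclassified"

# Hypothetical dynamic/static counts per area (rows: areas 3b, 1, 2, 5, 7).
counts = np.array([[20, 22], [38, 12], [18, 20], [11, 10], [9, 8]])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3f}")
```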


2005 ◽  
Vol 94 (4) ◽  
pp. 2970-2975 ◽  
Author(s):  
Rajiv Narayan ◽  
Ayla Ergün ◽  
Kamal Sen

Although auditory cortex is thought to play an important role in processing complex natural sounds such as speech and animal vocalizations, the specific functional roles of cortical receptive fields (RFs) remain unclear. Here, we study the relationship between a behaviorally important function, the discrimination of natural sounds, and the structure of cortical RFs. We examine this problem in the model system of songbirds, using a computational approach. First, we constructed model neurons based on the spectrotemporal RF (STRF), a widely used description of auditory cortical RFs. We focused on delayed inhibitory STRFs, a class of STRFs experimentally observed in primary auditory cortex (ACx) and its analog in songbirds (field L), which consist of an excitatory subregion and a delayed inhibitory subregion cotuned to a characteristic frequency. We quantified the discrimination of birdsongs by model neurons, examining both the dynamics and the temporal resolution of discrimination, using a recently proposed spike distance metric (SDM). We found that single model neurons with delayed inhibitory STRFs can discriminate accurately between songs. Discrimination improves dramatically when the temporal structure of the neural response at fine timescales is considered. When we compared discrimination by model neurons with and without the inhibitory subregion, we found that the presence of the inhibitory subregion can improve discrimination. Finally, we modeled a cortical microcircuit with delayed synaptic inhibition, a candidate mechanism underlying delayed inhibitory STRFs, and showed that blocking inhibition in this model circuit degrades discrimination.
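
A minimal sketch in the spirit of this model: an STRF with an excitatory lobe and a delayed inhibitory lobe at the same characteristic frequency, a rectified linear-nonlinear-Poisson response, and a spike distance between responses to two stimuli. The parameter values, the stand-in spectrograms, and the choice of the van Rossum distance (one common spike distance metric) are assumptions for illustration, not the paper's exact model or SDM.

```python
import numpy as np

def delayed_inhibitory_strf(n_freq=32, n_lags=40, cf_bin=16, dt_ms=1.0):
    """Excitatory lobe near 5 ms, inhibitory lobe at the same CF near 15 ms."""
    lags = np.arange(n_lags) * dt_ms
    freq_tuning = np.exp(-0.5 * ((np.arange(n_freq) - cf_bin) / 2.0) ** 2)
    excitation = np.exp(-0.5 * ((lags - 5.0) / 2.0) ** 2)
    inhibition = -0.7 * np.exp(-0.5 * ((lags - 15.0) / 3.0) ** 2)
    return np.outer(freq_tuning, excitation + inhibition)

def model_response(strf, spectrogram):
    """Half-wave-rectified linear drive of the STRF to a (n_freq, n_time) spectrogram."""
    n_freq, n_lags = strf.shape
    drive = np.zeros(spectrogram.shape[1])
    for t in range(n_lags, spectrogram.shape[1]):
        drive[t] = np.sum(strf * spectrogram[:, t - n_lags:t][:, ::-1])
    return np.clip(drive, 0, None)

def van_rossum_distance(spikes_a, spikes_b, dt_ms=1.0, tau_ms=10.0, n_time=1000):
    """Distance between two binned spike trains after exponential filtering."""
    kernel = np.exp(-np.arange(0, 5 * tau_ms, dt_ms) / tau_ms)
    fa = np.convolve(spikes_a, kernel)[:n_time]
    fb = np.convolve(spikes_b, kernel)[:n_time]
    return np.sqrt(np.sum((fa - fb) ** 2) * dt_ms / tau_ms)

rng = np.random.default_rng(5)
strf = delayed_inhibitory_strf()
song_a, song_b = rng.normal(size=(2, 32, 1000))          # stand-in spectrograms
resp_a = rng.poisson(0.05 * model_response(strf, song_a))
resp_b = rng.poisson(0.05 * model_response(strf, song_b))
print("van Rossum distance between responses:",
      round(van_rossum_distance(resp_a, resp_b), 2))
```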


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Zbyněk Bureš ◽  
Kateryna Pysanenko ◽  
Jiří Lindovský ◽  
Josef Syka

It is well known that auditory experience during early development shapes response properties of auditory cortex (AC) neurons, influencing, for example, tonotopical arrangement, response thresholds and strength, or frequency selectivity. Here, we show that rearing rat pups in a complex acoustically enriched environment leads to an increased reliability of responses of AC neurons, affecting both the rate and the temporal codes. For a repetitive stimulus, the neurons exhibit a lower spike count variance, indicating a more stable rate coding. At the level of individual spikes, the discharge patterns of individual neurons show a higher degree of similarity across stimulus repetitions. Furthermore, the neurons follow more precisely the temporal course of the stimulus, as manifested by improved phase-locking to temporally modulated sounds. The changes are persistent and present up to adulthood. The results document that besides basic alterations of receptive fields presented in our previous study, the acoustic environment during the critical period of postnatal development also leads to a decreased stochasticity and a higher reproducibility of neuronal spiking patterns.
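
A minimal sketch of three reliability measures of the kind described above, computed from a trials-by-bins spike-count matrix and a list of spike times: the spike-count Fano factor, the mean pairwise trial-to-trial correlation of response histograms, and the vector strength of phase-locking to a modulation frequency. All inputs are hypothetical.

```python
import numpy as np

def fano_factor(trial_counts):
    """Variance/mean of total spike counts across repetitions of one stimulus."""
    totals = trial_counts.sum(axis=1)
    return totals.var(ddof=1) / totals.mean()

def trial_similarity(trial_counts):
    """Mean pairwise Pearson correlation of single-trial response histograms."""
    c = np.corrcoef(trial_counts)
    iu = np.triu_indices_from(c, k=1)
    return np.nanmean(c[iu])

def vector_strength(spike_times, mod_freq):
    """Vector strength of phase-locking to mod_freq (Hz); 1 = perfect locking."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)
    return np.abs(np.mean(np.exp(1j * phases)))

rng = np.random.default_rng(6)
counts = rng.poisson(2.0, size=(20, 100))                 # 20 trials x 100 bins
spikes = np.repeat(np.arange(0, 1, 0.1), 20) + rng.normal(0, 0.005, 200)
print("Fano factor:", round(fano_factor(counts), 2))
print("trial-to-trial correlation:", round(trial_similarity(counts), 2))
print("vector strength at 10 Hz:", round(vector_strength(spikes, 10), 2))
```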


2000 ◽  
Vol 83 (4) ◽  
pp. 2300-2314 ◽  
Author(s):  
U. Koch ◽  
B. Grothe

To date, most physiological studies that investigated binaural auditory processing have addressed the topic rather exclusively in the context of sound localization. However, there is strong psychophysical evidence that binaural processing serves more than sound localization alone. This raises the question of how binaural processing of spatial cues interacts with cues important for feature detection. The temporal structure of a sound is one such feature important for sound recognition. As a first approach, we investigated the influence of binaural cues on temporal processing in the mammalian auditory system. Here, we present evidence that binaural cues, namely interaural intensity differences (IIDs), have profound effects on filter properties for stimulus periodicity of auditory midbrain neurons in the echolocating big brown bat, Eptesicus fuscus. Our data indicate that these effects are partially due to changes in the strength and timing of binaural inhibitory inputs. We measured filter characteristics for the periodicity (modulation frequency) of sinusoidally frequency modulated sounds (SFM) under different binaural conditions. As criteria, we used 50% filter cutoff frequencies of modulation transfer functions based on discharge rate as well as on synchronicity of discharge to the sound envelope. The binaural conditions were contralateral stimulation only, equal stimulation at both ears (IID = 0 dB), and stimulation more intense at the ipsilateral ear (IID = −20, −30 dB). In 32% of neurons, the range of modulation frequencies the neurons responded to changed considerably when comparing monaural and binaural (IID = 0 dB) stimulation. Moreover, in ∼50% of neurons the range of modulation frequencies was narrower when the ipsilateral ear was favored (IID = −20 dB) compared with equal stimulation at both ears (IID = 0 dB). In ∼10% of the neurons, synchronization differed when comparing different binaural cues. Blockade of the GABAergic or glycinergic inputs to the cells recorded from revealed that inhibitory inputs were at least partially responsible for the observed changes in SFM filtering; in 25% of the neurons, drug application abolished those changes. Experiments using electronically introduced interaural time differences showed that the strength of ipsilaterally evoked inhibition increased with increasing modulation frequency in one third of the cells tested. Thus glycinergic and GABAergic inhibition is at least one source of the observed interdependence between the temporal structure of a sound and spatial cues.
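
A minimal sketch of the 50% cutoff criterion applied to rate- and synchrony-based modulation transfer functions, as used above to characterize SFM filtering. The example spike counts, synchrony values, and the linear interpolation between tested modulation frequencies are hypothetical.

```python
import numpy as np

def mtf_50_cutoff(mod_freqs, responses):
    """Highest modulation frequency at which the response is still >= 50% of
    its maximum, interpolating linearly between tested frequencies."""
    responses = np.asarray(responses, float)
    half = 0.5 * responses.max()
    above = np.where(responses >= half)[0]
    last = above[-1]
    if last == len(responses) - 1:
        return mod_freqs[-1]                      # no cutoff within the tested range
    f1, f2 = mod_freqs[last], mod_freqs[last + 1]
    r1, r2 = responses[last], responses[last + 1]
    return f1 + (half - r1) * (f2 - f1) / (r2 - r1)

mod_freqs = np.array([10, 20, 40, 80, 160, 320])           # Hz, hypothetical test set
spike_counts = np.array([40, 52, 50, 30, 12, 3])            # rate MTF (spikes/stimulus)
sync = np.array([0.80, 0.80, 0.70, 0.40, 0.10, 0.05])       # synchrony MTF
print("rate 50% cutoff:", round(mtf_50_cutoff(mod_freqs, spike_counts), 1), "Hz")
print("sync 50% cutoff:", round(mtf_50_cutoff(mod_freqs, sync), 1), "Hz")
```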


1991 ◽  
Vol 66 (5) ◽  
pp. 1549-1563 ◽  
Author(s):  
J. J. Eggermont

1. With the use of two independent microelectrodes and multiunit spike separation, 26 neural pairs, 17 triplets, and 8 quadruplets were recorded in the auditory midbrain of the leopard frog, resulting in a total of 125 neural pairs. 2. Functional interrelationships between neurons were studied by analyzing 638 cross-coincidence histograms as functions of stimulus type, stimulus level, and estimated neuron distance. Significance criteria for correlograms were established on the basis of the distribution of extreme values in a large number of correlograms for nonsimultaneously recorded pairs. 3. Simultaneous recordings from three neurons that all showed significant neural pair correlations were analyzed with the use of the joint occurrence diagram, which displays the joint coincidences of the firings of two units (a and b) with the firings of the trigger unit (c). 4. It was found that 97.5% of the pairs showed a significant stimulus-induced correlation; neighboring neurons exhibited a stronger stimulus correlation (synchrony) than more distant neurons. 5. Positive neural interaction (75% due to shared excitatory input) was independent of neuron distance (taking into account that the estimated electrode distance in the present investigation was never greater than 300 microns) and occurred in 25% of the pairs investigated. About 25% of the positive neural correlations could be attributed to unidirectional excitation, the majority of which was found for single-electrode pairs. Negative neural correlation occurred in 8% of the pairs and, with one exception, was found only for neurons recorded on the same electrode. 6. Evidence for the presence of feed-forward and/or feedback inhibition was found. 7. There was a strong influence of stimulus type on stimulus correlation and on positive neural correlation, whereas stimulus intensity affected the stimulus correlation but not the neural correlation. 8. From the incidence of triplet correlations, it was concluded that the divergence of afferents onto midbrain neurons was limited; it was unlikely that more than three neurons were contacted by one afferent. In contrast, convergence of afferents onto torus semicircularis cells was widespread; 40-50% of the midbrain neurons were bimodally tuned and received input originating from the two auditory papillae. Convergence of fibers from the same papilla was also extensive. 9. Fast modulation of functional neural connectivity through the activity of other neurons was found, although this was probably not the result of actual changes in synaptic strength but of synchronized changes in firing rate.
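
A minimal sketch of a cross-coincidence histogram for a simultaneously recorded pair, with the stimulus-induced component estimated by a shift predictor built from non-simultaneous trial pairings; the excess over the predictor stands in for the neural-interaction component. The bin width, lag window, spike data, and the shift-predictor approach (rather than the paper's extreme-value criterion) are assumptions for illustration.

```python
import numpy as np

def cross_correlogram(spikes_a, spikes_b, bin_s=0.002, max_lag_s=0.05):
    """Histogram of spike-time differences (unit B minus unit A) within +/- max_lag_s."""
    lags = (np.asarray(spikes_b)[None, :] - np.asarray(spikes_a)[:, None]).ravel()
    lags = lags[np.abs(lags) <= max_lag_s]
    edges = np.arange(-max_lag_s, max_lag_s + bin_s, bin_s)
    hist, _ = np.histogram(lags, bins=edges)
    return hist, edges[:-1] + bin_s / 2

def shift_predictor(trials_a, trials_b, **kw):
    """Average correlogram of unit A on trial i against unit B on trial i+1:
    removes correlation due to neural interaction, leaving stimulus locking."""
    n = len(trials_a)
    hists = [cross_correlogram(trials_a[i], trials_b[(i + 1) % n], **kw)[0]
             for i in range(n)]
    return np.mean(hists, axis=0)

rng = np.random.default_rng(7)
trials_a = [np.sort(rng.uniform(0, 1, 60)) for _ in range(20)]
# Hypothetical unit B: half of its spikes follow unit A by ~3 ms (shared drive).
trials_b = [np.sort(np.concatenate([a[:30] + 0.003 + rng.normal(0, 0.001, 30),
                                    rng.uniform(0, 1, 30)])) for a in trials_a]
raw = np.mean([cross_correlogram(a, b)[0] for a, b in zip(trials_a, trials_b)], axis=0)
excess = raw - shift_predictor(trials_a, trials_b)       # neural-interaction component
print("peak excess coincidences near +3 ms:", round(excess.max(), 2))
```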

