Imprinting on time-structured acoustic stimuli in ducklings

2021 ◽  
Vol 17 (9) ◽  
Author(s):  
Tiago Monteiro ◽  
Tom Hart ◽  
Alex Kacelnik

Filial imprinting is a dedicated learning process that lacks explicit reinforcement. The phenomenon itself is narrowly heritably canalized, but its content, the representation of the parental object, reflects the circumstances of the newborn. Imprinting has recently been shown to be even more subtle and complex than previously envisaged, since ducklings and chicks are now known to select and represent for later generalization abstract conceptual properties of the objects they perceive as neonates, including movement pattern, heterogeneity and inter-component relationships of same or different. Here, we investigate day-old Mallard (Anas platyrhynchos) ducklings’ bias towards imprinting on acoustic stimuli made from mallards’ vocalizations as opposed to white noise, whether they imprint on the temporal structure of brief acoustic stimuli of either kind, and whether they generalize timing information across the two sounds. Our data are consistent with a strong innate preference for natural sounds, but do not reliably establish sensitivity to temporal relations. This fits with the view that imprinting includes the establishment of representations of both primary percepts and selective abstract properties of their early perceptual input, meshing together genetically transmitted predispositions with active selection and processing of the perceptual input.

2021 ◽  
Author(s):  
Tiago Monteiro ◽  
Tom Hart ◽  
Alex Kacelnik

Filial imprinting is a dedicated learning process that lacks explicit reinforcement. The phenomenon itself is narrowly heritably canalized, but its content, the representation of the parental object, reflects the circumstances of the newborn. Imprinting has recently been shown to be even more subtle and complex than previously envisaged, since ducklings and chicks are now known to select and represent for later generalization abstract conceptual properties of the objects they perceive as neonates, including movement pattern, heterogeneity, and inter-component relationships of same or different. Here we investigate whether day-old Mallard ducklings (Anas platyrhynchos) imprint on the temporal separation between duos of brief acoustic stimuli, and whether they generalize such timing information to novel sound types. Subjects did discriminate temporal structure when imprinted and tested on natural duck calls, but not when white noise was used for imprinting or testing. Our data confirm that imprinting includes the establishment of neural representations of both primary percepts and abstract properties of candidate objects, meshing together genetically transmitted prior knowledge with selected perceptual input.
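A minimal sketch (not the authors' stimulus code) of how a time-structured "duo" stimulus of the kind described above could be constructed: two brief sound bursts separated by a chosen silent interval, with either white noise or a natural call snippet as the carrier. The burst duration, sampling rate, intervals, and the load_wav helper are illustrative assumptions.

```python
import numpy as np

def make_duo(carrier, inter_burst_s, fs=44100):
    """Concatenate two copies of `carrier` with `inter_burst_s` seconds of silence between."""
    gap = np.zeros(int(inter_burst_s * fs))
    return np.concatenate([carrier, gap, carrier])

fs = 44100
burst_len = int(0.2 * fs)                        # 200 ms bursts (assumed duration)
white_noise = 0.1 * np.random.randn(burst_len)   # white-noise carrier
# a natural-call carrier would instead be loaded from a recording, e.g.:
# call = load_wav("mallard_call.wav")            # hypothetical helper

short_duo = make_duo(white_noise, inter_burst_s=0.5, fs=fs)  # imprinting interval (assumed)
long_duo  = make_duo(white_noise, inter_burst_s=1.5, fs=fs)  # contrasting test interval (assumed)
```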


2005 ◽  
Vol 94 (4) ◽  
pp. 2970-2975 ◽  
Author(s):  
Rajiv Narayan ◽  
Ayla Ergün ◽  
Kamal Sen

Although auditory cortex is thought to play an important role in processing complex natural sounds such as speech and animal vocalizations, the specific functional roles of cortical receptive fields (RFs) remain unclear. Here, we study the relationship between a behaviorally important function, the discrimination of natural sounds, and the structure of cortical RFs. We examine this problem in the model system of songbirds, using a computational approach. First, we constructed model neurons based on the spectrotemporal receptive field (STRF), a widely used description of auditory cortical RFs. We focused on delayed inhibitory STRFs, a class of STRFs experimentally observed in primary auditory cortex (ACx) and its analog in songbirds (field L), which consist of an excitatory subregion and a delayed inhibitory subregion cotuned to a characteristic frequency. We quantified the discrimination of birdsongs by model neurons, examining both the dynamics and temporal resolution of discrimination, using a recently proposed spike distance metric (SDM). We found that single model neurons with delayed inhibitory STRFs can discriminate accurately between songs. Discrimination improves dramatically when the temporal structure of the neural response at fine timescales is considered. When we compared discrimination by model neurons with and without the inhibitory subregion, we found that the presence of the inhibitory subregion can improve discrimination. Finally, we modeled a cortical microcircuit with delayed synaptic inhibition, a candidate mechanism underlying delayed inhibitory STRFs, and showed that blocking inhibition in this model circuit degrades discrimination.
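An illustrative sketch (not the authors' model) of a delayed-inhibitory STRF of the kind described: an excitatory lobe followed, at the same characteristic frequency, by a delayed inhibitory lobe, applied to a spectrogram to produce a half-rectified firing rate. All numerical parameters are placeholder assumptions.

```python
import numpy as np

freqs = np.linspace(1, 8, 40)        # spectrogram frequency axis in kHz (assumed)
lags  = np.arange(0, 0.05, 0.001)    # 0-50 ms of stimulus history, 1 ms bins (assumed)
cf, bw = 4.0, 0.5                    # characteristic frequency and bandwidth in kHz (assumed)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

# excitatory subregion at short latency; inhibitory subregion at the same
# characteristic frequency but delayed by roughly 15 ms
excite  = np.outer(gauss(freqs, cf, bw), gauss(lags, 0.005, 0.003))
inhibit = np.outer(gauss(freqs, cf, bw), gauss(lags, 0.020, 0.005))
strf = excite - 0.8 * inhibit        # relative inhibitory strength is arbitrary

def model_response(spectrogram):
    """Half-rectified linear drive: correlate the STRF with the recent spectrogram
    history; `spectrogram` is assumed to have shape (len(freqs), n_time_bins)."""
    n_lags = strf.shape[1]
    drive = np.array([np.sum(strf * spectrogram[:, t - n_lags:t])
                      for t in range(n_lags, spectrogram.shape[1])])
    return np.maximum(drive, 0.0)
```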


2016 ◽  
Vol 113 (9) ◽  
pp. 2508-2513 ◽  
Author(s):  
Melville J. Wohlgemuth ◽  
Cynthia F. Moss

This study investigated auditory stimulus selectivity in the midbrain superior colliculus (SC) of the echolocating bat, an animal that relies on hearing to guide its orienting behaviors. Multichannel, single-unit recordings were taken across laminae of the midbrain SC of the awake, passively listening big brown bat, Eptesicus fuscus. Species-specific frequency-modulated (FM) echolocation sound sequences with dynamic spectrotemporal features served as acoustic stimuli along with artificial sound sequences matched in bandwidth, amplitude, and duration but differing in spectrotemporal structure. Neurons in dorsal sensory regions of the bat SC responded selectively to elements within the FM sound sequences, whereas neurons in ventral sensorimotor regions showed broad response profiles to natural and artificial stimuli. Moreover, a generalized linear model (GLM) constructed on responses in the dorsal SC to artificial linear FM stimuli failed to predict responses to natural sounds and vice versa, but the GLM produced accurate response predictions in ventral SC neurons. This result suggests that auditory selectivity in the dorsal extent of the bat SC arises through nonlinear mechanisms, which extract species-specific sensory information. Importantly, auditory selectivity appeared only in responses to stimuli containing the natural statistics of acoustic signals used by the bat for spatial orientation—sonar vocalizations—offering support for the hypothesis that sensory selectivity enables rapid species-specific orienting behaviors. The results of this study are the first, to our knowledge, to show auditory spectrotemporal selectivity to natural stimuli in SC neurons and serve to inform a more general understanding of mechanisms guiding sensory selectivity for natural, goal-directed orienting behaviors.
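A generic sketch of the kind of GLM analysis described, assuming a Poisson GLM that maps recent stimulus history to spike counts, trained on one stimulus class and tested on the other. The feature construction, regularization, and scoring measure are simplifying assumptions, not the paper's exact pipeline.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor

def history_matrix(stimulus, n_lags=20):
    """Stack the previous n_lags stimulus samples as predictors for each time bin."""
    X = np.stack([np.roll(stimulus, k) for k in range(1, n_lags + 1)], axis=1)
    X[:n_lags] = 0.0   # discard wrap-around from np.roll at the start
    return X

def fit_and_score(stim_train, counts_train, stim_test, counts_test, n_lags=20):
    glm = PoissonRegressor(alpha=1e-2, max_iter=500)
    glm.fit(history_matrix(stim_train, n_lags), counts_train)
    pred = glm.predict(history_matrix(stim_test, n_lags))
    # correlation between predicted and observed counts as a simple accuracy index
    return np.corrcoef(pred, counts_test)[0, 1]

# e.g. train on responses to artificial FM sweeps and test on natural sonar sequences
# (and vice versa); a large drop in the cross-class score would point to nonlinear,
# stimulus-specific processing (variable names here are hypothetical):
# score = fit_and_score(artificial_stim, artificial_counts, natural_stim, natural_counts)
```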


1991 ◽  
Vol 158 (1) ◽  
pp. 391-410 ◽  
Author(s):  
ANDREAS STUMPNER ◽  
BERNHARD RONACHER

1. Auditory interneurones originating in the metathoracic ganglion of females of the grasshopper Chorthippus biguttulus can be classified as local (SN), bisegmental (BSN), T-shaped (TN) and ascending neurones (AN). A comparison of branching patterns and physiological properties indicates that auditory interneurones of C. biguttulus are homologous with those described for the locust.
2. Eighteen types of auditory neurones are morphologically characterized on the basis of Lucifer Yellow staining. All of them branch bilaterally in the metathoracic ganglion. Smooth dendrites, from which postsynaptic potentials (PSPs) can be recorded, predominate on the side ipsilateral to the soma. If ‘beaded’ branches exist, they predominate contralaterally. The ascending axon runs contralaterally to the soma, except in T-fibres.
3. Auditory receptors respond tonically. The dynamic range of their intensity-response curve covers 20–25 dB. Local, bisegmental and T-shaped neurones are most sensitive to stimulation ipsilateral to the soma. The responses of SN1 and TN1 to white-noise stimuli are similar to those of receptors, while phasic-tonic responses are found in SN4, SN5, SN7 and BSN1. The bisegmental neurones receive side-dependent inhibition that corresponds to a 20–30 dB attenuation. One local element (SN6) is predominantly inhibited by acoustic stimuli.
4. Ascending neurones are more sensitive to contralateral stimulation (i.e. on their axon side). Only one of them (AN6) responds tonically to white-noise stimuli at all intensities; others exhibit a tonic discharge only at low or at high intensities. One neurone (AN12) responds with a phasic burst over a wide intensity range. The most directional neurones (AN1, AN2) are excited by contralateral stimuli and (predominantly) inhibited by ipsilateral stimuli. Three ascending neurones (AN13–AN15) are spontaneously active and are inhibited by acoustic stimuli.
5. All auditory interneurones, except SN5, are more sensitive to pure tones below 10 kHz than to ultrasound.


2009 ◽  
Vol 102 (2) ◽  
pp. 1086-1091 ◽  
Author(s):  
Patrick Sabourin ◽  
Gerald S. Pollack

Bursts of action potentials in sensory interneurons are believed to signal the occurrence of particularly salient stimulus features. Previous work showed that bursts in an identified, ultrasound-tuned interneuron (AN2) of the cricket Teleogryllus oceanicus code for conspicuous increases in amplitude of an ultrasound stimulus, resulting in behavioral responses that are interpreted as avoidance of echolocating bats. We show that the primary sensory neurons that inform AN2 about high-frequency acoustic stimuli also produce bursts. As is the case for AN2, bursts in sensory neurons perform better as feature detectors than isolated, non-burst spikes. Bursting is temporally correlated between sensory neurons, suggesting that upon the occurrence of a salient stimulus feature AN2 will receive strong synaptic input, in the form of coincident bursts from several sensory neurons, and that this might result in bursting in AN2. Our results show that an important feature of the temporal structure of interneuron spike trains can be established at the earliest possible level of sensory processing, i.e., that of the primary sensory neuron.
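A minimal sketch of one common way to operationalize bursts and burst coincidence, assuming an inter-spike-interval criterion; the threshold, minimum spike count, and coincidence window are arbitrary illustrative values, not the study's criteria.

```python
import numpy as np

def find_bursts(spike_times, isi_thresh=0.005, min_spikes=2):
    """Return (start, end) times of spike groups whose inter-spike intervals stay below isi_thresh."""
    spike_times = np.asarray(spike_times)
    bursts, start = [], 0
    for i in range(1, len(spike_times) + 1):
        # close the current group at the end of the train or when the ISI exceeds the threshold
        if i == len(spike_times) or spike_times[i] - spike_times[i - 1] > isi_thresh:
            if i - start >= min_spikes:
                bursts.append((spike_times[start], spike_times[i - 1]))
            start = i
    return bursts

def burst_coincidence(bursts_a, bursts_b, window=0.010):
    """Fraction of bursts in A whose onset falls within `window` seconds of a burst onset in B."""
    onsets_b = np.array([b[0] for b in bursts_b])
    hits = sum(np.any(np.abs(onsets_b - a[0]) <= window) for a in bursts_a)
    return hits / max(len(bursts_a), 1)
```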


2013 ◽  
Vol 280 (1769) ◽  
pp. 20131747 ◽  
Author(s):  
Alison B. Duncan ◽  
Andrew Gonzalez ◽  
Oliver Kaltz

Environmental fluctuations are important for parasite spread and persistence. However, the effects of the spatial and temporal structure of environmental fluctuations on host–parasite dynamics are not well understood. Temporal fluctuations can be random but positively autocorrelated, such that the environment is similar to the recent past (red noise), or random and uncorrelated with the past (white noise). We imposed red or white temporal temperature fluctuations on experimental metapopulations of Paramecium caudatum, experiencing an epidemic of the bacterial parasite Holospora undulata. Metapopulations (two subpopulations linked by migration) experienced fluctuations between stressful (5°C) and permissive (23°C) conditions following red or white temporal sequences. Spatial variation in temperature fluctuations was implemented by exposing subpopulations to the same (synchronous temperatures) or different (asynchronous temperatures) temporal sequences. Red noise, compared with white noise, enhanced parasite persistence. Despite this, red noise coupled with asynchronous temperatures allowed infected host populations to maintain sizes equivalent to uninfected populations. It is likely that this occurs because subpopulations in permissive conditions rescue declining subpopulations in stressful conditions. We show how patterns of temporal and spatial environmental fluctuations can impact parasite spread and host population abundance. We conclude that accurate prediction of parasite epidemics may require realistic models of environmental noise.
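A sketch, under assumed parameters, of how red (positively autocorrelated) versus white (uncorrelated) temperature schedules over the two experimental temperatures could be generated; the AR(1) coefficient, median thresholding rule, and sequence length are illustrative choices, not the study's procedure.

```python
import numpy as np

def noise_sequence(n_steps, kind="white", phi=0.7, rng=None):
    """Return a schedule of 5 / 23 degC assignments with red or white temporal structure."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n_steps)
    for t in range(1, n_steps):
        innovation = rng.normal()
        # red noise: each value depends on the previous one; white noise: independent draws
        x[t] = phi * x[t - 1] + innovation if kind == "red" else innovation
    # threshold the latent series at its median to get a binary 5 / 23 degC schedule
    return np.where(x > np.median(x), 23.0, 5.0)

red_schedule   = noise_sequence(40, kind="red")    # similar to the recent past
white_schedule = noise_sequence(40, kind="white")  # uncorrelated with the past
# asynchronous subpopulations would receive independently drawn schedules;
# synchronous subpopulations would share the same schedule.
```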


2015 ◽  
Author(s):  
Maria Botcharova ◽  
Luc Berthouze ◽  
Matthew J Brookes ◽  
Gareth R Barnes ◽  
Simon F Farmer

The capacity of the human brain to interpret and respond to multiple temporal scales in its surroundings suggests that its internal interactions must also be able to operate over a broad temporal range. In this paper, we utilise a recently introduced method for characterising the rate of change of the phase difference between MEG signals and use it to study the temporal structure of the phase interactions between MEG recordings from the left and right motor cortices during rest and during a finger-tapping task. We use the Hilbert transform to estimate moment-to-moment fluctuations of the phase difference between signals. After confirming the presence of scale-invariance we estimate the Hurst exponent using detrended fluctuation analysis (DFA). An exponent of >0.5 is indicative of long-range temporal correlations (LRTCs) in the signal. We find that LRTCs are present in the α/μ and β frequency bands of resting state MEG data. We demonstrate that finger movement disrupts these LRTCs, producing a phase relationship with a structure similar to that of Gaussian white noise. The results are validated by applying the same analysis to data with Gaussian white noise phase difference, recordings from an empty scanner and phase-shuffled time series. We interpret the findings through comparison of the results with those we obtained from an earlier study during which we adopted this method to characterise phase relationships within a Kuramoto model of oscillators in its sub-critical, critical and super-critical synchronisation states. We find that the resting state MEG from left and right motor cortices shows moment-to-moment fluctuations of phase difference with a similar temporal structure to that of a system of Kuramoto oscillators just prior to its critical level of coupling, and that finger tapping moves the system away from this pre-critical state towards a more random state.
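A compact sketch of the analysis chain described: the instantaneous phase difference between two signals via the Hilbert transform, followed by detrended fluctuation analysis (DFA) of its moment-to-moment increments to estimate the scaling exponent. The window sizes and the MEG variable names in the usage comment are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_difference(x, y):
    """Instantaneous phase difference between two band-limited signals."""
    return np.angle(hilbert(x)) - np.angle(hilbert(y))

def dfa_exponent(signal, scales=(16, 32, 64, 128, 256)):
    """DFA scaling exponent of `signal` (about 0.5 for white noise, >0.5 for LRTCs)."""
    profile = np.cumsum(signal - np.mean(signal))
    flucts = []
    for s in scales:
        rms = []
        for w in range(len(profile) // s):
            seg = profile[w * s:(w + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        flucts.append(np.mean(rms))
    slope, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return slope

# hypothetical usage with two MEG channel arrays:
# diff  = phase_difference(left_motor_meg, right_motor_meg)
# alpha = dfa_exponent(np.diff(diff))   # >0.5 indicates long-range temporal correlations
```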


2021 ◽  
Vol 15 ◽  
Author(s):  
Ruud J. R. Den Hartigh ◽  
Sem Otten ◽  
Zuzanna M. Gruszczynska ◽  
Yannick Hill

Complex systems typically demonstrate a mixture of regularity and flexibility in their behavior, which would make them adaptive. At the same time, adapting to perturbations is a core characteristic of resilience. The first aim of the current research was therefore to test the possible relation between complexity and resilient motor performance (i.e., performance while being perturbed). The second aim was to test whether complexity and resilient performance improve through differential learning. To address our aims, we designed two parallel experiments involving a motor task, in which participants moved a stick with their non-dominant hand along a slider. Participants could score points by moving a cursor as fast and accurately as possible between two boxes, positioned on the right and left sides of the screen in front of them. In a first session, we determined the complexity by analyzing the temporal structure of variation in the box-to-box movement intervals with a Detrended Fluctuation Analysis. Then, we introduced perturbations to the task: We altered the tracking speed of the cursor relative to the stick-movements briefly (i.e., 4 s) at intervals of 1 min (Experiment 1), or we induced a prolonged change of the tracking speed each minute (Experiment 2). Subsequently, participants had three sessions of either classical learning or differential learning. Participants in the classical learning condition were trained to perform the ideal movement pattern, whereas those in the differential learning condition had to perform additional and irrelevant movements. Finally, we conducted a post-test that was the same as the first session. In both experiments, results showed moderate positive correlations between complexity and points scored (i.e., box touches) in the perturbation period of the first session. Across the two experiments, only differential learning led to a higher complexity index (i.e., more prominent patterns of pink noise) from baseline to post-test. Unexpectedly, the classical learning group improved more in their resilient performance than the differential learning group. Together, this research provides empirical support for the relation between complexity and resilience, and between complexity and differential learning in human motor performance, which should be examined further.
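A short sketch contrasting pink (1/f) and white noise, the two temporal structures the complexity index distinguishes; the spectral-shaping construction and the slope check are generic illustrations, not the study's DFA pipeline.

```python
import numpy as np

def pink_noise(n, rng=None):
    """Generate approximate 1/f (pink) noise by spectral shaping of white noise."""
    rng = rng or np.random.default_rng()
    spectrum = np.fft.rfft(rng.normal(size=n))
    freqs = np.fft.rfftfreq(n)
    freqs[0] = freqs[1]                      # avoid division by zero at the DC bin
    return np.fft.irfft(spectrum / np.sqrt(freqs), n)

def spectral_slope(x):
    """Slope of the log-log power spectrum: roughly -1 for pink noise, 0 for white noise."""
    freqs = np.fft.rfftfreq(len(x))[1:]
    power = np.abs(np.fft.rfft(x))[1:] ** 2
    slope, _ = np.polyfit(np.log(freqs), np.log(power), 1)
    return slope

rng = np.random.default_rng(0)
print(spectral_slope(pink_noise(4096, rng)))   # close to -1 (pink)
print(spectral_slope(rng.normal(size=4096)))   # close to 0 (white)
```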


2020 ◽  
Vol 11 ◽  
Author(s):  
Miki Uetsuki ◽  
Junji Watanabe ◽  
Kazushi Maruya

Recently, dynamic text presentation, such as scrolling text, has become widely used. In conventional dynamic text presentation, texts are typically presented at a constant timing and speed. However, dynamic presentation also enables visually presented texts to convey timing information, such as prosody, and this timing might influence the impression of reading. In this paper, we examined this possibility by focusing on the temporal features of digital text in which texts are presented sequentially with varying speed, duration, and timing. We call this “textual prosody.” We used three types of textual prosody: “Recorded,” “Shuffled,” and “Constant.” Recorded prosody reproduces a reader’s reading, with pauses and varying speed that simulate talking. Shuffled prosody randomly shuffles the time course of speed and pauses of the recorded type. Constant prosody has a constant presentation speed and provides no timing information. Experiment 1 examined the effect of textual prosody on people with normal hearing. Participants silently read dynamic text with textual prosody and rated their impressions of the texts. The results showed that readers with normal hearing preferred recorded textual prosody, and constant prosody at the optimum speed (6 letters/second). Recorded prosody was also preferred at a low presentation speed. Experiment 2 examined the characteristics of textual prosody using an articulatory suppression paradigm. The results showed that some textual prosody was stored in the articulatory loop despite being presented visually. In Experiment 3, we examined the effect of textual prosody on readers with hearing loss. The results demonstrated that readers with hearing loss had positive impressions at relatively low presentation speeds when recorded prosody was presented. The results of this study indicate that temporal structure is processed regardless of whether the input is visual or auditory. Moreover, these results suggest that textual prosody can enrich reading not only for people with normal hearing but also for those with hearing loss, regardless of acoustic experience.
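A minimal sketch of how the three presentation-timing conditions could be represented as per-segment duration schedules; the example durations and the choice to match total time in the constant condition are assumptions for illustration, not the study's implementation.

```python
import numpy as np

# per-segment display durations (seconds) recorded from a reader's reading (made-up values)
recorded = np.array([0.12, 0.30, 0.15, 0.55, 0.10, 0.22])

rng = np.random.default_rng(1)
shuffled = rng.permutation(recorded)                 # same durations, random temporal order
constant = np.full(len(recorded), recorded.mean())   # fixed rate, total time matched (assumed)

# each schedule would drive when the next text segment appears on screen
```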


2017 ◽  
Author(s):  
Vincent Jacob ◽  
Christelle Monsempès ◽  
Jean-Pierre Rospars ◽  
Jean-Baptiste Masson ◽  
Philippe Lucas

Long-distance olfactory search behaviors depend on odor detection dynamics. Due to turbulence, olfactory signals travel as bursts of variable concentration and spacing and are characterized by long-tail distributions of odor/no-odor events, challenging the computing capacities of olfactory systems. How animals encode complex olfactory scenes to track the plume far from the source remains unclear. Here we focus on the coding of the plume's temporal dynamics in moths. We compare responses of olfactory receptor neurons (ORNs) and antennal lobe projection neurons (PNs) to sequences of pheromone stimuli either with white-noise patterns or with realistic turbulent temporal structures simulating a large range of distances (8 to 64 m) from the odor source. For the first time, we analyze what information is extracted by the olfactory system at large distances from the source. Neuronal responses are analyzed using linear–nonlinear models fitted with white-noise stimuli and used for predicting responses to turbulent stimuli. We found that neuronal firing rate is less correlated with the dynamic odor time course when distance from the source increases, because of improper coding during the long odor and no-odor events that characterize large distances. Rapid adaptation during long puffs does not, however, preclude the detection of puff transitions in PNs. Individual PNs, but not individual ORNs, encode the onset and offset of odor puffs for any temporal structure of stimuli. A higher spontaneous firing rate coupled with an inhibition phase at the end of PN responses contributes to this coding property. This allows PNs to decode the temporal structure of the odor plume at any distance from the source, an essential piece of information moths can use in their tracking behavior.

Author Summary: Long-distance olfactory search is a difficult task because atmospheric turbulence erases global gradients and makes the plume discontinuous. The dynamics of odor detections is the sole information about the position of the source. Male moths successfully track female pheromone plumes at large distances. Here we show that the moth olfactory system encodes olfactory scenes simulating variable distances from the odor source by characterizing puff onsets and offsets. A single projection neuron is sufficient to provide an accurate representation of the dynamic pheromone time course at any distance from the source, while this information seems to be encoded at the population level in olfactory receptor neurons.
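A sketch of a generic linear-nonlinear (LN) model of the kind described: a linear filter estimated from white-noise stimulation by rate-weighted reverse correlation, an empirical static nonlinearity, and prediction of the response to a different stimulus. Binning, normalization, and the variable names in the usage comment are simplifying assumptions, not the authors' fitting procedure.

```python
import numpy as np

def estimate_filter(stimulus, firing_rate, n_lags=50):
    """Rate-weighted average of the stimulus history (spike-triggered-average style)."""
    filt = np.zeros(n_lags)
    for t in range(n_lags, len(stimulus)):
        filt += firing_rate[t] * stimulus[t - n_lags:t]
    return filt / firing_rate[n_lags:].sum()

def linear_drive(stimulus, filt):
    """Project the recent stimulus history onto the filter at each time bin."""
    n_lags = len(filt)
    drive = np.zeros(len(stimulus))
    for t in range(n_lags, len(stimulus)):
        drive[t] = np.dot(filt, stimulus[t - n_lags:t])
    return drive

def static_nonlinearity(drive, rate, n_bins=20):
    """Empirical drive-to-rate mapping, binned over the training data."""
    edges = np.quantile(drive, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.digitize(drive, edges) - 1, 0, n_bins - 1)
    lut = np.array([rate[idx == b].mean() if np.any(idx == b) else 0.0
                    for b in range(n_bins)])
    return edges, lut

# hypothetical usage: fit filt and (edges, lut) on white-noise data, then predict the
# response to a turbulent stimulus as
# lut[np.clip(np.digitize(linear_drive(turbulent_stim, filt), edges) - 1, 0, len(lut) - 1)]
```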

