temporal cues
Recently Published Documents

TOTAL DOCUMENTS: 300 (FIVE YEARS: 79)
H-INDEX: 38 (FIVE YEARS: 5)

2022 ◽  
Vol 12 ◽  
Author(s):  
Anjali Mahilkar ◽  
Pavithra Venkataraman ◽  
Akshat Mall ◽  
Supreet Saini

Environmental cues in an ecological niche are often temporal in nature. For instance, in temperate climates, temperature is higher during the day than at night. In response to such temporal cues, bacteria are known to exhibit anticipatory regulation, triggering a response to a cue that has not yet appeared. Such an anticipatory response is known to enhance Darwinian fitness and hence is likely an important feature of regulatory networks in microorganisms. However, the conditions under which an anticipatory response evolves as an adaptive response are not known. In this work, we develop a quantitative model to study the response of a population to two temporal environmental cues and predict the variables likely to be important for the evolution of anticipatory regulatory responses. We follow this with experimental evolution of Escherichia coli in alternating environments of rhamnose and paraquat for ∼850 generations. We demonstrate that growth in this cyclical environment leads to the evolution of anticipatory regulation: as a result, pre-exposure to rhamnose leads to greater fitness in the paraquat environment. Genome sequencing reveals that this anticipatory regulation is encoded via mutations in global regulators. Overall, our study contributes to the understanding of how the environment shapes the topology of regulatory networks in an organism.
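The core fitness trade-off behind anticipatory regulation can be illustrated with a toy calculation. This is not the authors' model: the growth rates, expression cost, and induction lag below are made-up assumptions, chosen only to show how pre-induction in one environment can pay off across a full cycle of alternating environments.

```python
# Toy sketch (not the published model): a population grows through repeated
# cycles of environments A and B. An "anticipatory" genotype pre-induces the
# B response while still in A, paying a small expression cost in A but
# avoiding an induction lag after the switch to B.

def final_log_size(anticipatory, cycles=10, steps_per_env=20):
    log_n = 0.0
    for _ in range(cycles):
        # Environment A: anticipators pay a small expression cost.
        r_a = 0.10 - (0.01 if anticipatory else 0.0)
        log_n += r_a * steps_per_env
        # Environment B: non-anticipators spend the first steps inducing
        # the response and do not grow during that lag.
        lag = 0 if anticipatory else 5
        log_n += 0.10 * (steps_per_env - lag)
    return log_n

print(final_log_size(True) > final_log_size(False))  # → True
```

With these assumed parameters the lag cost (5 lost growth steps per cycle) outweighs the anticipation cost, so the anticipatory genotype wins; shrink the lag or raise the cost and the ordering flips, which is exactly the kind of condition-dependence the abstract says the model probes.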


2021 ◽  
Author(s):  
Laurianne Cabrera ◽  
Bonnie K. Lau

The processing of auditory temporal information is important for the extraction of voice pitch and linguistic information, as well as the overall temporal structure of speech. However, many aspects of its early development remain poorly understood. This paper reviews the development of different aspects of auditory temporal processing during the first year of life, when infants are acquiring their native language. First, potential mechanisms of neural immaturity are discussed in the context of neurophysiological studies. Next, what is known about infant auditory capabilities is considered, with a focus on psychophysical studies that use non-speech stimuli to investigate the perception of temporal fine structure and envelope cues. This is followed by a review of studies involving speech stimuli, including those that present vocoded signals as a method of degrading the spectro-temporal information available to infant listeners. Finally, we highlight key findings from the cochlear implant literature that illustrate the importance of temporal cues in speech perception.


2021 ◽  
Vol 5 ◽  
pp. 4
Author(s):  
Kyle Jasmin ◽  
Frederic Dick ◽  
Adam Taylor Tierney

Prosody can be defined as the rhythm and intonation patterns spanning words, phrases, and sentences. Accurate perception of prosody is an important component of many aspects of language processing, such as parsing grammatical structures, recognizing words, and determining where emphasis may be placed. Prosody perception is important for language acquisition and can be impaired in language-related developmental disorders. However, existing assessments of prosodic perception suffer from shortcomings: they are unsuitable for use with typically developing adults due to ceiling effects, and they do not allow the investigator to distinguish the unique contributions of individual acoustic features such as pitch and temporal cues. Here we present the Multi-Dimensional Battery of Prosody Perception (MBOPP), a novel tool for the assessment of prosody perception. It consists of two subtests: Linguistic Focus, which measures the ability to hear emphasis or sentential stress, and Phrase Boundaries, which measures the ability to hear where in a compound sentence one phrase ends and another begins. Perception of individual acoustic dimensions (Pitch and Duration) can be examined separately, and test difficulty can be precisely calibrated by the experimenter, because stimuli were created using a continuous voice morph space. We present validation analyses from a sample of 59 individuals and discuss how the battery might be deployed to examine perception of prosody in various populations.
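The idea of a morph space in which pitch and duration vary independently can be sketched numerically. The endpoint values and step count below are hypothetical placeholders, not MBOPP's actual stimulus parameters; the point is only that interpolating each cue on its own axis lets an experimenter probe one acoustic dimension while holding the other fixed.

```python
# Hypothetical two-dimensional morph continuum: pitch and duration cues are
# linearly interpolated between two natural endpoints, independently of one
# another. All endpoint values are invented for illustration.

def morph_params(pitch_step, duration_step, n_steps=7):
    """Return interpolated (f0_hz, duration_ms) for step indices 0..n_steps-1."""
    f0_a, f0_b = 180.0, 260.0      # assumed endpoint fundamental frequencies (Hz)
    dur_a, dur_b = 220.0, 340.0    # assumed endpoint durations (ms)
    wp = pitch_step / (n_steps - 1)
    wd = duration_step / (n_steps - 1)
    return (f0_a + wp * (f0_b - f0_a), dur_a + wd * (dur_b - dur_a))

# Pitch fully at endpoint B while duration stays at endpoint A:
print(morph_params(6, 0))  # → (260.0, 220.0)
```

Because the two step indices are independent, difficulty can be tuned per cue: steps near the middle of either axis yield more ambiguous stimuli than steps near the endpoints.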


2021 ◽  
Author(s):  
Neelu Madan ◽  
Arya Farkhondeh ◽  
Kamal Nasrollahi ◽  
Sergio Escalera ◽  
Thomas B. Moeslund

2021 ◽  
Vol 150 (4) ◽  
pp. A337-A338
Author(s):  
Ellen Peng ◽  
Viji Easwar

2021 ◽  
Author(s):  
Chundi Xu ◽  
Tyler Ramos ◽  
Chris Q. Doe

Abstract
It is widely accepted that neuronal fate is initially determined by spatial and temporal cues acting in progenitors, followed by transcription factors (TFs) that act in post-mitotic neurons to specify their functional identity (e.g. ion channels, cell surface molecules, and neurotransmitters). It remains unclear, however, whether a single TF can coordinately regulate both steps. The five lamina neurons (L1–L5) in the Drosophila visual system are an ideal model for addressing this question. Here we show that the homeodomain TF Brain-specific homeobox (Bsh) is expressed in a subset of lamina precursor cells (LPCs), where it specifies L4 and L5 fate and suppresses the homeodomain TF Zfh1 to prevent L1 and L3 fate. Subsequently, in L4 neurons, Bsh initiates a feed-forward loop with another homeodomain TF, Apterous (Ap), to drive expression of the recognition molecule DIP-β, which is required for precise L4 synaptic connectivity. We conclude that a single homeodomain TF expressed in both precursors and neurons can coordinately generate neuronal fate and synaptic connectivity, thereby linking these two developmental events. Furthermore, our results suggest that acquiring LPC expression of a single TF, Bsh, may be sufficient to drive the evolution of increased brain complexity.


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257676
Author(s):  
Giorgia Bertonati ◽  
Maria Bianca Amadeo ◽  
Claudio Campus ◽  
Monica Gori

Multisensory experience is crucial for developing a coherent perception of the world. In this context, vision and audition are essential tools for scaffolding spatial and temporal representations, respectively. Since speed encompasses both space and time, investigating this dimension in blindness allows a deeper look at the relationship between sensory modalities and the two representation domains. In the present study, we hypothesized that visual deprivation influences the use of the spatial and temporal cues underlying acoustic speed perception. To this end, ten early blind and ten blindfolded sighted participants performed a speed discrimination task in which spatial, temporal, or both cues were available to infer the velocity of moving sounds. The results indicated that both sighted and early blind participants preferentially relied on temporal cues to determine stimulus speed, following an assumption that identifies sounds of shorter duration as faster. In some cases, however, this temporal assumption produced a misperception of stimulus speed that negatively affected participants' performance. Interestingly, early blind participants were more influenced by this misleading temporal assumption than sighted controls, resulting in a stronger impairment of speed discrimination performance. These findings demonstrate that the absence of visual experience in early life increases the auditory system's preference for the time domain and, consequently, affects the perception of speed through audition.
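Why a duration-only strategy can misfire is simple arithmetic: speed is distance over duration, so a shorter sound is only faster if it covers comparable distance. The stimulus values below are invented for illustration, not taken from the study.

```python
# Illustrative sketch with assumed stimulus values: a listener relying only
# on the temporal cue calls the shorter sound faster, which fails when the
# shorter sound also travels a proportionally shorter distance.

def physical_speed(distance_deg, duration_s):
    """True speed in degrees of azimuth per second."""
    return distance_deg / duration_s

def duration_only_judgment(dur_1, dur_2):
    """Which stimulus a purely temporal strategy labels as faster."""
    return 1 if dur_1 < dur_2 else 2

# Stimulus 1: 10 deg in 0.5 s (20 deg/s); stimulus 2: 30 deg in 1.0 s (30 deg/s).
truly_faster = 1 if physical_speed(10, 0.5) > physical_speed(30, 1.0) else 2
judged_faster = duration_only_judgment(0.5, 1.0)
print(truly_faster, judged_faster)  # → 2 1: the temporal shortcut misleads
```

When duration and physical speed are put in conflict like this, a listener who weights the temporal cue heavily will systematically pick the wrong stimulus, which is the pattern the abstract reports as stronger in the early blind group.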


Author(s):  
J. Wei ◽  
J. Jiang ◽  
A. Yilmaz

Abstract. Background subtraction aims at detecting the salient background, which in turn yields the regions of moving objects, referred to as the foreground. Background subtraction inherently exploits temporal relations by including the time dimension in its formulation. Alternative techniques require stationary cameras for learning the background: a stationary camera provides a semi-constant background image that makes learning the salient background easier. Still cameras, however, are not applicable to moving-camera scenarios, such as a vehicle-embedded camera for autonomous driving. For moving cameras, due to the complexity of modelling a changing background, recent approaches focus on directly detecting the foreground objects in each frame independently. This treatment, however, requires learning all possible objects that can appear in the field of view. In this paper, we achieve background subtraction for moving cameras using a specialized deep learning approach, the Moving-camera Background Subtraction Network (MBS-Net). Our approach robustly detects the changing background in various scenarios and does not require training on foreground objects. The developed approach uses temporal cues from past frames by applying Conditional Random Fields as a part of the neural network. Our proposed method achieves good performance on the ApolloScape dataset (Huang et al., 2018), which contains videos at 3384 × 2710 resolution. To the best of our knowledge, this paper is the first to propose background subtraction for moving cameras using deep learning.
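The stationary-camera baseline that the paper contrasts itself against can be sketched in a few lines: maintain a running-average background image and flag pixels that deviate from it. This is the classical technique, not MBS-Net; the learning rate and threshold below are arbitrary assumptions.

```python
import numpy as np

# Classical stationary-camera background subtraction (for contrast with the
# paper's moving-camera network): an exponential running average models the
# background, and pixels far from it are flagged as foreground.

def update_background(bg, frame, alpha=0.05):
    """Exponential running average; alpha is an assumed learning rate."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25.0):
    """Pixels deviating from the background model by more than thresh."""
    return np.abs(frame.astype(float) - bg) > thresh

bg = np.zeros((4, 4))
static = np.full((4, 4), 10.0)       # a semi-constant scene
for _ in range(200):                 # let the background model converge
    bg = update_background(bg, static)

moving = static.copy()
moving[1:3, 1:3] = 200.0             # a bright 2x2 object enters the scene
mask = foreground_mask(bg, moving)
print(int(mask.sum()))               # → 4 foreground pixels detected
```

The sketch makes the abstract's point concrete: the whole scheme rests on the background being semi-constant over time, which is exactly what breaks once the camera itself moves.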


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Assaf Breska ◽  
Richard B Ivry

A functional benefit of attention is to proactively enhance perceptual sensitivity in space and time. Although attentional orienting has traditionally been associated with cortico-thalamic networks, recent evidence has shown that individuals with cerebellar degeneration (CD) show a reduced reaction-time benefit from cues that enable temporal anticipation. The present study examined whether the cerebellum contributes to the proactive attentional modulation in time of perceptual sensitivity. We tested CD participants on a non-speeded, challenging perceptual discrimination task, asking if they benefit from temporal cues. Strikingly, the CD group showed no duration-specific perceptual sensitivity benefit when cued by repeated but aperiodic presentation of the target interval. In contrast, they performed similarly to controls when cued by a rhythmic stream. This dissociation further specifies the functional domain of the cerebellum and establishes its role in the attentional adjustment of perceptual sensitivity in time, in addition to its well-documented role in motor timing.


2021 ◽  
Vol 13 (11) ◽  
pp. 2197
Author(s):  
François Waldner ◽  
Foivos I. Diakogiannis ◽  
Kathryn Batchelor ◽  
Michael Ciccotosto-Camp ◽  
Elizabeth Cooper-Williams ◽  
...  

Digital agriculture services can greatly assist growers in monitoring their fields and optimizing their use throughout the growing season. Knowing the exact location of fields and their boundaries is thus a prerequisite. Unlike property boundaries, which are recorded in local council or title records, field boundaries are not historically recorded. As a result, digital services currently ask their users to draw their fields manually, which is time-consuming and creates disincentives. Here, we present a generalized method, hereafter referred to as DECODE (DEtect, COnsolidate, and DElineate), that automatically extracts accurate field boundary data from satellite imagery using deep learning based on spatial, spectral, and temporal cues. We introduce a new convolutional neural network (FracTAL ResUNet) as well as two uncertainty metrics to characterize the confidence of the field detection and field delineation processes. We finally propose a new methodology to compare and summarize field-based accuracy metrics. To demonstrate the performance and scalability of our method, we extracted fields across the Australian grains zone with a pixel-based accuracy of 0.87 and a field-based accuracy of up to 0.88, depending on the metric. We also trained a model on data from South Africa instead of Australia and found that it transferred well to unseen Australian landscapes. We conclude that the accuracy, scalability, and transferability of DECODE show that large-scale field boundary extraction based on deep learning has reached operational maturity. This opens the door to new agricultural services that provide routine, near-real-time field-based analytics.
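The pixel-based accuracy figure quoted above has a simple definition: the fraction of pixels whose predicted field/non-field label matches the reference extent. The paper's field-based metrics are richer (they score whole delineated objects); this sketch covers only the pixel term, on a tiny made-up mask pair.

```python
import numpy as np

# Minimal sketch of pixel-based accuracy between a predicted field-extent
# mask and a reference mask (1 = field, 0 = non-field). Toy data only.

def pixel_accuracy(pred, ref):
    """Fraction of pixels whose binary labels agree."""
    pred = np.asarray(pred, dtype=bool)
    ref = np.asarray(ref, dtype=bool)
    return (pred == ref).mean()

ref  = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0]])
pred = np.array([[1, 1, 1, 0],      # one false positive...
                 [1, 0, 0, 0]])     # ...and one false negative
print(pixel_accuracy(pred, ref))    # → 0.75
```

A field-based metric would instead match each delineated polygon to a reference field and score the overlap per object, which is why the abstract reports the two numbers separately.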

