Sampling motion trajectories during hippocampal theta sequences

2021 ◽  
Author(s):  
Balazs B Ujfalussy ◽  
Gergő Orbán

Efficient planning in complex environments requires that the uncertainty associated with current inferences and the possible consequences of forthcoming actions be represented. Representation of uncertainty has been established in sensory systems during simple perceptual decision-making tasks, but it remains unclear whether complex cognitive computations, such as planning and navigation, are also supported by probabilistic neural representations. Here we capitalized on gradually changing uncertainty along planned motion trajectories during hippocampal theta sequences to capture signatures of uncertainty representation in population responses. In contrast with prominent theories, we found no evidence that parameters of probability distributions are encoded in the momentary population activity recorded in an open-field navigation task in rats. Instead, uncertainty was encoded sequentially, by sampling motion trajectories at random in successive theta cycles from the distribution of potential trajectories. Our analysis is the first to demonstrate that the hippocampus is well equipped to contribute to optimal planning by representing uncertainty.
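The sampling scheme described above, encoding one trajectory per theta cycle rather than the parameters of a distribution, can be illustrated with a minimal sketch (all quantities are hypothetical): across many cycles, the cycle-to-cycle variability of the sampled trajectories recovers the underlying uncertainty, which here grows with look-ahead distance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: uncertainty about the planned trajectory grows
# with look-ahead distance (arbitrary units).
lookahead = np.linspace(0.0, 1.0, 20)
true_sd = 0.1 + 0.5 * lookahead

# Sampling scheme: each theta cycle encodes ONE trajectory drawn from the
# distribution of potential trajectories, not the distribution's parameters.
n_cycles = 5000
samples = rng.normal(0.0, true_sd, size=(n_cycles, lookahead.size))

# Across cycles, the variability of the encoded trajectories
# approximates the underlying uncertainty.
empirical_sd = samples.std(axis=0)
```

A downstream reader that pools over a few theta cycles could thus estimate uncertainty without any explicit parametric code.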

2007 ◽  
pp. 176-193
Author(s):  
Qian Diao ◽  
Jianye Lu ◽  
Wei Hu ◽  
Yimin Zhang ◽  
Gary Bradski

In a visual tracking task, the object may exhibit rich dynamic behavior in complex environments that can corrupt target observations via background clutter and occlusion. Such dynamics and backgrounds induce nonlinear, non-Gaussian, and multimodal observation densities. These densities are difficult to model with traditional methods such as Kalman filter models (KFMs) because of their Gaussian assumptions. Dynamic Bayesian networks (DBNs) provide a more general framework in which to solve these problems. DBNs generalize KFMs by allowing arbitrary probability distributions, not just (unimodal) linear-Gaussian ones. Under the DBN umbrella, a broad class of learning and inference algorithms for time-series models can be used in visual tracking. Furthermore, DBNs provide a natural way to combine multiple vision cues. In this chapter, we describe some DBN models for tracking in nonlinear, non-Gaussian, and multimodal situations, and present a prediction method that assists the feature-extraction stage by generating hypotheses for the new observations.
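One concrete DBN instance that copes with non-Gaussian, multimodal observation densities is the bootstrap particle filter. The sketch below is not the chapter's model; all dynamics and noise parameters are illustrative. It tracks a slowly drifting 1-D target whose detections are mixed with uniform background clutter, exactly the situation where a Kalman filter's Gaussian assumption breaks down.

```python
import numpy as np

rng = np.random.default_rng(1)

def bootstrap_particle_filter(observations, n_particles=2000):
    """Minimal bootstrap particle filter: prediction with a random-walk
    transition model, update with a clutter-aware (mixture) likelihood,
    and multinomial resampling."""
    particles = rng.normal(0.0, 1.0, n_particles)  # initial belief
    estimates = []
    for z in observations:
        # Prediction step: the DBN transition model (random walk).
        particles = particles + rng.normal(0.0, 0.3, n_particles)
        # Update step: observation is a true detection (80%) or clutter (20%),
        # giving a non-Gaussian, heavy-tailed likelihood.
        lik = 0.8 * np.exp(-0.5 * ((z - particles) / 0.5) ** 2) + 0.2 * 0.05
        w = lik / lik.sum()
        # Resampling: concentrate particles in high-likelihood regions.
        particles = rng.choice(particles, size=n_particles, p=w)
        estimates.append(particles.mean())
    return np.array(estimates)

# Track a slowly drifting target observed in noise and clutter.
true_path = np.linspace(0.0, 3.0, 30)
obs = true_path + rng.normal(0.0, 0.5, 30)
est = bootstrap_particle_filter(obs)
```

Because the posterior is represented by samples, the same machinery handles multimodal beliefs (e.g., during occlusion) with no change to the algorithm.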


2018 ◽  
Vol 120 (1) ◽  
pp. 171-185 ◽  
Author(s):  
Seth Haney ◽  
Debajit Saha ◽  
Baranidharan Raman ◽  
Maxim Bazhenov

Adaptation of neural responses is ubiquitous in sensory systems and can potentially facilitate many important computational functions. Here we examined this issue with a well-constrained computational model of the early olfactory circuits. In the insect olfactory system, the responses of olfactory receptor neurons (ORNs) on the antennae adapt over time. We found that strong adaptation of sensory input is important for rapidly detecting a fresher stimulus encountered in the presence of other background cues and for faithfully representing its identity. However, when the overlapping odorants were chemically similar, we found that adaptation could alter the representation of these odorants to emphasize only distinguishing features. This work demonstrates novel roles for peripheral neurons during olfactory processing in complex environments.

NEW & NOTEWORTHY Olfactory systems face the problem of distinguishing salient information from a complex olfactory environment. The neural representations of specific odor sources should be consistent regardless of the background. How are olfactory representations robust to varying environmental interference? We show that in locusts the extraction of salient information begins in the periphery. Olfactory receptor neurons adapt in response to odorants. Adaptation can provide a computational mechanism allowing novel odorant components to be highlighted during complex stimuli.
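A first-order adaptation model illustrates the mechanism: a slow variable tracks the recent input and is subtracted from it, so a sustained background fades from the response while a newly added odorant produces a fresh transient. This is a hypothetical minimal sketch, not the authors' biophysical ORN model; the time constant and stimulus are illustrative.

```python
import numpy as np

def adapting_orn(stimulus, tau=20.0):
    """Rectified response of a receptor with first-order adaptation:
    the adaptation state slowly tracks the input and is subtracted
    from it, emphasizing changes over sustained backgrounds."""
    a = 0.0                               # slow adaptation state
    response = np.empty_like(stimulus)
    for t, s in enumerate(stimulus):
        response[t] = max(s - a, 0.0)     # rectified, adapted response
        a += (s - a) / tau                # adaptation tracks the input
    return response

# Background odor turns on at t=0; a novel odorant is added at t=100.
stim = np.ones(200)
stim[100:] += 1.0
r = adapting_orn(stim)
```

By t=100 the background response has decayed almost to zero, so the novel component at t=100 evokes a transient nearly as large as the original onset, which is the "highlighting" effect described above.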


eLife ◽  
2018 ◽  
Vol 7 ◽  
Author(s):  
Sabina Gherman ◽  
Marios G. Philiastides

Choice confidence, an individual’s internal estimate of judgment accuracy, plays a critical role in adaptive behaviour, yet its neural representations during decision formation remain underexplored. Here, we recorded simultaneous EEG-fMRI while participants performed a direction discrimination task and rated their confidence on each trial. Using multivariate single-trial discriminant analysis of the EEG, we identified a stimulus-independent component encoding confidence, which appeared prior to subjects’ explicit choice and confidence report, and was consistent with a confidence measure predicted by an accumulation-to-bound model of decision-making. Importantly, trial-to-trial variability in this electrophysiologically-derived confidence signal was uniquely associated with fMRI responses in the ventromedial prefrontal cortex (VMPFC), a region not typically associated with confidence for perceptual decisions. Furthermore, activity in the VMPFC was functionally coupled with regions of the frontal cortex linked to perceptual decision-making and metacognition. Our results suggest that the VMPFC holds an early confidence representation arising from decision dynamics, preceding and potentially informing metacognitive evaluation.
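The accumulation-to-bound intuition can be sketched with a fixed-duration accumulator, a deliberate simplification of the bounded model referenced above, with all parameters illustrative: the choice is the sign of the accumulated evidence, confidence is read out as its distance from zero, and the model predicts higher confidence on correct than on error trials.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical evidence-accumulation sketch (fixed duration, no bound):
# each trial accumulates noisy samples with a small positive drift.
n_trials, n_steps, drift = 5000, 100, 0.1
noise = rng.normal(0.0, 1.0, (n_trials, n_steps))
dv = (drift + noise).sum(axis=1)       # final decision variable per trial

correct = dv > 0                       # positive drift => "right" is correct
confidence = np.abs(dv)                # confidence = distance from zero

# Signature the paper exploits: confidence covaries with accuracy.
conf_gap = confidence[correct].mean() - confidence[~correct].mean()
```

This is why a confidence signal derived from decision dynamics can emerge before any explicit report: it is already contained in the state of the accumulator.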


Author(s):  
Benjamin R. Cowley ◽  
Adam C. Snyder ◽  
Katerina Acar ◽  
Ryan C. Williamson ◽  
Byron M. Yu ◽  
...  

An animal’s decision depends not only on incoming sensory evidence but also on its fluctuating internal state. This internal state is a product of cognitive factors, such as fatigue, motivation, and arousal, but it is unclear how these factors influence the neural processes that encode the sensory stimulus and form a decision. We discovered that, over the timescale of tens of minutes during a perceptual decision-making task, animals slowly shifted their likelihood of reporting stimulus changes. They did this unprompted by task conditions. We recorded neural population activity from visual area V4 as well as prefrontal cortex, and found that the activity of both areas slowly drifted together with the behavioral fluctuations. We reasoned that such slow fluctuations in behavior could either be due to slow changes in how the sensory stimulus is processed or due to a process that acts independently of sensory processing. By analyzing the recorded activity in conjunction with models of perceptual decision-making, we found evidence for the slow drift in neural activity acting as an impulsivity signal, overriding sensory evidence to dictate the final decision. Overall, this work uncovers an internal state embedded in the population activity across multiple brain areas, hidden from typical trial-averaged analyses and revealed only when considering the passage of time within each experimental session. Knowledge of this cognitive factor was critical in elucidating how sensory signals and the internal state together contribute to the decision-making process.
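The idea of a slow drift hidden from trial-averaged analyses can be illustrated by simulating population activity with a shared slow component and recovering it by smoothing over trials and taking the first principal component. This is a rough, hypothetical stand-in for the analysis, with synthetic data and arbitrary parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic session: a shared slow signal drives all neurons with
# random loadings, on top of fast trial-to-trial noise.
n_trials, n_neurons = 600, 40
t = np.arange(n_trials)
slow_drift = np.sin(2 * np.pi * t / n_trials)
loadings = rng.normal(0.0, 1.0, n_neurons)
activity = (np.outer(slow_drift, loadings)
            + rng.normal(0.0, 1.0, (n_trials, n_neurons)))

# Boxcar-smooth each neuron over trials to suppress fast variability;
# this is what a trial-averaged analysis would discard as "time".
k = 50
kernel = np.ones(k) / k
smoothed = np.apply_along_axis(
    lambda x: np.convolve(x, kernel, mode="same"), 0, activity)

# First principal component of the smoothed activity recovers the
# slow drift (up to sign).
smoothed -= smoothed.mean(axis=0)
_, _, vt = np.linalg.svd(smoothed, full_matrices=False)
drift_estimate = smoothed @ vt[0]
corr = np.corrcoef(drift_estimate, slow_drift)[0, 1]
```

The key point is that the drift only becomes visible when trials are ordered by session time rather than averaged by condition.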


2019 ◽  
Author(s):  
R.S. van Bergen ◽  
J.F.M. Jehee

How does the brain represent the reliability of its sensory evidence? Here, we test whether sensory uncertainty is encoded in cortical population activity as the width of a probability distribution – a hypothesis that lies at the heart of Bayesian models of neural coding. We probe the neural representation of uncertainty by capitalizing on a well-known behavioral bias called serial dependence. Human observers of either sex reported the orientation of stimuli presented in sequence, while activity in visual cortex was measured with fMRI. We decoded probability distributions from population-level activity and found that serial dependence effects in behavior are consistent with a statistically advantageous sensory integration strategy, in which uncertain sensory information is given less weight. More fundamentally, our results suggest that probability distributions decoded from human visual cortex reflect the sensory uncertainty that observers rely on in their decisions, providing critical evidence for Bayesian theories of perception.
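Precision-weighted integration, the statistically advantageous strategy referred to above, can be written in a few lines: each source of evidence is weighted by its inverse variance, so a noisier current measurement is pulled more strongly toward the previous stimulus, producing stronger serial dependence on uncertain trials. A minimal sketch with purely illustrative numbers:

```python
def integrate(current, previous, sd_current, sd_previous):
    """Bayesian (precision-weighted) combination of the current sensory
    measurement with the previous one; weights are inverse variances."""
    w = (1 / sd_current**2) / (1 / sd_current**2 + 1 / sd_previous**2)
    return w * current + (1 - w) * previous

# A high-uncertainty measurement is pulled strongly toward the past ...
est_noisy = integrate(current=10.0, previous=0.0,
                      sd_current=8.0, sd_previous=4.0)   # -> 2.0
# ... while a low-uncertainty one stays close to the current input.
est_clean = integrate(current=10.0, previous=0.0,
                      sd_current=2.0, sd_previous=4.0)   # -> 8.0
```

The behavioral prediction tested in the paper follows directly: the bias toward the previous stimulus should scale with the decoded width of the current trial's probability distribution.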


Author(s):  
Shany Nivinsky Margalit ◽  
Neta Gery Golomb ◽  
Omer Tsur ◽  
Aeyal Raz ◽  
Hamutal Slovin

Anesthetic drugs are widely used in medicine and research to mediate loss of consciousness (LOC). Despite the vast use of anesthesia, how LOC affects cortical sensory processing and the underlying neural circuitry is not well understood. We measured neuronal population activity in the visual cortices of awake and isoflurane-anesthetized mice and compared the visually evoked responses under different levels of consciousness. We used voltage-sensitive dye imaging (VSDI) to characterize the temporal and spatial properties of cortical responses to visual stimuli over a range of states from wakefulness to deep anesthesia. VSDI enabled measuring the neuronal population responses at high spatial (meso-scale) and temporal resolution from several visual regions (V1, extrastriate-lateral (ESL), and extrastriate-medial (ESM)) simultaneously. We found that isoflurane has multiple effects on the population evoked response that grew with anesthetic depth, with the largest changes occurring at LOC. Isoflurane reduced the response amplitude and prolonged the response latency in all areas. In addition, the intra-areal spatial spread of the visually evoked activity decreased. During visual stimulation, intra-areal and inter-areal correlations between neuronal populations decreased with increasing doses of isoflurane. Finally, while in V1 the majority of changes occurred at higher doses of isoflurane, higher visual areas showed marked changes already at lower doses. In conclusion, our results demonstrate a reverse-hierarchy shutdown of the visual cortical areas: low-dose isoflurane diminishes the visually evoked activity in higher visual areas before lower-order areas and reduces inter-areal connectivity, leading to a disconnected network.


2021 ◽  
Author(s):  
Mirko Klukas ◽  
Sugandha Sharma ◽  
Yilun Du ◽  
Tomas Lozano-Perez ◽  
Leslie Pack Kaelbling ◽  
...  

When animals explore spatial environments, their representations often fragment into multiple maps. What determines these map fragmentations, and can we predict where they will occur with simple principles? We pose the problem of fragmentation of an environment as one of (online) spatial clustering. Taking inspiration from the notion of a "contiguous region" in robotics, we develop a theory in which fragmentation decisions are driven by surprisal. When this criterion is implemented with boundary, grid, and place cells in various environments, it produces map fragmentations from the first exploration of each space. Augmented with a long-term spatial memory and a rule similar to the distance-dependent Chinese Restaurant Process for selecting among relevant memories, the theory predicts the reuse of map fragments in environments with repeating substructures. Our model provides a simple rule for generating spatial state abstractions and predicts map fragmentations observed in electrophysiological recordings. It further predicts that there should be "fragmentation decision" or "fracture" cells, which in multicompartment environments could be called "doorway" cells. Finally, we show that the resulting abstractions can lead to large (orders of magnitude) improvements in the ability to plan and navigate through complex environments.
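Surprisal-driven fragmentation can be sketched as online clustering: each fragment maintains a simple predictive model of local observations, and a new fragment is opened whenever an observation's surprisal exceeds a threshold, for instance when crossing a doorway between rooms with different sensory statistics. This is a hypothetical toy version, not the paper's boundary-, grid-, and place-cell implementation; the Gaussian model and threshold are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def fragment(observations, threshold=8.0, sd=1.0):
    """Online surprisal-driven clustering: each fragment keeps a running
    mean of its observations; an observation whose surprisal (negative
    log-likelihood, up to a constant) exceeds the threshold triggers a
    fragmentation decision and starts a new fragment."""
    labels = [0]
    mean, n, current = observations[0], 1, 0
    for x in observations[1:]:
        surprisal = 0.5 * ((x - mean) / sd) ** 2
        if surprisal > threshold:
            current += 1              # "doorway": open a new fragment
            mean, n = x, 1
        else:
            n += 1
            mean += (x - mean) / n    # update the fragment's model
        labels.append(current)
    return labels

# Two "rooms" with distinct sensory statistics, traversed in sequence.
room_a = rng.normal(0.0, 1.0, 50)
room_b = rng.normal(10.0, 1.0, 50)
labels = fragment(np.concatenate([room_a, room_b]))
```

A unit that fires exactly when `current` increments would behave like the predicted "fragmentation decision" or "doorway" cell; extending the sketch with a memory of past fragment models would allow reuse in repeating substructures, in the spirit of the distance-dependent Chinese Restaurant Process rule.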


2010 ◽  
Vol 30 (47) ◽  
pp. 15778-15789 ◽  
Author(s):  
A. S. Kayser ◽  
D. T. Erickson ◽  
B. R. Buchsbaum ◽  
M. D'Esposito

2018 ◽  
Vol 115 (30) ◽  
pp. E7202-E7211 ◽  
Author(s):  
Scott L. Brincat ◽  
Markus Siegel ◽  
Constantin von Nicolai ◽  
Earl K. Miller

Somewhere along the cortical hierarchy, behaviorally relevant information is distilled from raw sensory inputs. We examined how this transformation progresses along multiple levels of the hierarchy by comparing neural representations in visual, temporal, parietal, and frontal cortices in monkeys categorizing across three visual domains (shape, motion direction, and color). Representations in visual areas middle temporal (MT) and V4 were tightly linked to external sensory inputs. In contrast, lateral prefrontal cortex (PFC) largely represented the abstracted behavioral relevance of stimuli (task rule, motion category, and color category). Intermediate-level areas, including posterior inferotemporal (PIT), lateral intraparietal (LIP), and frontal eye fields (FEF), exhibited mixed representations. While the distribution of sensory information across areas aligned well with classical functional divisions (MT carried stronger motion information, and V4 and PIT carried stronger color and shape information), categorical abstraction did not, suggesting these areas may participate in different networks for stimulus-driven and cognitive functions. Paralleling these representational differences, the dimensionality of neural population activity decreased progressively from sensory to intermediate to frontal cortex. This shows how raw sensory representations are transformed into behaviorally relevant abstractions and suggests that the dimensionality of neural activity in higher cortical regions may be specific to their current task.
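The dimensionality comparison can be made concrete with the participation ratio of the covariance eigenvalues, one common summary of neural dimensionality (the sketch below uses synthetic data, not the paper's recordings): activity spread evenly over many independent dimensions yields a high ratio, while activity dominated by a few task-related dimensions, as reported for frontal cortex, yields a low one.

```python
import numpy as np

rng = np.random.default_rng(6)

def participation_ratio(activity):
    """PR = (sum of covariance eigenvalues)^2 / (sum of squared
    eigenvalues); ranges from 1 (one dimension) to n_neurons (even
    spread across all dimensions)."""
    ev = np.linalg.eigvalsh(np.cov(activity, rowvar=False))
    return ev.sum() ** 2 / (ev ** 2).sum()

n_trials, n_neurons = 2000, 50
# "Sensory-like" activity: variance spread over many independent dimensions.
sensory = rng.normal(0.0, 1.0, (n_trials, n_neurons))
# "Frontal-like" activity: dominated by 3 latent task dimensions
# plus weak private noise.
latent = rng.normal(0.0, 1.0, (n_trials, 3))
frontal = (latent @ rng.normal(0.0, 1.0, (3, n_neurons))
           + 0.1 * rng.normal(0.0, 1.0, (n_trials, n_neurons)))
```

On these synthetic data the sensory-like population has a participation ratio near the neuron count, while the frontal-like population collapses to a handful of dimensions, mirroring the progressive decrease described above.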


2004 ◽  
Vol 556 (3) ◽  
pp. 971-982 ◽  
Author(s):  
Dirk Jancke ◽  
Wolfram Erlhagen ◽  
Gregor Schöner ◽  
Hubert R. Dinse
