Decoding Movement Trajectories Through a T-Maze Using Point Process Filters Applied to Place Field Data from Rat Hippocampal Region CA1

2009 ◽  
Vol 21 (12) ◽  
pp. 3305-3334 ◽  
Author(s):  
Yifei Huang ◽  
Mark P. Brandon ◽  
Amy L. Griffin ◽  
Michael E. Hasselmo ◽  
Uri T. Eden

Firing activity from neural ensembles in rat hippocampus has been previously used to determine an animal's position in an open environment and separately to predict future behavioral decisions. However, a unified statistical procedure to combine information about position and behavior in environments with complex topological features from ensemble hippocampal activity has yet to be described. Here we present a two-stage computational framework that uses point process filters to simultaneously estimate the animal's location and predict future behavior from ensemble neural spiking activity. First, in the encoding stage, we linearized a two-dimensional T-maze, and used spline-based generalized linear models to characterize the place-field structure of different neurons. All of these neurons displayed highly specific position-dependent firing, which frequently had several peaks at multiple locations along the maze. When the rat was at the stem of the T-maze, the firing activity of several of these neurons also varied significantly as a function of the direction it would turn at the decision point, as detected by ANOVA. Second, in the decoding stage, we developed a state-space model for the animal's movement along a T-maze and used point process filters to accurately reconstruct both the location of the animal and the probability of the next decision. The filter yielded exact full posterior densities that were highly nongaussian and often multimodal. Our computational framework provides a reliable approach for characterizing and extracting information from ensembles of neurons with spatially specific context or task-dependent firing activity.
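The decoding stage can be pictured with a minimal grid-based point-process filter: position on the linearized maze is discretized, a random-walk state model predicts the next position, and each cell's place-field rate map supplies a Poisson likelihood for the observed spike counts. The sketch below illustrates that idea only; the bin sizes, the Gaussian random-walk transition, and the function names are assumptions, not the authors' implementation (which also tracks the probability of the upcoming turn decision).

```python
import numpy as np

def decode_position(spike_counts, rate_maps, dt=0.001, sigma_bins=2.0):
    """spike_counts: (T, C) spike counts per time bin and cell.
    rate_maps: (C, X) firing rate (Hz) of each cell at each linearized
    position bin. Returns a (T, X) posterior over position."""
    T, C = spike_counts.shape
    X = rate_maps.shape[1]

    # Gaussian random-walk transition kernel over position bins
    # (columns sum to one, so trans[:, j] is P(next | current = j)).
    d = np.arange(X)[:, None] - np.arange(X)[None, :]
    trans = np.exp(-0.5 * (d / sigma_bins) ** 2)
    trans /= trans.sum(axis=0, keepdims=True)

    lam = rate_maps * dt                      # expected counts per bin
    posterior = np.full(X, 1.0 / X)           # uniform prior over the maze
    out = np.empty((T, X))
    for t in range(T):
        prior = trans @ posterior             # one-step prediction
        # Poisson point-process likelihood of the ensemble counts.
        loglik = spike_counts[t] @ np.log(lam + 1e-12) - lam.sum(axis=0)
        post = prior * np.exp(loglik - loglik.max())
        posterior = post / post.sum()
        out[t] = posterior
    return out
```

Because the posterior is carried on the full position grid rather than summarized by a Gaussian, it can remain strongly multimodal, which is the behavior the abstract describes near the decision point.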

Author(s):  
Xi Wang ◽  
Daoliang Tan ◽  
Tiejun Zheng

This paper presents an approach to turbofan engine dynamical output feedback controller (DOFC) design in the framework of LMI (linear matrix inequality)-based H∞ control. By combining loop shaping with the internal model principle, the linear state-space model of a turbofan engine is converted into that of an augmented plant, which is used to establish the LMI formulation of the standard H∞ control problem for this augmented plant. By solving for the optimal H∞ controller of the augmented plant, we indirectly obtain an H∞ DOFC for the turbofan engine that achieves tracking of reference commands and effective constraints on the control inputs. The design method is applied to H∞ DOFC design for the linear models of an advanced multivariable turbofan engine; the obtained H∞ DOFC controls only the steady state of this engine. Simulation results from the linear and nonlinear models of the turbofan engine show that the resulting controller offers good tracking performance, strong disturbance rejection, and satisfactory robustness.
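One common way to realize the internal-model step described above is to augment the engine's linear model with integrators on the tracking errors before posing the H∞ synthesis problem. The sketch below shows only that augmentation under assumed matrix shapes; the loop-shaping weights and the LMI synthesis itself are not reproduced, and the function name is illustrative.

```python
import numpy as np

def augment_with_integrators(A, B, C):
    """Return (Aa, Ba, Br) for the augmented state z = [x; e_int], where
    e_int' = r - C x and the reference r enters through Br."""
    n, m = B.shape
    p = C.shape[0]
    Aa = np.block([[A, np.zeros((n, p))],
                   [-C, np.zeros((p, p))]])
    Ba = np.vstack([B, np.zeros((p, m))])
    Br = np.vstack([np.zeros((n, p)), np.eye(p)])
    return Aa, Ba, Br
```

Together with weights on the tracked outputs and control inputs, this augmented plant defines a standard H∞ problem that an LMI solver can turn into a dynamical output feedback controller.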


2014 ◽  
Vol 26 (2) ◽  
pp. 237-263 ◽  
Author(s):  
Luca Citi ◽  
Demba Ba ◽  
Emery N. Brown ◽  
Riccardo Barbieri

Likelihood-based encoding models founded on point processes have received significant attention in the literature because of their ability to reveal the information encoded by spiking neural populations. We propose an approximation to the likelihood of a point-process model of neurons that holds under assumptions about the continuous time process that are physiologically reasonable for neural spike trains: the presence of a refractory period, the predictability of the conditional intensity function, and its integrability. These are properties that apply to a large class of point processes arising in applications other than neuroscience. The proposed approach has several advantages over conventional ones. In particular, one can use standard fitting procedures for generalized linear models based on iteratively reweighted least squares while improving the accuracy of the approximation to the likelihood and reducing bias in the estimation of the parameters of the underlying continuous-time model. As a result, the proposed approach can use a larger bin size to achieve the same accuracy as conventional approaches would with a smaller bin size. This is particularly important when analyzing neural data with high mean and instantaneous firing rates. We demonstrate these claims on simulated and real neural spiking activity. By allowing a substantive increase in the required bin size, our algorithm has the potential to lower the barrier to the use of point-process methods in an increasing number of applications.
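For context, the conventional discrete-time approach that the proposed approximation improves on bins the spike train and fits a Poisson GLM with a log link by iteratively reweighted least squares. The sketch below shows only that baseline, with placeholder covariates and bin edges; it is not the paper's refined likelihood approximation.

```python
import numpy as np
import statsmodels.api as sm

def fit_binned_glm(spike_times, covariates, t_edges):
    """spike_times: 1-D array of spike times (s); covariates: (T, K) design
    matrix with one row per bin defined by t_edges. Returns the fitted
    statsmodels GLM results object."""
    counts, _ = np.histogram(spike_times, bins=t_edges)
    X = sm.add_constant(covariates)    # intercept = baseline log-rate
    # IRLS fit of log lambda = X @ beta on binned counts; the proposed
    # approximation aims to keep this kind of fit accurate with larger bins.
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit()
```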


2003 ◽  
Vol 15 (5) ◽  
pp. 965-991 ◽  
Author(s):  
Anne C. Smith ◽  
Emery N. Brown

A widely used signal processing paradigm is the state-space model. The state-space model is defined by two equations: an observation equation that describes how the hidden state or latent process is observed and a state equation that defines the evolution of the process through time. Inspired by neurophysiology experiments in which neural spiking activity is induced by an implicit (latent) stimulus, we develop an algorithm to estimate a state-space model observed through point process measurements. We represent the latent process modulating the neural spiking activity as a gaussian autoregressive model driven by an external stimulus. Given the latent process, neural spiking activity is characterized as a general point process defined by its conditional intensity function. We develop an approximate expectation-maximization (EM) algorithm to estimate the unobservable state-space process, its parameters, and the parameters of the point process. The EM algorithm combines a point process recursive nonlinear filter algorithm, the fixed interval smoothing algorithm, and the state-space covariance algorithm to compute the complete data log likelihood efficiently. We use a Kolmogorov-Smirnov test based on the time-rescaling theorem to evaluate agreement between the model and point process data. We illustrate the model with two simulated data examples: an ensemble of Poisson neurons driven by a common stimulus and a single neuron whose conditional intensity function is approximated as a local Bernoulli process.
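The forward pass of the E-step can be sketched with a generic point-process analog of the Kalman filter for a single AR(1) latent state driving an ensemble of Poisson neurons; for brevity the conditional intensity is evaluated at the predicted mean, and the fixed-interval smoother and state-space covariance recursions mentioned above are omitted. The names and the single-state restriction are illustrative assumptions, not the paper's exact recursions.

```python
import numpy as np

def pp_filter(counts, mu, beta, rho, alpha, stim, sigma2_eps, dt):
    """counts: (T, C) spikes per bin; mu, beta: (C,) per-neuron GLM
    coefficients; rho, alpha: AR(1) coefficient and stimulus gain;
    stim: (T,) external stimulus. Returns filtered means and variances."""
    T, C = counts.shape
    x_post, v_post = 0.0, sigma2_eps
    xs, vs = np.empty(T), np.empty(T)
    for t in range(T):
        # One-step prediction under the gaussian AR(1) state equation.
        x_pred = rho * x_post + alpha * stim[t]
        v_pred = rho ** 2 * v_post + sigma2_eps
        # Point-process update with lambda_c = exp(mu_c + beta_c * x) * dt,
        # evaluated at the predicted mean for simplicity.
        lam = np.exp(mu + beta * x_pred) * dt
        v_post = 1.0 / (1.0 / v_pred + np.sum(beta ** 2 * lam))
        x_post = x_pred + v_post * np.sum(beta * (counts[t] - lam))
        xs[t], vs[t] = x_post, v_post
    return xs, vs
```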


Energies ◽  
2019 ◽  
Vol 12 (19) ◽  
pp. 3791 ◽  
Author(s):  
Qianjing Chen ◽  
Jinquan Huang ◽  
Muxuan Pan ◽  
Feng Lu

The nonlinear component-level model (NCLM) is widely used for aeroengines. However, it requires iterative calculation and is therefore time-consuming, which restricts its real-time application. This study aims to develop a simplified real-time modeling approach for turbofan engines. A mechanism-based modeling approach built on linear models is proposed to avoid the iterative calculation in the NCLM and thereby reduce the computational complexity. Local linear models, whose outputs are the solutions of the balance equations in the NCLM, are established at ground operating points and combined into a linear parameter-varying (LPV) state-space model. The model is then extended throughout the full flight envelope via a polytopic expression and integrated with the flow-path calculation to obtain satisfactory real-time performance. To ensure the accuracy of the integrated model, a strict upper bound is placed on the convergence residual of the iteration, and the interpolation method is chosen carefully. Simulation results demonstrate that the integrated model requires far fewer computational resources than the NCLM while maintaining acceptable accuracy, and it is therefore suitable for real-time application.
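The polytopic LPV step can be illustrated by blending neighbouring local models with weights given by a scheduling parameter. The scheduling variable, grid, and linear blending rule below are assumptions for illustration rather than the paper's exact scheme.

```python
import numpy as np

def lpv_matrices(theta, grid, local_models):
    """theta: scheduling parameter (e.g., corrected shaft speed); grid:
    sorted 1-D array of operating points; local_models: list of
    (A, B, C, D) tuples, one per grid point. Returns interpolated matrices."""
    i = int(np.clip(np.searchsorted(grid, theta) - 1, 0, len(grid) - 2))
    w = (theta - grid[i]) / (grid[i + 1] - grid[i])      # convex weight
    return tuple((1.0 - w) * M0 + w * M1
                 for M0, M1 in zip(local_models[i], local_models[i + 1]))
```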


2010 ◽  
Vol 22 (8) ◽  
pp. 1993-2001 ◽  
Author(s):  
Ke Yuan ◽  
Mahesan Niranjan

Physiological signals such as neural spikes and heartbeats are discrete events in time, driven by continuous underlying systems. A recently introduced data-driven model for analyzing such systems is a state-space model with point-process observations, whose parameters and underlying state sequence are identified simultaneously in a maximum likelihood setting using the expectation-maximization (EM) algorithm. In this note, we describe some simple, previously unnoticed convergence properties of this setting. Simulations show that the likelihood is unimodal in the unknown parameters, and hence the EM iterations are always able to find the globally optimal solution.
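The unimodality claim can be probed empirically by running EM from several random initializations and comparing the converged log-likelihoods, as in the sketch below. Here `em_fit` is a hypothetical placeholder for an EM routine for the state-space point-process model; it is assumed, not provided.

```python
import numpy as np

def multistart_em(counts, em_fit, n_starts=10, seed=0):
    """Run `em_fit` from several random initializations and return the
    converged log-likelihood of each run."""
    rng = np.random.default_rng(seed)
    finals = []
    for _ in range(n_starts):
        init = {"rho": rng.uniform(0.5, 0.99), "sigma2": rng.uniform(0.01, 1.0)}
        params, loglik_trace = em_fit(counts, init)      # hypothetical API
        # EM should never decrease the likelihood between iterations.
        assert np.all(np.diff(loglik_trace) >= -1e-8)
        finals.append(loglik_trace[-1])
    # Near-identical values across starts are consistent with a unimodal
    # likelihood surface.
    return np.array(finals)
```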


2020 ◽  
Author(s):  
Gregory Edward Cox ◽  
Gordon D. Logan ◽  
Jeffrey Schall ◽  
Thomas Palmeri

Evidence accumulation is a computational framework that accounts for behavior as well as for the dynamics of individual neurons involved in decision making. Linking these two levels of description reveals a scaling paradox: how do the choices and response times (RTs) explained by models that assume a single accumulator arise from a large ensemble of idiosyncratic accumulator neurons? We created a simulation model that makes decisions by aggregating across ensembles of accumulators, thereby instantiating the essential structure of neural ensembles that make decisions. Across different levels of simulated choice difficulty and speed-accuracy emphasis, the choice proportions and RT distributions simulated by the ensembles are invariant to ensemble size, and the accumulated evidence at RT is invariant across RTs, provided the accumulators are at least moderately correlated in either baseline evidence or rates of accumulation and RT is not governed by the most extreme accumulators. To explore the relationship between the low-level ensemble accumulators and high-level cognitive models, we fit the simulated ensemble behavior with a standard linear ballistic accumulator (LBA) model. The standard LBA generally recovered the core accumulator parameters (particularly drift rates and residual time) of individual ensemble accumulators with high accuracy, with its variability parameters modulating as a function of various ensemble parameters. Ensembles of accumulators also provide an alternative conception of the speed-accuracy tradeoff that does not rely on varying the thresholds of individual accumulators; instead, the tradeoff is adjusted by how ensembles of accumulators are aggregated or by how accumulators are correlated within ensembles. These results clarify the relationships between neural and computational accounts of decision making.
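The core ensemble idea can be sketched as a race between pooled ensembles of noisy accumulators whose rates share a common component. All parameter values, the averaging rule, and the two-option setup below are illustrative assumptions, not the fitted models from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_trial(n_units=100, mean_drift=(0.12, 0.08), drift_corr=0.6,
                   noise=1.0, threshold=30.0, dt=1.0, max_t=2000):
    """Two-option race between pooled accumulator ensembles.
    Returns (choice index, RT); choice -1 means no decision by max_t."""
    # Within each ensemble, unit drift rates share a common component,
    # producing correlated rates of accumulation across accumulators.
    drifts = []
    for m in mean_drift:
        common = rng.normal(0.0, 0.02)
        drifts.append(m + np.sqrt(drift_corr) * common
                      + np.sqrt(1.0 - drift_corr) * rng.normal(0.0, 0.02, n_units))
    drifts = np.stack(drifts)                            # (2, n_units)

    x = np.zeros((2, n_units))
    for step in range(1, max_t + 1):
        x += drifts * dt + noise * np.sqrt(dt) * rng.normal(size=x.shape)
        pooled = x.mean(axis=1)                          # aggregate each ensemble
        if pooled.max() >= threshold:
            return int(pooled.argmax()), step * dt
    return -1, max_t * dt
```

Changing the aggregation rule (e.g., mean versus an extreme statistic) or the within-ensemble correlation alters speed and accuracy without touching any individual accumulator's threshold, which is the alternative conception of the speed-accuracy tradeoff described above.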


2020 ◽  
Author(s):  
Mehrad Sarmashghi ◽  
Shantanu P Jadhav ◽  
Uri Eden

Point-process generalized linear models (GLMs) provide a powerful tool for characterizing the coding properties of neural populations. Spline basis functions are often used in point-process GLMs when the relationship between the spiking and driving signals is nonlinear, but common choices for the structure of these spline bases often lead to a loss of statistical power and to numerical instability when the signals that influence spiking are bounded above or below. In particular, history-dependent spike train models often suffer from these issues at times immediately following a previous spike. This can make inferences about refractoriness and bursting activity more challenging. Here, we propose a modified set of spline basis functions that assumes a flat derivative at the endpoints and show that this limits the uncertainty and numerical issues associated with cardinal splines. We illustrate the application of this modified basis to the problem of simultaneously estimating the place-field and history-dependent properties of a set of neurons from the CA1 region of rat hippocampus, and we compare it with other commonly used basis functions. We have made MATLAB code available to implement spike train regression using these modified basis functions.
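The flat-derivative idea can be sketched as a cardinal spline basis whose out-of-range ghost control points mirror their interior neighbours, which forces zero slope at the boundary knots. The authors' released MATLAB implementation may differ in detail; the function and argument names here are illustrative. Each column of the returned matrix becomes one regressor (e.g., for spike-history lags) in the point-process GLM.

```python
import numpy as np

def modified_cardinal_spline_basis(lags, knots, s=0.5):
    """Return a (len(lags), len(knots)) design matrix for a cardinal spline
    with tension s whose fitted curve has zero slope at the first and last
    knots (ghost control points mirror the interior neighbours)."""
    M = np.array([[-s, 2 - s, s - 2, s],
                  [2 * s, s - 3, 3 - 2 * s, -s],
                  [-s, 0.0, s, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    knots = np.asarray(knots, dtype=float)
    n = len(knots)
    X = np.zeros((len(lags), n))
    for r, x in enumerate(lags):
        i = int(np.clip(np.searchsorted(knots, x) - 1, 0, n - 2))
        u = (x - knots[i]) / (knots[i + 1] - knots[i])
        w = np.array([u ** 3, u ** 2, u, 1.0]) @ M   # weights on c_{i-1..i+2}
        for c, wt in zip((i - 1, i, i + 1, i + 2), w):
            # Flat-derivative boundary: c_{-1} -> c_1 and c_n -> c_{n-2}.
            c = 1 if c < 0 else (n - 2 if c > n - 1 else c)
            X[r, c] += wt
    return X
```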

