2020 ◽  
Vol 32 (11) ◽  
pp. 2145-2186
Author(s):  
Ali Yousefi ◽  
Yalda Amidi ◽  
Behzad Nazari ◽  
Uri T. Eden

Marked point process models have recently been used to capture the coding properties of neural populations from multiunit electrophysiological recordings without spike sorting. These clusterless models have been shown in some instances to better describe the firing properties of neural populations than collections of receptive field models for sorted neurons and to lead to better decoding results. To assess their quality, we previously proposed a goodness-of-fit technique for marked point process models based on time rescaling, which for a correct model produces a set of uniform samples over a random region of space. However, assessing uniformity over such a region can be challenging, especially in high dimensions. Here, we propose a set of new transformations in both time and the space of spike waveform features, which generate events that are uniformly distributed in the new mark and time spaces. These transformations are scalable to multidimensional mark spaces and provide uniformly distributed samples in hypercubes, which are well suited for uniformity tests. We discuss the properties of these transformations and demonstrate aspects of model fit captured by each transformation. We also compare multiple uniformity tests to determine their power to identify lack-of-fit in the rescaled data. We demonstrate an application of these transformations and uniformity tests in a simulation study. Proofs for each transformation are provided in the appendix.
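
The time-rescaling idea underlying these transformations can be illustrated in the simplest, purely temporal case: integrating the model's conditional intensity over each inter-event interval yields, under a correct model, unit-rate exponential samples, which map to Uniform(0, 1) values that a standard test can check. The Python sketch below shows this baseline check only; the function and variable names are illustrative, and it is not the authors' marked point process implementation.

```python
# A minimal sketch of the classical time-rescaling check in the purely
# temporal (unmarked) case, assuming the model's conditional intensity is
# available as a Python function `lam`. Illustrative only; not the
# authors' marked point process implementation.
import numpy as np
from scipy.integrate import quad
from scipy.stats import kstest

def rescaled_uniforms(event_times, lam, t_start=0.0):
    """Integrate the intensity over each inter-event interval; under a
    correct model the integrals are Exponential(1), so 1 - exp(-tau)
    is Uniform(0, 1)."""
    taus, prev = [], t_start
    for t in event_times:
        tau, _ = quad(lam, prev, t)  # integrated intensity over the interval
        taus.append(tau)
        prev = t
    return 1.0 - np.exp(-np.asarray(taus))

# Example: events from a homogeneous Poisson process with rate 10 events/s,
# checked against the (correct) constant-intensity model.
rng = np.random.default_rng(0)
event_times = np.cumsum(rng.exponential(1 / 10.0, size=500))
u = rescaled_uniforms(event_times, lam=lambda t: 10.0)
print(kstest(u, "uniform"))  # a large p-value means no detected lack of fit
```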


2020 ◽  
Author(s):  
Ali Yousefi ◽  
Yalda Amidi ◽  
Behzad Nazari ◽  
Uri T. Eden

Marked point process models have recently been used to capture the coding properties of neural populations from multi-unit electrophysiological recordings without spike sorting. These ‘clusterless’ models have been shown in some instances to better describe the firing properties of neural populations than collections of receptive field models for sorted neurons and to lead to better decoding results. To assess their quality, we previously proposed a goodness-of-fit technique for marked point process models based on time rescaling, which, for a correct model, produces a set of uniform samples over a random region of space. However, assessing uniformity over such a region can be challenging, especially in high dimensions. Here, we propose a set of new transformations both in time and in the space of spike waveform features, which generate events that are uniformly distributed in the new mark and time spaces. These transformations are scalable to multi-dimensional mark spaces and provide uniformly distributed samples in hypercubes, which are well suited for uniformity tests. We discuss properties of these transformations and demonstrate aspects of model fit captured by each transformation. We also compare multiple uniformity tests to determine their power to identify lack-of-fit in the rescaled data. We demonstrate an application of these transformations and uniformity tests in a simulation study. Proofs for each transformation are provided in the appendix. We have made the MATLAB code used for the analyses in this paper publicly available through our GitHub repository at https://github.com/YousefiLab/Marked-PointProcess-Goodness-of-Fit.
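
One generic way to exploit the hypercube structure mentioned above is to bin the rescaled samples into equal-volume cells and apply a Pearson chi-square test. The sketch below, in Python rather than the MATLAB of the linked repository, uses illustrative names and is not necessarily one of the uniformity tests compared in the paper.

```python
# A minimal sketch of a uniformity test on the d-dimensional unit hypercube,
# using equal-volume binning and a Pearson chi-square statistic. A generic
# illustrative choice, not necessarily a test compared in the paper.
import numpy as np
from scipy.stats import chisquare

def hypercube_chi2(samples, bins_per_dim=4):
    """Chi-square uniformity test for samples assumed to lie in [0, 1]^d."""
    n, d = samples.shape
    edges = [np.linspace(0.0, 1.0, bins_per_dim + 1)] * d
    counts, _ = np.histogramdd(samples, bins=edges)   # counts per grid cell
    expected = np.full(counts.size, n / bins_per_dim**d)
    return chisquare(counts.ravel(), f_exp=expected)

# Example: 2000 points that really are uniform on the unit cube (d = 3).
rng = np.random.default_rng(1)
print(hypercube_chi2(rng.uniform(size=(2000, 3))))  # expect a large p-value
```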


1998 ◽  
Vol 30 (1) ◽  
pp. 64-84 ◽  
Author(s):  
Håvard Rue ◽  
Anne Randi Syversveen

A common problem in Bayesian object recognition using marked point process models is to produce a point estimate of the true underlying object configuration: the number of objects and the size, location and shape of each object. We use decision theory and the concept of loss functions to design a more reasonable estimator for this purpose, rather than using the common zero-one loss corresponding to the maximum a posteriori estimator. We propose to use the squared Δ-metric of Baddeley (1992) as our loss function and demonstrate that the corresponding optimal Bayesian estimator can be well approximated by combining Markov chain Monte Carlo methods with simulated annealing into a two-step algorithm. The proposed loss function is tested using a marked point process model developed for locating cells in confocal microscopy images.
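
The two-step strategy described in the abstract can be sketched generically: draw posterior samples of the object configuration by MCMC, then run simulated annealing on the Monte Carlo estimate of the expected loss. The Python sketch below uses a toy one-dimensional configuration and a simple quadratic matching loss as a stand-in for Baddeley's Δ-metric; all names and proposal moves are illustrative assumptions, not the authors' implementation.

```python
# A toy sketch of approximating an optimal Bayes estimate under a configuration
# loss by simulated annealing over the Monte Carlo expected loss. Objects are
# reduced to 1-D centres and the quadratic matching loss is a simple stand-in
# for Baddeley's Delta-metric; everything here is illustrative.
import math
import random

def expected_loss(candidate, posterior_samples, loss):
    """Monte Carlo estimate of E[loss(candidate, X) | data] from posterior samples."""
    return sum(loss(candidate, s) for s in posterior_samples) / len(posterior_samples)

def toy_loss(a, b):
    """Unmatched objects cost 1 each; matched (sorted) pairs cost squared distance."""
    a, b = sorted(a), sorted(b)
    return abs(len(a) - len(b)) + sum((x - y) ** 2 for x, y in zip(a, b))

def anneal(posterior_samples, loss, n_iter=2000, t0=1.0):
    current = list(random.choice(posterior_samples))
    val = expected_loss(current, posterior_samples, loss)
    best, best_val = list(current), val
    for k in range(n_iter):
        temp = t0 / (1 + k)                      # simple cooling schedule
        proposal = list(current)
        move = random.random()
        if move < 0.4 and proposal:              # perturb one object centre
            i = random.randrange(len(proposal))
            proposal[i] += random.gauss(0.0, 0.1)
        elif move < 0.7:                         # birth: add a new object
            proposal.append(random.uniform(0.0, 1.0))
        elif proposal:                           # death: delete an object
            proposal.pop(random.randrange(len(proposal)))
        new_val = expected_loss(proposal, posterior_samples, loss)
        if new_val < val or random.random() < math.exp(-(new_val - val) / temp):
            current, val = proposal, new_val
            if val < best_val:
                best, best_val = list(current), val
    return best

# Fake "posterior samples": two objects near 0.3 and 0.7, occasionally a spurious third.
random.seed(0)
samples = [[0.3 + random.gauss(0, 0.02), 0.7 + random.gauss(0, 0.02)]
           + ([random.uniform(0, 1)] if random.random() < 0.2 else [])
           for _ in range(200)]
print(sorted(anneal(samples, toy_loss)))
```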


2019 ◽  
Vol 609 ◽  
pp. 239-256 ◽  
Author(s):  
TL Silva ◽  
G Fay ◽  
TA Mooney ◽  
J Robbins ◽  
MT Weinrich ◽  
...  
