Spike-Centered Jitter Can Mistake Temporal Structure

2017 ◽  
Vol 29 (3) ◽  
pp. 783-803 ◽  
Author(s):  
Jonathan Platkiewicz ◽  
Eran Stark ◽  
Asohan Amarasingham

Jitter-type spike resampling methods are routinely applied in neurophysiology for detecting temporal structure in spike trains (point processes). Several variations have been proposed. The concern has been raised, based on numerical experiments involving Poisson spike processes, that such procedures can be conservative. We study the issue and find it can be resolved by reemphasizing the distinction between spike-centered (basic) jitter and interval jitter. Focusing on spiking processes with no temporal structure, interval jitter generates an exact hypothesis test, guaranteeing valid conclusions. In contrast, such a guarantee is not available for spike-centered jitter. We construct explicit examples in which spike-centered jitter hallucinates temporal structure, in the sense of exaggerated false-positive rates. Finally, we illustrate numerically that Poisson approximations to jitter computations, while computationally efficient, can also result in inaccurate hypothesis tests. We highlight the value of classical statistical frameworks for guiding the design and interpretation of spike resampling methods.
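To make the distinction concrete, here is a minimal Python sketch (not the authors' code; the window width, synchrony tolerance, and test statistic are illustrative assumptions) contrasting the two resampling schemes in a Monte Carlo test for fine-timescale synchrony between two spike trains. Interval jitter resamples each spike within a fixed, spike-independent window, while spike-centered jitter perturbs each spike around its own time.

```python
# Illustrative sketch of interval jitter vs. spike-centered jitter surrogates.
import numpy as np

rng = np.random.default_rng(0)

def spike_centered_jitter(spikes, delta):
    # Perturb each spike uniformly within +/- delta/2 around its own original time.
    return np.sort(spikes + rng.uniform(-delta / 2, delta / 2, size=spikes.size))

def interval_jitter(spikes, delta):
    # Resample each spike uniformly within the fixed window [k*delta, (k+1)*delta)
    # containing it; the window grid is independent of the spike times.
    windows = np.floor(spikes / delta)
    return np.sort(windows * delta + rng.uniform(0, delta, size=spikes.size))

def synchrony_count(a, b, tol=0.002):
    # Number of spikes in train `a` with a spike in train `b` within +/- tol seconds.
    idx = np.searchsorted(b, a)
    left = np.abs(a - b[np.clip(idx - 1, 0, b.size - 1)])
    right = np.abs(a - b[np.clip(idx, 0, b.size - 1)])
    return int(np.sum(np.minimum(left, right) <= tol))

def jitter_pvalue(a, b, jitter_fn, delta=0.02, n_surrogates=1000):
    # One-sided Monte Carlo p-value, counting the observed statistic among the surrogates.
    observed = synchrony_count(a, b)
    null = [synchrony_count(jitter_fn(a, delta), b) for _ in range(n_surrogates)]
    return (1 + sum(s >= observed for s in null)) / (n_surrogates + 1)
```

Under a null of no temporal structure, only the interval-jitter surrogates preserve the fixed-window spike counts on which the exact test rests; spike-centered surrogates do not, which is the source of the exaggerated false-positive rates discussed above.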

2007 ◽  
Vol 97 (4) ◽  
pp. 2744-2757 ◽  
Author(s):  
Brent Doiron ◽  
Anne-Marie M. Oswald ◽  
Leonard Maler

The rich temporal structure of neural spike trains provides multiple dimensions to code dynamic stimuli. Popular examples are spike trains from sensory cells where bursts and isolated spikes can serve distinct coding roles. In contrast to analyses of neural coding, the cellular mechanics of burst mechanisms are typically elucidated from the neural response to static input. Bridging the mechanics of bursting with coding of dynamic stimuli is an important step in establishing theories of neural coding. Electrosensory lateral line lobe (ELL) pyramidal neurons respond to static inputs with a complex dendrite-dependent burst mechanism. Here we show that in response to dynamic broadband stimuli, these bursts lack some of the electrophysiological characteristics observed in response to static inputs. A simple leaky integrate-and-fire (LIF)-style model with a dendrite-dependent depolarizing afterpotential (DAP) is sufficient to match both the output statistics and coding performance of experimental spike trains. We use this model to investigate a simplification of interval coding where the burst interspike interval (ISI) codes for the scale of a canonical upstroke rather than a multidimensional stimulus feature. Using this stimulus reduction, we compute a quantization of the burst ISIs and the upstroke scale to show that the mutual information rate of the interval code is maximized at a moderate DAP amplitude. The combination of a reduced description of ELL pyramidal cell bursting and a simplification of the interval code increases the generality of ELL burst codes to other sensory modalities.
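A reduced model of this kind can be written in a few lines. The sketch below is a generic LIF neuron with a spike-triggered, exponentially decaying depolarizing afterpotential; all parameter values are illustrative placeholders, not the fitted model from the article. Burst interspike intervals can then be read off the output spike times.

```python
# Generic LIF neuron with a spike-triggered depolarizing afterpotential (DAP).
# Parameters are illustrative, not the values fitted in the article.
import numpy as np

def lif_dap(I, dt=1e-4, tau_m=0.01, v_th=1.0, v_reset=0.0,
            dap_amp=0.5, tau_dap=0.005):
    v, dap, spikes = 0.0, 0.0, []
    for step, i_t in enumerate(I):
        dap -= dt / tau_dap * dap            # DAP current decays exponentially
        v += dt / tau_m * (-v + i_t + dap)   # leaky integration of input plus DAP
        if v >= v_th:
            spikes.append(step * dt)
            v = v_reset
            dap += dap_amp                   # each spike triggers a fresh DAP,
                                             # the mechanism that favours bursting
    return np.array(spikes)

# Example: broadband (noisy) drive, then burst ISIs from the spike times.
rng = np.random.default_rng(0)
stim = 1.05 + 0.3 * rng.standard_normal(int(2.0 / 1e-4))
isis = np.diff(lif_dap(stim))
```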


2020 ◽  
Author(s):  
Daniel Lakens ◽  
Lisa Marie DeBruine

Making scientific information machine-readable greatly facilitates its re-use. Many scientific articles aim to test a hypothesis, so making the tests of statistical predictions easier to find and access could be very beneficial. We propose an approach that can be used to make hypothesis tests machine-readable. We believe there are two benefits to specifying a hypothesis test in a way that a computer can evaluate whether the statistical prediction is corroborated or not. First, hypothesis tests will become more transparent, falsifiable, and rigorous. Second, scientists will benefit if information related to hypothesis tests in scientific articles is easily findable and re-usable, for example when performing meta-analyses, during peer review, and when examining meta-scientific research questions. We examine what a machine-readable hypothesis test should look like, and demonstrate the feasibility of machine-readable hypothesis tests in a real-life example using the fully operational prototype R package scienceverse.
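As a language-agnostic illustration of the idea (a sketch only; the field names and evaluator below are assumptions, not the scienceverse schema or API), a hypothesis can be stored as structured data together with a criterion that a program evaluates against the analysis output.

```python
# Sketch: a hypothesis as structured data plus a machine-evaluable criterion.
import json
from scipy import stats

hypothesis = {
    "id": "H1",
    "description": "Group A scores higher than group B.",
    "analysis": "welch_t_test",
    "criterion": {"parameter": "p_value", "operator": "<", "value": 0.05},
}

def evaluate(hypothesis, result):
    # Return True if the registered criterion holds for the analysis result.
    crit = hypothesis["criterion"]
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b}
    return bool(ops[crit["operator"]](result[crit["parameter"]], crit["value"]))

group_a, group_b = [5.1, 4.8, 5.6, 5.0], [4.2, 4.5, 4.1, 4.6]
t, p = stats.ttest_ind(group_a, group_b, equal_var=False, alternative="greater")
result = {"t_statistic": float(t), "p_value": float(p)}
print(json.dumps({"hypothesis": hypothesis["id"],
                  "result": result,
                  "corroborated": evaluate(hypothesis, result)}, indent=2))
```

Because both the prediction and the verdict are plain data, they can be harvested automatically for meta-analysis or checked during peer review.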


2019 ◽  
Vol 48 (4) ◽  
pp. 241-243
Author(s):  
Jordan Rickles ◽  
Jessica B. Heppen ◽  
Elaine Allensworth ◽  
Nicholas Sorensen ◽  
Kirk Walters

In response to the concerns White raises in his technical comment on Rickles, Heppen, Allensworth, Sorensen, and Walters (2018), we discuss whether it would have been appropriate to test for nominally equivalent outcomes, given that the study was initially conceived and designed to test for significant differences, and that the conclusion of no difference was not solely based on a null hypothesis test. To further support the article’s conclusion, confidence intervals for the null hypothesis tests and a test of equivalence are provided.
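For readers unfamiliar with equivalence testing, the sketch below shows the generic two one-sided tests (TOST) procedure for two group means on simulated data; the equivalence bounds and data are illustrative and are not taken from the article.

```python
# Generic two one-sided tests (TOST) for equivalence of two means (Welch setup).
# Bounds and data are illustrative only, not those analysed in the article.
import numpy as np
from scipy import stats

def tost(x, y, low, high):
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1) / nx, np.var(y, ddof=1) / ny
    diff, se = np.mean(x) - np.mean(y), np.sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (nx - 1) + vy ** 2 / (ny - 1))  # Welch-Satterthwaite
    p_lower = 1 - stats.t.cdf((diff - low) / se, df)   # H0: difference <= lower bound
    p_upper = stats.t.cdf((diff - high) / se, df)      # H0: difference >= upper bound
    return max(p_lower, p_upper)  # equivalence is concluded if this maximum is below alpha

rng = np.random.default_rng(1)
x, y = rng.normal(0.00, 1.0, 300), rng.normal(0.05, 1.0, 300)
print(tost(x, y, low=-0.2, high=0.2))  # small p-value: means equivalent within +/- 0.2
```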


Author(s):  
Rui Zhen Tan ◽  
Corey Markus ◽  
Tze Ping Loh

Objectives The interpretation of delta check rules in a panel of tests should differ from that at the single-analyte level, as the number of hypothesis tests conducted (i.e. the number of delta check rules) is greater and needs to be taken into account. Methods De-identified paediatric laboratory results were extracted, and the first two serial results for each patient were used for analysis. Analytes were grouped into four common laboratory test panels consisting of renal, liver, bone and full blood count panels. The sensitivities and specificities of delta check limits as discrete panel tests were assessed by random permutation of the original data-set to simulate a wrong blood in tube situation. Results Generally, as the number of analytes included in a panel increases, the delta check rules deteriorate considerably due to the increased number of false positives, i.e. the increased number of hypothesis tests performed. To reduce high false-positive rates, patient results may be rejected from autovalidation only if the number of analytes failing the delta check limits exceeds a certain threshold of the total number of analytes in the panel (N). Our study found that the use of the [Formula: see text] rule for panel results had a specificity >90% and sensitivity ranging from 25% to 45% across the four common laboratory panels. However, this did not achieve performance close to that of some analytes considered in isolation. Conclusions The simple [Formula: see text] rule reduces the false-positive rate and minimizes unnecessary, resource-intensive investigations for potentially erroneous results.
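The permutation-based evaluation can be sketched generically as below, using synthetic data and a placeholder threshold k (the article's specific rule, which appears as [Formula: see text] above, is not reproduced): a wrong-blood-in-tube case is simulated by shuffling the second set of results across patients, and a sample is flagged only when at least k of the N panel analytes fail their delta limits.

```python
# Sketch of panel-level delta checking with a "flag only if >= k analytes fail" rule.
# Data are synthetic and k is a placeholder; this is not the article's specific rule.
import numpy as np

rng = np.random.default_rng(0)
n_patients, n_analytes, k = 1000, 8, 3

first = rng.normal(100, 15, size=(n_patients, n_analytes))        # first serial result
second = first + rng.normal(0, 5, size=(n_patients, n_analytes))  # same-patient follow-up
delta_limits = np.full(n_analytes, 12.0)                          # per-analyte delta limits

def flag_panel(a, b, limits, k):
    # Flag a sample when at least k analytes exceed their delta check limit.
    return np.sum(np.abs(b - a) > limits, axis=1) >= k

# Wrong blood in tube: pair each first result with another patient's second result.
swapped = second[rng.permutation(n_patients)]
sensitivity = flag_panel(first, swapped, delta_limits, k).mean()
specificity = 1 - flag_panel(first, second, delta_limits, k).mean()
print(sensitivity, specificity)
```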


2005 ◽  
Vol 92 (2) ◽  
pp. 110-127 ◽  
Author(s):  
Leonel Gómez ◽  
Ruben Budelli ◽  
Rafael Saa ◽  
Michael Stiber ◽  
José Pedro Segundo

2013 ◽  
Vol 25 (2) ◽  
pp. 418-449 ◽  
Author(s):  
Matthew T. Harrison

Controlling for multiple hypothesis tests using standard spike resampling techniques often requires prohibitive amounts of computation. Importance sampling techniques can be used to accelerate the computation. The general theory is presented, along with specific examples for testing differences across conditions using permutation tests and for testing pairwise synchrony and precise lagged-correlation between many simultaneously recorded spike trains using interval jitter.
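As a toy illustration of why importance sampling helps here (a generic tail-probability example, not the article's construction): after correcting for many simultaneous tests, the p-values that matter are tiny, and naive Monte Carlo resampling needs an enormous number of surrogates to resolve them, whereas sampling from a proposal shifted toward the tail and reweighting does so cheaply.

```python
# Importance sampling for a tiny tail probability P(Z >= z), Z ~ N(0, 1).
import numpy as np

rng = np.random.default_rng(0)
z = 5.0            # multiple-comparison corrections make such small p-values relevant
n = 100_000

# Naive Monte Carlo: almost no samples land beyond z, so the estimate is 0 or very noisy.
naive = np.mean(rng.standard_normal(n) >= z)

# Importance sampling: draw from N(z, 1) and reweight by the density ratio phi(x)/phi(x - z).
x = rng.normal(loc=z, scale=1.0, size=n)
weights = np.exp(-z * x + z**2 / 2)        # N(0,1) pdf divided by N(z,1) pdf
is_estimate = np.mean((x >= z) * weights)

print(naive, is_estimate)                  # is_estimate is close to the true 2.87e-7
```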


1970 ◽  
Vol 6 ◽  
pp. 331-335 ◽  
Author(s):  
S.K. Srinivasan ◽  
G. Rajamannar

2020 ◽  
Author(s):  
Mark Naylor ◽  
Kirsty Bayliss ◽  
Finn Lindgren ◽  
Francesco Serafini ◽  
Ian Main

Many earthquake forecasting approaches have developed bespoke codes to model and forecast the spatio-temporal evolution of seismicity. At the same time, the statistics community has been working on a range of point process modelling codes. For example, motivated by ecological applications, inlabru models spatio-temporal point processes as a log-Gaussian Cox process and is implemented in R. Here we present an initial implementation of inlabru to model seismicity. This fully Bayesian approach is computationally efficient because it uses a nested Laplace approximation: posteriors are assumed to be Gaussian, so their means and standard deviations can be estimated deterministically rather than constructed through sampling. Further, building on existing R packages for handling spatial data, it can construct covariate maps from diverse data types, such as fault maps, in an intuitive and simple manner.

We then present an initial application to the California earthquake catalogue to determine the relative performance of different datasets for describing the spatio-temporal evolution of seismicity.
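For readers unfamiliar with the model class, the sketch below illustrates the log-Gaussian Cox process structure described above on a regular grid, in Python rather than the R/inlabru workflow: the log-intensity is a covariate effect (here a hypothetical distance-to-fault map) plus a Gaussian random field, and counts per grid cell are Poisson given that intensity. It shows the generative structure only, not the INLA-based inference.

```python
# Generative sketch of a log-Gaussian Cox process on a grid (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
nx, ny, cell_area = 50, 50, 1.0

# Hypothetical covariate, e.g. distance to a mapped fault, on a regular grid.
xx, yy = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
fault_distance = np.abs(yy - 0.5)

# Gaussian random field approximated by smoothing white noise (a crude stand-in
# for the SPDE/Matern fields used by INLA-based approaches).
noise = rng.standard_normal((ny, nx))
kernel = np.exp(-0.5 * (np.arange(-10, 11) / 3.0) ** 2)
kernel /= kernel.sum()
field = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, noise)
field = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, field)

# Log-intensity = intercept + covariate effect + latent field; counts are Poisson.
beta0, beta_fault = 1.0, -4.0
log_intensity = beta0 + beta_fault * fault_distance + field
counts = rng.poisson(np.exp(log_intensity) * cell_area)   # simulated events per cell
```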

