SYNCHRONIZATION AND DECODING INTERSPIKE INTERVALS

2002 · Vol 12 (05) · pp. 983-999
Author(s): Seung Kee Han, Won Sup Kim, Hyungtae Kook

Decoding of a sequence of interspike intervals (ISIs) of a neuron model driven by a chaotic stimulus is performed using the attractor reconstruction method. As the stimulus strength increases, both the stimulus-estimation error and the error in cross-predicting the stimulus from the ISI information tend to decrease, with transitional drops at certain parameter values. These behaviors are well explained in the context of synchronization between the two chaotic patterns, that of the stimulus and that of the ISI sequence. The result implies that a new scheme of temporal coding in the low-firing-rate regime can be achieved, one that exploits the preservation of the nonlinear deterministic structure of the stimulus.
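As an illustration of the attractor reconstruction step, the sketch below delay-embeds an ISI sequence into state-space points. This is a minimal sketch: the embedding dimension, the delay, and the surrogate ISI data are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch: delay-coordinate (attractor) reconstruction from an ISI
# sequence. A real experiment would use ISIs recorded from a neuron model
# driven by a chaotic stimulus; here a placeholder sequence stands in.
import numpy as np

def delay_embed(isi, dim=3, tau=1):
    """Embed a 1-D ISI sequence into dim-dimensional delay coordinates."""
    n = len(isi) - (dim - 1) * tau
    return np.column_stack([isi[i * tau : i * tau + n] for i in range(dim)])

rng = np.random.default_rng(0)
isi = rng.uniform(0.01, 0.1, size=500)   # placeholder ISI sequence (seconds)
points = delay_embed(isi, dim=3, tau=1)
print(points.shape)                      # (498, 3): reconstructed state-space points
```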

2018 · Vol 72 (6) · pp. 1453-1465
Author(s): Arthur Prével, Vinca Rivière, Jean-Claude Darcheville, Gonzalo P. Urcelay, Ralph R. Miller

Prével and colleagues reported excitatory learning with a backward conditioned stimulus (CS) in a conditioned reinforcement preparation. Their results add to existing evidence that backward CSs are sometimes excitatory and were taken to challenge the view that learning is driven by prediction-error reduction, which assumes that only predictive (i.e., forward) relationships are learned. The results were instead consistent with the assumptions of both Miller’s Temporal Coding Hypothesis and Wagner’s Sometimes Opponent Processes (SOP) model. The present experiment extended the conditioned reinforcement preparation developed by Prével et al. to a backward second-order conditioning preparation, with the aim of discriminating between these two accounts. We tested whether a second-order CS can serve as an effective conditioned reinforcer even when the first-order CS with which it was paired is a backward CS that elicits no responding. Evidence of conditioned reinforcement was found, despite no conditioned response (CR) being elicited by the first-order backward CS. This evidence of second-order conditioning in the absence of excitatory conditioning to the first-order CS is interpreted as a challenge to SOP. In contrast, the present results are consistent with the Temporal Coding Hypothesis and constitute a conceptual replication in humans of previous reports of excitatory second-order conditioning in rodents with a backward CS. The proposal is made that learning is driven by “discrepancy” with prior experience as opposed to “prediction error.”


1977 · Vol 12 (4) · p. 667
Author(s): P. P. Boyle, A. L. Ananthanarayan

The Black-Scholes option pricing formula assumes that the variance of the return on the underlying stock is known with certainty. In practice an estimate of the variance, based on a sample of historical stock prices, is used. The estimation error in the variance induces error in the option price. Since the option price is a nonlinear function of the variance, an unbiased estimate of the variance does not produce an unbiased estimate of the option price. For reasonable parameter values, it is shown that the magnitude of the bias is not large.
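To see why an unbiased variance estimate yields a biased price, note that the option price is a nonlinear function of the variance, so the expectation does not pass through (Jensen's inequality). The minimal sketch below simulates this; all parameter values (spot, strike, rate, maturity, sample size) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: feed an unbiased sample-variance estimate into Black-Scholes
# and measure the resulting bias in the call price by Monte Carlo.
import numpy as np
from scipy.stats import norm

def bs_call(S, K, r, T, sigma2):
    """Black-Scholes call price as a function of the return variance sigma2."""
    sigma = np.sqrt(sigma2)
    d1 = (np.log(S / K) + (r + 0.5 * sigma2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

rng = np.random.default_rng(1)
true_sigma2, n_obs = 0.04, 60                  # true annual variance, sample size
true_price = bs_call(100, 100, 0.05, 0.5, true_sigma2)

# Sampling distribution of the unbiased estimator: (n-1) s^2 / sigma^2 ~ chi^2_{n-1}
s2 = true_sigma2 * rng.chisquare(n_obs - 1, size=100_000) / (n_obs - 1)
print(np.mean(s2) - true_sigma2)               # ~0: the variance estimate is unbiased
print(np.mean(bs_call(100, 100, 0.05, 0.5, s2)) - true_price)  # small, nonzero price bias
```

For at-the-money parameters like these, the simulated bias is small relative to the price itself, consistent with the abstract's conclusion.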


2015 · Vol 91 (2)
Author(s): Finn Müller-Hansen, Felix Droste, Benjamin Lindner

2018 · Vol 2018 (3) · pp. 84-104
Author(s): Takao Murakami, Hideitsu Hino, Jun Sakuma

A number of studies have recently been made on discrete distribution estimation in the local model, in which users obfuscate their personal data (e.g., location, response in a survey) by themselves and a data collector estimates a distribution of the original personal data from the obfuscated data. Unlike the centralized model, in which a trusted database administrator can access all users’ personal data, the local model does not suffer from the risk of data leakage. A representative privacy metric in this model is LDP (Local Differential Privacy), which controls the amount of information leakage by a parameter ε called the privacy budget. When ε is small, a large amount of noise is added to the personal data, and therefore users’ privacy is strongly protected. However, when the number of users n is small (e.g., a small-scale enterprise may not be able to collect large samples) or when most users adopt a small value of ε, the estimation of the distribution becomes a very challenging task. The goal of this paper is to accurately estimate the distribution in the cases explained above. To achieve this goal, we focus on the EM (Expectation-Maximization) reconstruction method, which is a state-of-the-art statistical inference method, and propose a method to correct its estimation error (i.e., the difference between the estimate and the true value) using the theory of Rilstone et al. We prove that the proposed method reduces the MSE (Mean Square Error) under some assumptions. We also evaluate the proposed method using three large-scale datasets, two of which contain location data while the third contains census data. The results show that the proposed method significantly outperforms the EM reconstruction method on all of the datasets when n or ε is small.
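For concreteness, the sketch below shows plain EM reconstruction under k-ary randomized response, a standard LDP mechanism. It does not reproduce the paper's Rilstone-based bias correction; the values of k, ε, and the test distribution are illustrative assumptions.

```python
# Minimal sketch: discrete distribution estimation in the local model.
# Each user applies k-ary randomized response; the collector runs EM on
# the obfuscated reports to recover the original distribution.
import numpy as np

def rr_channel(k, eps):
    """Row-stochastic matrix Q[x, y] = P(report y | true value x) for k-RR."""
    p = np.exp(eps) / (np.exp(eps) + k - 1)
    Q = np.full((k, k), (1.0 - p) / (k - 1))
    np.fill_diagonal(Q, p)
    return Q

def em_reconstruct(reports, k, eps, iters=200):
    """EM estimate of the true distribution from obfuscated reports."""
    Q = rr_channel(k, eps)
    pi = np.full(k, 1.0 / k)                      # uniform initialization
    counts = np.bincount(reports, minlength=k)    # histogram of obfuscated data
    for _ in range(iters):
        joint = pi[:, None] * Q                   # pi(x) * Q(y | x)
        post = joint / joint.sum(axis=0)          # posterior P(x | y), per column y
        pi = post @ counts / counts.sum()         # M-step: average the posteriors
    return pi

rng = np.random.default_rng(2)
k, eps, n = 5, 1.0, 2000
true_pi = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
x = rng.choice(k, size=n, p=true_pi)
Q = rr_channel(k, eps)
y = np.array([rng.choice(k, p=Q[xi]) for xi in x])  # locally obfuscated reports
print(np.round(em_reconstruct(y, k, eps), 3))
```

With small n or small ε, this raw EM estimate becomes noisy, which is exactly the regime the paper's error-correction method targets.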


Author(s): Parvaneh Rashvand, Mohammad Reza Ahmadzadeh, Farzaneh Shayegh

In contrast to earlier artificial neural networks (ANNs), spiking neural networks (SNNs) work based on temporal coding approaches. For the proposed SNN, the number of neurons, the neuron model, the encoding method, and the design of the learning algorithm are described explicitly. It is also discussed how optimizing the SNN parameters on physiological grounds, and maximizing the information they pass, leads to a more robust network. In this paper, inspired by the “center-surround” structure of the receptive fields in the retina, and the amount of overlap that they have, a robust SNN is implemented. It is based on the Integrate-and-Fire (IF) neuron model and uses time-to-first-spike coding to train the network by a newly proposed method. The Iris and MNIST datasets were employed to evaluate the performance of the proposed network, whose accuracy, with 60 input neurons, was 96.33% on the Iris dataset. The network was trained in only 45 iterations, indicating a reasonable convergence rate. For the MNIST dataset, when the gray level of each pixel was taken as the network input, 600 input neurons were required and the accuracy was 90.5%. Next, 14 structural features were used as input, so the number of input neurons decreased to 210 and the accuracy increased to 95%, meaning that an SNN with fewer input neurons and good skill was implemented. The proposed SNN was also applied to the ABIDE I dataset, whose 184 records comprise 79 healthy controls and 105 people with autism. One characteristic that can differentiate these two classes is the entropy of the data, so Shannon entropy is used for feature extraction. Applying these values to the proposed SNN, an accuracy of 84.42% was achieved in only 120 iterations, which is a good result compared with recent work.
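A minimal sketch of the two ingredients named above, time-to-first-spike encoding and a non-leaky IF layer, is given below. The threshold, time grid, and random weights are illustrative assumptions, not the paper's trained network or its learning rule.

```python
# Minimal sketch: time-to-first-spike (TTFS) encoding of analog inputs,
# followed by a non-leaky integrate-and-fire (IF) output layer whose
# earliest-firing neuron would serve as the predicted class.
import numpy as np

def ttfs_encode(x, t_max=1.0):
    """Stronger inputs fire earlier: spike time = t_max * (1 - x), x in [0, 1]."""
    return t_max * (1.0 - np.clip(x, 0.0, 1.0))

def if_first_spikes(spike_times, weights, threshold=1.0, dt=0.01, t_max=1.0):
    """Return each output neuron's first spike time (inf if it never fires)."""
    out = np.full(weights.shape[0], np.inf)
    for t in np.arange(0.0, t_max + dt, dt):
        arrived = (spike_times <= t).astype(float)  # input spikes seen so far
        v = weights @ arrived                       # membrane potential (no leak)
        newly = (v >= threshold) & np.isinf(out)    # neurons crossing threshold now
        out[newly] = t
    return out

rng = np.random.default_rng(3)
x = rng.uniform(size=8)                   # e.g., normalized pixel intensities
times = ttfs_encode(x)
W = rng.uniform(0.0, 0.4, size=(3, 8))    # 3 output neurons, random weights
print(if_first_spikes(times, W))          # earliest spike = winning class
```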


Geophysics · 1998 · Vol 63 (2) · pp. 713-722
Author(s): Enders A. Robinson

A gap-deconvolution filter with gap α is defined as the prediction-error operator with prediction distance α. A spike-deconvolution filter is defined as the prediction-error operator with prediction distance unity; that is, a spike-deconvolution filter is the special case of a gap-deconvolution filter with gap equal to one time unit. Generally, the designation “gap deconvolution” is reserved for the case when α is greater than one, and the term “spike deconvolution” is used when α is equal to one.

It is often stated that gap deconvolution with gap α shortens an input wavelet of arbitrary length to an output wavelet of length α (or less). Since an arbitrary value of α can be chosen, it would follow that resolution, or wavelet contraction, may be controlled by use of gap deconvolution. In general, this characterization of gap deconvolution is true for arbitrary α if and only if the wavelet is minimum delay (i.e., minimum phase).

The method of model-driven deconvolution can be used in the case of a nonminimum-delay wavelet. The wavelet is the convolution of a minimum-delay reverberation and a short nonminimum-delay orphan. The model specifies that the given trace is the convolution of the white reflectivity and this nonminimum-delay wavelet. The given trace yields the spike-deconvolution filter and its inverse. These two signals are then used to compute the gap-deconvolution filters and their inverses for various prediction distances. The inverses are examined, and a stable one is picked as the most likely minimum-delay reverberation. The corresponding gap-deconvolution filter is the optimum one for the removal of this minimum-delay reverberation from the given trace. As a byproduct, the minimum-delay counterpart of the orphan can be obtained.

The optimum gap-deconvolved trace is examined for zones that contain little activity, and the leading edge of the wavelet following such a zone is chosen. Next, the phase of the minimum-delay counterpart of the orphan is rotated until it fits the extracted leading edge. From the amount of phase rotation, the required phase-correcting filter can be estimated. Alternatively, downhole information, if available, can be used to estimate the phase-correcting filter. Application of the phase-correcting filter to the spike-deconvolved trace gives the required approximation to the reflectivity. As a final step, wavelet processing can be applied to yield a final interpreter trace made up of zero-phase wavelets.
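As a concrete illustration of the prediction-error operator, the sketch below designs a gap-deconvolution filter from a trace's autocorrelation via the Wiener normal equations. The filter length, prewhitening level, and synthetic minimum-delay wavelet are illustrative assumptions, not a reproduction of the model-driven procedure described above.

```python
# Minimal sketch: a gap-deconvolution filter is the prediction-error operator
# [1, 0, ..., 0, -a_1, ..., -a_n] with prediction distance alpha, where the
# prediction coefficients a solve a Toeplitz system built from the trace's
# autocorrelation. alpha = 1 gives spike deconvolution.
import numpy as np
from scipy.linalg import solve_toeplitz

def prediction_error_filter(trace, n=20, alpha=1, eps=1e-3):
    """Design a prediction-error operator of length n with gap alpha."""
    full = np.correlate(trace, trace, mode="full")
    r = full[len(trace) - 1 :]                 # one-sided autocorrelation r[0], r[1], ...
    c = r[:n].copy()
    c[0] *= 1.0 + eps                          # prewhitening for numerical stability
    a = solve_toeplitz(c, r[alpha : alpha + n])  # Wiener normal equations
    return np.concatenate(([1.0], np.zeros(alpha - 1), -a))

rng = np.random.default_rng(4)
reflectivity = rng.normal(size=500)            # white reflectivity (model assumption)
wavelet = np.array([1.0, -0.6, 0.2])           # minimum-delay wavelet (illustrative)
trace = np.convolve(reflectivity, wavelet)

f = prediction_error_filter(trace, n=20, alpha=1)   # alpha = 1: spike deconvolution
deconvolved = np.convolve(trace, f)[: len(trace)]   # approximates the reflectivity
print(np.round(f[:5], 3))
```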

