Necessary Conditions for Reliable Propagation of Slowly Time-Varying Firing Rate

Author(s):  
Navid Hasanzadeh ◽  
Mohammadreza Rezaei ◽  
Sayan Faraz ◽  
Milos R. Popovic ◽  
Milad Lankarany

1980 ◽
Vol 102 (2) ◽  
pp. 379-383 ◽  
Author(s):  
M. Benton ◽  
A. Seireg

This paper investigates the conditions for transforming coupled systems with time-varying stiffness to normal mode coordinates. The stability regions are determined from the normal mode equations, and the effect of modal damping on the width of these regions is investigated. An approximate method for uncoupling is also presented which can be used to give good practical solutions in situations where the theoretical necessary conditions are not met. The proposed method can provide a simple and effective tool for the analysis of parametric vibrations in systems with any number of degrees of freedom, as in the case of high-speed gear trains.
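The kind of parametric-stability question the abstract describes can be illustrated on the simplest possible case: a single-degree-of-freedom oscillator with time-varying stiffness, x'' + (δ + ε·cos t)·x = 0 (a Mathieu-type equation). The sketch below integrates the monodromy matrix over one period and applies the Floquet criterion |trace| ≤ 2; the specific equation, the parameter values, and the RK4 integrator are illustrative assumptions, not the paper's multi-degree-of-freedom method.

```python
import numpy as np

def monodromy(delta, eps, n_steps=2000):
    """Integrate x'' + (delta + eps*cos(t)) x = 0 over one period T = 2*pi
    with classical RK4, propagating two independent initial conditions
    side by side to build the monodromy (state-transition) matrix."""
    T = 2.0 * np.pi
    h = T / n_steps

    def f(t, y):
        # y rows: positions and velocities of the two fundamental solutions
        x, v = y
        return np.array([v, -(delta + eps * np.cos(t)) * x])

    y = np.eye(2)  # columns start at (x, v) = (1, 0) and (0, 1)
    t = 0.0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h / 2 * k1)
        k3 = f(t + h / 2, y + h / 2 * k2)
        k4 = f(t + h, y + h * k3)
        y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

def is_stable(delta, eps):
    # Undamped Hamiltonian system: det(M) = 1, so the Floquet multipliers
    # stay on the unit circle iff |trace(M)| <= 2.
    return abs(np.trace(monodromy(delta, eps))) <= 2.0

print(is_stable(0.50, 0.10))  # away from the instability tongues -> stable
print(is_stable(0.25, 0.20))  # inside the first parametric resonance tongue -> unstable
```

The second parameter pair sits inside the first instability tongue near δ = 1/4, where the stiffness modulation pumps energy into the oscillation at twice its natural frequency; adding damping, as the paper discusses, narrows these unstable regions.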


2014 ◽  
Vol 926-930 ◽  
pp. 2982-2985
Author(s):  
Rui Hui Peng ◽  
Yong Sheng Lv ◽  
Xiang Wei Wang ◽  
Ying Qian Gao

The limitations of the basic active deception jamming model against SAR reconnaissance were analyzed in detail, and the necessary conditions for false-target deployment under the model were derived. Several modifications to the basic model were then made: a target-centric signal transmission schedule was proposed in place of the jammer-centric schedule, and time-varying repetition delay and time-varying Doppler parameters of the false signal were used instead of fixed parameters. The modified model extends the false target's displacement range and improves the fidelity of the false target. Calculation examples and discussion are given at the end.
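The time-varying repetition delay and Doppler parameters mentioned above follow from standard radar geometry: for a false target at range R(t) from the radar, the round-trip delay is τ(t) = 2R(t)/c and the Doppler shift is f_d(t) = -(2/λ)·dR/dt. A minimal sketch of computing such profiles, with a hypothetical receding-target trajectory and an assumed X-band wavelength (both illustrative choices, not from the paper):

```python
import numpy as np

c = 3e8            # propagation speed (m/s)
wavelength = 0.03  # assumed X-band wavelength (m)

# hypothetical false-target range history (m) over slow time (s)
t = np.linspace(0.0, 1.0, 1001)
R = 10_000.0 + 150.0 * t            # target receding at 150 m/s

tau = 2.0 * R / c                   # time-varying round-trip delay (s)
f_d = -(2.0 / wavelength) * np.gradient(R, t)  # time-varying Doppler (Hz)

print(tau[0], f_d[0])  # ~66.7 us delay, -10 kHz Doppler at t = 0
```

A jammer that holds τ and f_d fixed instead of updating them along such a profile produces a false target whose delay and Doppler histories are mutually inconsistent, which is exactly the fidelity problem the modified model addresses.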


1996 ◽  
Vol 118 (1) ◽  
pp. 132-138
Author(s):  
Lilai Yan ◽  
C. James Li ◽  
Tung-Yung Huang

This paper describes a new learning algorithm for time-varying recurrent neural networks whose weights are functions of time instead of constants. First, an objective functional is formulated as a function of the weight functions, quantifying the discrepancies between the desired outputs and the network's outputs. Then, dynamical optimization is used to derive the necessary conditions for an extremum of the functional. These necessary conditions result in a two-point boundary-value problem, which is subsequently solved by the Hilbert function space BFGS quasi-Newton algorithm, obtained by using the dyadic operator to extend the Euclidean-space BFGS method into an infinite-dimensional, real Hilbert space. Finally, the ability of the network and the learning algorithm is demonstrated in the identification of three simulated nonlinear systems and a resistance spot welding process.
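The Euclidean-space BFGS method that the paper lifts into a Hilbert space of weight functions can be sketched in finite dimensions. The quadratic objective, Armijo line search, and parameter values below are illustrative assumptions, not the paper's formulation; they show only the inverse-Hessian update that the dyadic-operator extension generalizes.

```python
import numpy as np

# illustrative strictly convex quadratic: f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

def bfgs(f, grad, x0, tol=1e-10, max_iter=50):
    """Plain Euclidean BFGS: maintain an inverse-Hessian estimate H and
    update it from gradient differences via rank-one (outer-product)
    terms -- the finite-dimensional analogue of the dyadic operator."""
    x = x0.astype(float)
    H = np.eye(len(x0))
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                      # quasi-Newton search direction
        a = 1.0                         # backtracking (Armijo) line search
        while f(x + a * d) > f(x) + 1e-4 * a * (g @ d):
            a *= 0.5
        s = a * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        rho = 1.0 / (y @ s)
        I = np.eye(len(x))
        # standard BFGS inverse-Hessian update
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

x_star = bfgs(f, grad, np.zeros(2))
print(x_star)  # converges to A^{-1} b = [0.2, 0.4]
```

In the paper's setting, x is replaced by weight *functions*, the inner products by Hilbert-space inner products, and the outer products by dyadic operators, while the update formula keeps the same shape.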


2005 ◽  
Vol 94 (1) ◽  
pp. 8-25 ◽  
Author(s):  
Robert E. Kass ◽  
Valérie Ventura ◽  
Emery N. Brown

Analysis of data from neurophysiological investigations can be challenging. Particularly when experiments involve dynamics of neuronal response, scientific inference can become subtle and some statistical methods may make much more efficient use of the data than others. This article reviews well-established statistical principles, which provide useful guidance, and argues that good statistical practice can substantially enhance results. Recent work on estimation of firing rate, population coding, and time-varying correlation provides improvements in experimental sensitivity equivalent to large increases in the number of neurons examined. Modern nonparametric methods are applicable to data from repeated trials. Many within-trial analyses based on a Poisson assumption can be extended to non-Poisson data. New methods have made it possible to track changes in receptive fields, and to study trial-to-trial variation, with modest amounts of data.


2017 ◽  
Vol 118 (1) ◽  
pp. 544-563 ◽  
Author(s):  
Nathaniel Zuk ◽  
Bertrand Delgutte

Binaural cues occurring in natural environments are frequently time varying, either from the motion of a sound source or through interactions between the cues produced by multiple sources. Yet, a broad understanding of how the auditory system processes dynamic binaural cues is still lacking. In the current study, we directly compared neural responses in the inferior colliculus (IC) of unanesthetized rabbits to broadband noise with time-varying interaural time differences (ITD) with responses to noise with sinusoidal amplitude modulation (SAM) over a wide range of modulation frequencies. On the basis of prior research, we hypothesized that the IC, one of the first stages to exhibit tuning of firing rate to modulation frequency, might use a common mechanism to encode time-varying information in general. Instead, we found weaker temporal coding for dynamic ITD compared with amplitude modulation and stronger effects of adaptation for amplitude modulation. The differences in temporal coding of dynamic ITD compared with SAM at the single-neuron level could be a neural correlate of “binaural sluggishness,” the inability to perceive fluctuations in time-varying binaural cues at high modulation frequencies, for which a physiological explanation has so far remained elusive. At ITD-variation frequencies of 64 Hz and above, where a temporal code was less effective, noise with a dynamic ITD could still be distinguished from noise with a constant ITD through differences in average firing rate in many neurons, suggesting a frequency-dependent tradeoff between rate and temporal coding of time-varying binaural information. NEW & NOTEWORTHY Humans use time-varying binaural cues to parse auditory scenes comprising multiple sound sources and reverberation. However, the neural mechanisms for doing so are poorly understood. Our results demonstrate a potential neural correlate for the reduced detectability of fluctuations in time-varying binaural information at high speeds, as occurs in reverberation. The results also suggest that the neural mechanisms for processing time-varying binaural and monaural cues are largely distinct.


1996 ◽  
Vol 8 (1) ◽  
pp. 67-84 ◽  
Author(s):  
David A. August ◽  
William B. Levy

Reconstructing a time-varying stimulus estimate from a spike train (Bialek's “decoding” of a spike train) has become an important way to study neural information processing. In this paper, we describe a simple method for reconstructing a time-varying current injection signal from the simulated spike train it produces. This technique extracts most of the information from the spike train, provided that the input signal is appropriately matched to the spike generator. To conceptualize this matching, we consider spikes as instantaneous “samples” of the somatic current. The Sampling Theorem is then applicable, and it suggests that the bandwidth of the injected signal not exceed half the spike generator's average firing rate. The average firing rate, in turn, depends on the amplitude range and DC bias of the injected signal. We hypothesize that nature faces similar problems and constraints when transmitting a time-varying waveform from the soma of one neuron to the dendrite of the postsynaptic cell.
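The Sampling Theorem argument can be illustrated with the uniform-sampling idealization it rests on: reconstruct a band-limited signal from samples taken at the "average firing rate" via Whittaker–Shannon (sinc) interpolation, which succeeds only while the signal bandwidth stays below half that rate. The signal frequencies, sampling rate, and time window below are arbitrary choices for illustration, not values from the paper.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild a band-limited signal
    from uniform samples taken at rate fs, evaluated at times t."""
    n = np.arange(len(samples))
    # each sample contributes one shifted sinc kernel
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]),
                  axis=1)

fs = 40.0                            # stand-in for the average firing rate (Hz)
t_samp = np.arange(0.0, 2.0, 1 / fs)     # uniform "spike" (sample) times
t_fine = np.linspace(0.2, 1.8, 500)      # interior points (finite-window edges excluded)

# signal bandwidth (5 Hz) well below fs/2 = 20 Hz: reconstruction succeeds
x_hat = sinc_reconstruct(np.sin(2 * np.pi * 5.0 * t_samp), fs, t_fine)
err_ok = np.max(np.abs(x_hat - np.sin(2 * np.pi * 5.0 * t_fine)))

# signal frequency (25 Hz) above fs/2: aliasing, reconstruction fails
x_bad = sinc_reconstruct(np.sin(2 * np.pi * 25.0 * t_samp), fs, t_fine)
err_alias = np.max(np.abs(x_bad - np.sin(2 * np.pi * 25.0 * t_fine)))

print(err_ok, err_alias)  # small interior error vs. order-1 aliasing error
```

In the paper's framing, raising the amplitude range or DC bias of the injected current raises the average firing rate, which widens the admissible signal bandwidth in exactly the way increasing fs does here.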

