Synchronous Firing and Higher-Order Interactions in Neuron Pool

2003 ◽  
Vol 15 (1) ◽  
pp. 127-142 ◽  
Author(s):  
Shun-ichi Amari ◽  
Hiroyuki Nakahara ◽  
Si Wu ◽  
Yutaka Sakai

The stochastic mechanism of synchronous firing in a population of neurons is studied from the point of view of information geometry. Higher-order interactions of neurons, which cannot be reduced to pairwise correlations, are proved to exist in synchronous firing. In a neuron pool where each neuron fires stochastically, we study the probability distribution q(r) of the activity r, the fraction of firing neurons in the pool. When q(r) is widespread, in particular when it has two peaks, the neurons fire synchronously at some times and are quiescent at others. The mechanism generating such a distribution is interesting because, when each neuron fires independently, the law of large numbers concentrates the activity r around its mean value. Even when pairwise or third-order interactions exist, this concentration is not resolved, which shows that higher-order interactions are necessary to generate widespread activity distributions. We analyze a simple model in which neurons receive common overlapping inputs and prove that such a model can have a widespread distribution of activity, generating higher-order stochastic interactions.
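As a quick illustration of the concentration phenomenon this abstract describes, the following toy simulation (not from the paper; the pool size, trial count, and firing probabilities are arbitrary choices) compares a pool of independently firing neurons with a pool driven by a shared two-state input. The independent pool's activity r concentrates near its mean, while the common input produces a widespread, two-peaked distribution:

```python
import random

random.seed(0)

N = 200   # neurons in the pool
T = 2000  # trials

def activity_independent(p=0.3):
    # each neuron fires independently with probability p
    return sum(random.random() < p for _ in range(N)) / N

def activity_common_input(p_low=0.1, p_high=0.7):
    # a shared input switches the whole pool between a low-rate and a
    # high-rate regime, inducing correlations of all orders
    p = p_high if random.random() < 0.5 else p_low
    return sum(random.random() < p for _ in range(N)) / N

def std(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

ind = [activity_independent() for _ in range(T)]
com = [activity_common_input() for _ in range(T)]

print(std(ind))  # small: r concentrates near its mean (law of large numbers)
print(std(com))  # large: widespread, two-peaked distribution of r
```

The independent pool's spread shrinks like 1/sqrt(N), whereas the common-input pool's spread stays of order one however large N is, which is the signature of the higher-order interactions the paper analyzes.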

2010 ◽  
Vol 35 (4) ◽  
pp. 543-550 ◽  
Author(s):  
Wojciech Batko ◽  
Bartosz Przysucha

Several noise indicators are determined by the logarithmic mean L̄ = 10·log10((1/n)·Σ 10^(Li/10)) of independent random results L1, L2, …, Ln of the sound level under test. Estimating the uncertainty of such averaging requires knowledge of the probability distribution of the functional form of their calculation. The developed solution, leading to a recurrent determination of the probability distribution function for the estimate of the mean value of noise levels and of its variance, is presented in this paper.
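A minimal sketch of the logarithmic (energy-based) averaging referred to above, assuming the standard equivalent-level formula; the sample levels are made up for illustration:

```python
import math

def log_mean_level(levels_db):
    """Energy-based (logarithmic) mean of sound levels L1..Ln in dB:
    Lbar = 10 * log10((1/n) * sum(10 ** (Li / 10)))."""
    n = len(levels_db)
    return 10 * math.log10(sum(10 ** (L / 10) for L in levels_db) / n)

levels = [60.0, 70.0, 80.0]  # illustrative measurements in dB
print(round(log_mean_level(levels), 2))  # 75.68, above the arithmetic mean of 70
```

Because the averaging is done on acoustic energies, the loudest events dominate the result, which is why the uncertainty of this estimator needs the distribution-level analysis the paper develops rather than ordinary arithmetic-mean statistics.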


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Qing Yao ◽  
Bingsheng Chen ◽  
Tim S. Evans ◽  
Kim Christensen

We study the evolution of networks through ‘triplets’—three-node graphlets. We develop a method to compute a transition matrix to describe the evolution of triplets in temporal networks. To identify the importance of higher-order interactions in the evolution of networks, we compare both artificial and real-world data to a model based on pairwise interactions only. The significant differences between the matrix computed from the data and the matrix calculated from the fitted pairwise model demonstrate that non-pairwise interactions exist in various real-world systems, in space and time, such as our data sets. Furthermore, this also reveals that different patterns of higher-order interaction are involved in different real-world situations. To test our approach, we then use these transition matrices as the basis of a link prediction algorithm. We investigate our algorithm’s performance on four temporal networks, comparing our approach against ten other link prediction methods. Our results show that higher-order interactions in both space and time play a crucial role in the evolution of networks, as our method, along with two other methods based on non-local interactions, gives the best overall performance. The results also confirm the concept that higher-order interaction patterns, i.e., triplet dynamics, can help us understand and predict the evolution of different real-world systems.
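A rough sketch of the triplet-transition idea described above (not the authors' implementation): classify each three-node subgraph by its edge count (0, 1, 2, or 3, collapsing orbit distinctions the paper may retain) and count state transitions between consecutive snapshots of a temporal network:

```python
from itertools import combinations

def triplet_state(nodes, edges):
    # number of edges present among the three nodes: 0, 1, 2 (path) or 3 (triangle)
    return sum(1 for pair in combinations(nodes, 2) if frozenset(pair) in edges)

def triplet_transition_matrix(nodes, snapshots):
    """Count triplet-state transitions between consecutive snapshots.
    snapshots: list of edge sets, each edge a frozenset of two nodes."""
    T = [[0] * 4 for _ in range(4)]
    for e_now, e_next in zip(snapshots, snapshots[1:]):
        for trip in combinations(nodes, 3):
            T[triplet_state(trip, e_now)][triplet_state(trip, e_next)] += 1
    return T

# illustrative two-snapshot network on 4 nodes: a triangle closes between snapshots
nodes = [0, 1, 2, 3]
snap0 = {frozenset(p) for p in [(0, 1), (1, 2)]}
snap1 = {frozenset(p) for p in [(0, 1), (1, 2), (0, 2)]}
M = triplet_transition_matrix(nodes, [snap0, snap1])
print(M)
```

Row-normalizing M gives empirical transition probabilities between triplet states, which can then be compared against the probabilities a pairwise-only null model predicts.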


2020 ◽  
Vol 25 (3) ◽  
pp. 49
Author(s):  
Silvia Licciardi ◽  
Rosa Maria Pidatella ◽  
Marcello Artioli ◽  
Giuseppe Dattoli

In this paper, we show that the use of methods of an operational nature, such as umbral calculus, achieves a double goal: on one side, the study of the Voigt function, which plays a pivotal role in spectroscopic studies and in other applications, from a new point of view, and on the other, the introduction of a Voigt transform and its possible uses. Furthermore, by the same method, we point out that the Hermite and Laguerre functions, extensions of the corresponding polynomials to negative and/or real indices, can be expressed through a straightforward and unified definition. We illustrate how the suggested techniques provide an easy derivation of the relevant properties, along with generalizations to higher-order functions.


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Ryuya Namba

Moderate deviation principles (MDPs) for random walks on covering graphs with groups of polynomial volume growth are discussed from a geometric point of view. They deal with any intermediate spatial scalings between those of laws of large numbers and those of central limit theorems. The corresponding rate functions are given by quadratic forms determined by the Albanese metric associated with the given random walks. We apply MDPs to establish laws of the iterated logarithm on the covering graphs by characterizing the set of all limit points of the normalized random walks.


1994 ◽  
Vol 116 (4) ◽  
pp. 741-750 ◽  
Author(s):  
C. H. Venner

This paper addresses the development of efficient numerical solvers for EHL problems from a rather fundamental point of view. A work-accuracy exchange criterion is derived that can be interpreted as setting a limit on the price paid, in terms of computing time, for a solution of a given accuracy. The criterion can serve as a guideline when reviewing or selecting a numerical solver and a discretization. Earlier developed multilevel solvers for the EHL line and circular contact problems are tested against this criterion. This test shows that, to satisfy the criterion, a second-order accurate solver is needed for the point contact problem, whereas the solver developed earlier used a first-order discretization. This situation arises often in numerical analysis: a higher-order discretization is desired when a lower-order solver already exists. It is explained how, in such a case, the multigrid methodology provides an easy and straightforward way to obtain the desired higher order of approximation. This higher order is obtained at almost negligible extra work and without loss of stability. The approach was tested by raising an existing first-order multilevel solver for the EHL line contact problem to second order. Subsequently, it was used to obtain a second-order solver for the EHL circular contact problem. Results for both the line and circular contact problems are presented.


1960 ◽  
Vol 50 (4) ◽  
pp. 537-552
Author(s):  
G. N. Bycroft

An investigation is made of the effect of changing the stiffness distribution along the height of a linear shear-framed structure subjected to idealized earthquake motions. The mean value of the largest strains arising in successive earthquakes is determined, together with the associated probability distribution. It appears that the chances of finding a strain value greater than twice the mean are very small.


Filomat ◽  
2013 ◽  
Vol 27 (4) ◽  
pp. 515-528 ◽  
Author(s):  
Miodrag Mateljevic ◽  
Marek Svetlik ◽  
Miloljub Albijanic ◽  
Nebojsa Savic

In this paper we give a generalization of the Lagrange mean value theorem via lower and upper derivatives, as well as appropriate criteria of monotonicity and convexity for an arbitrary function f : (a, b) → R. Some applications to the neoclassical economic growth model are given (from a mathematical point of view).


2018 ◽  
Author(s):  
Andrey Chetverikov ◽  
Gianluca Campana ◽  
Arni Kristjansson

Our interactions with the visual world are guided by attention and visual working memory. Things that we look for and those we ignore are stored as templates that reflect our goals and the tasks at hand. The nature of such templates has been widely debated. A recent proposal is that these templates can be thought of as probabilistic representations of task-relevant features. Crucially, such probabilistic templates should accurately reflect feature probabilities in the environment. Here we ask whether observers can quickly form a correct internal model of a complex (bimodal) distribution of distractor features. We assessed observers’ representations by measuring the slowing of visual search when target features unexpectedly match a distractor template. Distractor stimuli were heterogeneous, randomly drawn on each trial from a bimodal probability distribution. Using two targets on each trial, we tested whether observers encode the full distribution, only one peak of it, or the average of the two peaks. Search was slower when the two targets corresponded to the two modes of a previous distractor distribution than when one target was at one of the modes and the other between them or outside the distribution range. Furthermore, targets on the modes were reported later than targets between the modes, which, in turn, were reported later than targets outside this range. This shows that observers use a correct internal model, representing both distribution modes using templates based on the full probability distribution rather than just one peak or simple summary statistics. The findings further confirm that performance in odd-one-out search with repeated distractors cannot be described by a simple decision rule. Our findings indicate that the probabilistic visual working memory templates guiding attention dynamically adapt to task requirements, accurately reflecting the probabilistic nature of the input.


2005 ◽  
Vol 23 (6) ◽  
pp. 429-461
Author(s):  
Ian Lerche ◽  
Brett S. Mudford

This article derives an estimation procedure to evaluate how many Monte Carlo realisations need to be done in order to achieve prescribed accuracies in the estimated mean value and also in the cumulative probabilities of achieving values greater than, or less than, a particular value as the chosen particular value is allowed to vary. In addition, by inverting the argument and asking what accuracies result for a prescribed number of Monte Carlo realisations, one can assess the computer time that would be involved should one choose to carry out the Monte Carlo realisations. The arguments and numerical illustrations are carried through in detail for the four distributions of lognormal, binomial, Cauchy, and exponential. The procedure is valid for any choice of distribution function. The general method given in Lerche and Mudford (2005) is not merely a coincidence owing to the nature of the Gaussian distribution but is of universal validity. This article provides (in the Appendices) the general procedure for obtaining equivalent results for any distribution and shows quantitatively how the procedure operates for the four specific distributions. The methodology is therefore available for any choice of probability distribution function. Some distributions have more than two parameters that are needed to define the distribution precisely. Estimates of the mean value and standard error around the mean allow determination of only two parameters for each distribution. Thus any distribution with more than two parameters has degrees of freedom that either have to be constrained from other information or that are unknown and so can be freely specified. That fluidity in such distributions allows a similar fluidity in the estimates of the number of Monte Carlo realisations needed to achieve prescribed accuracies, as well as in the estimates of achievable accuracy for a prescribed number of Monte Carlo realisations.
Without some way to control the free parameters in such distributions one will, presumably, always have such dynamic uncertainties. Even when the free parameters are known precisely, there is still considerable uncertainty in determining the number of Monte Carlo realisations needed to achieve prescribed accuracies, and in the accuracies achievable with a prescribed number of Monte Carlo realisations, because of the different functional forms of probability distribution that can be invoked from which one chooses the Monte Carlo realisations. Without knowledge of the underlying distribution functions that are appropriate for a given problem, the choices one makes for numerical implementation of the basic logic procedure will presumably bias the estimates of achievable accuracy and of the number of Monte Carlo realisations one should undertake. The cautionary note, which is the main point of this article and is exhibited sharply with numerical illustrations, is that one must specify precisely what distributions one is using and precisely what free parameter values one has chosen (and why those choices were made) in assessing the accuracy achievable and the number of Monte Carlo realisations needed with such choices. Without such information it is not a very useful exercise to undertake Monte Carlo realisations, because other investigations, using other distributions and other values of the available free parameters, will arrive at very different conclusions.


2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
Huilin Huang

We consider an inhomogeneous growing network with two types of vertices. The degree sequences of the two types of vertices are investigated, respectively. We not only prove that the asymptotic degree distribution of type s for this process is a power law with exponent 2 + (1 + δq_s + β(1 − q_s))/(αq_s), but also give the strong law of large numbers for the degree sequences of the two types of vertices, using a different method instead of Azuma’s inequality. Then we determine asymptotically the joint probability distribution of degrees for pairs of adjacent vertices of the same type and of different types, respectively.

