Long- and short-term history effects in a spiking network model of statistical learning

2021
Author(s):
Amadeus Maes
Mauricio Barahona
Claudia Clopath

The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Existing spiking network models implementing sampling lack the ability to learn the statistical structure from observed stimuli and instead often hard-code the dynamics. Here, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
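The core operation, sampling by mapping uniform noise through the inverse CDF, can be sketched numerically. This is plain inverse-transform sampling, not the authors' spiking-network implementation, and the stimulus probabilities are illustrative:

```python
import numpy as np

# Inverse-transform sampling: draw uniform noise and map it through the
# inverse CDF, so high-probability stimuli are sampled more often.
# Illustrative probabilities; not the paper's spiking implementation.
rng = np.random.default_rng(0)

probs = np.array([0.1, 0.2, 0.4, 0.3])    # assumed stimulus distribution
cdf = np.cumsum(probs)

def sample_inverse_cdf(n):
    """Return n stimulus indices drawn via the inverse CDF."""
    u = rng.uniform(size=n)
    return np.searchsorted(cdf, u)        # smallest index with cdf[i] >= u

samples = sample_inverse_cdf(100_000)
freqs = np.bincount(samples, minlength=4) / len(samples)
# The network analogue: time spent in each state tracks these frequencies.
```

With many draws, the empirical frequencies converge to the encoded probabilities, which is what "spontaneously sampling from the distribution" means operationally.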

2019
Author(s):
Amadeus Maes
Mauricio Barahona
Claudia Clopath

Abstract: Learning to produce spatiotemporal sequences is a common task the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown, as current computational models do not typically use realistic biologically plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the recurrent network is constrained to encode time while the read-out neurons encode space. Space is then linked with time through plastic synapses that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on a timescale that is behaviourally relevant. Learned sequences are robustly replayed during a regime of spontaneous activity.

Author summary: The brain has the ability to learn flexible behaviours on a wide range of time scales. Previous studies have successfully built spiking network models that learn a variety of computational tasks. However, the learning involved is often not local. Here, we investigate a model using biologically plausible plasticity rules for a specific computational task: spatiotemporal sequence learning. The architecture separates time and space into two different parts, and this allows learning to bind space to time. Importantly, the time component is encoded in a recurrent network which exhibits sequential dynamics on a behavioural time scale. This network is then used as an engine to drive spatial read-out neurons. We demonstrate that the model can learn complicated spatiotemporal spiking dynamics, such as the song of a bird, and replay the song robustly.
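The binding of time to space via Hebbian learning can be illustrated with a toy sketch in which a one-hot "clock" steps through its states in sequence and outer-product Hebbian updates attach a spatial pattern to each step. All dimensions and patterns are made up; the paper uses spiking neurons, not this rate-free caricature:

```python
import numpy as np

# Toy sketch: a "clock" network steps through T states in sequence;
# during learning, read-out units are driven by a target spatial pattern
# and Hebbian (co-activity) updates bind each time step to its pattern.
# Illustrative sizes and targets, not the paper's spiking model.
T, S = 20, 5                          # time steps, spatial read-out units
rng = np.random.default_rng(5)
target = (rng.uniform(size=(T, S)) < 0.3).astype(float)  # target sequence

W = np.zeros((S, T))                  # read-out weights (space x time)
clock = np.eye(T)                     # one-hot sequential clock activity
for t in range(T):
    W += np.outer(target[t], clock[t])  # Hebbian: co-active pre/post grow

# Spontaneous replay: the clock runs again and drives the read-out.
replay = (W @ clock.T).T              # shape (T, S)
```

After learning, replaying the clock reproduces the spatial sequence exactly in this noiseless toy, mirroring the paper's replay during spontaneous activity.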


Author(s):  
Ann-Sophie Barwich

How much does stimulus input shape perception? The common-sense view is that our perceptions are representations of objects and their features, and that the stimulus structures the perceptual object. The problem for this view is that perceptual biases are responsible for distortions and for the subjectivity of perceptual experience. These biases are increasingly studied as constitutive factors of brain processes in recent neuroscience. In neural network models, the brain is said to cope with the plethora of sensory information by predicting stimulus regularities on the basis of previous experiences. Drawing on this development, this chapter analyses perceptions as processes. Looking at olfaction as a model system, it argues for the need to abandon a stimulus-centred perspective, in which smells are thought of as stable percepts computationally linked to external objects such as odorous molecules. Perception here is presented as a measure of changing signal ratios in an environment, informed by expectancy effects from top-down processes.


Author(s):  
Ronald R. Yager

We look at the issue of obtaining a variance-like measure associated with probability distributions over ordinal sets. We call these dissonance measures. We specify some general properties desired in these dissonance measures. The centrality of the cumulative distribution function in formulating the concept of dissonance is pointed out. We introduce some specific examples of measures of dissonance.
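One way to see why the cumulative distribution function is central here: a simple CDF-based dispersion measure for ordinal distributions sums F_i(1 − F_i) over the interior cumulative probabilities. This is a generic sketch in the spirit of the abstract, not Yager's specific dissonance measures:

```python
import numpy as np

# A CDF-based, variance-like dispersion measure on an ordinal set:
# sum of F_i * (1 - F_i) over interior cumulative probabilities F_i.
# It is 0 for a point mass (no dissonance) and maximal when the mass
# is split between the two extreme categories.
def ordinal_dispersion(probs):
    F = np.cumsum(probs)[:-1]         # interior cumulative probabilities
    return float(np.sum(F * (1.0 - F)))

point_mass = [0.0, 1.0, 0.0, 0.0]     # everyone agrees on one category
polarized  = [0.5, 0.0, 0.0, 0.5]     # mass at the two extremes
uniform    = [0.25, 0.25, 0.25, 0.25]
```

Note the measure depends only on the CDF, never on numeric category values, which is what makes it meaningful for purely ordinal data.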


2021
Author(s):
Hugh McGovern
Marte Otten

Bayesian processing has become a popular framework by which to understand cognitive processes. However, relatively little has been done to understand how Bayesian processing in the brain can be applied to intergroup cognition. We assess how categorization and evaluation processes unfold based on priors about the ethnic outgroup being perceived. We then consider how the precision of prior knowledge about groups differentially influences perception, and how the way information about a group was learned affects the way in which it is recalled. Finally, we evaluate the mechanisms by which humans learn information about other ethnic groups and assess how the method of learning influences future intergroup perception. We suggest that a predictive processing framework for assessing prejudice could help account for seemingly disparate findings on intergroup bias from social neuroscience, social psychology, and evolutionary psychology. Such an integration has important implications for future research on prejudice at the interpersonal, intergroup, and societal levels.


2018
Vol 146 (12)
pp. 4079-4098
Author(s):
Thomas M. Hamill
Michael Scheuerer

Abstract Hamill et al. described a multimodel ensemble precipitation postprocessing algorithm that is used operationally by the U.S. National Weather Service (NWS). This article describes further changes that produce improved, reliable, and skillful probabilistic quantitative precipitation forecasts (PQPFs) for single or multimodel prediction systems. For multimodel systems, final probabilities are produced through the linear combination of PQPFs from the constituent models. The new methodology is applied to each prediction system. Prior to adjustment of the forecasts, parametric cumulative distribution functions (CDFs) of model and analyzed climatologies are generated using the previous 60 days’ forecasts and analyses and supplemental locations. The CDFs, which can be stored with minimal disk space, are then used for quantile mapping to correct state-dependent bias for each member. In this stage, the ensemble is also enlarged using a stencil of forecast values from the 5 × 5 surrounding grid points. Different weights and dressing distributions are assigned to the sorted, quantile-mapped members, with generally larger weights for outlying members and broader dressing distributions for members with heavier precipitation. Probability distributions are generated from the weighted sum of the dressing distributions. The NWS Global Ensemble Forecast System (GEFS), the Canadian Meteorological Centre (CMC) global ensemble, and the European Centre for Medium-Range Weather Forecasts (ECMWF) ensemble forecast data are postprocessed for April–June 2016. Single prediction system postprocessed forecasts are generally reliable and skillful. Multimodel PQPFs are roughly as skillful as the ECMWF system alone. Postprocessed guidance was generally more skillful than guidance using the Gamma distribution approach of Scheuerer and Hamill, with coefficients generated from data pooled across the United States.
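The quantile-mapping step can be sketched as follows. Empirical CDFs built from synthetic gamma climatologies stand in for the paper's parametric 60-day CDFs, and all numbers are illustrative:

```python
import numpy as np

# Quantile mapping for state-dependent bias correction: replace a
# forecast value by the analyzed-climatology value at the same
# climatological quantile. Synthetic gamma samples stand in for the
# parametric CDFs fitted from 60 days of forecasts and analyses.
rng = np.random.default_rng(1)

model_clim = np.sort(rng.gamma(shape=1.5, scale=2.0, size=5000))  # biased model climate
obs_clim = np.sort(rng.gamma(shape=2.0, scale=1.5, size=5000))    # analyzed climate

def quantile_map(x):
    """Map x through the model CDF, then the inverse observed CDF."""
    q = np.searchsorted(model_clim, x) / len(model_clim)
    return float(np.quantile(obs_clim, min(q, 1.0)))

corrected = quantile_map(4.0)   # bias-corrected forecast value
```

The corrected value sits at (approximately) the same quantile of the observed climatology that the raw value occupied in the model climatology, which is the defining property of the method.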


Author(s):  
Genís Prat-Ortega
Klaus Wimmer
Alex Roxin
Jaime de la Rocha

Abstract: Perceptual decisions require the brain to make categorical choices based on accumulated sensory evidence. The underlying computations have been studied using either phenomenological drift diffusion models or neurobiological network models exhibiting winner-take-all attractor dynamics. Although both classes of models can account for a large body of experimental data, it remains unclear to what extent their dynamics are qualitatively equivalent. Here we show that, unlike the drift diffusion model, the attractor model can operate in different integration regimes: an increase in the stimulus fluctuations or the stimulus duration promotes transitions between decision-states leading to a crossover between weighting mostly early evidence (primacy regime) to weighting late evidence (recency regime). Between these two limiting cases, we found a novel regime, which we name flexible categorization, in which fluctuations are strong enough to reverse initial categorizations, but only if they are incorrect. This asymmetry in the reversing probability results in a non-monotonic psychometric curve, a novel and distinctive feature of the attractor model. Finally, we show psychophysical evidence for the crossover between integration regimes predicted by the attractor model and for the relevance of this new regime. Our findings point to correcting transitions as an important yet overlooked feature of perceptual decision making.
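For reference, the phenomenological benchmark that the attractor model is compared against can be sketched as a bounded drift diffusion process. Parameter values below are illustrative, not fitted to any experiment:

```python
import numpy as np

# Minimal drift-diffusion sketch: noisy evidence accumulates toward one
# of two bounds; whichever bound is hit first determines the categorical
# choice. Unlike the attractor model, once a bound is reached the
# decision cannot be reversed. Illustrative parameters only.
rng = np.random.default_rng(2)

def ddm_trial(drift=0.2, noise=1.0, bound=8.0, dt=0.01, max_t=60.0):
    x, t = 0.0, 0.0
    while abs(x) < bound and t < max_t:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t      # choice and decision time

choices = [ddm_trial()[0] for _ in range(100)]
# Positive drift favors choice 1 on the large majority of trials.
```

The crossover described in the abstract has no counterpart here: the diffusion-to-bound dynamics weight evidence in a fixed way, whereas the attractor network's decision states can be revisited when fluctuations are strong.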


Author(s):  
Chi-Hua Chen
Fangying Song
Feng-Jang Hwang
Ling Wu

To generate a probability density function (PDF) for fitting probability distributions of real data, this study proposes a deep learning method which consists of two stages: (1) a training stage for estimating the cumulative distribution function (CDF) and (2) a performing stage for predicting the corresponding PDF. The CDFs of common probability distributions can be adopted as activation functions in the hidden layers of the proposed deep learning model for learning actual cumulative probabilities, and the derivative of the trained model can then be used to estimate the PDF. To evaluate the proposed method, numerical experiments with single and mixed distributions are performed. The experimental results show that the values of both the CDF and the PDF can be precisely estimated by the proposed method.
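The two-stage idea can be illustrated with a single logistic-CDF unit (a sigmoid activation) fitted to an empirical CDF by gradient descent, whose analytic derivative then gives the PDF. This is a one-unit sketch of the principle; the paper stacks such CDF activations into a deep network:

```python
import numpy as np

# Stage 1: fit a logistic CDF (sigmoid activation) to the empirical CDF
# by least-squares gradient descent. Stage 2: differentiate the fitted
# model to obtain the PDF. One unit only, for illustration.
rng = np.random.default_rng(3)
data = np.sort(rng.normal(loc=2.0, scale=1.0, size=2000))
emp_cdf = np.arange(1, len(data) + 1) / len(data)   # training targets

mu, s = 0.0, 1.0                                    # logistic parameters
lr = 0.5
for _ in range(3000):
    z = (data - mu) / s
    F = 1.0 / (1.0 + np.exp(-z))                    # model CDF
    err = F - emp_cdf
    dF = F * (1.0 - F)                              # dF/dz
    mu -= lr * np.mean(err * dF * (-1.0 / s))       # chain rule w.r.t. mu
    s -= lr * np.mean(err * dF * (-z / s))          # chain rule w.r.t. s

def pdf(x):
    """Stage 2: the PDF is the derivative of the fitted CDF."""
    z = (x - mu) / s
    F = 1.0 / (1.0 + np.exp(-z))
    return F * (1.0 - F) / s
```

Because the logistic CDF has a closed-form derivative, no numerical differentiation is needed; in the deep version the same role is played by differentiating the trained network.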


2021
Vol 15
Author(s):  
Kevin Wen-Kai Tsai
Jui-Cheng Chen
Hui-Chin Lai
Wei-Chieh Chang
Takaomi Taira
...  

Objective: Magnetic resonance-guided focused ultrasound (MRgFUS) is a minimally invasive surgical approach that non-incisionally produces thermocoagulation inside the human brain. The skull score (SS) has been established as one of the most dominant factors related to a successful MRgFUS treatment. In this study, we first reveal the SS distribution of tremor patients and correlate the SS with an image feature, the customized skull density ratio (cSDR). This correlation might give a direction to future clinical studies for improving the SS.

Methods: Two hundred and forty-six patients received a computed tomography (CT) scan of the brain; a bone-enhanced filter was applied and the scans were reconstructed into high-spatial-resolution CT images. The SS of each patient was estimated by the MRgFUS system after the reconstructed CT images were imported into the system. The histogram and the cumulative distribution of the SS across all patients were calculated to show the percentage of patients whose SS was lower than 0.3 and 0.4. The same CT images were used to calculate the cSDR by first segmenting the trabecular bone and the cortical bone from the CT images and then dividing the average trabecular bone intensity (aTBI) by the average cortical bone intensity (aCBI). Pearson's correlations between the SS and the cSDR, aTBI, and aCBI were calculated, respectively.

Results: 19.19% and 50% of the patients had an SS lower than the empirical thresholds of 0.3 and 0.4, respectively. The Pearson's correlations between the SS and the cSDR, aCBI, and aTBI were R = 0.8145, 0.5723, and 0.8842, respectively.

Conclusion: Half of the patients were eligible for MRgFUS thalamotomy based on the SS, and for nearly 20% of patients it would be empirically difficult to achieve a therapeutic temperature during MRgFUS. The SS and our cSDR are highly correlated, and the SS correlated more strongly with the aTBI than with the aCBI. This is the first report to explicitly reveal the SS distribution in a patient population and to indicate a potential way to increase the chance of achieving a therapeutic temperature for those who originally have a low SS.


2018
Author(s):
Seth W. Egger
Mehrdad Jazayeri

Abstract: Bayesian models of behavior have advanced the idea that humans combine prior beliefs and sensory observations to minimize uncertainty. How the brain implements Bayes-optimal inference, however, remains poorly understood. Simple behavioral tasks suggest that the brain can flexibly represent and manipulate probability distributions. An alternative view is that the brain relies on simple algorithms that can implement Bayes-optimal behavior only when the computational demands are low. To distinguish between these alternatives, we devised a task in which Bayes-optimal performance could not be matched by simple algorithms. We asked subjects to estimate and reproduce a time interval by combining prior information with one or two sequential measurements. In the domain of time, measurement noise increases with duration. This property makes the integration of multiple measurements beyond the reach of simple algorithms. We found that subjects were able to update their estimates using the second measurement, but their performance was suboptimal, suggesting that they were unable to update full probability distributions. Instead, subjects' behavior was consistent with an algorithm that predicts upcoming sensory signals and applies a nonlinear function to errors in prediction to update estimates. These results indicate that the inference strategies humans deploy may deviate from Bayes-optimal integration when the computational demands are high.
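The prior-measurement combination for a single measurement can be sketched with a Gaussian (precision-weighted) approximation in which measurement noise scales with the interval. All parameter values are illustrative, and the sketch deliberately omits the second sequential measurement that makes the full task hard for simple algorithms:

```python
# Minimal sketch of prior-measurement combination for interval timing,
# with scalar measurement noise (sd proportional to duration). The
# precision-weighted (Gaussian MAP) estimate shrinks the measurement
# toward the prior mean. Illustrative parameters, not the paper's fits.
mu_prior, sd_prior = 0.8, 0.1     # prior over intervals (seconds)
w = 0.15                          # Weber fraction: sd_meas = w * duration

def map_estimate(m):
    """Combine the prior with one noisy measurement m (Gaussian approx.)."""
    sd_meas = w * m               # scalar variability: noise grows with m
    prec_p, prec_m = 1.0 / sd_prior**2, 1.0 / sd_meas**2
    return (mu_prior * prec_p + m * prec_m) / (prec_p + prec_m)

est_short = map_estimate(0.7)     # short interval: less noisy, less shrinkage
est_long = map_estimate(1.0)      # long interval: noisier, pulled harder to prior
```

Because the measurement variance itself depends on the duration, the optimal weighting changes trial by trial, and propagating this through a second measurement requires manipulating full distributions rather than point estimates, which is the abstract's central difficulty.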


Author(s):  
Md. Mahabubur Rahman
Bander Al-Zahrani
Saman Hanif Shahbaz
Muhammad Qaiser Shahbaz

Transmutation is the functional composition of the cumulative distribution function (cdf) of one distribution with the inverse cumulative distribution function (quantile function) of another. Shaw and Buckley (2007) first applied this concept and introduced the quadratic transmuted family of distributions. In this article, we present a review of the transmuted families of distributions. We also list the transmuted distributions available in the literature, along with some concluding remarks.
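The quadratic transmuted construction takes a baseline cdf G and forms F(x) = (1 + λ)G(x) − λG(x)² with |λ| ≤ 1. A quick numerical check with an exponential baseline (an illustrative choice, not one singled out by the article) confirms this yields a valid cdf:

```python
import numpy as np

# Quadratic transmuted family (Shaw and Buckley, 2007):
#   F(x) = (1 + lam) * G(x) - lam * G(x)**2,   |lam| <= 1,
# for a baseline CDF G. Exponential baseline chosen for illustration.
def base_cdf(x, rate=1.0):
    return 1.0 - np.exp(-rate * np.asarray(x, dtype=float))

def transmuted_cdf(x, lam=0.5, rate=1.0):
    G = base_cdf(x, rate)
    return (1.0 + lam) * G - lam * G**2

x = np.linspace(0.0, 12.0, 1201)
F = transmuted_cdf(x)
# F rises monotonically from 0 to 1: a valid CDF, since
# dF/dG = 1 + lam - 2*lam*G stays positive whenever |lam| <= 1.
```

The transmutation reshapes the baseline (here shifting probability mass toward smaller values for λ > 0) while keeping the cdf properties intact, which is why the construction generates new families from any baseline distribution.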

