Free-Energy Model of Emotion Potential: Modeling Arousal Potential as Information Content Induced by Complexity and Novelty

2021 ◽  
Vol 15 ◽  
Author(s):  
Hideyoshi Yanagisawa

Appropriate levels of arousal potential induce hedonic responses (i.e., emotional valence). However, the relationship between arousal potential and its factors (e.g., novelty, complexity, and uncertainty) has not been formalized. This paper proposes a mathematical model that explains emotional arousal using minimized free energy to represent the information content processed in the brain after sensory stimuli are perceived and recognized (i.e., sensory surprisal). This work mathematically demonstrates that sensory surprisal represents the summation of information from novelty and uncertainty, and that the uncertainty converges to perceived complexity with sufficient sampling from a stimulus source. Novelty, uncertainty, and complexity all act as collative properties that form arousal potential. Analysis using a Gaussian generative model shows that the free energy takes the form of a quadratic function of the prediction error, defined as the difference between the prior expectation and the peak of the likelihood. The model predicts two interaction effects on free energy: one between prediction error and prior uncertainty (i.e., prior variance), and one between prediction error and sensory variance. A discussion is presented on the potential of free energy as a mathematical principle for explaining the initiators of emotion. The model provides a general mathematical framework for understanding and predicting the emotions caused by novelty, uncertainty, and complexity, and can help predict acceptable levels of novelty and complexity for a target population under different levels of uncertainty mitigated by prior knowledge and experience.
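The quadratic dependence of free energy on prediction error under a Gaussian generative model can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the paper's exact formulation: surprisal is computed from the Gaussian prior predictive, so it is quadratic in the prediction error and is modulated by both prior and sensory variance.

```python
import math

def sensory_surprisal(x, mu_prior, var_prior, var_sensory):
    """Surprisal (-log evidence) of an observation x under a Gaussian
    generative model: prior N(mu_prior, var_prior) over the cause, and a
    Gaussian likelihood with variance var_sensory. The prior predictive is
    N(mu_prior, var_prior + var_sensory), so surprisal is a quadratic
    function of the prediction error x - mu_prior."""
    var_total = var_prior + var_sensory
    err = x - mu_prior
    return 0.5 * (math.log(2 * math.pi * var_total) + err ** 2 / var_total)

# Interaction effect: the same prediction error contributes less surprisal
# when prior uncertainty (prior variance) is larger.
tight_prior = sensory_surprisal(2.0, 0.0, 1.0, 1.0)
loose_prior = sensory_surprisal(2.0, 0.0, 4.0, 1.0)
```

With a fixed prediction error of 2.0, the tighter prior yields higher surprisal than the looser one, matching the predicted interaction between prediction error and prior variance.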

Entropy ◽  
2019 ◽  
Vol 21 (3) ◽  
pp. 257 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher Buckley

In the past few decades, probabilistic interpretations of brain function have become widespread in cognitive science and neuroscience. In particular, the free energy principle and active inference are increasingly popular theories of cognitive function that claim to offer a unified understanding of life and cognition within a general mathematical framework derived from information theory, control theory, and statistical mechanics. However, we argue that if the active inference proposal is to be taken as a general process theory for biological systems, it is necessary to understand how it relates to the existing control-theoretic approaches routinely used to study and explain biological systems. For example, PID (proportional-integral-derivative) control has recently been shown to be implemented in simple molecular systems and is becoming a popular mechanistic explanation of behaviours such as chemotaxis in bacteria and amoebae, and of robust adaptation in biochemical networks. In this work, we show how PID controllers fit a more general theory of life and cognition under the principle of (variational) free energy minimisation when approximate linear generative models of the world are used. This more general interpretation also provides a new perspective on traditional problems of PID controllers, such as parameter tuning and the need to balance the performance and robustness of a controller. Specifically, we show how these problems can be understood in terms of the optimisation of the precisions (inverse variances) modulating the different prediction errors in the free energy functional.
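A minimal discrete PID step makes the correspondence concrete. This is a generic textbook controller, not the paper's derivation; the gain-as-precision reading is noted only in the comments, and the regulated plant is a hypothetical first-order system chosen for illustration.

```python
def pid_step(error, state, kp, ki, kd, dt):
    """One Euler step of a PID controller. Under the active inference
    reading sketched in the paper, the gains kp, ki, kd play a role
    analogous to precisions (inverse variances) weighting prediction
    errors at different temporal orders."""
    state['integral'] += error * dt
    derivative = (error - state['prev']) / dt
    state['prev'] = error
    return kp * error + ki * state['integral'] + kd * derivative

# Hypothetical first-order plant dx/dt = -x + u, driven toward setpoint 1.0.
state = {'integral': 0.0, 'prev': 0.0}
x, dt = 0.0, 0.01
for _ in range(2000):
    u = pid_step(1.0 - x, state, kp=2.0, ki=1.0, kd=0.1, dt=dt)
    x += dt * (-x + u)
```

The integral term drives the steady-state error to zero; retuning kp, ki, kd trades responsiveness against noise sensitivity, the tuning problem the abstract recasts as precision optimisation.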


2019 ◽  
Author(s):  
Manuel Baltieri ◽  
Christopher Buckley

The free energy principle describes cognitive functions such as perception, action, learning and attention in terms of surprisal minimisation. Under simplifying assumptions, agents are depicted as systems minimising a weighted sum of prediction errors encoding the mismatch between incoming sensations and an agent's predictions about those sensations. The "dark room" is defined as the state an agent would occupy if it sought only to minimise this sum of prediction errors. This (paradoxical) state emerges from the contrast between attempts to describe the richness of human and animal behaviour in terms of surprisal minimisation and the trivial solution of a dark room, where the complete absence of sensory stimuli would provide the easiest way to minimise prediction errors, i.e., to be in a perfectly predictable state of darkness with no incoming stimuli. Using active inference, a process theory derived from the free energy principle, we investigate the meaning of the dark room problem with an agent-based model and discuss some of its implications for natural and artificial systems. In this setup, we propose that the paradox arises primarily from the long-standing belief that agents should encode accurate world models, typical of traditional (computational) theories of cognition.


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Yibing Zhang ◽  
Tingyang Li ◽  
Aparna Reddy ◽  
Nambi Nallasamy

Abstract Objectives To evaluate gender differences in optical biometry measurements and lens power calculations. Methods Eight thousand four hundred thirty-one eyes of five thousand five hundred nineteen patients who underwent cataract surgery at the University of Michigan's Kellogg Eye Center were included in this retrospective study. Data including age, gender, optical biometry, postoperative refraction, implanted intraocular lens (IOL) power, and IOL formula refraction predictions were gathered or calculated using the Sight Outcomes Research Collaborative (SOURCE) database and analyzed. Results Every optical biometry measure differed statistically significantly between genders. Despite lens constant optimization, the mean signed prediction errors (SPEs) of modern IOL formulas differed significantly between genders, with predictions skewed more hyperopic for males and more myopic for females for all 5 of the modern IOL formulas tested. Optimizing lens constants by gender significantly decreased prediction error for 2 of the 5 formulas tested. Conclusions Gender was found to be an independent predictor of refraction prediction error for all 5 formulas studied. Optimizing lens constants by gender can decrease refraction prediction error for certain modern IOL formulas.
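The idea behind per-group lens constant optimization can be sketched as zeroing each group's mean signed prediction error. This is a deliberate simplification: real lens constant optimization is formula-specific and nonlinear, whereas the sketch below applies the simplest zero-mean adjustment per group, with made-up illustrative numbers.

```python
def mean_spe(predicted, achieved):
    """Mean signed prediction error (SPE): predicted minus achieved
    postoperative refraction, in diopters. Positive = hyperopic skew."""
    return sum(p - a for p, a in zip(predicted, achieved)) / len(predicted)

def zero_mean_by_group(spes_by_group):
    """Sketch of per-group optimization: shift each group's errors by the
    group's own mean SPE so each group's mean error becomes zero."""
    adjusted = {}
    for group, errs in spes_by_group.items():
        offset = sum(errs) / len(errs)
        adjusted[group] = [e - offset for e in errs]
    return adjusted

# Hypothetical SPEs showing opposite skew by gender, as in the study.
adjusted = zero_mean_by_group({'male': [0.10, 0.30], 'female': [-0.20, -0.40]})
```

After the adjustment, each group's mean SPE is zero, removing the systematic hyperopic/myopic skew while leaving within-group scatter untouched.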


2012 ◽  
Vol 6-7 ◽  
pp. 428-433
Author(s):  
Yan Wei Li ◽  
Mei Chen Wu ◽  
Tung Shou Chen ◽  
Wien Hong

We propose a reversible data hiding technique that improves on Hong and Chen's (2010) method. Hong and Chen divide the cover image into pixel groups and use reference pixels to predict the values of the other pixels. Data are then embedded by modifying the prediction errors. However, to solve the overflow and underflow problems, they employ a location map to record the positions of saturated pixels, and these pixels are not used to carry data. In their method, if the image contains many saturated pixels, the payload decreases significantly because those pixels are excluded from the embedding. We improve Hong and Chen's method so that saturated pixels can also be used to carry data. The positions of these saturated pixels are recorded in a location map, and the location map is embedded together with the secret data. The experimental results show that the proposed method achieves a higher payload while providing comparable image quality.
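The core primitive, embedding bits by modifying prediction errors, can be sketched with standard prediction-error expansion. This is a generic stand-in for Hong and Chen's scheme, not their exact algorithm, and it deliberately omits the overflow/underflow handling (the saturated-pixel issue the paper addresses).

```python
def embed(pixels, reference, bits):
    """Prediction-error expansion: predict each pixel with a reference
    value, then expand the prediction error to carry one secret bit.
    (Overflow/underflow clipping is ignored in this sketch.)"""
    stego = []
    for x, b in zip(pixels, bits):
        e = x - reference          # prediction error
        stego.append(reference + 2 * e + b)  # expanded error carries bit b
    return stego

def extract(stego, reference):
    """Recover the embedded bits and restore the original pixels exactly,
    which is what makes the hiding reversible."""
    bits, pixels = [], []
    for y in stego:
        e2 = y - reference
        b = e2 % 2
        bits.append(b)
        pixels.append(reference + (e2 - b) // 2)
    return pixels, bits

stego = embed([100, 98, 103], 100, [1, 0, 1])
pixels, bits = extract(stego, 100)
```

A round trip recovers both the payload and the exact cover pixels; in a real scheme, saturated pixels (near 0 or 255) would break the expansion step, which is why a location map is needed.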


2018 ◽  
Vol 8 (12) ◽  
pp. 228 ◽  
Author(s):  
Akiko Mizuno ◽  
Maria Ly ◽  
Howard Aizenstein

Subjective Cognitive Decline (SCD) is possibly one of the earliest detectable signs of dementia, but we do not know which mental processes lead to elevated concern. In this narrative review, we will summarize the previous literature on the biomarkers and functional neuroanatomy of SCD. In order to extend upon the prevailing theory of SCD, compensatory hyperactivation, we will introduce a new model: the breakdown of homeostasis in the prediction error minimization system. A cognitive prediction error is a discrepancy between an implicit cognitive prediction and the corresponding outcome. Experiencing frequent prediction errors may be a primary source of elevated subjective concern. Our homeostasis breakdown model provides an explanation for the progression from both normal cognition to SCD and from SCD to advanced dementia stages.


1998 ◽  
Vol 120 (3) ◽  
pp. 489-495 ◽  
Author(s):  
S. J. Hu ◽  
Y. G. Liu

Autocorrelation in 100 percent measurement data results in false alarms when traditional control charts, such as X and R charts, are applied in process monitoring. A popular approach proposed in the literature is based on prediction error analysis (PEA), i.e., using time series models to remove the autocorrelation and then applying the control charts to the residuals, or prediction errors. This paper uses a step-function mean shift as an example to investigate the effect of prediction error analysis on the speed of mean shift detection. The use of PEA results in two changes in the 100 percent measurement data: (1) a change in the variance, and (2) a change in the magnitude of the mean shift. Both changes affect the speed of mean shift detection. These effects are model-parameter dependent and are obtained quantitatively for AR(1) and ARMA(2,1) models. Simulations and examples from automobile body assembly processes are used to demonstrate these effects. It is shown that, depending on the parameters of the ARMA models, the speed of detection can be increased or decreased significantly.
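The change in mean shift magnitude under PEA can be seen directly for the AR(1) case. In the noise-free sketch below (an illustration, not the paper's full analysis), a step shift of size delta appears in the residuals as the full delta at the shift instant, but only as (1 - phi) * delta thereafter, which is why detection can slow down for positively autocorrelated processes.

```python
def ar1_residuals(data, phi):
    """One-step prediction errors of an AR(1) model x_t = phi * x_{t-1} + a_t,
    i.e., the residual series a control chart would monitor under PEA."""
    return [data[t] - phi * data[t - 1] for t in range(1, len(data))]

phi, delta = 0.8, 1.0
# Noise-free step mean shift at index 10, purely for illustration.
series = [0.0] * 10 + [delta] * 10
res = ar1_residuals(series, phi)
```

`res[9]` (the first post-shift residual) equals the full shift delta, while `res[10]` onward carry only (1 - phi) * delta = 0.2, an 80 percent attenuation of the sustained shift for phi = 0.8.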


1983 ◽  
Vol 245 (5) ◽  
pp. R620-R623
Author(s):  
M. Berman ◽  
P. Van Eerdewegh

A measure is proposed for the information content of data with respect to models. A model, defined by a set of parameter values in a mathematical framework, is considered a point in a hyperspace. The proposed measure expresses the information content of experimental data as the contribution they make, in units of information bits, in defining a model to within a desired region of the hyperspace. This measure is then normalized to conventional statistical measures of uncertainty. It is shown how the measure can be used to estimate the information of newly planned experiments and help in decisions on data collection strategies.
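One common way to express uncertainty reduction in bits, shown here as an illustrative sketch rather than the paper's specific normalization, is the log ratio of parameter uncertainty before and after the data are included.

```python
import math

def information_bits(sigma_before, sigma_after):
    """Bits contributed by data that shrink the uncertainty of a model
    parameter from sigma_before to sigma_after. This is one illustrative
    normalization; the paper's exact measure over a model hyperspace
    may differ."""
    return math.log2(sigma_before / sigma_after)

# Halving a parameter's uncertainty contributes one bit of information.
bits = information_bits(2.0, 1.0)
```

Summing such contributions over the parameters of interest gives a rough way to compare planned experiments by how many bits each would add toward localizing the model in its hyperspace.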


Author(s):  
Michiel Van Elk ◽  
Harold Bekkering

We characterize theories of conceptual representation as embodied, disembodied, or hybrid according to their stance on a number of different dimensions: the nature of concepts, the relation between language and concepts, the function of concepts, the acquisition of concepts, the representation of concepts, and the role of context. We propose to extend an embodied view of concepts, by taking into account the importance of multimodal associations and predictive processing. We argue that concepts are dynamically acquired and updated, based on recurrent processing of prediction error signals in a hierarchically structured network. Concepts are thus used as prior models to generate multimodal expectations, thereby reducing surprise and enabling greater precision in the perception of exemplars. This view places embodied theories of concepts in a novel predictive processing framework, by highlighting the importance of concepts for prediction, learning and shaping categories on the basis of prediction errors.


2015 ◽  
Author(s):  
David Miguez

The understanding of the regulatory processes that orchestrate stem cell maintenance is a cornerstone in developmental biology. Here, we present a mathematical model based on a branching process formalism that predicts average rates of proliferative and differentiative divisions in a given stem cell population. In the context of vertebrate spinal neurogenesis, the model predicts complex non-monotonic variations in the rates of pp, pd and dd modes of division as well as in cell cycle length, in agreement with experimental results. Moreover, the model shows that the differentiation probability follows a binomial distribution, allowing us to develop equations to predict the rates of each mode of division. A phenomenological simulation of the developing spinal cord informed with the average cell cycle length and division rates predicted by the mathematical model reproduces the correct dynamics of proliferation and differentiation in terms of average numbers of progenitors and differentiated cells. Overall, the present mathematical framework represents a powerful tool to unveil the changes in the rate and mode of division of a given stem cell pool by simply quantifying numbers of cells at different times.
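The binomial structure the abstract describes can be written down directly: if each daughter of a dividing progenitor differentiates independently with probability d, the three division modes follow. A minimal sketch, using a per-daughter differentiation probability as the single assumed parameter:

```python
def division_mode_rates(d):
    """Rates of the three division modes under the binomial assumption,
    with d the probability that a given daughter cell differentiates:
    pp = both daughters stay progenitors, pd = one of each,
    dd = both differentiate."""
    pp = (1 - d) ** 2
    pd = 2 * d * (1 - d)
    dd = d ** 2
    return pp, pd, dd

pp, pd, dd = division_mode_rates(0.3)  # 0.3 is an illustrative value
```

The three rates always sum to one, and sweeping d from 0 to 1 moves the pool from pure expansion (pp) through mixed divisions (pd) to pure differentiation (dd), the quantity the model infers from cell counts at different times.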


2020 ◽  
Author(s):  
Dongjae Kim ◽  
Jaeseung Jeong ◽  
Sang Wan Lee

Abstract The goal of learning is to maximize future rewards by minimizing prediction errors. Evidence has shown that the brain achieves this by combining model-based and model-free learning. However, prediction error minimization is challenged by a bias-variance tradeoff, which imposes constraints on each strategy's performance. We provide new theoretical insight into how this tradeoff can be resolved through the adaptive control of model-based and model-free learning. The theory predicts that baseline correction of the prediction error reduces the lower bound of the bias-variance error by factoring out irreducible noise. Using a Markov decision task with context changes, we found behavioral evidence of adaptive control. Model-based behavioral analyses show that the prediction error baseline signals context changes to improve adaptability. Critically, the neural results support this view, demonstrating multiplexed representations of the prediction error baseline within the ventrolateral and ventromedial prefrontal cortex, key brain regions known to guide model-based and model-free learning. One-sentence summary: A theoretical, behavioral, computational, and neural account of how the brain resolves the bias-variance tradeoff during reinforcement learning is described.
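Baseline correction of a prediction error stream can be sketched as subtracting a running estimate of its shared offset. This is an illustrative construction, not the paper's model: the smoothing rate alpha and the exponentially weighted baseline are assumptions made for the sketch.

```python
def baseline_corrected_errors(errors, alpha=0.1):
    """Subtract an exponentially weighted running baseline from a stream
    of prediction errors, factoring out a shared (irreducible) offset so
    that only deviations from the baseline remain. alpha is an assumed
    smoothing rate, not a parameter from the paper."""
    baseline, corrected = 0.0, []
    for e in errors:
        baseline += alpha * (e - baseline)  # track the slow offset
        corrected.append(e - baseline)      # keep only the residual error
    return corrected

# A constant offset (irreducible noise floor) is progressively removed.
out = baseline_corrected_errors([1.0] * 50)
```

A sustained jump in the raw errors, e.g. after a context change, transiently reappears in the corrected stream before the baseline absorbs it, which is the sense in which the baseline can signal context changes.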

