Flexible and efficient simulation-based inference for models of decision-making

2021 ◽  
Author(s):  
Jan Boelts ◽  
Jan-Matthis Lueckmann ◽  
Richard Gao ◽  
Jakob H. Macke

Identifying parameters of computational models that capture experimental data is a central task in cognitive neuroscience. Bayesian statistical inference aims not only to find a single configuration of best-fitting parameters, but to recover all model parameters that are consistent with the data and prior knowledge. Statistical inference methods usually require the ability to evaluate the likelihood of the model; however, for many models of interest in cognitive neuroscience, the associated likelihoods cannot be computed efficiently. Simulation-based inference (SBI) offers a solution to this problem by requiring only access to simulations produced by the model. Here, we provide an efficient SBI method for models of decision-making. Our approach, Mixed Neural Likelihood Estimation (MNLE), trains neural density estimators on model simulations to emulate the simulator. The likelihoods of the emulator can then be used to perform Bayesian parameter inference on experimental data using standard approximate inference methods like Markov Chain Monte Carlo sampling. While most neural likelihood estimation methods target continuous data, MNLE works with mixed data types, as typically obtained in decision-making experiments (e.g., binary decisions and associated continuous reaction times). We demonstrate MNLE on the classical drift-diffusion model (DDM) and compare its performance to a recently proposed method for SBI on DDMs, called likelihood approximation networks (LANs; Fengler et al., 2021). We show that MNLE is substantially more efficient than LANs, requiring up to six orders of magnitude fewer model simulations to achieve comparable likelihood accuracy and evaluation time while providing the same level of flexibility. We include an implementation of our algorithm in the user-friendly open source package sbi.
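As an illustration of the workflow the abstract describes, here is a minimal sketch using the sbi package's MNLE trainer on a toy drift-diffusion simulator. The simulator, parameter names and ranges, and the assumption that sbi's MNLE follows the usual sbi trainer pattern (append_simulations / train / build_posterior) are illustrative assumptions, not details taken from the paper.

```python
# Minimal, illustrative sketch (assumed sbi >= 0.18 API with an MNLE trainer;
# the toy simulator and parameter ranges are made up for demonstration only).
import torch
from sbi.inference import MNLE
from sbi.utils import BoxUniform

# Prior over two DDM-like parameters: drift v and non-decision time t0 (illustrative).
prior = BoxUniform(low=torch.tensor([-2.0, 0.1]), high=torch.tensor([2.0, 0.5]))

def ddm_simulator(theta: torch.Tensor) -> torch.Tensor:
    """Toy Euler-Maruyama DDM: returns mixed data (continuous RT, binary choice)."""
    dt, bound, out = 1e-3, 1.0, []
    for v, t0 in theta:
        x, t = 0.0, 0.0
        while abs(x) < bound:
            x += v.item() * dt + (dt ** 0.5) * torch.randn(()).item()
            t += dt
        out.append([t + t0.item(), 1.0 if x > 0 else 0.0])
    return torch.tensor(out)

# Train the neural likelihood emulator on simulations, then run MCMC-based inference.
theta = prior.sample((5_000,))
x = ddm_simulator(theta)
trainer = MNLE(prior=prior)
likelihood_estimator = trainer.append_simulations(theta, x).train()
posterior = trainer.build_posterior(likelihood_estimator)  # samples via MCMC internally

# "Observed" trials from a known parameter setting, just to exercise the pipeline.
x_o = ddm_simulator(torch.tensor([[0.8, 0.3]]).repeat(100, 1))
samples = posterior.sample((1_000,), x=x_o)
```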

Author(s):  
Arun Kumar Chaudhary ◽  
Vijay Kumar

In this study, we introduce a three-parameter probabilistic model derived from the type I half-logistic generating family, called the half-logistic modified exponential distribution. The mathematical and statistical properties of this distribution are explored, and the behavior of its probability density, hazard rate, and quantile functions is investigated. The model parameters are estimated using three well-known estimation methods, namely maximum likelihood estimation (MLE), least-squares estimation (LSE), and Cramér-von Mises estimation (CVME). Further, we apply the presented model to a real data set and verify that it is useful and flexible for modelling real data. KEYWORDS: Half-logistic distribution, Estimation, CVME, LSE, MLE
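Since the paper's density is not reproduced here, the following sketch illustrates the general MLE and CVME fitting recipes on the simpler one-parameter half-logistic distribution available in scipy; the distribution choice, synthetic data, and starting values are assumptions for illustration, not the paper's model.

```python
# Illustrative only: MLE and Cramér-von Mises estimation for a one-parameter
# half-logistic model (scipy's halflogistic), not the paper's three-parameter distribution.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import halflogistic

rng = np.random.default_rng(0)
data = halflogistic.rvs(scale=2.0, size=500, random_state=rng)

def neg_log_lik(params, x):
    (scale,) = params
    if scale <= 0:
        return np.inf
    return -np.sum(halflogistic.logpdf(x, scale=scale))

def cvm_criterion(params, x):
    # Cramér-von Mises distance between the model CDF and empirical plotting positions.
    (scale,) = params
    if scale <= 0:
        return np.inf
    u = halflogistic.cdf(np.sort(x), scale=scale)
    n = len(x)
    i = np.arange(1, n + 1)
    return 1.0 / (12 * n) + np.sum((u - (2 * i - 1) / (2 * n)) ** 2)

mle = minimize(neg_log_lik, x0=[1.0], args=(data,), method="Nelder-Mead").x[0]
cvme = minimize(cvm_criterion, x0=[1.0], args=(data,), method="Nelder-Mead").x[0]
print(f"MLE scale: {mle:.3f}, CVME scale: {cvme:.3f}")
```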


2016 ◽  
Author(s):  
Stefano Palminteri ◽  
Valentin Wyart ◽  
Etienne Koechlin

Abstract Cognitive neuroscience, especially in the fields of learning and decision-making, is witnessing the blossoming of computational model-based analyses. Several methodological and review papers have indicated how and why candidate models should be compared by trading off their ability to predict the data against their complexity. However, the importance of simulating candidate models has so far been largely overlooked, which entails several drawbacks and leads to invalid conclusions. Here we argue that the analysis of model simulations is often necessary to support the specific claims about behavioral function that most model-based studies make. We defend this argument both informally, by providing a large-scale (N > 300) review of recent studies, and formally, by showing how model simulations are necessary to interpret model comparison results. Finally, we propose guidelines for future work, which combine model comparison and simulation.
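To make the argument concrete, here is a small sketch of the kind of "simulate the fitted model" check the authors advocate: a simple Rescorla-Wagner learner is fitted to two-armed-bandit choices and then simulated with the fitted parameters so that simulated and observed choice behaviour can be compared. The task structure, the model, and the crude grid-search fit are illustrative assumptions, not the paper's own analysis.

```python
# Illustrative sketch of a "simulate the fitted model" check (not the paper's analysis).
import numpy as np

rng = np.random.default_rng(1)

def simulate_rw(alpha, beta, rewards_p=(0.2, 0.8), n_trials=200):
    """Rescorla-Wagner learner with a softmax choice rule on a two-armed bandit."""
    q = np.zeros(2)
    choices, rewards = [], []
    for _ in range(n_trials):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        c = int(rng.random() < p1)
        r = float(rng.random() < rewards_p[c])
        q[c] += alpha * (r - q[c])
        choices.append(c)
        rewards.append(r)
    return np.array(choices), np.array(rewards)

def neg_log_lik(alpha, beta, choices, rewards):
    q, nll = np.zeros(2), 0.0
    for c, r in zip(choices, rewards):
        p1 = 1.0 / (1.0 + np.exp(-beta * (q[1] - q[0])))
        nll -= np.log(p1 if c == 1 else 1.0 - p1)
        q[c] += alpha * (r - q[c])
    return nll

# "Observed" data come from the same model here, purely to exercise the workflow.
observed_choices, observed_rewards = simulate_rw(alpha=0.3, beta=4.0)

# Crude grid-search fit, for illustration only.
grid = [(a, b) for a in np.linspace(0.05, 0.95, 19) for b in np.linspace(0.5, 10, 20)]
alpha_hat, beta_hat = min(grid, key=lambda ab: neg_log_lik(*ab, observed_choices, observed_rewards))

# The step the abstract argues for: simulate the *fitted* model and compare to the data.
sim_choices, _ = simulate_rw(alpha_hat, beta_hat)
print("observed P(best option): ", observed_choices.mean())
print("simulated P(best option):", sim_choices.mean())
```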


2021 ◽  
Vol 2090 (1) ◽  
pp. 012143
Author(s):  
Corneliu Barbulescu ◽  
Toma-Leonida Dragomir

Abstract The behaviour of real capacitors in electric circuits deviates from that of an ideal capacitor modelled by a single capacitance. In order to find better compromises between precision and simplicity, different C-R-L models are used. In these models, C, R, and L are called equivalent parameters and take constant values. Under these assumptions, capacitors are modelled as lumped-parameter subsystems, although it is well known that real capacitors are essentially distributed-parameter systems. As highlighted in this paper, capacitors are also time-variant subsystems. To prove this, we use two types of experimental data: data measured during the capacitor’s discharge process and data obtained from frequency characteristics. The article proposes two methods for estimating equivalent values of the model parameters C and R, based on their time variance as highlighted by the experimental data. The estimation methods use systems of equations associated with the capacitor’s discharge process and with the frequency characteristics, respectively, via polynomial regression. The experiments were carried out with an electrolytic polymer capacitor rated 220 μF, 25 V, 2.5 A rms, 85 °C, designed mainly for energy storage and filtering; the results were confirmed by experiments performed on other similar capacitors.
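The following sketch shows one common way to estimate an equivalent capacitance and series resistance from a discharge record: the capacitor is discharged through a known load resistor, a degree-1 polynomial regression on the logarithm of the voltage gives the RC time constant, and the initial voltage step gives the ESR. The load value, synthetic data, and single-exponential model are assumptions for illustration, not the paper's estimation procedure.

```python
# Illustrative sketch (not the paper's method): estimate equivalent C and series
# resistance (ESR) from a discharge curve through a known load resistor R_LOAD.
import numpy as np

R_LOAD = 100.0                     # ohms, assumed known load
V0 = 20.0                          # volts before switching onto the load
C_TRUE, ESR_TRUE = 220e-6, 0.05    # "ground truth" used only to fabricate data

t = np.linspace(0.0, 0.1, 2000)    # seconds
# Single-exponential discharge with an initial drop across the ESR, plus noise.
v = V0 * R_LOAD / (R_LOAD + ESR_TRUE) * np.exp(-t / ((R_LOAD + ESR_TRUE) * C_TRUE))
v += np.random.default_rng(0).normal(0.0, 0.01, size=t.size)

# Degree-1 polynomial regression on ln(v) gives the time constant.
slope, intercept = np.polyfit(t, np.log(v), 1)
tau = -1.0 / slope                            # tau = (R_LOAD + ESR) * C
v_start = np.exp(intercept)                   # voltage just after switching onto the load
esr_est = R_LOAD * (V0 - v_start) / v_start   # from V_start = V0 * R_LOAD / (R_LOAD + ESR)
c_est = tau / (R_LOAD + esr_est)

print(f"estimated C = {c_est * 1e6:.1f} uF, estimated ESR = {esr_est * 1e3:.1f} mOhm")
```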


Author(s):  
Themistoklis Koutsellis ◽  
Zissimos P. Mourelatos

Abstract For many data-driven reliability problems, the population is not homogeneous; i.e., its statistics are not described by a unimodal distribution. Also, the interval of observation may not be long enough to capture the failure statistics. A limited failure population (LFP) consists of two subpopulations, a defective and a nondefective one, with well-separated modes of the two underlying distributions. In reliability and warranty forecasting applications, estimating the number of defective units and the parameters of the underlying distribution is very important. Among various estimation methods, the maximum likelihood estimation (MLE) approach is the most widely used. Its likelihood function, however, is often incomplete, resulting in erroneous statistical inference. In this paper, we estimate the parameters of an LFP analytically using a rational function fitting (RFF) method based on the Weibull probability plot (WPP) of the observed data. We also introduce a censoring factor (CF) to assess whether the amount of collected data is sufficient for statistical inference. The proposed RFF method is compared with existing MLE approaches using simulated data and data related to automotive warranty forecasting.
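For readers unfamiliar with Weibull probability plots, the sketch below builds WPP coordinates for complete (uncensored) failure data using median-rank plotting positions and fits a straight line to recover the Weibull shape and scale. The synthetic data and the simple least-squares line are illustrative assumptions, not the paper's rational function fitting procedure.

```python
# Illustrative sketch: Weibull probability plot (WPP) coordinates and a straight-line
# fit for complete failure data. Not the paper's rational-function fit or LFP model.
import numpy as np

rng = np.random.default_rng(2)
shape_true, scale_true = 1.8, 1000.0              # assumed Weibull parameters
failures = scale_true * rng.weibull(shape_true, size=50)

# Median-rank (Bernard's approximation) plotting positions for the sorted times.
t = np.sort(failures)
n = t.size
ranks = np.arange(1, n + 1)
f_hat = (ranks - 0.3) / (n + 0.4)

# WPP: x = ln(t), y = ln(-ln(1 - F)). A straight line has slope = shape (beta)
# and intercept = -beta * ln(eta), where eta is the scale parameter.
x = np.log(t)
y = np.log(-np.log(1.0 - f_hat))
beta_hat, intercept = np.polyfit(x, y, 1)
eta_hat = np.exp(-intercept / beta_hat)

print(f"estimated shape = {beta_hat:.2f}, estimated scale = {eta_hat:.0f}")
```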


2000 ◽  
Vol 16 (1) ◽  
pp. 131-138
Author(s):  
Torben G. Andersen

The accessibility of high-performance computing power has always influenced theoretical and applied econometrics. Gouriéroux and Monfort begin their recent offering, Simulation-Based Econometric Methods, with a stylized three-stage classification of the history of statistical econometrics. In the first stage, lasting through the 1960s, models and estimation methods were designed to produce closed-form expressions for the estimators. This spurred thorough investigation of the standard linear model, linear simultaneous equations with the associated instrumental variable techniques, and maximum likelihood estimation within the exponential family. During the 1970s and 1980s the development of powerful numerical optimization routines led to the exploration of procedures without closed-form solutions for the estimators. During this period the general theory of nonlinear statistical inference was developed, and nonlinear micro models such as limited dependent variable models and nonlinear time series models, e.g., ARCH, were explored. The associated estimation principles included maximum likelihood (beyond the exponential family), pseudo-maximum likelihood, nonlinear least squares, and generalized method of moments. Finally, the third stage considers problems without a tractable analytic criterion function. Such problems almost invariably arise from the need to evaluate high-dimensional integrals. The idea is to circumvent the associated numerical problems by a simulation-based approach. The main requirement is therefore that the model may be simulated given the parameters and the exogenous variables. The approach delivers simulated counterparts to standard estimation procedures and has inspired the development of entirely new procedures based on the principle of indirect inference.


2022 ◽  
Vol 18 (1) ◽  
pp. e1009634
Author(s):  
Georgy Antonov ◽  
Christopher Gagne ◽  
Eran Eldar ◽  
Peter Dayan

The replay of task-relevant trajectories is known to contribute to memory consolidation and improved task performance. A wide variety of experimental data show that the content of replayed sequences is highly specific and can be modulated by reward as well as other prominent task variables. However, the rules governing the choice of sequences to be replayed remain poorly understood. One recent theoretical suggestion is that the prioritization of replay experiences in decision-making problems is based on their effect on the choice of action. We show that this implies that subjects should replay the sub-optimal actions that they dysfunctionally choose rather than the optimal ones when, by being forgetful, they experience large amounts of uncertainty in their internal models of the world. We use this to account for recent experimental data demonstrating exactly such pessimal replay, fitting model parameters to the individual subjects’ choices.
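One concrete way to formalise "prioritization by effect on the choice of action" is a gain term in the spirit of gain/need-based replay prioritization (Mattar & Daw, 2018): a candidate Bellman backup is scored by how much it would improve the expected value of the policy at the updated state. The tabular setup, softmax policy, and parameter values below are illustrative assumptions, not the paper's model or its fitted parameters.

```python
# Illustrative sketch: the "gain" of replaying one experience (s, a, r, s') in a
# tabular setting, i.e., the improvement in expected value at s if the backup were applied.
import numpy as np

def softmax_policy(q_row, beta=5.0):
    z = beta * (q_row - q_row.max())
    p = np.exp(z)
    return p / p.sum()

def gain_of_backup(Q, s, a, r, s_next, alpha=0.5, gamma=0.9, beta=5.0):
    """Expected value of the post-backup policy minus the pre-backup policy at s,
    both evaluated under the post-backup Q values."""
    Q_new = Q.copy()
    target = r + gamma * Q[s_next].max()
    Q_new[s, a] += alpha * (target - Q[s, a])
    pi_old = softmax_policy(Q[s], beta)
    pi_new = softmax_policy(Q_new[s], beta)
    return float(pi_new @ Q_new[s] - pi_old @ Q_new[s])

# Toy example: 3 states, 2 actions; a surprisingly bad outcome for action 1 in state 0
# yields a much larger gain than a merely confirming outcome for action 0.
Q = np.array([[0.5, 0.6], [0.2, 0.1], [0.0, 0.0]])
print("gain of replaying the bad outcome:       ", gain_of_backup(Q, s=0, a=1, r=-1.0, s_next=2))
print("gain of replaying a confirming outcome:  ", gain_of_backup(Q, s=0, a=0, r=0.6, s_next=2))
```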


2018 ◽  
Vol 2 ◽  
pp. 239821281881059 ◽  
Author(s):  
Anthony G. Vaccaro ◽  
Stephen M. Fleming

Metacognition supports reflection upon and control of other cognitive processes. Despite metacognition occupying a central role in human psychology, its neural substrates remain underdetermined, partly due to study-specific differences in task domain and type of metacognitive judgement under study. It is also unclear how metacognition relates to other apparently similar abilities that depend on recursive thought such as theory of mind or mentalising. Now that neuroimaging studies of metacognition are more prevalent, we have an opportunity to characterise consistencies in neural substrates identified across different analysis types and domains. Here we used quantitative activation likelihood estimation methods to synthesise findings from 47 neuroimaging studies on metacognition, divided into categories based on the target of metacognitive evaluation (memory and decision-making), analysis type (judgement-related activation, confidence-related activation, and predictors of metacognitive sensitivity), and, for metamemory judgements, temporal focus (prospective and retrospective). A domain-general network, including medial and lateral prefrontal cortex, precuneus, and insula was associated with the level of confidence in self-performance in both decision-making and memory tasks. We found preferential engagement of right anterior dorsolateral prefrontal cortex in metadecision experiments and bilateral parahippocampal cortex in metamemory experiments. Results on metacognitive sensitivity were inconclusive, likely due to fewer studies reporting this contrast. Finally, by comparing our results to meta-analyses of mentalising, we obtain evidence for common engagement of the ventromedial and anterior dorsomedial prefrontal cortex in both metacognition and mentalising, suggesting that these regions may support second-order representations for thinking about the thoughts of oneself and others.


2021 ◽  
Author(s):  
Georgy K. Antonov ◽  
Christopher Gagne ◽  
Eran Eldar ◽  
Peter Dayan

Abstract The replay of task-relevant trajectories is known to contribute to memory consolidation and improved task performance. A wide variety of experimental data show that the content of replayed sequences is highly specific and can be modulated by reward as well as other prominent task variables. However, the rules governing the choice of sequences to be replayed remain poorly understood. One recent theoretical suggestion is that the prioritization of replay experiences in decision-making problems is based on their effect on the choice of action. We exploit this to address recent experimental data showing that, in a particular task, human subjects tended to replay sub-optimal outcomes that they later chose to avoid. We show that pessimistic replay is of benefit to forgetful agents experiencing large amounts of uncertainty in their models of the world. Further, we fit our model parameters to the individual subjects’ choices and confirm that their replay choices were appropriate according to the proposed scheme.


1992 ◽  
Vol 23 (2) ◽  
pp. 89-104 ◽  
Author(s):  
Ole H. Jacobsen ◽  
Feike J. Leij ◽  
Martinus Th. van Genuchten

Breakthrough curves of Cl and 3H2O were obtained during steady unsaturated flow in five lysimeters containing an undisturbed coarse sand (Orthic Haplohumod). The experimental data were analyzed in terms of the classical two-parameter convection-dispersion equation and a four-parameter two-region-type physical nonequilibrium solute transport model. Model parameters were obtained by both curve fitting and time moment analysis. The four-parameter model provided a much better fit to the data for three soil columns, but performed only slightly better for the two remaining columns. The retardation factor for Cl was about 10% lower than that for 3H2O, indicating some anion exclusion. For the four-parameter model, the average immobile water fraction was 0.14 and the Peclet numbers of the mobile region varied between 50 and 200. Time moment analysis proved to be a useful tool for quantifying the breakthrough curve (BTC), although the moments were found to be sensitive to experimental scatter in the measured data at larger times. Also, fitted parameters described the experimental data better than moment-generated parameter values.
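As a pointer to what time moment analysis of a BTC involves, the sketch below computes the zeroth, first, and second temporal moments of an effluent concentration record by numerical integration and converts them into a mean breakthrough time and a variance. The synthetic curve and time units are assumptions for illustration, not values from the study.

```python
# Illustrative sketch: temporal moment analysis of a breakthrough curve (BTC).
# The synthetic BTC below is made up; only the moment formulas are the point.
import numpy as np
from scipy.integrate import trapezoid

t = np.linspace(0.0, 50.0, 500)                   # time (h), assumed
c = np.exp(-((t - 20.0) ** 2) / (2 * 4.0 ** 2))   # synthetic effluent concentration

m0 = trapezoid(c, t)            # zeroth moment: area under the BTC
m1 = trapezoid(t * c, t)        # first moment
m2 = trapezoid(t ** 2 * c, t)   # second moment

mean_arrival = m1 / m0                  # mean breakthrough time
variance = m2 / m0 - mean_arrival ** 2  # spread of the BTC around the mean

print(f"mean arrival time = {mean_arrival:.2f} h, variance = {variance:.2f} h^2")
```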

