Modified leaky competing accumulator model of decision making with multiple alternatives: the Lie-algebraic approach

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chi-Fai Lo ◽  
Ho-Yan Ip

Abstract
In this communication, based upon the stochastic Gompertz law of population growth, we have reformulated the Leaky Competing Accumulator (LCA) model with multiple alternatives such that the positive-definiteness of evidence accumulation is automatically satisfied. By exploiting the Lie symmetry of the backward Kolmogorov equation (or Fokker–Planck equation) associated with the modified model and applying the Wei–Norman theorem, we have succeeded in deriving the N-dimensional joint probability density function (p.d.f.) and the marginal p.d.f. for each alternative in closed form. With this joint p.d.f., a likelihood function can be constructed, making model-fitting procedures feasible. We have also demonstrated that calibration of the model parameters from Monte Carlo simulated time series is both efficient and accurate. Moreover, the proposed Lie-algebraic approach can also be applied to tackle the modified LCA model with time-varying parameters.
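The positivity-preserving construction can be illustrated with a small simulation. The sketch below is a minimal Euler–Maruyama illustration of a Gompertz-type LCA; the assumed drift and the coefficient names (`inputs`, `kappa`, `beta`, `sigma`) are illustrative, not the paper's notation. Simulating the log-state turns the dynamics into a linear (OU-type) SDE, so positivity holds by construction.

```python
import numpy as np

def simulate_gompertz_lca(inputs=(1.2, 1.0, 0.8), kappa=0.5, beta=0.2,
                          sigma=0.3, dt=0.01, n_steps=500, seed=0):
    """Euler-Maruyama simulation of a Gompertz-type LCA in log space.

    Assumed (illustrative) dynamics:
        dx_i = x_i * (I_i - kappa*ln x_i - beta*sum_{j != i} ln x_j) dt
               + sigma * x_i dW_i.
    Working with y_i = ln x_i gives a linear SDE, so x_i = exp(y_i) > 0
    automatically.
    """
    rng = np.random.default_rng(seed)
    I = np.asarray(inputs, float)
    n_alt = I.size
    y = np.zeros(n_alt)                      # ln x; all accumulators start at 1
    path = np.empty((n_steps + 1, n_alt))
    path[0] = np.exp(y)
    for t in range(n_steps):
        competition = beta * (y.sum() - y)   # beta * sum_{j != i} ln x_j
        drift = I - kappa * y - competition - 0.5 * sigma**2  # Ito correction
        y = y + drift * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_alt)
        path[t + 1] = np.exp(y)
    return path

path = simulate_gompertz_lca()
```

Every sampled trajectory stays strictly positive, which is the property the Gompertz reformulation is designed to guarantee.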

2006 ◽  
Vol 17 (04) ◽  
pp. 571-580 ◽  
Author(s):  
FATEMEH GHASEMI ◽  
J. PEINKE ◽  
M. REZA RAHIMI TABAR ◽  
MUHAMMAD SAHIMI

The statistical properties of the interbeat-interval cascade in human hearts are evaluated by considering the joint probability distribution P(Δx₂, τ₂; Δx₁, τ₁) for two interbeat increments Δx₁ and Δx₂ at different time scales τ₁ and τ₂. We present evidence that the conditional probability distribution P(Δx₂, τ₂ | Δx₁, τ₁) may be described by a Chapman–Kolmogorov equation. The corresponding Kramers–Moyal (KM) coefficients are evaluated. The analysis indicates that while the first and second KM coefficients take on well-defined and significant values, the higher-order coefficients in the KM expansion are small. As a result, the joint probability distributions of the increments in the interbeat intervals are described by a Fokker–Planck equation, with the first two KM coefficients acting as the drift and diffusion coefficients. The method provides a novel technique for distinguishing two classes of subjects, namely healthy subjects and those with congestive heart failure, in terms of the drift and diffusion coefficients, which behave differently for the two classes.


2018 ◽  
Author(s):  
Josephine Ann Urquhart ◽  
Akira O'Connor

Receiver operating characteristics (ROCs) are plots which provide a visual summary of a classifier's decision response accuracy at varying discrimination thresholds. Typical practice, particularly within psychological studies, involves plotting an ROC from a limited number of discrete thresholds before fitting signal detection parameters to the plot. We propose that additional insight into decision-making could be gained by increasing ROC resolution, using trial-by-trial measurements derived from a continuous variable in place of discrete discrimination thresholds. Such continuous ROCs are not yet routinely used in behavioural research, which we attribute to issues of practicality (i.e. the difficulty of applying standard ROC model-fitting methodologies to continuous data). Consequently, the purpose of the current article is to provide a documented method of fitting signal detection parameters to continuous ROCs. This method reliably produces model fits equivalent to the unequal variance least squares method of model-fitting (Yonelinas et al., 1998), irrespective of the number of data points used in ROC construction. We present the suggested method in three main stages: I) building continuous ROCs, II) model-fitting to continuous ROCs and III) extracting model parameters from continuous ROCs. Throughout the article, procedures are demonstrated in Microsoft Excel, using an example continuous variable: reaction time, taken from a single-item recognition memory test. Supplementary MATLAB code for automating our procedures is presented in Appendix B, with a validation of the procedure using simulated data shown in Appendix C.
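Stage I, building a continuous ROC, amounts to sweeping a criterion across every observed value of the trial-by-trial variable. The sketch below is a minimal Python illustration rather than the article's Excel/MATLAB procedures; the variable names and simulated data are illustrative.

```python
import numpy as np

def continuous_roc(scores_old, scores_new):
    """Trial-by-trial ROC: sweep a criterion across every observed score.

    scores_old : evidence values on target-present ("old") trials
    scores_new : evidence values on target-absent ("new") trials
    Returns false-alarm rates, hit rates and the area under the curve.
    """
    thresholds = np.unique(np.concatenate([scores_old, scores_new]))[::-1]
    hits = np.array([(scores_old >= c).mean() for c in thresholds])
    fas = np.array([(scores_new >= c).mean() for c in thresholds])
    fas = np.concatenate([[0.0], fas])    # start the curve at (0, 0)
    hits = np.concatenate([[0.0], hits])
    # trapezoidal area under the curve
    auc = float(np.sum(np.diff(fas) * (hits[1:] + hits[:-1])) / 2)
    return fas, hits, auc

# illustrative data: "old" items carry more evidence on average (d' = 1)
rng = np.random.default_rng(0)
old = rng.normal(1.0, 1.0, 2000)
new = rng.normal(0.0, 1.0, 2000)
fas, hits, auc = continuous_roc(old, new)
```

Each trial contributes its own point, so the ROC resolution grows with the number of trials instead of being fixed by a handful of discrete confidence bins.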


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
C. F. Lo

The Lie-algebraic approach has been applied to solve the bond pricing problem in single-factor interest rate models. Four of the popular single-factor models, namely, the Vasicek model, Cox-Ingersoll-Ross model, double square-root model, and Ahn-Gao model, are investigated. By exploiting the dynamical symmetry of their bond pricing equations, analytical closed-form pricing formulae can be derived in a straightforward manner. Time-varying model parameters could also be incorporated into the derivation of the bond price formulae, and this has the added advantage of allowing yield curves to be fitted. Furthermore, the Lie-algebraic approach can be easily extended to formulate new analytically tractable single-factor interest rate models.
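For the Vasicek model specifically, the closed-form zero-coupon bond price takes the well-known affine form P = A(τ) e^{−B(τ) r}, which can be sketched directly. Constant parameters are assumed here; the Lie-algebraic derivation also covers the time-varying case.

```python
import numpy as np

def vasicek_bond_price(r0, tau, a, b, sigma):
    """Closed-form zero-coupon bond price in the Vasicek model
    dr = a*(b - r) dt + sigma dW, for time to maturity tau.

    B(tau) = (1 - e^{-a tau}) / a
    ln A(tau) = (B - tau)(a^2 b - sigma^2/2)/a^2 - sigma^2 B^2 / (4a)
    """
    B = (1.0 - np.exp(-a * tau)) / a
    A = np.exp((B - tau) * (a**2 * b - sigma**2 / 2.0) / a**2
               - sigma**2 * B**2 / (4.0 * a))
    return A * np.exp(-B * r0)

price = vasicek_bond_price(r0=0.05, tau=5.0, a=0.2, b=0.05, sigma=0.01)
```

Two sanity checks: the price tends to 1 as maturity shrinks to zero, and for very short maturities it approaches exp(−r·τ).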


2017 ◽  
Vol 14 (18) ◽  
pp. 4295-4314 ◽  
Author(s):  
Dan Lu ◽  
Daniel Ricciuto ◽  
Anthony Walker ◽  
Cosmin Safta ◽  
William Munger

Abstract. Calibration of terrestrial ecosystem models is important but challenging. Bayesian inference implemented by Markov chain Monte Carlo (MCMC) sampling provides a comprehensive framework to estimate model parameters and associated uncertainties from their posterior distributions. The effectiveness and efficiency of the method strongly depend on the MCMC algorithm used. In this work, a differential evolution adaptive Metropolis (DREAM) algorithm is used to estimate posterior distributions of 21 parameters for the data assimilation linked ecosystem carbon (DALEC) model using 14 years of daily net ecosystem exchange data collected at the Harvard Forest Environmental Measurement Site eddy-flux tower. Calibration with DREAM results in a better model fit and predictive performance than the popular adaptive Metropolis (AM) scheme. Moreover, DREAM indicates that two parameters controlling autumn phenology have multiple modes in their posterior distributions, while AM identifies only one mode. The application suggests that DREAM is well suited to calibrating complex terrestrial ecosystem models, where the number of uncertain parameters is usually large and local optima are always a concern. In addition, this effort uses residual analysis to justify the assumptions of the error model employed in the Bayesian calibration. The result indicates that a heteroscedastic, correlated, Gaussian error model is appropriate for the problem, and the likelihood function constructed from it can alleviate the underestimation of parameter uncertainty usually caused by uncorrelated error models.
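The core DREAM move, a proposal built from the difference of two other chains' states, can be sketched in miniature. The code below is a bare differential-evolution MCMC (DE-MC) sampler on a toy Gaussian posterior, not the full DREAM algorithm (which adds subspace crossover, outlier-chain handling and adaptation); all names are illustrative.

```python
import numpy as np

def de_mc(log_post, n_chains, n_iter, d, seed=0):
    """Minimal differential-evolution MCMC sampler.

    Each chain proposes x_i + gamma*(x_r1 - x_r2) + small noise, where
    r1, r2 index two other randomly chosen chains -- the core move behind
    DREAM. gamma = 2.38/sqrt(2d) is the standard near-optimal jump scale.
    """
    rng = np.random.default_rng(seed)
    gamma = 2.38 / np.sqrt(2 * d)
    X = rng.normal(0.0, 1.0, (n_chains, d))          # dispersed start
    logp = np.array([log_post(x) for x in X])
    samples = []
    for _ in range(n_iter):
        for i in range(n_chains):
            others = [j for j in range(n_chains) if j != i]
            r1, r2 = rng.choice(others, 2, replace=False)
            prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(0, 1e-4, d)
            lp = log_post(prop)
            if np.log(rng.random()) < lp - logp[i]:  # Metropolis accept
                X[i], logp[i] = prop, lp
        samples.append(X.copy())
    return np.array(samples)

# toy posterior: standard 2-D Gaussian
log_post = lambda x: -0.5 * float(np.dot(x, x))
chains = de_mc(log_post, n_chains=8, n_iter=2000, d=2)
post = chains[500:].reshape(-1, 2)                   # discard burn-in
```

Because the jump direction is learned from the current spread of the chains, the sampler adapts its proposal geometry automatically, which is what makes DREAM effective in high-dimensional, multimodal calibration problems.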


2016 ◽  
Author(s):  
Kassian Kobert ◽  
Alexandros Stamatakis ◽  
Tomáš Flouri

The phylogenetic likelihood function is the major computational bottleneck in several applications of evolutionary biology, such as phylogenetic inference, species delimitation, model selection and divergence time estimation. Given the alignment, a tree and the evolutionary model parameters, the likelihood function computes the conditional likelihood vectors for every node of the tree. Vector entries for which all input data are identical result in redundant likelihood operations which, in turn, yield identical conditional values. Such operations can be omitted to improve run-time and, using appropriate data structures, to reduce memory usage. We present a fast, novel method for identifying and omitting such redundant operations in phylogenetic likelihood calculations, and assess the performance improvement and memory saving attained by our method. Using empirical and simulated data sets, we show that a prototype implementation of our method yields up to 10-fold speedups and uses up to 78% less memory than one of the fastest and most highly tuned implementations of the phylogenetic likelihood function currently available. Our method is generic and can seamlessly be integrated into any phylogenetic likelihood implementation.
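The simplest instance of the redundancy described is site-pattern compression: identical alignment columns are collapsed into weighted unique patterns, so each conditional-likelihood computation runs once and its contribution is multiplied by the pattern weight. The paper's method generalizes this to repeated subtree patterns, but the basic idea can be sketched as:

```python
from collections import Counter

def compress_alignment(columns):
    """Collapse identical alignment site columns into unique patterns with
    weights. Each unique pattern's conditional likelihood is then computed
    once, and its log-likelihood is multiplied by the pattern's weight."""
    counts = Counter(columns)
    patterns = list(counts)                  # unique columns, in first-seen order
    weights = [counts[p] for p in patterns]  # how many sites share each pattern
    return patterns, weights

# toy alignment of 3 taxa over 6 sites, stored column-wise
cols = [("A", "A", "G"), ("C", "C", "C"), ("A", "A", "G"),
        ("T", "T", "A"), ("C", "C", "C"), ("A", "A", "G")]
patterns, weights = compress_alignment(cols)
# 6 sites collapse to 3 unique patterns
```

The total log-likelihood is then sum(w * logL(p) for p, w in zip(patterns, weights)), so the saving scales with how repetitive the alignment is.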


2000 ◽  
Vol 03 (04) ◽  
pp. 661-674 ◽  
Author(s):  
C. F. LO ◽  
P. H. YUEN ◽  
C. H. HUI

This paper provides a method for pricing options in the constant elasticity of variance (CEV) model environment using the Lie-algebraic technique when the model parameters are time-dependent. Analytical solutions for the option values incorporating time-dependent model parameters are obtained in various CEV processes with different elasticity factors. The numerical results indicate that option values are sensitive to volatility term structures. It is also possible to generate further results using various functional forms for interest rate and dividend term structures. Furthermore, the Lie-algebraic approach is very simple and can be easily extended to other option pricing models with well-defined algebraic structures.
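The time-dependent CEV dynamics can at least be checked numerically. The sketch below is a plain Euler Monte Carlo pricer, not the paper's Lie-algebraic analytical solution; the function and parameter names are illustrative. For elasticity factor beta = 1 and constant coefficients it should reproduce the Black–Scholes price.

```python
import numpy as np

def cev_call_mc(S0, K, T, r_fn, sigma_fn, beta, n_paths=100_000,
                n_steps=200, seed=0):
    """Monte Carlo price of a European call under a CEV process with
    time-dependent coefficients:
        dS = r(t) S dt + sigma(t) S**beta dW.
    r_fn and sigma_fn are callables of time; beta is the elasticity factor."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    S = np.full(n_paths, float(S0))
    discount = 1.0
    for k in range(n_steps):
        t = k * dt
        r, sig = r_fn(t), sigma_fn(t)
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        S = np.maximum(S + r * S * dt + sig * S**beta * dW, 1e-12)
        discount *= np.exp(-r * dt)
    return discount * np.maximum(S - K, 0.0).mean()

# beta = 1 with flat coefficients reduces to Black-Scholes
# (BS value for S0=K=100, T=1, r=0.05, sigma=0.2 is about 10.45)
price = cev_call_mc(100.0, 100.0, 1.0, lambda t: 0.05, lambda t: 0.2, beta=1.0)
```

Passing a time-varying `sigma_fn` (e.g. a term structure `lambda t: 0.15 + 0.1 * t`) is then a one-line change, which is the sensitivity to volatility term structures the paper examines.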


Author(s):  
Ahmad Bani Younes ◽  
James Turner

In general, the behavior of science and engineering systems is predicted using nonlinear mathematical models. Imprecise knowledge of the model parameters alters the system response from that of the assumed nominal model. We propose an algorithm for generating insights into the range of variability that can be expected due to model uncertainty. An automatic differentiation tool builds exact partial-derivative models to develop a State Transition Tensor Series (STTS) solution for mapping initial uncertainty models into instantaneous uncertainty models. Nonlinear transformations are developed for mapping an initial probability distribution function into the current probability distribution function, enabling the computation of fully nonlinear statistical system properties; this also demands the inverse mapping of the series. The resulting nonlinear probability distribution function (pdf) represents a Liouville approximation to the stochastic Fokker–Planck equation. Numerical examples are presented that demonstrate the effectiveness of the proposed methodology.
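The first-order core of such an uncertainty mapping, propagating a covariance through local state transition matrices, can be sketched as follows. This is only the linear truncation of the state transition tensor series, with an illustrative damped-oscillator example and hypothetical function names; the paper's method retains higher-order tensor terms built by automatic differentiation.

```python
import numpy as np

def propagate_covariance(f_jac, x0, P0, dt, n_steps):
    """First-order sketch of state-transition uncertainty mapping.

    The mean is advanced by Euler steps and the covariance by
    P <- Phi P Phi^T, where Phi = I + J dt is the local state transition
    matrix built from the Jacobian J of the dynamics. (The full STTS
    approach keeps higher-order tensor terms as well.)"""
    x, P = np.array(x0, float), np.array(P0, float)
    I = np.eye(len(x))
    for _ in range(n_steps):
        f, J = f_jac(x)          # dynamics value and Jacobian at x
        Phi = I + J * dt
        x = x + f * dt
        P = Phi @ P @ Phi.T
    return x, P

# example dynamics: damped linear oscillator xdot = A x (Jacobian is A)
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
f_jac = lambda x: (A @ x, A)
x, P = propagate_covariance(f_jac, [1.0, 0.0], 0.01 * np.eye(2), 1e-3, 5000)
```

For linear dynamics the first-order map is exact; the nonlinear case is precisely where the higher-order tensor terms of the series matter.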


2017 ◽  
Vol 6 (3) ◽  
pp. 75
Author(s):  
Tiago V. F. Santana ◽  
Edwin M. M. Ortega ◽  
Gauss M. Cordeiro ◽  
Adriano K. Suzuki

A new regression model based on the exponentiated Weibull distribution and the structure of the generalized linear model, called the generalized exponentiated Weibull linear model (GEWLM), is proposed. The GEWLM is composed of three important structural parts: the random component, characterized by the distribution of the response variable; the systematic component, which includes the explanatory variables in the model by means of a linear structure; and a link function, which connects the systematic and random parts of the model. Explicit expressions for the logarithm of the likelihood function, the score vector, and the observed and expected information matrices are presented. The method of maximum likelihood and a Bayesian procedure are adopted for estimating the model parameters. To detect influential observations in the new model, we use diagnostic measures based on local influence and Bayesian case influence diagnostics. We also show that the estimates of the GEWLM are robust to the presence of outliers in the data. Additionally, to check whether the model supports its assumptions, to detect atypical observations and to verify the goodness-of-fit of the regression model, we define residuals based on the quantile function, and perform a Monte Carlo simulation study to construct confidence bands from the generated envelopes. We apply the new model to a dataset from the insurance area.
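Maximum-likelihood fitting of the underlying exponentiated Weibull distribution can be sketched as below; the full GEWLM additionally ties a linear predictor of covariates to the parameters through a link function. The parameterization and names are illustrative, and the data are simulated by inverting the closed-form cdf.

```python
import numpy as np
from scipy.optimize import minimize

def ew_neg_loglik(log_params, t):
    """Negative log-likelihood of the exponentiated Weibull distribution,
    density f(t) = (theta*gamma/alpha) (t/alpha)^(gamma-1)
                   exp(-(t/alpha)^gamma) (1 - exp(-(t/alpha)^gamma))^(theta-1),
    parameterized on the log scale so alpha, gamma, theta stay positive."""
    alpha, gamma, theta = np.exp(log_params)
    z = (t / alpha) ** gamma
    log_f = (np.log(theta) + np.log(gamma) - np.log(alpha)
             + (gamma - 1.0) * np.log(t / alpha) - z
             + (theta - 1.0) * np.log1p(-np.exp(-z)))
    return -log_f.sum()

# simulate data by inverting the cdf F(t) = (1 - exp(-(t/alpha)^gamma))^theta
rng = np.random.default_rng(0)
alpha0, gamma0, theta0 = 2.0, 1.5, 1.0
u = rng.random(4000)
t = alpha0 * (-np.log1p(-u ** (1.0 / theta0))) ** (1.0 / gamma0)

fit = minimize(ew_neg_loglik, x0=np.zeros(3), args=(t,), method="Nelder-Mead")
alpha_hat, gamma_hat, theta_hat = np.exp(fit.x)
```

With theta fixed at 1 the model reduces to the ordinary Weibull, which is a convenient special case for checking the likelihood implementation.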

