Assessing model-based inferences in decision making with single-trial response time decomposition

Author(s):  
Gabriel Weindel ◽  
Royce Anders ◽  
F.-Xavier Alario ◽  
Boris Burle

Decision-making models based on evidence accumulation processes (the most prolific one being the drift-diffusion model – DDM) are widely used to draw inferences about latent psychological processes from chronometric data. While the observed goodness of fit in a wide range of tasks supports the model’s validity, the derived interpretations have yet to be sufficiently cross-validated with other measures that also reflect cognitive processing. To do so, we recorded electromyographic (EMG) activity along with response times (RT), and used it to decompose every RT into two components: a pre-motor (PMT) and motor time (MT). These measures were mapped to the DDM's parameters, thus allowing a test, beyond quality of fit, of the validity of the model’s assumptions and their usual interpretation. In two perceptual decision tasks, performed within a canonical task setting, we manipulated stimulus contrast, speed-accuracy trade-off, and response force, and assessed their effects on PMT, MT, and RT. Contrary to common assumptions, these three factors consistently affected MT. DDM parameter estimates of non-decision processes are thought to include motor execution processes, and they were globally linked to the recorded response execution MT. However, when the assumption of independence between decision and non-decision processes was not met, in the fastest trials, the link was weaker. Overall, the results show a fair concordance between model-based and EMG-based decompositions of RTs, but also establish some limits on the interpretability of decision model parameters linked to response execution.
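A minimal sketch of the kind of single-trial decomposition described above, assuming a simple baseline-plus-threshold rule for locating EMG burst onset; the threshold of three baseline standard deviations, the sampling rate, and the synthetic burst are illustrative assumptions, not the authors' onset-detection procedure.

```python
import numpy as np

def decompose_rt(emg, fs, stim_idx, resp_idx, threshold_sd=3.0):
    """Split one trial's RT into pre-motor time (PMT) and motor time (MT)
    by locating EMG burst onset between stimulus and overt response."""
    rectified = np.abs(emg - np.mean(emg[:stim_idx]))      # rectify around baseline mean
    baseline_sd = np.std(rectified[:stim_idx])             # pre-stimulus noise level
    above = np.where(rectified[stim_idx:resp_idx] > threshold_sd * baseline_sd)[0]
    if above.size == 0:
        return None                                        # no detectable burst on this trial
    onset_idx = stim_idx + above[0]
    pmt = (onset_idx - stim_idx) / fs * 1000               # pre-motor time, ms
    mt = (resp_idx - onset_idx) / fs * 1000                # motor time, ms
    return pmt, mt                                         # pmt + mt = RT

# Synthetic example: flat baseline, burst starting 300 ms after the stimulus,
# mechanical response 80 ms later
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
emg = rng.normal(0, 1, t.size)
emg[500:580] += 6 + 4 * np.sin(2 * np.pi * 50 * t[500:580])   # simulated EMG burst
print(decompose_rt(emg, fs, stim_idx=200, resp_idx=580))       # roughly (300.0, 80.0)
```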

2017 ◽  
Author(s):  
David P. McGovern ◽  
Aoife Hayes ◽  
Simon P. Kelly ◽  
Redmond O’Connell

Ageing impacts on decision making behaviour across a wide range of cognitive tasks and scenarios. Computational modeling has proven highly valuable in providing mechanistic interpretations of these age-related differences; however, the extent to which model parameter differences accurately reflect changes to the underlying neural computations has yet to be tested. Here, we measured neural signatures of decision formation as younger and older participants performed motion discrimination and contrast-change detection tasks, and compared the dynamics of these signals to key parameter estimates from fits of a prominent accumulation-to-bound model (drift diffusion) to behavioural data. Our results indicate marked discrepancies between the age-related effects observed in the model output and the neural data. Most notably, while the model predicted a higher decision boundary in older age for both tasks, the neural data indicated no such differences. To reconcile the model and neural findings, we used our neurophysiological observations as a guide to constrain and adapt the model parameters. In addition to providing better fits to behaviour on both tasks, the resultant neurally-informed models furnished novel predictions regarding other features of the neural data which were empirically validated. These included a slower mean rate of evidence accumulation amongst older adults during motion discrimination and a beneficial reduction in between-trial variability in accumulation rates on the contrast-change detection task, which was linked to more consistent attentional engagement. Our findings serve to highlight how combining human brain signal measurements with computational modelling can yield unique insights into group differences in neural mechanisms for decision making.
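As a hedged illustration of recovering drift-diffusion parameters from behavioural summaries (and of how a parameter such as the boundary could then be compared with neural data), the sketch below implements the simplified EZ-diffusion approximation rather than the fitting procedure used in the study; the group summary statistics are hypothetical.

```python
import numpy as np

def ez_diffusion(prop_correct, rt_var, rt_mean, s=0.1):
    """EZ-diffusion estimates: recover drift rate v, boundary separation a,
    and non-decision time Ter from accuracy, RT variance, and mean RT (seconds)."""
    pc = np.clip(prop_correct, 1e-4, 1 - 1e-4)   # avoid logit of 0 or 1
    L = np.log(pc / (1 - pc))                    # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / rt_var
    v = np.sign(pc - 0.5) * s * x**0.25          # drift rate
    a = s**2 * L / v                             # boundary separation
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))  # mean decision time
    ter = rt_mean - mdt                          # non-decision time
    return v, a, ter

# Hypothetical group summaries: older adults slower and slightly more variable
print(ez_diffusion(prop_correct=0.90, rt_var=0.045, rt_mean=0.65))   # "younger"
print(ez_diffusion(prop_correct=0.88, rt_var=0.060, rt_mean=0.80))   # "older"
```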


2003 ◽  
Vol 125 (1) ◽  
pp. 132-140 ◽  
Author(s):  
David C. Lin ◽  
T. Richard Nichols

Models of muscle crossbridge dynamics have great potential for understanding muscle contraction and have a wide range of applications. However, the estimation of many model parameters, most of which are difficult to measure, limits their applicability. This study developed a method of estimating parameters of the Distribution Moment crossbridge model from measurements of force-length and force-velocity relationships in cat soleus single muscle fibers. Analysis of the parameter estimates showed that the detachment rate parameters had more uncertainty than the attachment rate parameter, which could reflect physiological variation in contractile protein content and in the response of muscle to lengthening.
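The Distribution Moment model itself involves several coupled rate parameters; as a simpler, hedged illustration of the general approach of estimating contractile parameters from steady-state force-velocity data, the sketch below fits Hill's hyperbolic force-velocity relation by nonlinear least squares and reports parameter uncertainties from the covariance of the fit. The data values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_fv(v, F0, a, b):
    """Hill's hyperbolic force-velocity relation: (F + a)(v + b) = (F0 + a) * b."""
    return (F0 * b - a * v) / (v + b)

# Hypothetical shortening-velocity / force measurements (normalized units)
v_data = np.array([0.0, 0.2, 0.5, 1.0, 1.5, 2.0, 2.5])
f_data = np.array([1.00, 0.72, 0.48, 0.28, 0.18, 0.11, 0.07])

popt, pcov = curve_fit(hill_fv, v_data, f_data, p0=[1.0, 0.3, 0.5])
perr = np.sqrt(np.diag(pcov))          # 1-SD uncertainty of each estimate
for name, est, err in zip(["F0", "a", "b"], popt, perr):
    print(f"{name} = {est:.3f} +/- {err:.3f}")
```

Comparing the relative standard errors of the fitted parameters mirrors, in miniature, the paper's finding that some rate parameters are constrained much more tightly by the data than others.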


2017 ◽  
Vol 1 ◽  
pp. 24-57 ◽  
Author(s):  
Woo-Young Ahn ◽  
Nathaniel Haines ◽  
Lei Zhang

Reinforcement learning and decision-making (RLDM) provide a quantitative framework and computational theories with which we can disentangle psychiatric conditions into the basic dimensions of neurocognitive functioning. RLDM offer a novel approach to assessing and potentially diagnosing psychiatric patients, and there is growing enthusiasm for both RLDM and computational psychiatry among clinical researchers. Such a framework can also provide insights into the brain substrates of particular RLDM processes, as exemplified by model-based analysis of data from functional magnetic resonance imaging (fMRI) or electroencephalography (EEG). However, researchers often find the approach too technical and have difficulty adopting it for their research. Thus, a critical need remains to develop a user-friendly tool for the wide dissemination of computational psychiatric methods. We introduce an R package called hBayesDM (hierarchical Bayesian modeling of Decision-Making tasks), which offers computational modeling of an array of RLDM tasks and social exchange games. The hBayesDM package offers state-of-the-art hierarchical Bayesian modeling, in which both individual and group parameters (i.e., posterior distributions) are estimated simultaneously in a mutually constraining fashion. At the same time, the package is extremely user-friendly: users can perform computational modeling, output visualization, and Bayesian model comparisons, each with a single line of coding. Users can also extract the trial-by-trial latent variables (e.g., prediction errors) required for model-based fMRI/EEG. With the hBayesDM package, we anticipate that anyone with minimal knowledge of programming can take advantage of cutting-edge computational-modeling approaches to investigate the underlying processes of and interactions between multiple decision-making (e.g., goal-directed, habitual, and Pavlovian) systems. In this way, we expect that the hBayesDM package will contribute to the dissemination of advanced modeling approaches and enable a wide range of researchers to easily perform computational psychiatric research within different populations.
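hBayesDM itself is an R package built on hierarchical Bayesian (Stan) estimation; the short Python sketch below does not reproduce its API but illustrates one of the latent quantities it exports for model-based fMRI/EEG, namely trial-by-trial Rescorla-Wagner prediction errors. The learning rate, reward probability, and trial count are arbitrary assumptions.

```python
import numpy as np

def rescorla_wagner(rewards, alpha=0.2, v0=0.0):
    """Trial-by-trial Rescorla-Wagner updating: returns value estimates and
    prediction errors, the kind of latent time series used as fMRI/EEG regressors."""
    values, pes = [], []
    v = v0
    for r in rewards:
        pe = r - v            # prediction error on this trial
        v = v + alpha * pe    # value update with learning rate alpha
        values.append(v)
        pes.append(pe)
    return np.array(values), np.array(pes)

# Hypothetical reward sequence for one subject in a bandit-like task
rng = np.random.default_rng(1)
rewards = rng.binomial(1, 0.7, size=20)       # 70%-rewarded option
values, prediction_errors = rescorla_wagner(rewards, alpha=0.3)
print(np.round(prediction_errors, 2))
```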


2021 ◽  
Author(s):  
Miaorun Wang ◽  
Haojie Liu ◽  
Bernd Lennartz

Hydrophysical soil properties play an important role in regulating the water balance of peatlands and are known to be a function of the status of peat degradation. The objective of this study was to revise multiple regression models (pedotransfer functions, PTFs) for the assessment of hydrophysical properties from readily available soil properties. We selected three study sites, each representing a different state of peat degradation (natural, degraded, and extremely degraded). At each site, 72 undisturbed soil cores were collected. The saturated hydraulic conductivity (Ks), soil water retention curves, total porosity, macroporosity, bulk density (BD), and soil organic matter (SOM) content were determined for all sampling locations. The van Genuchten (VG) model parameters (θs, α, n) were optimized using the RETC software package. Macroporosity and Ks were found to be highly correlated, but the obtained functions differ between differently degraded peatlands. The introduction of macroporosity into existing PTFs substantially improved the derivation of hydrophysical parameter values as compared to functions based on BD and SOM content alone. The obtained PTFs can be applied to a wide range of natural and degraded peat soils. We assume that the incorporation of macroporosity helps to overcome effects possibly resulting from soil management. Our results suggest that the extra effort required to determine macroporosity is worthwhile, given the quality of the resulting estimates for hydraulic conductivity and for the soil hydraulic VG model.
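A brief sketch of the two ingredients named above, assuming standard forms: the van Genuchten retention curve with the Mualem constraint m = 1 - 1/n, and a linear pedotransfer function for log10(Ks) that includes macroporosity as a predictor. The regression coefficients and input values are placeholders, not the functions derived in this study.

```python
import numpy as np

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention curve; h is suction (positive, cm),
    with m fixed to 1 - 1/n (Mualem constraint)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

def log_ks_ptf(bulk_density, som, macroporosity,
               b0=1.5, b1=-2.0, b2=0.01, b3=8.0):
    """Hypothetical pedotransfer function for log10(Ks); the coefficients are
    placeholders, not the regression values reported in the study."""
    return b0 + b1 * bulk_density + b2 * som + b3 * macroporosity

h = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])          # suction in cm
print(van_genuchten(h, theta_r=0.05, theta_s=0.85, alpha=0.02, n=1.4))
print(log_ks_ptf(bulk_density=0.25, som=85.0, macroporosity=0.12))
```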


Processes ◽  
2018 ◽  
Vol 6 (4) ◽  
pp. 27 ◽  
Author(s):  
René Schenkendorf ◽  
Xiangzhong Xie ◽  
Moritz Rehbein ◽  
Stephan Scholl ◽  
Ulrike Krewer

In the field of chemical engineering, mathematical models have proven to be an indispensable tool for process analysis, process design, and condition monitoring. To gain the most benefit from model-based approaches, the implemented mathematical models have to be based on sound principles, and they need to be calibrated to the process under study with suitable model parameter estimates. Often, however, the model parameters identified from experimental data carry severe uncertainties, leading to incorrect or biased inferences. This applies in particular to pharmaceutical manufacturing, where measurement data are usually limited in quantity and quality when novel active pharmaceutical ingredients are analyzed. Optimally designed experiments, in turn, aim to increase the quality of the gathered data in the most efficient way. Any improvement in data quality results in more precise parameter estimates and more reliable model candidates. The methods applied for parameter sensitivity analysis and the chosen design criteria are crucial for the effectiveness of optimal experimental design. In this work, different design measures based on global parameter sensitivities are critically compared with state-of-the-art concepts that follow simplifying linearization principles. The efficient implementation of the proposed sensitivity measures is explicitly addressed so as to be applicable to complex chemical engineering problems of practical relevance. As a case study, the homogeneous synthesis of 3,4-dihydro-1H-1-benzazepine-2,5-dione, a scaffold for the preparation of various protein kinase inhibitors, is analyzed, followed by a more complex model of biochemical reactions. In both studies, the model-based optimal experimental design benefits from global parameter sensitivities combined with proper design measures.
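To make the contrast concrete, the hedged sketch below compares a linearized (finite-difference) sensitivity at nominal parameter values with brute-force variance-based first-order indices for a toy two-parameter model; it is not the case-study model, nor the specific design measures evaluated in the paper.

```python
import numpy as np

def model(k1, k2, t=1.0):
    """Toy two-parameter kinetic response (purely illustrative)."""
    return np.exp(-k1 * t) * np.exp(-k2 * t ** 2)

# Local (linearized) sensitivities: finite-difference derivatives at nominal values
k1_0, k2_0, eps = 1.0, 0.5, 1e-6
s_k1_local = (model(k1_0 + eps, k2_0) - model(k1_0, k2_0)) / eps
s_k2_local = (model(k1_0, k2_0 + eps) - model(k1_0, k2_0)) / eps

# Global (variance-based) first-order indices, S_i = Var[E(y | x_i)] / Var(y),
# estimated by binning Monte Carlo samples along each parameter axis
rng = np.random.default_rng(42)
N = 20_000
k1 = rng.uniform(0.5, 1.5, N)
k2 = rng.uniform(0.1, 0.9, N)
y = model(k1, k2)

def first_order_index(x, y, bins=40):
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges) - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

print("local:", s_k1_local, s_k2_local)
print("global:", first_order_index(k1, y), first_order_index(k2, y))
```

The local numbers describe the model only near the nominal point, whereas the global indices average over the whole parameter range, which is the distinction the abstract draws between linearization-based and global design measures.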


Processes ◽  
2019 ◽  
Vol 7 (8) ◽  
pp. 509 ◽  
Author(s):  
Xiangzhong Xie ◽  
René Schenkendorf

Model-based concepts have been proven to be beneficial in pharmaceutical manufacturing, thus contributing to low costs and high quality standards. However, model parameters are derived from imperfect, noisy measurement data, which result in uncertain parameter estimates and sub-optimal process design concepts. In the last two decades, various methods have been proposed for dealing with parameter uncertainties in model-based process design. Most concepts for robustification, however, ignore the batch-to-batch variations that are common in pharmaceutical manufacturing processes. In this work, a probability-box robust process design concept is proposed. Batch-to-batch variations were considered to be imprecise parameter uncertainties, and modeled as probability-boxes accordingly. The point estimate method was combined with the back-off approach for efficient uncertainty propagation and robust process design. The novel robustification concept was applied to a freeze-drying process. Optimal shelf temperature and chamber pressure profiles are presented for the robust process design under batch-to-batch variation.
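A minimal sketch of probability-box propagation under assumed values: within-batch variation is treated as a normal distribution whose mean is only known to lie in an interval, and a two-loop Monte Carlo traces how a 95th-percentile quality attribute varies across that interval. The surrogate drying-time function and all numbers are illustrative, and this brute-force loop is far less efficient than the point estimate method with back-off used in the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

def drying_time(heat_coeff, shelf_temp=-20.0):
    """Toy surrogate for primary drying time (hours); purely illustrative,
    not the authors' mechanistic freeze-drying model."""
    return 30.0 / (heat_coeff * (shelf_temp + 45.0) / 25.0)

# Probability-box: the vial heat-transfer coefficient varies randomly within a batch
# (aleatory), while its batch mean is only known to lie in an interval (epistemic).
mu_interval = (0.9, 1.1)
sigma = 0.05

q95 = []
for mu in np.linspace(*mu_interval, 50):       # outer loop over the imprecise mean
    k = rng.normal(mu, sigma, 2000)            # inner loop: within-batch variation
    q95.append(np.quantile(drying_time(k), 0.95))

print("95th-percentile drying time spans",
      round(min(q95), 2), "to", round(max(q95), 2), "hours (p-box width)")
```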


2017 ◽  
Vol 21 (1) ◽  
pp. 65-81 ◽  
Author(s):  
David N. Dralle ◽  
Nathaniel J. Karst ◽  
Kyriakos Charalampous ◽  
Andrew Veenstra ◽  
Sally E. Thompson

Abstract. The study of single streamflow recession events is receiving increasing attention following the presentation of novel theoretical explanations for the emergence of power law forms of the recession relationship, and drivers of its variability. Individually characterizing streamflow recessions often involves describing the similarities and differences between model parameters fitted to each recession time series. Significant methodological sensitivity has been identified in the fitting and parameterization of models that describe populations of many recessions, but the dependence of estimated model parameters on methodological choices has not been evaluated for event-by-event forms of analysis. Here, we use daily streamflow data from 16 catchments in northern California and southern Oregon to investigate how combinations of commonly used streamflow recession definitions and fitting techniques impact parameter estimates of a widely used power law recession model. Results are relevant to watersheds that are relatively steep, forested, and rain-dominated. The highly seasonal mediterranean climate of northern California and southern Oregon ensures study catchments explore a wide range of recession behaviors and wetness states, ideal for a sensitivity analysis. In such catchments, we show the following: (i) methodological decisions, including ones that have received little attention in the literature, can impact parameter value estimates and model goodness of fit; (ii) the central tendencies of event-scale recession parameter probability distributions are largely robust to methodological choices, in the sense that differing methods rank catchments similarly according to the medians of these distributions; (iii) recession parameter distributions are method-dependent, but roughly catchment-independent, such that changing the choices made about a particular method affects a given parameter in similar ways across most catchments; and (iv) the observed correlative relationship between the power-law recession scale parameter and catchment antecedent wetness varies depending on recession definition and fitting choices. Considering study results, we recommend a combination of four key methodological decisions to maximize the quality of fitted recession curves, and to minimize bias in the related populations of fitted recession parameters.
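For readers unfamiliar with the model, the sketch below fits the power-law recession relation -dQ/dt = a * Q^b to a single synthetic event by ordinary least squares in log-log space; this is only one of the definition and fitting choices whose sensitivity the study examines, and the data are simulated.

```python
import numpy as np

def fit_recession(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b to one recession event by log-log least squares
    (one of several possible fitting choices)."""
    dq_dt = np.diff(q) / dt
    q_mid = 0.5 * (q[1:] + q[:-1])          # flow at interval midpoints
    keep = dq_dt < 0                        # use only strictly receding steps
    x = np.log(q_mid[keep])
    y = np.log(-dq_dt[keep])
    b, log_a = np.polyfit(x, y, 1)          # slope = b, intercept = log(a)
    return np.exp(log_a), b

# Synthetic recession generated with known parameters a = 0.05, b = 1.5
a_true, b_true = 0.05, 1.5
q = [10.0]
for _ in range(30):
    q.append(q[-1] - a_true * q[-1] ** b_true)   # explicit daily step
q = np.array(q)

print(fit_recession(q))   # should recover roughly (0.05, 1.5)
```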


2021 ◽  
Author(s):  
Michael Cole ◽  
Christina Yap ◽  
Christopher Buckley ◽  
Wan-Fai Ng ◽  
Iain McInnes ◽  
...  

Abstract. Background: Adaptive model-based dose-finding designs have demonstrated advantages over traditional rule-based designs, but their increased statistical complexity has resulted in slow uptake, especially outside of cancer trials. TRAFIC is a multi-centre, early phase trial in rheumatoid arthritis incorporating a model-based design. Methods: A Bayesian adaptive dose-finding phase I trial rolling into a single-arm, single-stage phase II trial. Model parameters for phase I were chosen via Monte Carlo simulation evaluating objective performance measures under clinically relevant scenarios, and incorporated stopping rules for early termination. Potential designs were further calibrated utilising dose transition pathways. Discussion: TRAFIC is an MRC-funded trial of a re-purposed treatment demonstrating that it is possible to design, fund and implement a model-based phase I trial in a non-cancer population within conventional research funding tracks and regulatory constraints. The phase I design allows borrowing of information from previous trials; all accumulated data to be utilised in decision-making; verification of operating characteristics through simulation; and improved understanding for management and oversight teams through dose transition pathways. The rolling phase II design brings efficiencies in trial conduct, including site and monitoring activities, and cost. TRAFIC is the first funded model-based dose-finding trial in inflammatory disease, demonstrating that small phase I/II trials can have an underlying statistical basis for decision-making and interpretation. Trial Registration: ISRCTN 36667085
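As a hedged illustration of what a Bayesian model-based dose-finding design involves (the abstract does not specify the exact model used in TRAFIC), the sketch below simulates a generic one-parameter continual reassessment method with a power model and a grid-based posterior; the doses, skeleton, sample size, and prior are arbitrary assumptions, and real designs add escalation restrictions and stopping rules.

```python
import numpy as np

def crm_next_dose(doses_given, tox_observed, skeleton, target=0.25,
                  beta_grid=np.linspace(-3.0, 3.0, 601)):
    """One-parameter CRM with a power model, p_d = skeleton_d ** exp(beta),
    and a vague N(0, 2^2) prior on beta evaluated on a grid.
    Returns the dose whose posterior-mean toxicity is closest to the target."""
    post = np.exp(-0.5 * (beta_grid / 2.0) ** 2)           # prior on beta
    for d, tox in zip(doses_given, tox_observed):
        p = skeleton[d] ** np.exp(beta_grid)
        post = post * (p if tox else 1.0 - p)              # multiply in each likelihood term
    post = post / post.sum()
    p_hat = [(skeleton[d] ** np.exp(beta_grid) * post).sum() for d in range(len(skeleton))]
    return int(np.argmin(np.abs(np.array(p_hat) - target)))

# Simulate one hypothetical trial of 21 patients (all values illustrative only)
rng = np.random.default_rng(3)
true_tox = [0.05, 0.12, 0.25, 0.40]            # unknown truth, used only to generate data
skeleton = np.array([0.08, 0.16, 0.25, 0.35])  # prior guesses of toxicity per dose
doses, toxs, current = [], [], 0
for _ in range(21):
    doses.append(current)
    toxs.append(bool(rng.random() < true_tox[current]))
    current = crm_next_dose(doses, toxs, skeleton)
print("recommended dose level after 21 patients:", current)
```

Running such a simulation many times over different true toxicity scenarios is what "evaluating objective performance measures under clinically relevant scenarios" refers to.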


2021 ◽  
Vol 4 (3(112)) ◽  
pp. 56-65
Author(s):  
Konstantin Petrov ◽  
Igor Kobzev ◽  
Oleksandr Orlov ◽  
Victor Kosenko ◽  
Alisa Kosenko ◽  
...  

An approach to constructing mathematical models of individual multicriteria estimation was proposed, based on information about the ordering relations established by an expert for a set of alternatives. Structural identification of the estimation model using an additive utility function of the alternatives was performed within the axiomatics of multi-attribute utility theory (MAUT). A method of parametric identification of the model based on the ideas of the theory of comparative identification was developed. To determine the model parameters, the midpoint method was proposed, which makes it possible to obtain a uniform, stable solution of the problem. It was shown that, in this case, the problem of parametric identification of the estimation model can be reduced to a standard linear programming problem. The scalar multicriteria estimates of alternatives obtained with the synthesized mathematical model make it possible to compare the alternatives in terms of efficiency and thus to choose "the best" one or to rank them. A significant advantage of the proposed approach is that only non-numerical information about decisions already made by experts is needed to identify the model parameters. This partially reduces the expert's subjective influence on the outcome of decision-making and lowers the cost of the expert estimation process. A method of verifying the estimation model based on the principles of cross-validation was developed. The results of computer modeling were presented; they confirm the effectiveness of the proposed method of parametric model identification for automating intelligent decision making.
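A minimal sketch of how expert orderings can be turned into a linear program for the weights of an additive utility function, assuming a margin-maximizing (midpoint-like) objective; the criteria values, the preference pairs, and this particular formulation are illustrative assumptions rather than the exact midpoint method of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical alternatives scored on three criteria (rows = alternatives)
X = np.array([
    [0.9, 0.4, 0.7],
    [0.6, 0.8, 0.5],
    [0.3, 0.6, 0.9],
    [0.2, 0.3, 0.4],
])
# Expert ordering: alternative 0 > 1 > 2 > 3 (index i preferred to index j)
preferences = [(0, 1), (1, 2), (2, 3)]

n_crit = X.shape[1]
# Variables: [w_1, w_2, w_3, delta]; maximize the smallest preference margin delta,
# which centres the weight vector inside the feasible region (a midpoint-like choice)
c = np.zeros(n_crit + 1)
c[-1] = -1.0                                   # linprog minimizes, so minimize -delta

A_ub, b_ub = [], []
for i, j in preferences:
    row = np.append(-(X[i] - X[j]), 1.0)       # -(w . (x_i - x_j)) + delta <= 0
    A_ub.append(row)
    b_ub.append(0.0)

A_eq = [np.append(np.ones(n_crit), 0.0)]       # weights sum to one
b_eq = [1.0]
bounds = [(0, None)] * (n_crit + 1)            # non-negative weights and margin

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
w = res.x[:n_crit]
print("weights:", np.round(w, 3), "utilities:", np.round(X @ w, 3))
```

The resulting scalar utilities X @ w reproduce the expert's ranking and can then be used to score new alternatives, which is the cross-validation idea mentioned in the abstract.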

