Computational Brain & Behavior
Latest Publications


TOTAL DOCUMENTS: 123 (five years: 102)
H-INDEX: 7 (five years: 3)

Published by Springer
ISSN: 2522-0861, 2522-087X

Author(s): Paul M. Garrett, Murray Bennett, Yu-Tzu Hsieh, Zachary L. Howard, Cheng-Ta Yang, ...

Author(s): Bonan Zhao, Christopher G. Lucas, Neil R. Bramley

Abstract: How do people decide how general a causal relationship is, in terms of the entities or situations it applies to? What features do people use to decide whether a new situation is governed by a new causal law or an old one? How can people make these difficult judgments in a fast, efficient way? We address these questions in two experiments that ask participants to generalize from one (Experiment 1) or several (Experiment 2) causal interactions between pairs of objects. In each case, participants see an agent object act on a recipient object, causing some changes to the recipient. In line with the human capacity for few-shot concept learning, we find systematic patterns of causal generalization favoring simpler causal laws that extend over categories of similar objects. In Experiment 1, we find that participants' inferences are shaped by the order of the generalization questions they are asked. In both experiments, we find an asymmetry in the formation of causal categories: participants preferentially identify causal laws with features of the agent objects rather than the recipients. To explain this, we develop a computational model that combines program induction (about the hidden causal laws) with non-parametric category inference (about their domains of influence). We demonstrate that our modeling approach can explain both the order effect in Experiment 1 and the causal asymmetry, and that it outperforms a naïve Bayesian account while providing a computationally plausible mechanism for real-world causal generalization.
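The listing does not reproduce the model itself; as a rough sketch of the non-parametric category inference the abstract describes, the snippet below samples object-to-category partitions from a Chinese Restaurant Process prior. The function name, the concentration parameter, and the setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def crp_assignments(n_objects, alpha=1.0):
    """Sample a partition of objects into causal categories from a
    Chinese Restaurant Process with concentration parameter alpha."""
    assignments = [0]  # the first object founds the first category
    for _ in range(1, n_objects):
        counts = np.bincount(assignments)
        # join an existing category with probability proportional to its
        # size, or open a new category with probability proportional to alpha
        probs = np.append(counts, alpha).astype(float)
        probs /= probs.sum()
        assignments.append(rng.choice(len(probs), p=probs))
    return np.array(assignments)

print(crp_assignments(10))  # e.g. [0 0 1 0 2 1 0 0 1 0]
```

A prior like this favors fewer, larger categories, which is one way to cash out the abstract's preference for simpler causal laws that extend over categories of similar objects.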


Author(s): Daniel W. Heck, Florence Bockting

Abstract: Bayes factors allow researchers to test the effects of experimental manipulations in within-subjects designs using mixed-effects models. van Doorn et al. (2021) showed that such hypothesis tests can be performed by comparing different pairs of models that vary in the specification of the fixed- and random-effect structure for the within-subjects factor. To discuss the question of which model comparison is most appropriate, van Doorn et al. compared three corresponding Bayes factors using a case study. We argue that researchers should not only focus on pairwise comparisons of two nested models but rather use Bayesian model selection for the direct comparison of a larger set of mixed models reflecting different auxiliary assumptions about the heterogeneity of effect sizes across individuals. In a standard one-factorial, repeated-measures design, the comparison should include four mixed-effects models: fixed-effects H0, fixed-effects H1, random-effects H0, and random-effects H1. In this way, one can test both the average effect of condition and the heterogeneity of effect sizes across individuals. Bayesian model averaging provides an inclusion Bayes factor, which quantifies the evidence for or against the presence of an average effect of condition while taking into account model-selection uncertainty about the heterogeneity of individual effects. We present a simulation study showing that model averaging among a larger set of mixed models performs well in recovering the true, data-generating model.
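To make the inclusion Bayes factor concrete, here is a minimal sketch of how it can be computed from posterior probabilities over the four mixed models named above: the posterior odds of the effect-present (H1) models over the effect-absent (H0) models, divided by the corresponding prior odds. The marginal likelihoods are made-up numbers for illustration, not values from the paper.

```python
import numpy as np

# Hypothetical log marginal likelihoods for the four mixed models
# (illustrative numbers only):
log_ml = {
    "fixed_H0":  -120.0,
    "fixed_H1":  -115.0,
    "random_H0": -112.0,
    "random_H1": -108.0,
}
prior = {m: 0.25 for m in log_ml}  # equal prior model probabilities

# Posterior model probabilities via Bayes' rule
# (shift by the max log marginal likelihood for numerical stability)
shift = max(log_ml.values())
post = {m: np.exp(v - shift) * prior[m] for m, v in log_ml.items()}
z = sum(post.values())
post = {m: p / z for m, p in post.items()}

# Inclusion Bayes factor for the average effect of condition
h1 = [m for m in post if m.endswith("H1")]
h0 = [m for m in post if m.endswith("H0")]
post_odds = sum(post[m] for m in h1) / sum(post[m] for m in h0)
prior_odds = sum(prior[m] for m in h1) / sum(prior[m] for m in h0)
print("Inclusion BF:", post_odds / prior_odds)
```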


Author(s): Jami Pekkanen, Oscar Terence Giles, Yee Mun Lee, Ruth Madigan, Tatsuru Daimon, ...

Abstract: Human behavior and interaction in road traffic are highly complex, with many open scientific questions of high applied importance, not least in relation to recent development efforts toward automated vehicles. In parallel, recent decades have seen major advances in cognitive neuroscience models of human decision-making, but these models have mainly been applied to simplified laboratory tasks. Here, we demonstrate how variable-drift extensions of drift diffusion (or evidence accumulation) models of decision-making can be adapted to the mundane yet non-trivial scenario of a pedestrian deciding if and when to cross a road with oncoming vehicle traffic. Our variable-drift diffusion models provide a mechanistic account of pedestrian road-crossing decisions and of how these are impacted by a variety of sensory cues: time and distance gaps in oncoming vehicle traffic, vehicle deceleration implicitly signaling an intent to yield, as well as explicit communication of such yielding intentions. We conclude that variable-drift diffusion models not only hold great promise as mechanistic models of complex real-world decisions, but can also serve as applied tools for improving road traffic safety and efficiency.
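As a rough illustration of this model family (not the authors' implementation), the sketch below runs an Euler-Maruyama simulation of a single-boundary diffusion process whose drift rate rises as the time gap to the oncoming vehicle shrinks. The drift mapping, threshold, and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_crossing(time_gap, threshold=1.0, dt=0.01, noise=1.0,
                      t_max=10.0):
    """Simulate one variable-drift diffusion trial for a road-crossing
    decision. The drift grows as the remaining gap to the oncoming
    vehicle shrinks -- a deliberately crude stand-in for the cues the
    paper models (time/distance gaps, deceleration, explicit signals)."""
    x, t = 0.0, 0.0
    while t < t_max:
        remaining_gap = max(time_gap - t, 0.0)
        drift = 1.0 / (remaining_gap + 0.5)  # illustrative cue-to-drift mapping
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if x >= threshold:
            return t  # evidence reached the boundary: decide to cross
    return None  # no crossing within the simulated window

rts = [simulate_crossing(time_gap=4.0) for _ in range(1000)]
rts = [r for r in rts if r is not None]
print(f"mean crossing time: {np.mean(rts):.2f} s")
```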


Author(s): Maximilian Linde, Don van Ravenzwaaij

Abstract: Nested data structures, in which conditions include multiple trials and are fully crossed with participants, are often analyzed using repeated-measures analysis of variance or mixed-effects models. Typically, researchers are interested in determining whether there is an effect of the experimental manipulation. These kinds of analyses have different appropriate specifications for the null and alternative models, and a discussion of which specification is to be preferred, and when, is sorely lacking. van Doorn et al. (2021) performed three types of Bayes factor model comparisons on a simulated data set in order to examine which model comparison is most suitable for quantifying evidence for or against the presence of an effect of the experimental manipulation. Here, we extend their results by simulating multiple data sets for various scenarios and by using different prior specifications. We demonstrate how three different Bayes factor model comparison types behave under changes in different parameters, and we make concrete recommendations on which model comparison is most appropriate for different scenarios.
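The paper's Bayes factors are computed with dedicated tooling; purely for illustration, the sketch below simulates one paired data set and computes a crude BIC-approximated Bayes factor (Wagenmakers, 2007) for the effect of condition. The sample size, effect size, and the approximation itself are assumptions of the example, not the authors' method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def bic_bf01(x, y):
    """BIC-approximated Bayes factor BF01 for a paired comparison:
    BF01 = sqrt(n) * (1 + t^2 / (n - 1))^(-n/2)  (Wagenmakers, 2007).
    A rough stand-in for the dedicated Bayes factors in this literature."""
    d = np.asarray(x) - np.asarray(y)
    n = len(d)
    t = stats.ttest_1samp(d, 0.0).statistic
    return np.sqrt(n) * (1 + t**2 / (n - 1)) ** (-n / 2)

# One simulated data set: 30 participants, true effect of about 0.3 SD
control = rng.normal(0, 1, 30)
treatment = control + rng.normal(0.3, 0.5, 30)
bf01 = bic_bf01(control, treatment)
print(f"BF01 = {bf01:.3f}, BF10 = {1 / bf01:.3f}")
```

Repeating this over many simulated data sets, while varying the effect size and priors, is the basic structure of the simulation study the abstract describes.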


Author(s): Andrew Heathcote, Dora Matzke

Abstract: The "marginality principle" for linear regression models states that when a higher-order term is included, its constituent terms must also be included. The target article relies on this principle for the fixed-effects part of linear mixed models of ANOVA designs and considers the implication that, if the principle is extended to combined fixed-and-random-effects models, model-selection tests specific to some fixed-effects ANOVA terms are not possible. We review the basis for this principle for fixed-effects models and delineate its limits. We then consider its extension to combined fixed-and-random-effects models. We conclude that we have been unable to find any satisfactory argument against the use of incomplete ANOVA models, either in the literature (including the target article) or through our own attempts to construct one. The only basis we could find requires one to assume that it is not possible to test point-null hypotheses, an assumption we disagree with and believe is incompatible with the Bayesian model-selection methods on which the target article is based.
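For readers unfamiliar with the principle, here is a small, self-contained illustration, using invented continuous predictors rather than the ANOVA designs discussed above, of a "complete" model that obeys marginality and an "incomplete" one that includes an interaction without its constituent terms.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Toy data with two continuous, centered predictors (illustrative only)
df = pd.DataFrame({"x1": rng.normal(size=200), "x2": rng.normal(size=200)})
df["y"] = 0.5 * df.x1 * df.x2 + rng.normal(size=200)

# Complete model obeying the marginality principle:
# the x1:x2 interaction is accompanied by both constituent terms.
complete = smf.ols("y ~ x1 + x2 + x1:x2", data=df).fit()

# Incomplete model: the interaction without its constituent terms.
# The marginality principle forbids this specification; the commentary
# asks whether any satisfactory argument actually supports that ban.
incomplete = smf.ols("y ~ x1:x2", data=df).fit()

print(complete.params.index.tolist())    # Intercept, x1, x2, x1:x2
print(incomplete.params.index.tolist())  # Intercept, x1:x2
```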


Author(s): Šimon Kucharský, N.-Han Tran, Karel Veldkamp, Maartje Raijmakers, Ingmar Visser

Abstract: Speeded decision tasks are usually modeled within the evidence accumulation framework, enabling inferences on latent cognitive parameters and capturing dependencies between the observed response times and accuracy. An example is the speed-accuracy trade-off, where people sacrifice speed for accuracy (or vice versa). Different views on this phenomenon lead to the idea that participants may not be able to control this trade-off on a continuum, but rather switch between distinct states (Dutilh et al., Cognitive Science 35(2):211–250, 2010). Hidden Markov models are used to account for switching between distinct states. However, combining evidence accumulation models with a hidden Markov structure is a challenging problem, as evidence accumulation models typically come with identification and computational issues that make them challenging on their own. Thus, an integration of hidden Markov models with evidence accumulation models has remained elusive, even though such models would allow researchers to capture potential dependencies between response times and accuracy within the states, while simultaneously capturing different behavioral modes during cognitive processing. This article presents a model that uses an evidence accumulation model as part of a hidden Markov structure, as a proof of principle that evidence accumulation models can be combined with Markov switching models. As such, the article considers a very simple case: a simplified Linear Ballistic Accumulator. An extensive simulation study was conducted to validate the model's implementation according to the principles of a robust Bayesian workflow. An example reanalysis of data from Dutilh et al. (Cognitive Science 35(2):211–250, 2010) demonstrates the application of the new model. The article concludes with limitations and future extensions of, or alternatives to, the model and its application.
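The structure of such a combination is easier to see in code. Below is a minimal sketch of the scaled forward algorithm for a two-state hidden Markov model over trial-level response times; for the emission density it substitutes an inverse-Gaussian (Wald) distribution as a single-accumulator stand-in, since the paper's simplified LBA likelihood is not reproduced here. All parameter values are invented.

```python
import numpy as np
from scipy import stats

def hmm_loglik(rts, trans, start, emission_params):
    """Scaled forward algorithm for a two-state HMM over trial-level
    response times. Emissions use an inverse-Gaussian (Wald) density as
    a single-accumulator stand-in for the simplified LBA in the paper --
    an illustrative assumption, not the authors' model."""
    # per-trial emission densities under each latent state
    dens = np.column_stack([
        stats.invgauss.pdf(rts, p["mu"], scale=p["scale"])
        for p in emission_params
    ])
    alpha = start * dens[0]
    c = alpha.sum()
    loglik = np.log(c)
    alpha = alpha / c
    for t in range(1, len(rts)):
        alpha = (alpha @ trans) * dens[t]  # predict, then weight by emission
        c = alpha.sum()                    # rescale to avoid underflow
        loglik += np.log(c)
        alpha = alpha / c
    return loglik

trans = np.array([[0.95, 0.05],        # sticky transitions between a
                  [0.10, 0.90]])       # fast-guess and a controlled state
start = np.array([0.5, 0.5])
states = [{"mu": 0.4, "scale": 1.0},   # fast-guessing state
          {"mu": 0.9, "scale": 1.0}]   # slower, controlled state
rts = 0.15 + np.abs(np.random.default_rng(4).normal(0.5, 0.2, 200))
print(hmm_loglik(rts, trans, start, states))
```

Replacing the Wald emission with a full LBA likelihood per state is precisely where the identification and computational issues the abstract mentions enter.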

