Racing Against The Clock: Evidence-Based Vs. Time-Based Decisions

2020 ◽  
Author(s):  
Guy Hawkins ◽  
Andrew Heathcote

Classical dynamic theories of decision making assume that responses are triggered by accumulating a threshold amount of information. Recently, there has been a growing appreciation that the passage of time also plays a role in triggering responses. We propose that decision processes are composed of two diffusive accumulation mechanisms, one evidence-based and one time-based, that compete in an independent race architecture. We show that this Timed Racing Diffusion Model (TRDM) provides a unified, comprehensive, and quantitatively accurate explanation of key decision phenomena, including the effects of implicit and explicit deadlines and the relative speed of correct and error responses under speed-accuracy tradeoffs, without requiring additional mechanisms that have been criticized as ad hoc in theoretical motivation and difficult to estimate, such as trial-to-trial variability parameters, collapsing thresholds, or urgency signals. In contrast, our addition is grounded in a widely validated account of time-estimation performance, enabling the same mechanism to account simultaneously for interval estimation and decision making with an explicit deadline.
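For readers who find the race architecture easier to grasp in code, the sketch below simulates a simplified version of the mechanism described in the abstract: two evidence accumulators (correct and error) plus a timer accumulator, all independent diffusions that finish at a fixed bound. The parameter names, their values, and the guessing rule for timer-triggered responses are illustrative assumptions, not the authors' specification or fitted estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

def trdm_trial(v_correct=1.2, v_error=0.4, b_evidence=1.5,
               v_timer=0.8, b_timer=2.0, s=1.0, dt=0.001, t0=0.25):
    """Simulate one trial of a simplified timed racing diffusion: two
    evidence accumulators (correct, error) and a timer accumulator each
    diffuse toward their own bound; the first to finish triggers the
    response. All parameters here are illustrative placeholders."""
    x = np.zeros(3)                                   # [correct, error, timer]
    drift = np.array([v_correct, v_error, v_timer])
    bound = np.array([b_evidence, b_evidence, b_timer])
    sd = s * np.sqrt(dt)
    t = 0.0
    while True:
        t += dt
        x += drift * dt + sd * rng.standard_normal(3)
        finished = np.flatnonzero(x >= bound)
        if finished.size:
            winner = finished[np.argmax(x[finished] - bound[finished])]
            if winner == 2:                           # timer won: time-based guess
                return t + t0, bool(rng.random() < 0.5), "timer"
            return t + t0, bool(winner == 0), "evidence"

# Quick look at how many responses are triggered by evidence vs. the timer.
trials = [trdm_trial() for _ in range(500)]
```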

2020 ◽  
Author(s):  
Gabriel Weindel ◽  
Royce Anders ◽  
F.-Xavier Alario ◽  
Boris Burle

Decision-making models based on evidence-accumulation processes (the most prolific being the drift-diffusion model, DDM) are widely used to draw inferences about latent psychological processes from chronometric data. While their goodness of fit across a wide range of tasks supports the model's validity, the derived interpretations have yet to be sufficiently cross-validated against other measures that also reflect cognitive processing. To do so, we recorded electromyographic (EMG) activity along with response times (RT) and used it to decompose every RT into two components: a pre-motor time (PMT) and a motor time (MT). These measures were mapped onto the DDM's parameters, allowing a test, beyond quality of fit, of the validity of the model's assumptions and their usual interpretation. In two perceptual decision tasks performed within a canonical task setting, we manipulated stimulus contrast, speed-accuracy trade-off, and response force, and assessed their effects on PMT, MT, and RT. Contrary to common assumptions, all three factors consistently affected MT. DDM estimates of non-decision processes, which are assumed to include motor execution, were globally linked to the recorded response-execution MT; however, this link was weaker in the fastest trials, where the assumption of independence between decision and non-decision processes was not met. Overall, the results show fair concordance between model-based and EMG-based decompositions of RT, but they also establish limits on the interpretability of decision-model parameters linked to response execution.
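The decomposition itself is simple enough to state as code. The sketch below (argument names, the example timestamps, and the seconds unit are invented for illustration) splits each trial's RT into PMT and MT from stimulus-onset, EMG-onset, and response times; MT is the quantity the abstract compares against the DDM's non-decision parameter.

```python
import numpy as np

def decompose_rt(stim_onset, emg_onset, response_onset):
    """EMG-based decomposition of reaction time, as described above:
    PMT = EMG onset - stimulus onset (pre-motor time),
    MT  = response  - EMG onset      (motor time),
    RT  = PMT + MT.
    Argument names and the example values below are illustrative."""
    stim_onset = np.asarray(stim_onset, dtype=float)
    emg_onset = np.asarray(emg_onset, dtype=float)
    response_onset = np.asarray(response_onset, dtype=float)
    pmt = emg_onset - stim_onset
    mt = response_onset - emg_onset
    rt = response_onset - stim_onset
    return rt, pmt, mt

# Example (times in seconds): two trials with EMG bursts at 300 and 350 ms.
rt, pmt, mt = decompose_rt([0.0, 0.0], [0.30, 0.35], [0.45, 0.52])
# Under the usual DDM reading, the non-decision time parameter is assumed to
# absorb MT, so one informal check is whether fitted non-decision time tracks MT.
```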


Author(s):  
Laura Ponisio ◽  
Pascal van Eck ◽  
Lourens Riemens

Professionals in decision-making roles often face the problem of choosing partners for closer cooperation, for instance, to start new joint IT development projects or to harvest best practices. The large amount of information involved in these decision processes obscures the available options, so choices are often made ad hoc. In this article, the authors present an approach that uses concrete data and network analysis to support decision makers in processing and understanding this information. Central to the authors' approach are questionnaires capturing the aspired and current development levels of the processes of the cooperating organizations, and graphs generated using network analysis techniques. The advantage of the approach, which the authors validated via expert interviews, is that results are semi-automatically translated into visualizations, which offer an overall view of the current and aspired situation in the network without losing the ability to pinpoint particular, individual processes of interest. This, in turn, enables IT professionals to make better decisions.
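As a rough illustration of the kind of network view such an approach can produce, the sketch below builds a cooperation graph from hypothetical questionnaire answers using networkx; the organization names, the development-level scale, and the "aspiration mismatch" metric are all invented for the example, not taken from the article.

```python
import networkx as nx

# Hypothetical questionnaire results: current and aspired development level
# of one shared process, per organization (names and scale are invented).
survey = {
    "OrgA": {"current": 2, "aspired": 4},
    "OrgB": {"current": 3, "aspired": 3},
    "OrgC": {"current": 1, "aspired": 4},
}
cooperations = [("OrgA", "OrgB"), ("OrgB", "OrgC"), ("OrgA", "OrgC")]

G = nx.Graph()
for org, levels in survey.items():
    # Node attributes keep the overall picture while preserving per-process detail.
    G.add_node(org, **levels, gap=levels["aspired"] - levels["current"])
G.add_edges_from(cooperations)

# One possible decision aid: rank partnerships by how far apart their ambitions are.
for u, v in G.edges:
    G.edges[u, v]["aspiration_mismatch"] = abs(
        G.nodes[u]["aspired"] - G.nodes[v]["aspired"]
    )
ranked = sorted(G.edges(data=True), key=lambda e: e[2]["aspiration_mismatch"])
```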


2020 ◽  
Vol 43 ◽  
Author(s):  
Valerie F. Reyna ◽  
David A. Broniatowski

Gilead et al. offer a thoughtful and much-needed treatment of abstraction. However, their treatment fails to build on an extensive literature on abstraction, representational diversity, neurocognition, and psychopathology that provides important constraints and alternative evidence-based conceptions. We draw on conceptions from software engineering, socio-technical systems engineering, and a neurocognitive theory with abstract representations of gist at its core: fuzzy-trace theory.


2011 ◽  
Vol 20 (4) ◽  
pp. 121-123
Author(s):  
Jeri A. Logemann

Evidence-based practice requires astute clinicians to blend their best clinical judgment with the best available external evidence and the patient's own values and expectations. Sometimes we weigh one of these more heavily than the others during clinical decision-making, though it is never wise to do so, and sometimes factors we are unaware of produce unanticipated clinical outcomes. Sometimes we feel very strongly about one clinical method or another, and ideally that belief is founded in evidence. Some beliefs, however, are not. The sound use of evidence is the best way to navigate the debates within our field of practice.


2009 ◽  
Vol 20 (9) ◽  
pp. 2574-2586 ◽  
Author(s):  
Yu-Xing Sun ◽  
Song-Hua Huang ◽  
Li-Jun Chen ◽  
Li Xie

Author(s):  
John Hunsley ◽  
Eric J. Mash

Evidence-based assessment relies on research and theory to inform the selection of constructs to be assessed for a specific assessment purpose, the methods and measures to be used in the assessment, and the manner in which the assessment process unfolds. An evidence-based approach to clinical assessment necessitates the recognition that, even when evidence-based instruments are used, the assessment process is a decision-making task in which hypotheses must be iteratively formulated and tested. In this chapter, we review (a) the progress that has been made in developing an evidence-based approach to clinical assessment in the past decade and (b) the many challenges that lie ahead if clinical assessment is to be truly evidence-based.


2014 ◽  
Vol 67 (5) ◽  
pp. 790-794 ◽  
Author(s):  
Iván Arribas ◽  
Irene Comeig ◽  
Amparo Urbano ◽  
José Vila
