Competitive guided search: Meeting the challenge of benchmark RT distributions

2013 ◽  
Vol 13 (8) ◽  
pp. 24-24 ◽  
Author(s):  
R. Moran ◽  
M. Zehetleitner ◽  
H. J. Muller ◽  
M. Usher

2019 ◽  
Vol 118 ◽  
pp. 440-458 ◽  
Author(s):  
Vahid Riahi ◽  
M.A. Hakim Newton ◽  
M.M.A. Polash ◽  
Kaile Su ◽  
Abdul Sattar

2017 ◽  
Vol 11 (2) ◽  
pp. 185-200
Author(s):  
Yota Ueda ◽  
Hiroyuki Ebara ◽  
Koki Nakayama ◽  
Syuhei Iida

Author(s):  
John R. Dixon

The goal of this paper is to raise awareness and generate discussion about research methodology in engineering design. Design researchers are viewed as a single communicating community searching for scientific theories of engineering design; that is, theories that can be tested by formal methods of hypothesis testing. In the paper, the scientific method for validating theories is reviewed, and the need for operational definitions and for experiments to identify variables and meaningful abstractions is stressed. The development of a design problem taxonomy is advocated. Generating theories is viewed as guided search. Three types of design theories are described: prescriptive, cognitive descriptive, and computational. It is argued that seeking prescriptions is premature and that, unless the human and institutional variables are reduced to knowledge and control, cognitive descriptive theories will be impossibly complex. A case is made for a computational approach, though it is also shown that computational and cognitive research approaches can be mutually supportive.


2018 ◽  
pp. 307-308
Author(s):  
Anne-Marie Brouwer ◽  
Maarten A.J. Hogervorst ◽  
Bob Oudejans ◽  
Anthony J. Ries ◽  
Jonathan Touryan

2020 ◽  
Author(s):  
Jeff Miller

Contrary to the warning of Miller (1988), Rousselet and Wilcox (2020) argued that it is better to summarize each participant’s single-trial reaction times (RTs) in a given condition with the median than with the mean when comparing the central tendencies of RT distributions across experimental conditions. They acknowledged that median RTs can produce inflated Type I error rates when conditions differ in the number of trials tested, consistent with Miller’s warning, but they showed that the bias responsible for this error rate inflation could be eliminated with a bootstrap bias correction technique. The present simulations extend their analysis by examining the power of bias-corrected medians to detect true experimental effects and by comparing this power with the power of analyses using means and regular medians. Unfortunately, although bias-corrected medians solve the problem of inflated Type I error rates, their power is lower than that of means or regular medians in many realistic situations. In addition, even when conditions do not differ in the number of trials tested, the power of tests (e.g., t-tests) is generally lower using medians rather than means as the summary measures. Thus, the present simulations demonstrate that summary means will often provide the most powerful test for differences between conditions, and they show what aspects of the RT distributions determine the size of the power advantage for means.
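The bootstrap bias correction mentioned above can be illustrated with a short sketch. This is a minimal, hypothetical implementation of the standard bootstrap bias-correction idea (corrected estimate = 2 × sample median − mean of bootstrap medians), not Rousselet and Wilcox's (2020) exact procedure; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def bias_corrected_median(rts, n_boot=2000, rng=None):
    """Bootstrap bias-corrected median of one participant's single-trial RTs.

    Sketch of the generic correction: estimate the median's bias as
    mean(bootstrap medians) - sample median, then subtract it, giving
    2*median - mean(bootstrap medians). Illustrative only; not the
    authors' exact implementation.
    """
    rng = np.random.default_rng(rng)
    rts = np.asarray(rts, dtype=float)
    m = np.median(rts)
    # Resample with replacement: n_boot bootstrap samples of the same size.
    boot = rng.choice(rts, size=(n_boot, rts.size), replace=True)
    boot_medians = np.median(boot, axis=1)
    return 2.0 * m - boot_medians.mean()
```

With few trials per condition, the raw median is a biased estimator of the population median (the source of the Type I error inflation discussed above); the correction removes most of that bias, at the cost of the power loss the simulations document.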

