Validation of a detector of response bias on a forced-choice test of nonverbal ability.

1994 · Vol 8 (1) · pp. 118-125
Author(s): Richard I. Frederick, Stephen D. Sarfaty, J. Dennis Johnston, Jeffrey Powel

1999 · Vol 14 (1) · pp. 97-97
Author(s): R. Denney, T. Feaster, M. Hughes, S. Estees, D. McKay, et al.

2021 · Vol 183 · pp. 111114
Author(s): Goran Pavlov, Dexin Shi, Alberto Maydeu-Olivares, Amanda Fairchild

2018
Author(s): D. Samuel Schwarzkopf, Nonie J. Finlayson, Benjamin de Haas

Perceptual bias is inherent to all our senses, particularly in the form of visual illusions and aftereffects. However, many experiments measuring perceptual biases may be susceptible to non-perceptual factors, such as response bias and decision criteria. Here we quantify how robust Multiple Alternative Perceptual Search (MAPS) is for disentangling estimates of perceptual biases from these confounding factors. First, our results show that while there are considerable response biases in our four-alternative forced-choice design, these are unrelated to perceptual bias estimates, and these response biases are not produced by the response modality (keyboard versus mouse). We also show that perceptual bias estimates are reduced when feedback is given on each trial, likely because feedback enables observers to partially (and actively) correct for perceptual biases. However, this does not impact the reliability with which MAPS detects the presence of perceptual biases. Finally, our results show that MAPS can detect actual perceptual biases and is not a decisional bias towards choosing the target in the middle of the candidate stimulus distribution. In summary, researchers conducting a MAPS experiment should use a constant reference stimulus but consider varying the mean of the candidate distribution. Ideally, they should not employ trial-wise feedback if the magnitude of perceptual biases is of interest.
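The logic of a MAPS-style task can be illustrated with a small simulation: an observer compares several candidate stimuli to a constant reference and picks the candidate that looks most similar, so a perceptual bias shifts which candidate tends to be chosen. This is a minimal sketch under assumed parameters (the candidate values, noise model, and bias mechanism here are illustrative, not the published MAPS design):

```python
import random

def run_maps_trials(perceptual_bias, n_trials=10000, noise=0.5, seed=0):
    """Simulate a MAPS-style 4AFC task and recover an estimate of bias.

    On each trial the observer picks the candidate whose internally
    biased, noisy percept is closest to the reference. All parameters
    are illustrative assumptions, not the published MAPS design.
    """
    rng = random.Random(seed)
    candidates = [-1.5, -0.5, 0.5, 1.5]  # assumed candidate stimulus values
    reference = 0.0                      # constant reference stimulus
    chosen = []
    for _ in range(n_trials):
        # Each candidate's percept is shifted by the bias plus sensory noise.
        percepts = [c + perceptual_bias + rng.gauss(0, noise) for c in candidates]
        pick = min(range(len(candidates)), key=lambda i: abs(percepts[i] - reference))
        chosen.append(candidates[pick])
    # A positive perceptual bias makes candidates below the reference look
    # closest to it, so the mean chosen candidate shifts by roughly -bias;
    # negating it gives an (approximate, quantized) bias estimate.
    return -sum(chosen) / len(chosen)

estimate = run_maps_trials(perceptual_bias=0.4)
```

Because the candidates are discrete, the recovered value only approximates the true bias; in this toy model, trial-wise feedback that let the observer partially correct their percepts would shrink the estimate, mirroring the feedback effect described above.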


Author(s): Matthew Nanes, Dotan Haim

Abstract: Research on sensitive topics uses a variety of methods to combat response bias on in-person surveys. Increasingly, researchers allow respondents to self-administer responses using electronic devices as an alternative to more complicated experimental approaches. Using an experiment embedded in a survey in the rural Philippines, we test the effects of several such methods on response rates and falsification. We asked respondents a sensitive question about reporting insurgents to the police alongside a nonsensitive question about school completion. We randomly assigned respondents to answer these questions either verbally, through a “forced choice” experiment, or through self-enumeration. We find that self-enumeration significantly reduced nonresponse compared to direct questioning, but find little evidence of differential rates of falsification. Forced choice yielded highly unlikely estimates, which we attribute to nonstrategic falsification. These results suggest that self-administered surveys can be effective for measuring sensitive topics on surveys when response rates are a priority.
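A forced-choice design of the kind referenced above works by randomizing some answers: a private randomizing device (e.g. a die) forces some respondents to answer "yes" and some "no" regardless of the truth, and the analyst backs out prevalence from the observed rate. This is a minimal sketch assuming a standard die-based design; the probabilities here are illustrative, not necessarily those used in the study:

```python
def forced_choice_estimate(observed_yes_rate, p_forced_yes=1/6, p_forced_no=1/6):
    """Back out sensitive-trait prevalence from a forced-choice design.

    Assumed design (illustrative): with probability p_forced_yes the
    respondent must answer "yes", with p_forced_no must answer "no",
    and otherwise answers truthfully. Then
        E[yes] = p_forced_yes + (1 - p_forced_yes - p_forced_no) * prevalence
    which is solved for the prevalence below.
    """
    p_truthful = 1.0 - p_forced_yes - p_forced_no
    return (observed_yes_rate - p_forced_yes) / p_truthful

# With a 30% observed "yes" rate, the implied prevalence is about 0.2.
prevalence = forced_choice_estimate(0.3)
```

Note that an observed "yes" rate below the forced-"yes" baseline yields a negative prevalence estimate; impossible values like this are one symptom of the nonstrategic falsification the authors describe.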


1968 · Vol 28 (4) · pp. 1103-1110
Author(s): Leonard V. Gordon, Richard J. Hofmann

1991 · Vol 3 (4) · pp. 596-602
Author(s): Richard I. Frederick, Hilliard G. Foster
