Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research

PLoS ONE ◽  
2013 ◽  
Vol 8 (3) ◽  
pp. e57410 ◽  
Author(s):  
Matthew J. C. Crump ◽  
John V. McDonnell ◽  
Todd M. Gureckis


2020 ◽  
Author(s):  
Aaron J Moss ◽  
Cheskie Rosenzweig ◽  
Jonathan Robinson ◽  
Leib Litman

To understand human behavior, social scientists need people and data. In the last decade, Amazon’s Mechanical Turk (MTurk) emerged as a flexible, affordable, and reliable source of human participants and was widely adopted by academics. Yet despite MTurk’s utility, some have questioned whether researchers should continue using the platform on ethical grounds. The crux of the concern is that people on MTurk are financially insecure, subject to abuse, and paid inhumane wages. We investigated these issues with two random and representative surveys of the U.S. MTurk population (N = 4,094). The surveys revealed: 1) the financial situation of people on MTurk mirrors that of the general population, 2) the vast majority of people do not find MTurk stressful or requesters abusive, and 3) MTurk offers flexibility and benefits that most people value above more traditional work. In addition, people reported it is possible to earn about 9 dollars per hour and said they would not trade the flexibility of MTurk for less than 25 dollars per hour. Altogether, our data are important for assessing whether MTurk is an ethical place for behavioral research. We close with ways researchers can promote wage equity, ensuring MTurk is a place for affordable, high-quality, and ethical data.
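The wage figures reported above translate into a simple budgeting rule of thumb for requesters who want to promote wage equity. The sketch below (Python, with purely hypothetical task parameters such as a 12-minute task and a $15/hour target) illustrates one way to back out a per-assignment reward and total study cost; it is an illustrative aid, not a calculation from the study's data.

```python
# Minimal sketch: derive a per-HIT reward from a target hourly wage.
# The task duration, target wage, fee rate, and sample size below are
# hypothetical values, not figures taken from the study above.

def reward_per_hit(est_minutes: float, target_hourly_wage: float) -> float:
    """Reward (in dollars) needed so workers earn the target hourly wage."""
    return round(target_hourly_wage * est_minutes / 60.0, 2)

def study_cost(n_participants: int, reward: float, fee_rate: float = 0.20) -> float:
    """Total cost including the platform commission (assumed 20% here; check current fees)."""
    return round(n_participants * reward * (1 + fee_rate), 2)

if __name__ == "__main__":
    reward = reward_per_hit(est_minutes=12, target_hourly_wage=15.00)  # $3.00 per assignment
    print(f"Reward per HIT: ${reward:.2f}")
    print(f"Budget for 400 workers: ${study_cost(400, reward):.2f}")
```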


2017 ◽  
Vol 30 (1) ◽  
pp. 111-122 ◽  
Author(s):  
Steve Buchheit ◽  
Marcus M. Doxey ◽  
Troy Pollard ◽  
Shane R. Stinson

Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.
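As a concrete illustration of qualification-based targeting on MTurk itself, the sketch below uses the AWS boto3 MTurk client to post a HIT restricted to U.S. workers with a high approval rate. This is a generic, hedged example of the kind of logistics the paper discusses, not the authors' procedure; the HIT title, reward, and survey XML file are hypothetical, and TurkPrime (now CloudResearch) exposes comparable filters through its own interface.

```python
# Illustrative sketch only: posting a HIT limited to U.S. workers with a
# >= 95% approval rate via the AWS boto3 MTurk client. Title, reward, and
# the ExternalQuestion XML file are hypothetical placeholders.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",  # sandbox for testing
)

QUALIFICATIONS = [
    {   # System qualification: worker locale must be the United States
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {   # System qualification: HIT approval rate of at least 95%
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
]

question_xml = open("survey_link_question.xml").read()  # hypothetical ExternalQuestion XML

hit = mturk.create_hit(
    Title="Short decision-making survey (about 10 minutes)",
    Description="Answer questions about everyday financial decisions.",
    Keywords="survey, research, decision making",
    Reward="2.50",
    MaxAssignments=100,
    AssignmentDurationInSeconds=60 * 45,
    LifetimeInSeconds=60 * 60 * 24 * 3,
    QualificationRequirements=QUALIFICATIONS,
    Question=question_xml,
)
print("HIT group:", hit["HIT"]["HITGroupId"])
```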


2021 ◽  
pp. 003435522110142
Author(s):  
Deniz Aydemir-Döke ◽  
James T. Herbert

Microaggressions are everyday insults directed at minority individuals, such as people with disabilities (PWD), that communicate messages of exclusion, inferiority, and abnormality. In this study, we developed a new scale, the Ableist Microaggressions Impact Questionnaire (AMIQ), which assesses ableist microaggression experiences of PWD. Data from 245 PWD were collected using Amazon’s Mechanical Turk (MTurk) platform. An exploratory factor analysis of the 25-item AMIQ revealed a three-factor structure with internal consistency reliabilities ranging from .87 to .92. As a more economical and psychometrically sound instrument assessing microaggression impact as it pertains to disability, the AMIQ offers promise for rehabilitation counselor research and practice.
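For readers who want to run a comparable analysis on their own item-level data, a minimal sketch of a three-factor exploratory factor analysis with per-factor internal-consistency estimates is shown below. It assumes a respondents-by-items DataFrame loaded from a hypothetical amiq_items.csv and uses the factor_analyzer and pingouin packages; the item-to-factor assignments are placeholders, not the AMIQ's published structure.

```python
# Minimal sketch of an exploratory factor analysis with internal-consistency
# estimates, assuming a respondents-by-items DataFrame of 25 Likert-type items.
# File name, factor labels, and item groupings are hypothetical placeholders.
import pandas as pd
import pingouin as pg
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo

items = pd.read_csv("amiq_items.csv")          # rows: respondents, columns: item1..item25

# Sampling adequacy check before factoring
kmo_per_item, kmo_total = calculate_kmo(items)
print(f"KMO (overall): {kmo_total:.2f}")

# Three-factor EFA with an oblique rotation (factors allowed to correlate)
efa = FactorAnalyzer(n_factors=3, rotation="oblimin")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns,
                        columns=["Factor1", "Factor2", "Factor3"])
print(loadings.round(2))

# Cronbach's alpha per factor, using whichever items load on each factor
factor_items = {
    "Factor1": ["item1", "item2", "item3"],    # placeholder item assignments
    "Factor2": ["item10", "item11", "item12"],
    "Factor3": ["item20", "item21", "item22"],
}
for name, cols in factor_items.items():
    alpha, ci = pg.cronbach_alpha(data=items[cols])
    print(f"{name}: alpha = {alpha:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```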


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Jon Agley ◽  
Yunyu Xiao ◽  
Esi E. Thompson ◽  
Lilian Golzarri-Arroyo

Objective: This study describes the iterative process of selecting an infographic for use in a large, randomized trial related to trust in science, COVID-19 misinformation, and behavioral intentions for non-pharmaceutical preventive behaviors. Five separate concepts were developed based on underlying subcomponents of ‘trust in science and scientists’ and were turned into infographics by media experts and digital artists. Study participants (n = 100) were recruited from Amazon’s Mechanical Turk and randomized to five arms. Each arm viewed a different infographic and provided both quantitative (narrative believability scale and trust in science and scientists inventory) and qualitative data to assist the research team in identifying the infographic most likely to be successful in a larger study. Results: Data indicated that all infographics were perceived to be believable, with means ranging from 5.27 to 5.97 on a scale from one to seven. No iatrogenic outcomes were observed for within-group changes in trust in science. Given equivocal believability outcomes, and after examining confidence intervals for data on trust in science and then the qualitative responses, we selected infographic 3, which addressed issues of credibility and consensus by illustrating changing narratives on butter and margarine, as the best candidate for use in the full study.
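A pilot comparison like this reduces to per-arm means and confidence intervals. The sketch below (Python, assuming a hypothetical long-format pilot_ratings.csv with arm and believability columns) shows one way to compute a 95% t-based confidence interval for each arm; it is not the authors' analysis code.

```python
# Minimal sketch: per-arm means and 95% confidence intervals for a 1-7
# believability rating. Assumes a long-format CSV with columns 'arm' (1-5)
# and 'believability'; the file and column names are hypothetical.
import pandas as pd
from scipy import stats

ratings = pd.read_csv("pilot_ratings.csv")

for arm, grp in ratings.groupby("arm"):
    scores = grp["believability"]
    mean = scores.mean()
    sem = stats.sem(scores)
    lo, hi = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)
    print(f"Arm {arm}: M = {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}], n = {len(scores)}")
```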

