Is Mechanical Turk the Answer to Our Sampling Woes?

2016 ◽  
Vol 9 (1) ◽  
pp. 162-167 ◽  
Author(s):  
Melissa G. Keith ◽  
Peter D. Harms

Although we share Bergman and Jean's (2016) concerns about the representativeness of samples in the organizational sciences, we are mindful of the ever-changing nature of the job market. New jobs are created through technological innovation while others become obsolete and disappear or are functionally transformed. These shifts in employment patterns produce both opportunities and challenges for organizational researchers addressing the problem of representativeness in our working-population samples. On one hand, whatever we do, we will always be playing catch-up with the market. On the other hand, we may be able to leverage new technologies to react to such changes more quickly. For example, Bergman and Jean's commentary suggested making use of crowdsourcing websites or Internet panels to gain access to undersampled populations. Although we agree that much research of interest to organizational scholars can be conducted in these settings, we would also point out that these types of samples come with their own sampling challenges. To illustrate these challenges, we examine sampling issues for Amazon's Mechanical Turk (MTurk), currently the most widely used portal for psychologists and organizational scholars collecting human-subjects data online. Specifically, we examine whether MTurk workers are “workers” as defined by Bergman and Jean, whether MTurk samples are WEIRD (Western, educated, industrialized, rich, and democratic; Henrich, Heine, & Norenzayan, 2010), and how researchers may creatively utilize the sample characteristics.

2020 ◽  
Vol 8 (4) ◽  
pp. 614-629 ◽  
Author(s):  
Ryan Kennedy ◽  
Scott Clifford ◽  
Tyler Burleigh ◽  
Philip D. Waggoner ◽  
Ryan Jewell ◽  
...  

Abstract
Amazon's Mechanical Turk is widely used for data collection; however, data quality may be declining due to the use of virtual private servers to fraudulently gain access to studies. Unfortunately, we know little about the scale and consequence of this fraud, and tools for social scientists to detect and prevent this fraud are underdeveloped. We first analyze 38 studies and show that this fraud is not new, but has increased recently. We then show that these fraudulent respondents provide particularly low-quality data and can weaken treatment effects. Finally, we provide two solutions: an easy-to-use application for identifying fraud in existing datasets and a method for blocking fraudulent respondents in Qualtrics surveys.
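The authors' own application is not reproduced here, but the general approach of screening respondent IP addresses against an IP-intelligence service can be sketched in a few lines. In this illustration the lookup endpoint, the "is_hosting" response field, and the input file are hypothetical placeholders, not the authors' tool.

```python
# Minimal sketch of IP-based fraud screening for survey respondents.
# Assumes a hypothetical IP-intelligence endpoint (API_URL) that returns
# JSON with an "is_hosting" field; real services (and the authors' own
# application) differ in details.
import csv
import requests

API_URL = "https://example-ip-intel.test/api/{ip}"  # hypothetical endpoint

def is_suspicious(ip: str) -> bool:
    """Flag an IP that resolves to a hosting provider / VPS."""
    resp = requests.get(API_URL.format(ip=ip), timeout=10)
    resp.raise_for_status()
    return bool(resp.json().get("is_hosting", False))

def screen_respondents(in_path: str, out_path: str) -> None:
    """Read a respondent file with an 'ip_address' column and write a
    copy with an added 'flag_vps' column."""
    with open(in_path, newline="") as f_in, open(out_path, "w", newline="") as f_out:
        reader = csv.DictReader(f_in)
        writer = csv.DictWriter(f_out, fieldnames=reader.fieldnames + ["flag_vps"])
        writer.writeheader()
        for row in reader:
            row["flag_vps"] = int(is_suspicious(row["ip_address"]))
            writer.writerow(row)

if __name__ == "__main__":
    screen_respondents("responses.csv", "responses_flagged.csv")
```

Flagged rows can then be excluded in sensitivity analyses rather than dropped outright, which keeps the screening decision transparent.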


PLoS ONE ◽  
2021 ◽  
Vol 16 (2) ◽  
pp. e0246526
Author(s):  
John Duffy ◽  
Ted Loch-Temzelides

We study a sequence of “double-slit” experiments designed to perform repeated measurements of an attribute in a large pool of subjects using Amazon’s Mechanical Turk. Our findings contrast with the prescriptions of decision theory in novel and interesting ways. The response to an identical sequel measurement of the same attribute can be at significant variance with the initial measurement. Furthermore, the response to the sequel measurement depends on whether the initial measurement has taken place. In the absence of the initial measurement, the sequel measurement reveals additional variability, leading to a multimodal frequency distribution that is largely absent if the first measurement has taken place.
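As a rough illustration of the kind of distributional check the abstract describes, the sketch below compares smoothed response distributions across the two measurement conditions and counts local modes. The data file and column names are hypothetical; this is not the authors' analysis.

```python
# Illustrative check (not the authors' analysis): compare the smoothed
# response distributions for the "initial measurement" and "no initial
# measurement" conditions and count local modes of each.
import numpy as np
import pandas as pd
from scipy.stats import gaussian_kde

def count_modes(values: np.ndarray, grid_size: int = 200) -> int:
    """Count local maxima of a kernel density estimate."""
    grid = np.linspace(values.min(), values.max(), grid_size)
    density = gaussian_kde(values)(grid)
    interior = density[1:-1]
    return int(np.sum((interior > density[:-2]) & (interior > density[2:])))

# Hypothetical input: one row per subject, columns 'condition' and 'response'.
df = pd.read_csv("sequel_responses.csv")
for condition, group in df.groupby("condition"):
    print(condition, "modes:", count_modes(group["response"].to_numpy()))
```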


2017 ◽  
Vol 30 (1) ◽  
pp. 111-122 ◽  
Author(s):  
Steve Buchheit ◽  
Marcus M. Doxey ◽  
Troy Pollard ◽  
Shane R. Stinson

ABSTRACT
Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.
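For readers new to the logistics discussed above, the sketch below shows one way a HIT might be posted programmatically through the MTurk requester API with targeted qualification requirements (here, US location and a minimum approval rate). The survey URL and payment details are placeholders, and the system qualification type IDs should be verified against the current MTurk documentation; TurkPrime offers a point-and-click alternative to this kind of scripting.

```python
# Minimal sketch of posting a HIT via the MTurk requester API (sandbox)
# with targeted qualification requirements. Placeholder survey URL and
# reward; verify qualification type IDs against current MTurk docs.
import boto3

SANDBOX = "https://mturk-requester-sandbox.us-east-1.amazonaws.com"
mturk = boto3.client("mturk", region_name="us-east-1", endpoint_url=SANDBOX)

external_question = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example-survey.test/study</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

response = mturk.create_hit(
    Title="Short decision-making survey (10 minutes)",
    Description="Answer a brief questionnaire about workplace decisions.",
    Keywords="survey, research, questionnaire",
    Reward="1.50",
    MaxAssignments=100,
    LifetimeInSeconds=3 * 24 * 3600,
    AssignmentDurationInSeconds=30 * 60,
    Question=external_question,
    QualificationRequirements=[
        {   # locale: United States (assumed system qualification ID)
            "QualificationTypeId": "00000000000000000071",
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
        },
        {   # approval rate of at least 95% (assumed system qualification ID)
            "QualificationTypeId": "000000000000000000L0",
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
        },
    ],
)
print("HIT created:", response["HIT"]["HITId"])
```

Switching the endpoint from the sandbox to the production URL is the only change needed to post the same HIT live, which is why disclosing the exact qualification settings used is an easy and useful reporting practice.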


2020 ◽  
Vol 25 (3) ◽  
pp. 505-525 ◽  
Author(s):  
Seeram Ramakrishna ◽  
Alfred Ngowi ◽  
Henk De Jager ◽  
Bankole O. Awuzie

Growing consumerism and a growing population worldwide raise concerns about society’s sustainability aspirations. This has led to calls, now gaining momentum globally, for concerted efforts to shift from the linear economy to a circular economy (CE). CE approaches lead to a zero-waste scenario of economic growth and sustainable development. These approaches are based on semi-scientific and empirical concepts, with technologies enabling the 3Rs (reduce, reuse, recycle) and 6Rs (reuse, recycle, redesign, remanufacture, reduce, recover). Studies estimate that the transition to a CE would save the world in excess of a trillion dollars annually while creating new jobs, business opportunities and economic growth. The emerging industrial revolution will enhance the symbiotic pursuit of new technologies and CE to transform extant production systems and business models for sustainability. This article examines the trends, availability and readiness of fourth industrial revolution (4IR, or Industry 4.0) technologies (for example, the Internet of Things [IoT], artificial intelligence [AI] and nanotechnology) to support and promote CE transitions within the higher-education institutional context. Furthermore, it elucidates the role of universities as living laboratories for experimenting with the utility of Industry 4.0 technologies in driving the shift towards CE futures. The article concludes that universities should play a pivotal role in engendering CE transitions.


2021 ◽  
pp. 003435522110142
Author(s):  
Deniz Aydemir-Döke ◽  
James T. Herbert

Microaggressions are daily insults directed at minority individuals, such as people with disabilities (PWD), that communicate messages of exclusion, inferiority, and abnormality. In this study, we developed a new scale, the Ableist Microaggressions Impact Questionnaire (AMIQ), which assesses the ableist microaggression experiences of PWD. Data from 245 PWD were collected using Amazon’s Mechanical Turk (MTurk) platform. An exploratory factor analysis of the 25-item AMIQ revealed a three-factor structure with internal consistency reliabilities ranging between .87 and .92. As a more economical and psychometrically sound instrument for assessing microaggression impact as it pertains to disability, the AMIQ offers promise for rehabilitation counselor research and practice.
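As a generic illustration of the analysis described above (not the authors' code), the sketch below runs an exploratory factor analysis with an oblique rotation on a 25-item response matrix and reports Cronbach's alpha for the items assigned to each factor. The input file and the use of the third-party factor_analyzer package are assumptions.

```python
# Illustrative sketch: EFA with oblique rotation on a 25-item scale,
# then per-factor internal consistency. Input file is hypothetical.
import pandas as pd
from factor_analyzer import FactorAnalyzer

items = pd.read_csv("amiq_items.csv")  # 25 item columns, one row per respondent

fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns,
                        columns=["F1", "F2", "F3"])

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of item columns."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Assign each item to the factor on which it loads most strongly,
# then report per-factor internal consistency.
assignment = loadings.abs().idxmax(axis=1)
for factor in ["F1", "F2", "F3"]:
    cols = assignment[assignment == factor].index
    print(factor, "alpha:", round(cronbach_alpha(items[cols]), 2))
```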


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Jon Agley ◽  
Yunyu Xiao ◽  
Esi E. Thompson ◽  
Lilian Golzarri-Arroyo

Abstract
Objective: This study describes the iterative process of selecting an infographic for use in a large, randomized trial related to trust in science, COVID-19 misinformation, and behavioral intentions for non-pharmaceutical preventive behaviors. Five separate concepts were developed based on underlying subcomponents of ‘trust in science and scientists’ and were turned into infographics by media experts and digital artists. Study participants (n = 100) were recruited from Amazon’s Mechanical Turk and randomized to five arms. Each arm viewed a different infographic and provided both quantitative (narrative believability scale and trust in science and scientists inventory) and qualitative data to assist the research team in identifying the infographic most likely to be successful in a larger study.
Results: Data indicated that all infographics were perceived to be believable, with means ranging from 5.27 to 5.97 on a scale from one to seven. No iatrogenic outcomes were observed for within-group changes in trust in science. Given equivocal believability outcomes, and after examining confidence intervals for the trust-in-science data and then the qualitative responses, we selected infographic 3, which addressed issues of credibility and consensus by illustrating changing narratives on butter and margarine, as the best candidate for use in the full study.
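The arm-level comparison described above can be illustrated with a short script that computes each arm's mean believability score and a 95% confidence interval. The data file and column names are hypothetical placeholders, not the study's dataset.

```python
# Illustrative sketch of an arm-level comparison: mean believability
# with a 95% confidence interval for each of five infographic arms.
import pandas as pd
from scipy import stats

df = pd.read_csv("pilot_data.csv")  # hypothetical columns: arm, believability (1-7)

for arm, group in df.groupby("arm"):
    scores = group["believability"].to_numpy()
    mean = scores.mean()
    sem = stats.sem(scores)
    ci_low, ci_high = stats.t.interval(0.95, len(scores) - 1, loc=mean, scale=sem)
    print(f"arm {arm}: mean={mean:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```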


2021 ◽  
pp. 027507402110488
Author(s):  
Mark Benton

Policing in the United States has a racist history, with negative implications for its legitimacy among African Americans today. Legitimacy is important for effective police operations. Community policing may improve policing's legitimacy but is difficult to implement with fidelity and does not address this history. An apology for policing's racist history may work as a legitimizing supplement to community policing. On the other hand, an apology may be interpreted as words without changes in practice. Using a survey vignette experiment on Amazon's Mechanical Turk to sample African Americans, this research tests the legitimizing effect of a supplemental apology for historical police racism during a community policing policy announcement. Statistical findings suggest that supplementing the communication with an apology conferred little to no additional legitimacy on policing among respondents. Qualitative data suggested a rationale: apologies need not indicate future equitable behavior or policy implementation, and implementation itself seems crucial for improvements in police legitimacy.

