Workers’ task choice heuristics as a source of emergent structure in digital microwork

2019 ◽  
Author(s):  
Otto Kässi ◽  
Vili Lehdonvirta ◽  
Jean-Michel Dalle

Digital labor markets are structured around tasks rather than fixed- or long-term employment contracts. We ask what this granularization of work means for digital microworkers. To address this question, we combine interview data from active online microworkers with data on open projects scraped from Amazon's Mechanical Turk platform, and we examine how microworkers choose which tasks to work on. We find evidence of preferential attachment: workers prefer to attach themselves to experienced employers who are known to offer high-quality projects. Workers also clearly prefer long series of repeatable tasks over one-off tasks, even when one-off tasks pay considerably more. We thus see a re-emergence of certain types of organizational structure.
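A minimal sketch of the preferential-attachment dynamic described above, assuming a toy model in which workers pick employers with probability proportional to a reputation count (the names, counts, and weighting rule are illustrative, not the authors' specification):

```python
import random

# Toy model: each worker chooses an employer with probability proportional
# to that employer's completed-project count, so experienced employers
# attract disproportionately more workers over time.
employers = {"A": 10, "B": 3, "C": 1}  # hypothetical project counts

def choose_employer(counts: dict) -> str:
    names = list(counts)
    weights = [counts[n] for n in names]
    return random.choices(names, weights=weights, k=1)[0]

for _ in range(100):                    # 100 simulated task choices
    chosen = choose_employer(employers)
    employers[chosen] += 1              # attachment reinforces experience

print(employers)  # counts skew further toward the experienced employer
```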


2017 ◽  
Vol 30 (1) ◽  
pp. 111-122 ◽  
Author(s):  
Steve Buchheit ◽  
Marcus M. Doxey ◽  
Troy Pollard ◽  
Shane R. Stinson

Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.
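For the logistics the authors discuss, targeted recruiting is typically implemented with qualification requirements attached to the task. A sketch using the boto3 MTurk client follows; the qualification type IDs are Amazon's documented system qualifications, but the title, reward, thresholds, and file name are illustrative assumptions, not the authors' setup:

```python
import boto3

# Post a HIT restricted to U.S. workers with a high approval rate,
# a common form of targeted participant recruiting.
mturk = boto3.client("mturk", region_name="us-east-1")

qualifications = [
    {   # System qualification: worker locale must be US
        "QualificationTypeId": "00000000000000000071",
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    },
    {   # System qualification: approval rate >= 95%
        "QualificationTypeId": "000000000000000000L0",
        "Comparator": "GreaterThanOrEqualTo",
        "IntegerValues": [95],
    },
]

response = mturk.create_hit(
    Title="Short decision-making survey (10 minutes)",
    Description="Answer questions about workplace scenarios.",
    Reward="1.50",                            # USD, passed as a string
    MaxAssignments=100,
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=3 * 24 * 60 * 60,
    Question=open("survey_question.xml").read(),  # ExternalQuestion XML
    QualificationRequirements=qualifications,
)
print(response["HIT"]["HITId"])
```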



2020 ◽  
Author(s):  
Brian Bauer ◽  
Kristy L. Larsen ◽  
Nicole Caulfield ◽  
Domynic Elder ◽  
Sara Jordan ◽  
...  

Our ability to make scientific progress is dependent upon our interpretation of data. Thus, analyzing only those data that are an honest representation of a sample is imperative for drawing accurate conclusions that allow for robust, generalizable, and replicable scientific findings. Unfortunately, a consistent line of evidence indicates the presence of inattentive/careless responders who provide low-quality data in surveys, especially on popular online crowdsourcing platforms such as Amazon’s Mechanical Turk (MTurk). Yet the majority of psychological studies using surveys conduct only outlier detection analyses to remove problematic data. Without carefully examining the possibility of low-quality data in a sample, researchers risk promoting inaccurate conclusions that interfere with scientific progress. Given that knowledge about data screening methods and optimal online data collection procedures is scattered across disparate disciplines, the dearth of psychological studies using more rigorous methodologies to prevent and detect low-quality data is likely due to inconvenience, not maleficence. Thus, this review provides up-to-date recommendations for best practices in collecting online data and data screening methods. In addition, this article includes resources with worked examples of each screening method, a collection of recommended measures, and a preregistration template for implementing these recommendations.
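Two screening methods such reviews commonly recommend, longstring (straight-lining) analysis and response-time flags, reduce to short computations. A sketch with hypothetical column names and thresholds (the cutoffs are study-specific, not prescriptions from the article):

```python
import pandas as pd

def longest_run(row) -> int:
    """Length of the longest run of identical consecutive responses."""
    values = list(row)
    longest = run = 1
    for prev, cur in zip(values, values[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

df = pd.read_csv("survey.csv")                        # hypothetical file
items = [c for c in df.columns if c.startswith("q")]  # Likert-type items

# Flag straight-lining and implausibly fast completions for review.
df["longstring"] = df[items].apply(longest_run, axis=1)
df["too_fast"] = df["duration_sec"] < 120             # illustrative cutoff
df["flagged"] = (df["longstring"] >= 10) | df["too_fast"]

print(df["flagged"].mean())   # share of respondents flagged
```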





Author(s):  
Amber Chauncey Strain ◽  
Lucille M. Booker

One of the major challenges of ANLP research is the constant balancing act between the need for large samples and the excessive time and monetary resources necessary for acquiring those samples. Amazon’s Mechanical Turk (MTurk) is a web-based data collection tool that has become a premier resource for researchers who are interested in optimizing their sample sizes and minimizing costs. Due to its supportive infrastructure, diverse participant pool, high data quality, and time and cost efficiency, MTurk seems particularly suitable for ANLP researchers who are interested in gathering large, high-quality corpora in relatively short time frames. In this chapter, the authors first provide a broad description of the MTurk interface. Next, they describe the steps for acquiring IRB approval of MTurk experiments, designing experiments using the MTurk dashboard, and managing data. Finally, the chapter concludes by discussing the potential benefits and limitations of using MTurk for ANLP experimentation.
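The data-management step the chapter describes can also be scripted rather than handled through the dashboard. A sketch using boto3 MTurk client calls (list_assignments_for_hit, approve_assignment), with a placeholder HIT ID and an assumed workflow:

```python
import boto3

# Retrieve submitted work for a HIT and approve it after storing answers.
mturk = boto3.client("mturk", region_name="us-east-1")

paginator = mturk.get_paginator("list_assignments_for_hit")
for page in paginator.paginate(HITId="EXAMPLE_HIT_ID",
                               AssignmentStatuses=["Submitted"]):
    for assignment in page["Assignments"]:
        answer_xml = assignment["Answer"]  # QuestionFormAnswers XML
        # ... parse and persist answer_xml before approving ...
        mturk.approve_assignment(
            AssignmentId=assignment["AssignmentId"],
            RequesterFeedback="Thank you for your work!",
        )
```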



2020 ◽  
Author(s):  
Aaron J Moss ◽  
Cheskie Rosenzweig ◽  
Jonathan Robinson ◽  
Leib Litman

To understand human behavior, social scientists need people and data. In the last decade, Amazon’s Mechanical Turk (MTurk) emerged as a flexible, affordable, and reliable source of human participants and was widely adopted by academics. Yet despite MTurk’s utility, some have questioned whether researchers should continue using the platform on ethical grounds. The crux of their concern is that people on MTurk are financially insecure, subjected to abuse, and earning inhumane wages. We investigated these issues with two random and representative surveys of the U.S. MTurk population (N = 4,094). The surveys revealed: 1) the financial situation of people on MTurk mirrors the general population, 2) the vast majority of people do not find MTurk stressful or requesters abusive, and 3) MTurk offers flexibility and benefits that most people value above more traditional work. In addition, people reported that it is possible to earn about 9 dollars per hour and said they would not trade the flexibility of MTurk for less than 25 dollars per hour. Altogether, our data are important for assessing whether MTurk is an ethical place for behavioral research. We close with ways researchers can promote wage equity, ensuring MTurk is a place for affordable, high-quality, and ethical data.
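The closing suggestion about wage equity comes down to simple arithmetic: derive the per-task reward from a target hourly wage and a piloted completion time. A sketch using the roughly 9-dollars-per-hour figure reported in the abstract; the task duration is an assumed pilot estimate:

```python
# Derive a HIT reward from a target hourly wage and median task time.
target_wage_per_hour = 9.00   # attainable wage reported in the surveys (USD)
median_task_minutes = 12      # hypothetical pilot estimate

reward = round(target_wage_per_hour * median_task_minutes / 60, 2)
print(f"Set the HIT reward to at least ${reward:.2f}")  # -> $1.80
```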



2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, relatively less effort has been concentrated on integrating the various strategies that can be used to generate high-quality data from MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, selected platform-level strategies related to task setup that have received relatively little attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
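One way to integrate such strategies is to apply each screen in sequence and report its attrition, so the cost of every quality decision is transparent in the write-up. A sketch with hypothetical flags and column names (not the specific strategies tested in the paper):

```python
import pandas as pd

df = pd.read_csv("mturk_responses.csv")   # hypothetical export

# Apply quality screens one at a time and report how many responses
# each screen removes from the remaining pool.
screens = {
    "failed_attention_check": df["attention_check"] != "agree",
    "duplicate_worker": df.duplicated("worker_id", keep="first"),
    "too_fast": df["duration_sec"] < 120,
}

keep = pd.Series(True, index=df.index)
for name, flag in screens.items():
    print(f"{name}: removes {(keep & flag).sum()} responses")
    keep &= ~flag

print(f"retained {keep.sum()} of {len(df)} responses")
```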



2018 ◽  
Author(s):  
Jonathan Schwabish

The American Economic Review (AER) is one of the most prestigious journals in the field of economics. First published in 1911, the journal has published articles covering every aspect and topic in the field. AER articles are not just in-depth prose; they might also include tables, diagrams, and graphs. In this paper I ask three primary questions: First, do most graphs in the AER use data, or are they some kind of diagram or illustration of a theory or concept? Second, what kinds of graphs (lines, bars, pies, etc.) do economists use to help visualize their arguments in the AER? And third, are those graphs of generally high quality? To help shed some light on those questions, I collect, catalog, and, using Amazon’s Mechanical Turk platform, rate every graph published in the AER from its first volume in 1911 through 2017. I find that the share of graphs that use data fell over the first half of the century and then increased from about the early 1980s to today. I also find that economists use a lot of line charts: of the more than 2,600 graphs in total, more than 80% are line charts. Finally, I find a U-shaped curve in perceived graph quality, falling to a low in the early 1960s and rising over the past several decades, on average reaching a level only slightly higher than in the first issues. This research is a first step in understanding how economists use data visualization to communicate their work and can help provide a basis for effective strategies that will enable better communication of that work.
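The rating step implies aggregating several independent Turker judgments per graph before tracing quality over time. A sketch of that aggregation with hypothetical column names (not the paper's actual data layout):

```python
import pandas as pd

ratings = pd.read_csv("graph_ratings.csv")  # hypothetical MTurk output
# Assumed columns: graph_id, year, rating (one row per Turker judgment)

# Average the independent ratings for each graph, then summarize by
# publication year to trace perceived quality over time.
per_graph = ratings.groupby(["graph_id", "year"], as_index=False)["rating"].mean()
by_year = per_graph.groupby("year")["rating"].mean()

print(by_year.head())   # mean perceived graph quality per year
```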



2019 ◽  
pp. 75-112
Author(s):  
James N. Stanford

This is the first of two chapters (Chapters 4 and 5) that present the results of the online data collection project using Amazon’s Mechanical Turk system. This project provides a broad-scale “bird’s-eye” view of New England dialect features across large distances. This chapter examines the results from 626 speakers who audio-recorded themselves reading 12 sentences twice each. The recordings were analyzed acoustically and then modeled statistically and graphically. The results are presented in the form of maps and statistical analyses, with the goal of providing a large-scale geographic overview of modern-day patterns of New England dialect features.
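The acoustic analysis of the self-recordings might look like the following sketch, using the parselmouth interface to Praat; the file name and measurement point are placeholders, and the chapter's actual procedure may differ:

```python
import parselmouth

# Measure F1 and F2 at a vowel midpoint in one self-recorded sentence.
sound = parselmouth.Sound("speaker_001_sentence_03.wav")  # placeholder file
formants = sound.to_formant_burg()       # Burg-method formant tracking

midpoint = 1.42                          # placeholder vowel midpoint (s)
f1 = formants.get_value_at_time(1, midpoint)
f2 = formants.get_value_at_time(2, midpoint)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```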



2015 ◽  
Vol 8 (2) ◽  
pp. 183-190 ◽  
Author(s):  
P. D. Harms ◽  
Justin A. DeSimone

Landers and Behrend (2015) are the most recent in a long line of researchers who have suggested that online samples generated from sources such as Amazon's Mechanical Turk (MTurk) are as good as, or potentially even better than, the typical samples found in psychology studies. Importantly, the authors caution that researchers and reviewers need to reflect carefully on the goals of research when evaluating the appropriateness of samples. However, although they argue that certain types of samples should not be dismissed out of hand, they note that there is only scant evidence demonstrating that online sources can provide usable data for organizational research and that there is a need for further research evaluating the validity of these new sources of data. Because the target article does not directly address the potential problems with such samples, we review what is known about collecting online data (with a particular focus on MTurk) and illustrate some potential problems using data derived from such sources.
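One problem often illustrated with such data is multivariate outlying response patterns. A sketch using Mahalanobis distance over hypothetical survey items; the cutoff is illustrative, not a recommendation from the commentary:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("online_sample.csv")                 # hypothetical data
items = [c for c in df.columns if c.startswith("q")]
X = df[items].to_numpy(dtype=float)

# Squared Mahalanobis distance of each respondent from the sample
# centroid, given the item covariance structure.
mu = X.mean(axis=0)
cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", X - mu, cov_inv, X - mu)

df["outlier"] = d2 > np.percentile(d2, 99)            # illustrative cutoff
print(df["outlier"].sum(), "potential multivariate outliers")
```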




