Results from the Mechanical Turk Online Written Questionnaires

2019, pp. 113–138
Author(s): James N. Stanford

This is the second of the two chapters (Chapters 4 and 5) that present the results of the author's online data collection project using Mechanical Turk. This chapter analyzes the results of the online written questionnaires, in which 534 people answered questions about New England dialect features, including phonological features and lexical items. The author maps the results in terms of regional features in different parts of New England, comparing them to prior surveys and to the acoustic analyses of the previous chapter. The chapter also analyzes 100 free-response answers in which New Englanders offer further insight into the current state of New England English.

2019, pp. 75–112
Author(s): James N. Stanford

This is the first of the two chapters (Chapters 4 and 5) that present the results of the online data collection project using Amazon's Mechanical Turk system. These projects provide a broad-scale "bird's-eye" view of New England dialect features across large distances. This chapter examines the results from 626 speakers who audio-recorded themselves reading 12 sentences twice each. The recordings were analyzed acoustically and then modeled statistically and graphically. The results are presented in the form of maps and statistical analyses, with the goal of providing a large-scale geographic overview of modern-day patterns of New England dialect features.
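The abstract does not name the analysis toolchain. As a rough illustration of the acoustic step described above, the Python sketch below extracts vowel formant values (F1/F2), the standard measurements behind New England features such as the low-back merger, from a single recording using the praat-parselmouth library. The file name, speaker, and measurement time are hypothetical, and this is not claimed to be the author's actual pipeline.

```python
# Minimal sketch of vowel formant extraction, the kind of acoustic
# measurement used in large-scale dialect studies. Illustrative only:
# the file name and measurement time below are hypothetical.
import parselmouth  # pip install praat-parselmouth

def measure_formants(wav_path: str, time_s: float) -> tuple[float, float]:
    """Return (F1, F2) in Hz at a given time point in the recording."""
    sound = parselmouth.Sound(wav_path)
    formants = sound.to_formant_burg()  # Burg-method tracking, Praat defaults
    f1 = formants.get_value_at_time(1, time_s)
    f2 = formants.get_value_at_time(2, time_s)
    return f1, f2

# Hypothetical: one self-recorded sentence, measured at the midpoint
# of a target vowel.
f1, f2 = measure_formants("speaker_001_sentence_03.wav", time_s=0.42)
print(f"F1 = {f1:.0f} Hz, F2 = {f2:.0f} Hz")
```

Per-vowel measurements like these can then be normalized across speakers and fed into the statistical and graphical models the chapter describes.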


2017, Vol. 30 (1), pp. 111–122
Author(s): Steve Buchheit, Marcus M. Doxey, Troy Pollard, Shane R. Stinson

Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.
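As a concrete illustration of the recruiting logistics the paper discusses, the sketch below posts a study to MTurk with qualification requirements that target US-based workers with a high approval rate, one common way to screen for a population of interest. The boto3 MTurk calls and the system qualification IDs are real API features; the reward, participant counts, and survey URL are hypothetical, and this is a sketch rather than the authors' procedure.

```python
# Sketch of targeted participant recruiting on MTurk via boto3.
# The client calls and system qualification IDs are real MTurk API
# features; the reward, counts, and survey URL are hypothetical.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    # Sandbox endpoint for testing; remove to post to the live marketplace.
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion wrapper that embeds a (hypothetical) survey URL.
EXTERNAL_QUESTION_XML = """\
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.com/survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>"""

LOCALE_QUAL = "00000000000000000071"    # worker locale (system qualification)
APPROVAL_QUAL = "000000000000000000L0"  # percent assignments approved

response = mturk.create_hit(
    Title="Short business-judgment survey (10 minutes)",
    Description="Answer a brief questionnaire about business decisions.",
    Keywords="survey, research",
    Reward="1.50",                       # USD, passed as a string
    MaxAssignments=100,                  # number of participants sought
    AssignmentDurationInSeconds=30 * 60,
    LifetimeInSeconds=7 * 24 * 60 * 60,
    Question=EXTERNAL_QUESTION_XML,
    QualificationRequirements=[
        {   # restrict to US-based workers
            "QualificationTypeId": LOCALE_QUAL,
            "Comparator": "EqualTo",
            "LocaleValues": [{"Country": "US"}],
        },
        {   # restrict to workers with >= 95% approval rate
            "QualificationTypeId": APPROVAL_QUAL,
            "Comparator": "GreaterThanOrEqualTo",
            "IntegerValues": [95],
        },
    ],
)
print("HIT created:", response["HIT"]["HITId"])
```

Pointing endpoint_url at the sandbox lets a requester test the HIT without paying workers; services such as TurkPrime layer further recruiting controls on top of these same primitives.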


2021, Vol. 111 (12), pp. 2167–2175
Author(s): Stephen J. Blumberg, Jennifer D. Parker, Brian C. Moyer

High-quality data are accurate, relevant, and timely. Large national health surveys have always balanced these quality dimensions to meet the needs of diverse users. The COVID-19 pandemic shifted these balances, with both disrupted survey operations and a critical need for relevant and timely health data for decision-making. The National Health Interview Survey (NHIS) responded to these challenges with several operational changes to continue production in 2020. However, data files from the 2020 NHIS were not expected to be publicly available until fall 2021. To fill the gap, the National Center for Health Statistics (NCHS) turned to two online data collection platforms, the Census Bureau's Household Pulse Survey (HPS) and the NCHS Research and Development Survey (RANDS), to collect COVID-19-related data more quickly. This article describes the adaptations of NHIS and the use of HPS and RANDS during the pandemic in the context of the recently released Framework for Data Quality from the Federal Committee on Statistical Methodology. (Am J Public Health. 2021;111(12):2167–2175. https://doi.org/10.2105/AJPH.2021.306516)

