Results from the Mechanical Turk Online Audio Recordings

2019
pp. 75-112
Author(s):  
James N. Stanford

This is the first of the two chapters (Chapters 4 and 5) that present the results of the online data collection project using Amazon’s Mechanical Turk system. The project provides a broad-scale “bird’s eye” view of New England dialect features across large distances. This chapter examines the results from 626 speakers who audio-recorded themselves reading 12 sentences twice each. The recordings were analyzed acoustically and then modeled statistically and graphically. The results are presented in the form of maps and statistical analyses, with the goal of providing a large-scale geographic overview of modern-day patterns of New England dialect features.

2019
pp. 113-138
Author(s):  
James N. Stanford

This is the second of the two chapters (Chapters 4 and 5) that present the results of the author’s online data collection project using Mechanical Turk. This chapter analyzes the results of the online written questionnaires: 534 people responded to online questions about New England dialect features, including phonological features and lexical items. The author maps the results in terms of regional features in different parts of New England, comparing them to prior surveys and to the acoustic analyses of the preceding chapter. The chapter also analyzes 100 free-response answers in which New Englanders gave further insights into the current state of New England English.


2017
Vol 30 (1)
pp. 111-122
Author(s):  
Steve Buchheit ◽  
Marcus M. Doxey ◽  
Troy Pollard ◽  
Shane R. Stinson

ABSTRACT Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.


Author(s):  
Irene Messina ◽  
Salvatore Gullo ◽  
Omar Carlo Gioacchino Gelo ◽  
Cecilia Giordano ◽  
Silvia Salcuni

The Interest Section on Therapist Training and Development of the Society for Psychotherapy Research (SPRISTAD) has launched a multisite collaborative longitudinal study of psychotherapy trainees’ development, a large-scale study involving a number of countries all over the world. In the present article, we present an overview of the early Italian contribution to the SPRISTAD study, based on preliminary paper-and-pencil data collection. Our preliminary findings showed cross-sectional differences at different years of training and two-year longitudinal changes in trainees’ perceived development. Moreover, trainees’ characteristics such as their motivation, relational manner, current life, and personal background have been shown to deserve attention in research on trainees’ development. These findings encourage the continuation of the SPRISTAD online data collection.


2018
Author(s):  
Mark Sheskin ◽  
Frank Keil

Over the past decade, the internet has become an important platform for many types of psychology research, especially research with adult participants on Amazon’s Mechanical Turk. More recently, developmental researchers have begun to explore how online studies might be conducted with infants and children. Here, we introduce a new platform for online developmental research that includes live interaction with a researcher, and use it to replicate classic results in the literature. We end by discussing future research, including the potential for large-scale cross-cultural and longitudinal research.


2021
Vol 111 (12)
pp. 2167-2175
Author(s):  
Stephen J. Blumberg ◽  
Jennifer D. Parker ◽  
Brian C. Moyer

High-quality data are accurate, relevant, and timely. Large national health surveys have always balanced the implementation of these quality dimensions to meet the needs of diverse users. The COVID-19 pandemic shifted these balances, with both disrupted survey operations and a critical need for relevant and timely health data for decision-making. The National Health Interview Survey (NHIS) responded to these challenges with several operational changes to continue production in 2020. However, data files from the 2020 NHIS were not expected to be publicly available until fall 2021. To fill the gap, the National Center for Health Statistics (NCHS) turned to 2 online data collection platforms—the Census Bureau’s Household Pulse Survey (HPS) and the NCHS Research and Development Survey (RANDS)—to collect COVID-19‒related data more quickly. This article describes the adaptations of NHIS and the use of HPS and RANDS during the pandemic in the context of the recently released Framework for Data Quality from the Federal Committee on Statistical Methodology. (Am J Public Health. 2021;111(12):2167–2175. https://doi.org/10.2105/AJPH.2021.306516 )
