The Number of Choice Tasks and Survey Satisficing in Conjoint Experiments

2018 ◽  
Vol 26 (1) ◽  
pp. 112-119 ◽  
Author(s):  
Kirk Bansak ◽  
Jens Hainmueller ◽  
Daniel J. Hopkins ◽  
Teppei Yamamoto

In recent years, political and social scientists have made increasing use of conjoint survey designs to study decision-making. Here, we study a consequential question that researchers confront when implementing conjoint designs: How many choice tasks can respondents perform before survey satisficing degrades response quality? To answer this question, we run a set of experiments in which respondents are asked to complete as many as 30 conjoint tasks. Experiments conducted through Amazon’s Mechanical Turk and Survey Sampling International demonstrate the surprising robustness of conjoint designs: there are detectable but quite limited increases in survey satisficing as the number of tasks increases. Our evidence suggests that in similar study contexts researchers can assign dozens of tasks without substantial declines in response quality.
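For readers who want to probe their own conjoint data for this kind of task-order effect, a minimal sketch follows; it is not the authors' code, and the file and column names (conjoint_tasks.csv, respondent_id, task_number, straightlined) are illustrative assumptions.

    # A minimal sketch, not the authors' code: test whether a low-effort indicator
    # rises with the task number in a long-format conjoint dataset. File and column
    # names are illustrative assumptions.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("conjoint_tasks.csv")  # one row per respondent-task

    # Linear probability model with respondent-clustered standard errors:
    # does the probability of a straight-lined (low-effort) task grow as tasks accumulate?
    model = smf.ols("straightlined ~ task_number", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["respondent_id"]}
    )
    print(model.summary())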

Author(s):  
Kirk Bansak ◽  
Jens Hainmueller ◽  
Daniel J. Hopkins ◽  
Teppei Yamamoto

Abstract Recent years have seen a renaissance of conjoint survey designs within social science. To date, however, researchers have lacked guidance on how many attributes they can include within conjoint profiles before survey satisficing leads to unacceptable declines in response quality. This paper addresses that question using pre-registered, two-stage experiments examining choices among hypothetical candidates for US Senate or hotel rooms. In each experiment, we use the first stage to identify attributes which are perceived to be uncorrelated with the attribute of interest, so that their effects are not masked by those of the core attributes. In the second stage, we randomly assign respondents to conjoint designs with varying numbers of those filler attributes. We report the results of these experiments implemented via Amazon's Mechanical Turk and Survey Sampling International. They demonstrate that our core quantities of interest are generally stable, with relatively modest increases in survey satisficing when respondents face large numbers of attributes.
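As an illustration of the second-stage assignment described above, the sketch below randomizes the number of filler attributes a respondent sees; attribute names and arm sizes are invented for illustration and are not taken from the paper.

    # A small sketch of a second-stage randomization of this kind: each respondent is
    # assigned a profile design with a randomly chosen number of filler attributes.
    # Attribute names and arm sizes are invented for illustration.
    import random

    CORE_ATTRIBUTES = ["party", "experience"]            # attributes of interest (illustrative)
    FILLER_POOL = [f"filler_{i}" for i in range(1, 21)]   # pre-screened filler attributes

    def assign_design(arm_sizes=(3, 6, 10, 15)):
        """Pick an arm (number of fillers) and sample that many fillers for one respondent."""
        n_fillers = random.choice(arm_sizes)
        return CORE_ATTRIBUTES + random.sample(FILLER_POOL, n_fillers)

    print(assign_design())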


2020 ◽  
Author(s):  
Aaron J Moss ◽  
Cheskie Rosenzweig ◽  
Jonathan Robinson ◽  
Leib Litman

To understand human behavior, social scientists need people and data. In the last decade, Amazon’s Mechanical Turk (MTurk) emerged as a flexible, affordable, and reliable source of human participants and was widely adopted by academics. Yet despite MTurk’s utility, some have questioned whether researchers should continue using the platform on ethical grounds. The crux of their concern is that people on MTurk are financially insecure, subjected to abuse, and earning inhumane wages. We investigated these issues with two random and representative surveys of the U.S. MTurk population (N = 4,094). The surveys revealed: 1) the financial situation of people on MTurk mirrors that of the general population, 2) the vast majority of people do not find MTurk stressful or requesters abusive, and 3) MTurk offers flexibility and benefits that most people value above more traditional work. In addition, people reported that it is possible to earn about $9 per hour and said they would not trade the flexibility of MTurk for less than $25 per hour. Altogether, our data are important for assessing whether MTurk is an ethical place for behavioral research. We close with ways researchers can promote wage equity, ensuring MTurk is a place for affordable, high-quality, and ethical data.


2020 ◽  
Vol 8 (4) ◽  
pp. 614-629 ◽  
Author(s):  
Ryan Kennedy ◽  
Scott Clifford ◽  
Tyler Burleigh ◽  
Philip D. Waggoner ◽  
Ryan Jewell ◽  
...  

Abstract Amazon's Mechanical Turk is widely used for data collection; however, data quality may be declining due to the use of virtual private servers to fraudulently gain access to studies. Unfortunately, we know little about the scale and consequence of this fraud, and tools for social scientists to detect and prevent this fraud are underdeveloped. We first analyze 38 studies and show that this fraud is not new, but has increased recently. We then show that these fraudulent respondents provide particularly low-quality data and can weaken treatment effects. Finally, we provide two solutions: an easy-to-use application for identifying fraud in existing datasets and a method for blocking fraudulent respondents in Qualtrics surveys.
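In that spirit, one possible post-hoc screening step is sketched below: merge responses with an IP-intelligence export that flags data-center/VPS addresses, then set flagged rows aside. This is not the authors' application; the file and column names (responses.csv, ip_lookup.csv, ip_address, is_datacenter) are assumptions.

    # A hedged sketch of post-hoc IP screening, not the authors' tool. It assumes you have
    # already exported an IP-intelligence file that flags data-center/VPS addresses;
    # file and column names are illustrative assumptions.
    import pandas as pd

    responses = pd.read_csv("responses.csv")    # survey data with an ip_address column
    ip_flags = pd.read_csv("ip_lookup.csv")     # columns: ip_address, is_datacenter (bool)

    merged = responses.merge(ip_flags, on="ip_address", how="left")
    mask = merged["is_datacenter"] == True      # unflagged or unknown IPs evaluate to False
    print(f"{mask.sum()} of {len(merged)} respondents came from data-center/VPS addresses")

    # Keep only respondents whose IPs were not flagged for the main analysis.
    merged[~mask].to_csv("responses_screened.csv", index=False)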


2018 ◽  
Vol 13 (2) ◽  
pp. 149-154 ◽  
Author(s):  
Michael D. Buhrmester ◽  
Sanaz Talaifar ◽  
Samuel D. Gosling

Over the past 2 decades, many social scientists have expanded their data-collection capabilities by using various online research tools. In the 2011 article “Amazon’s Mechanical Turk: A new source of inexpensive, yet high-quality, data?” in Perspectives on Psychological Science, Buhrmester, Kwang, and Gosling introduced researchers to what was then considered to be a promising but nascent research platform. Since then, thousands of social scientists from seemingly every field have conducted research using the platform. Here, we reflect on the impact of Mechanical Turk on the social sciences and our article’s role in its rise, provide the newest data-driven recommendations to help researchers effectively use the platform, and highlight other online research platforms worth consideration.


2017 ◽  
Vol 30 (1) ◽  
pp. 111-122 ◽  
Author(s):  
Steve Buchheit ◽  
Marcus M. Doxey ◽  
Troy Pollard ◽  
Shane R. Stinson

ABSTRACT Multiple social science researchers claim that online data collection, mainly via Amazon's Mechanical Turk (MTurk), has revolutionized the behavioral sciences (Gureckis et al. 2016; Litman, Robinson, and Abberbock 2017). While MTurk-based research has grown exponentially in recent years (Chandler and Shapiro 2016), reasonable concerns have been raised about online research participants' ability to proxy for traditional research participants (Chandler, Mueller, and Paolacci 2014). This paper reviews recent MTurk research and provides further guidance for recruiting samples of MTurk participants from populations of interest to behavioral accounting researchers. First, we provide guidance on the logistics of using MTurk and discuss the potential benefits offered by TurkPrime, a third-party service provider. Second, we discuss ways to overcome challenges related to targeted participant recruiting in an online environment. Finally, we offer suggestions for disclosures that authors may provide about their efforts to attract participants and analyze responses.


2021 ◽  
pp. 003435522110142
Author(s):  
Deniz Aydemir-Döke ◽  
James T. Herbert

Microaggressions are daily insults to minority individuals such as people with disabilities (PWD) that communicate messages of exclusion, inferiority, and abnormality. In this study, we developed a new scale, the Ableist Microaggressions Impact Questionnaire (AMIQ), which assesses ableist microaggression experiences of PWD. Data from 245 PWD were collected using Amazon’s Mechanical Turk (MTurk) platform. An exploratory factor analysis of the 25-item AMIQ revealed a three-factor structure with internal consistency reliability ranging between .87 and .92. As a more economical and psychometrically sound instrument assessing microaggression impact as it pertains to disability, the AMIQ offers promise for rehabilitation counselor research and practice.
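A sketch of this general analysis pattern (a three-factor exploratory factor analysis followed by an internal-consistency check) appears below; it is not the authors' code, and the data file and item columns are assumed.

    # An illustrative sketch of this type of analysis, not the authors' code; the data
    # file and item columns are assumptions.
    import pandas as pd
    from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

    items = pd.read_csv("amiq_items.csv")       # 25 Likert-type item columns, one row per respondent

    # Three-factor EFA with an oblique rotation, mirroring the structure reported above.
    fa = FactorAnalyzer(n_factors=3, rotation="oblimin")
    fa.fit(items)
    print(pd.DataFrame(fa.loadings_, index=items.columns).round(2))

    def cronbach_alpha(df):
        """Cronbach's alpha for one subscale (rows = respondents, columns = items)."""
        k = df.shape[1]
        item_variances = df.var(axis=0, ddof=1).sum()
        total_variance = df.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_variances / total_variance)

    print(round(cronbach_alpha(items), 2))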


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Jon Agley ◽  
Yunyu Xiao ◽  
Esi E. Thompson ◽  
Lilian Golzarri-Arroyo

Abstract Objective: This study describes the iterative process of selecting an infographic for use in a large, randomized trial related to trust in science, COVID-19 misinformation, and behavioral intentions for non-pharmaceutical preventive behaviors. Five separate concepts were developed based on underlying subcomponents of ‘trust in science and scientists’ and were turned into infographics by media experts and digital artists. Study participants (n = 100) were recruited from Amazon’s Mechanical Turk and randomized to five different arms. Each arm viewed a different infographic and provided both quantitative (narrative believability scale and trust in science and scientists inventory) and qualitative data to assist the research team in identifying the infographic most likely to be successful in a larger study. Results: Data indicated that all infographics were perceived to be believable, with means ranging from 5.27 to 5.97 on a scale from one to seven. No iatrogenic outcomes were observed for within-group changes in trust in science. Given equivocal believability outcomes, and after examining confidence intervals for data on trust in science and then the qualitative responses, we selected infographic 3, which addressed issues of credibility and consensus by illustrating changing narratives on butter and margarine, as the best candidate for use in the full study.
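A minimal sketch of this kind of arm-level comparison, computing a mean and 95% confidence interval per infographic arm, is shown below; the data file and column names (pilot_responses.csv, arm, believability) are illustrative assumptions, not the authors' materials.

    # A minimal sketch of an arm-level comparison: mean believability and a 95% CI per
    # infographic arm. File and column names are illustrative assumptions.
    import pandas as pd
    from scipy import stats

    df = pd.read_csv("pilot_responses.csv")

    for arm, grp in df.groupby("arm"):
        m = grp["believability"].mean()
        lo, hi = stats.t.interval(
            0.95, len(grp) - 1, loc=m, scale=stats.sem(grp["believability"])
        )
        print(f"Infographic {arm}: mean = {m:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")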

