Lie for a Dime: When Most Prescreening Responses Are Honest but Most Study Participants Are Impostors

Author(s):  
Jesse J. Chandler ◽  
Gabriele Paolacci

Abstract. The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that a substantial number of participants misrepresent theoretically relevant characteristics (e.g., demographics, product ownership) to meet eligibility criteria that are explicit in the studies or inferred from exclusion from the study on a first attempt or from previous experiences with similar studies. When recruiting rare populations, a large proportion of responses can be deceptive. We conclude with recommendations about how to ensure that ineligible participants are excluded, applicable to a wide variety of data collection efforts that rely on self-report.
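A minimal sketch of the two-wave screening the authors' recommendations point toward: collect eligibility data in an ostensibly unrelated survey first, then cross-check the main study's responses against it. The file and column names (prescreen.csv, worker_id, owns_product) are hypothetical placeholders, not from the article.

```python
import pandas as pd

wave1 = pd.read_csv("prescreen.csv")   # eligibility items, asked with no incentive attached
wave2 = pd.read_csv("main_study.csv")  # same items asked again in the main study

merged = wave2.merge(wave1, on="worker_id", suffixes=("_main", "_screen"))

# Eligible = reported the characteristic *before* eligibility paid off
eligible = merged[merged["owns_product_screen"] == 1]

# Flag likely impostors: claimed the characteristic only once it granted access
impostors = merged[
    (merged["owns_product_main"] == 1) & (merged["owns_product_screen"] == 0)
]
print(f"{len(impostors)} of {len(merged)} respondents changed their answer")
```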

2021 ◽  
Vol 8 (2) ◽  
pp. 205316802110169
Author(s):  
William O’Brochta ◽  
Sunita Parikh

What can researchers do to address anomalous survey and experimental responses on Amazon Mechanical Turk (MTurk)? Much of the anomalous response problem has been traced to India, and several survey and technological techniques have been developed to detect foreign workers accessing US-specific surveys. We survey Indian MTurkers and find that 26% pass survey questions used to detect foreign workers, and 3% claim to be located in the United States. We show that restricting respondents to Master Workers and removing the US location requirement encourages Indian MTurkers to correctly self-report their location, helping to reduce anomalous responses among US respondents and to improve data quality. Based on these results, we outline key considerations for researchers seeking to maximize data quality while keeping costs low.
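A hedged sketch of the recruitment setup the authors test: require Master Workers but omit the US locale qualification, so workers have no incentive to misreport their location. It uses the real boto3 MTurk client; the Masters QualificationTypeId shown is the production ID documented by AWS (verify before use), and the HIT metadata values are placeholders.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

MASTERS_QUAL = "2F1QJWKUDD8XADTFD2Q0G6UTO95ALH"  # production Masters ID (check AWS docs)

hit = mturk.create_hit(
    Title="Short academic survey",               # placeholder metadata
    Description="A 10-minute survey about online work",
    Reward="1.50",
    MaxAssignments=100,
    AssignmentDurationInSeconds=1800,
    LifetimeInSeconds=86400,
    Question=open("survey_question.xml").read(), # ExternalQuestion XML
    QualificationRequirements=[
        {
            "QualificationTypeId": MASTERS_QUAL,
            "Comparator": "Exists",
            # Note: no US locale requirement is attached, per the
            # strategy evaluated in the article.
        }
    ],
)
print(hit["HIT"]["HITId"])
```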


2020 ◽  
Author(s):  
Joseph Smith ◽  
Heather Kempton ◽  
Matt Williams ◽  
Clifford van Ommen

Objective: By committing to latent variable models, mindfulness research has aimed to transform observable practices into an identifiable real ‘mindfulness’ experience which is claimed to exist beyond what is directly observed. Recently, an alternative methodology has been developed that allows mindfulness to be modelled as a complex system or network at the level of self-report. This study hypothesised that a more densely connected network of observable practices is indicative of a greater level of development of mindfulness. Methods: Mindfulness networks were estimated for practitioners and non-practitioners using the Freiburg Mindfulness Inventory (FMI). A total of 371 regular mindfulness practitioners, 224 non-practitioners, and 59 irregular practitioners were recruited online from the Amazon Mechanical Turk database. Results: Comparisons of practitioners’ and non-practitioners’ networks indicated that network density did not significantly differ, whereas evidence was found in support of a significant difference in network structure. An exploratory analysis revealed that the FMI item representing the mindfulness practice of Acceptance was substantially more central in the practitioners’ FMI network, relative to its position in the non-practitioners’ FMI network. FMI items representing the mindfulness practices of Self-kindness and Returning to the Present were substantially more peripheral in the practitioners’ FMI network relative to their position in the non-practitioners’ FMI network. Conclusions: The study provides proof-of-principle support for investigating mindfulness as a complex network at the level of self-report. However, the lack of difference in network density indicates that future research is needed to examine network dynamics in the context of regular mindfulness practice.
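An illustrative Python sketch of the network approach described above: estimate a regularized partial-correlation network from FMI item scores for each group and compare density. This is an assumed substitute pipeline (psychological network studies of this kind are typically run with R packages such as bootnet), not the authors' actual analysis.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

def estimate_network(item_scores: np.ndarray) -> np.ndarray:
    """Return a partial-correlation edge-weight matrix for one group
    (rows = respondents, columns = FMI items)."""
    model = GraphicalLassoCV().fit(item_scores)
    p = model.precision_
    d = np.sqrt(np.diag(p))
    partial_corr = -p / np.outer(d, d)   # standardize the precision matrix
    np.fill_diagonal(partial_corr, 0.0)
    return partial_corr

def density(network: np.ndarray) -> float:
    """Share of possible edges with a nonzero estimated weight."""
    n = network.shape[0]
    return np.count_nonzero(np.triu(network, k=1)) / (n * (n - 1) / 2)

# With practitioner and non-practitioner score arrays loaded, the
# comparison reduces to:
#   density(estimate_network(practitioners)) vs.
#   density(estimate_network(non_practitioners))
```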


Author(s):  
Melissa D. Pike ◽  
Deborah M. Powell ◽  
Joshua S. Bourdage ◽  
Eden-Raye Lukacik

Abstract. Honesty-Humility is a valuable predictor in personnel selection; however, problems with self-report measures create a need for new tools to judge this trait. This research therefore examines the interview as an alternative for assessing Honesty-Humility and how to improve judgments of Honesty-Humility in the interview. Using trait activation theory, we examined the impact of interview question type on the accuracy of Honesty-Humility judgments. We hypothesized that general personality-tailored questions and probes would increase the accuracy of Honesty-Humility judgments. Nine hundred thirty-three Amazon Mechanical Turk workers watched and rated five interviews. Results indicated that general questions with probes and specific questions without probes led to the most accurate Honesty-Humility judgments. These findings support the realistic accuracy model and have implications for Honesty-Humility-based interviews.
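A hedged illustration of how judgment accuracy per condition might be scored: correlate raters' Honesty-Humility judgments with targets' criterion scores (e.g., self-report) within each question-type condition. The file and column names are hypothetical, not taken from the article.

```python
import pandas as pd
from scipy.stats import pearsonr

# columns assumed: condition, rater_judgment, target_hh (criterion score)
ratings = pd.read_csv("ratings.csv")

for condition, grp in ratings.groupby("condition"):
    r, p = pearsonr(grp["rater_judgment"], grp["target_hh"])
    print(f"{condition}: accuracy r = {r:.2f} (p = {p:.3f})")
```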


2021 ◽  
pp. 193896552110254
Author(s):  
Lu Lu ◽  
Nathan Neale ◽  
Nathaniel D. Line ◽  
Mark Bonn

As the use of Amazon’s Mechanical Turk (MTurk) has increased among social science researchers, so, too, has research into the merits and drawbacks of the platform. However, while many endeavors have sought to address issues such as generalizability, the attentiveness of workers, and the quality of the associated data, there has been relatively less effort concentrated on integrating the various strategies that can be used to generate high-quality data using MTurk samples. Accordingly, the purpose of this research is twofold. First, existing studies are integrated into a set of strategies/best practices that can be used to maximize MTurk data quality. Second, focusing on task setup, selected platform-level strategies that have received relatively less attention in previous research are empirically tested to further enhance the contribution of the proposed best practices for MTurk usage.
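One platform-level, task-setup strategy of the kind this line of work evaluates, sketched with the real boto3 MTurk API: maintain a private panel of workers who passed earlier quality checks by granting them a custom qualification, then require it on later HITs. The qualification name and worker IDs are placeholders.

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Create a reusable custom qualification marking vetted workers
qual = mturk.create_qualification_type(
    Name="PassedScreener2024",  # hypothetical panel name
    Description="Workers who passed our attention/quality screener",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Grant the qualification to workers who produced high-quality data
for worker_id in ["A1EXAMPLE", "A2EXAMPLE"]:  # placeholder IDs
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )

# Later HITs can then include {"QualificationTypeId": qual_id,
# "Comparator": "Exists"} so only the vetted panel sees the task.
```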


2013 ◽  
Vol 46 (4) ◽  
pp. 1023-1031 ◽  
Author(s):  
Eyal Peer ◽  
Joachim Vosgerau ◽  
Alessandro Acquisti

2021 ◽  
Author(s):  
David Hauser ◽  
Aaron J Moss ◽  
Cheskie Rosenzweig ◽  
Shalom Noach Jaffe ◽  
Jonathan Robinson ◽  
...  

Maintaining data quality on Amazon Mechanical Turk (MTurk) has always been a concern for researchers. CloudResearch, a third-party website that interfaces with MTurk, assessed approximately 100,000 MTurkers and categorized them into those who provide high-quality (~65,000; Approved) and low-quality (~35,000; Blocked) data. Here, we examined the predictive validity of CloudResearch’s vetting. Participants (N = 900) from the Approved and Blocked groups, along with a Standard MTurk sample, completed an array of data quality measures. Approved participants had better reading comprehension, reliability, honesty, and attentiveness scores; were less likely to cheat and satisfice; and replicated classic experimental effects more reliably than Blocked participants, who performed at chance on multiple outcomes. Data quality of the Standard sample was generally in between that of the Approved and Blocked groups. We discuss the implications of using the Approved group for scientific studies conducted on Mechanical Turk.
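A minimal sketch of one way such group differences can be quantified: compare attention-check pass rates across the three samples with a chi-square test of independence. The counts below are invented for illustration, not the study's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# rows: Approved, Standard, Blocked; columns: passed, failed (hypothetical)
counts = np.array([
    [285, 15],
    [240, 60],
    [150, 150],
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.4g}")
```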


2021 ◽  
Vol 74 ◽  
pp. 101728
Author(s):  
Carolyn M. Ritchey ◽  
Toshikazu Kuroda ◽  
Jillian M. Rung ◽  
Christopher A. Podlesnik

2011 ◽  
Vol 37 (2) ◽  
pp. 413-420 ◽  
Author(s):  
Karën Fort ◽  
Gilles Adda ◽  
K. Bretonnel Cohen

2015 ◽  
Vol 16 (S1) ◽  
Author(s):  
John WG Seamons ◽  
Marconi S Barbosa ◽  
Jonathan D Victor ◽  
Dominique Coy ◽  
Ted Maddess
