Lie for a Dime: When most prescreening responses are honest but most "eligible" respondents are lies
Abstract

The Internet has enabled recruitment of large samples with specific characteristics. However, when researchers rely on participant self-report to determine eligibility, data quality depends on participant honesty. Across four studies on Amazon Mechanical Turk, we show that a substantial number of participants misrepresent theoretically relevant characteristics (e.g., demographics, product ownership) to meet eligibility criteria that are stated explicitly in the studies or inferred after exclusion from the study on a first attempt or from previous experiences with similar studies. When recruiting rare populations, a large proportion of responses can be deceptive. We conclude with recommendations for ensuring that ineligible participants are excluded, which are applicable to a wide variety of data collection efforts that rely on self-report.