Promised Incentives in Media Research: A Look at Data Quality, Sample Representativeness, and Response Rate

1984 ◽  
Vol 21 (2) ◽  
pp. 148-154 ◽  
Author(s):  
Edward G. Goetz ◽  
Tom R. Tyler ◽  
Fay Lomax Cook

The authors examine the effects of using promised incentives to increase respondent compliance in media research. The impact of promised incentives on data quality, sample representativeness, and response rate is studied. The use of promised incentives is found to increase response rates without lessening sample representativeness or response quality. In fact, the data suggest that incentives heighten response quality because they increase the attention respondents devote to the task for which they are being paid.


Author(s):  
David Dutwin ◽  
Trent D Buskirk

Abstract: Telephone surveys have been much maligned in recent years, given failures to correctly predict elections worldwide, response rates that have declined into the single digits, and the rise of low-cost nonprobability alternatives. Yet no study has assessed the degree to which the quality of data obtained via modern telephone interviewing has declined. Using an elemental approach, we evaluate the bias of various cross-tabulations of core demographics across a collection of surveys fielded over the past two decades. Results indicate that (1) bias has increased modestly over the past two decades but has trended downward in the past five years; (2) the share of cell phone interviews in a sample has a significant impact on bias; (3) traditional weighting largely mitigates the linear trend in bias; and (4) once weighted, telephone samples are nearly on par, in terms of data quality, with unweighted in-person data collected at higher response rates. Given these findings, implications for the "fit for purpose" of telephone data and its general role in the future of survey research are discussed.
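The bias measure described above can be sketched as the average deviation between a survey's demographic shares and benchmark population shares, computed before and after weighting. This is an illustrative sketch, not the authors' code; the function name and all numbers below are hypothetical.

```python
# Illustrative sketch (not the authors' analysis): demographic bias measured
# as the mean absolute deviation, in percentage points, between survey
# estimates and benchmark population shares. All figures are made up.

def mean_abs_bias(survey_shares, benchmark_shares):
    """Average absolute deviation across demographic categories."""
    assert survey_shares.keys() == benchmark_shares.keys()
    return sum(abs(survey_shares[k] - benchmark_shares[k])
               for k in survey_shares) / len(survey_shares)

# Hypothetical shares (%) for a single age cross-tabulation.
benchmark  = {"18-34": 29.0, "35-54": 33.0, "55+": 38.0}
unweighted = {"18-34": 21.0, "35-54": 31.0, "55+": 48.0}
weighted   = {"18-34": 28.0, "35-54": 33.0, "55+": 39.0}

print(mean_abs_bias(unweighted, benchmark))  # bias before weighting
print(mean_abs_bias(weighted, benchmark))    # weighting shrinks the bias
```

In practice the same comparison would be repeated across many cross-tabulations and survey years to trace the trend in bias the abstract describes.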


2012 ◽  
Vol 54 (2) ◽  
pp. 241-260 ◽  
Author(s):  
Jared M. Hansen ◽  
Scott M. Smith

Increasing both survey completion rates and data quality remains an important topic in fields as diverse as sociology, marketing, medicine, and history. Thousands of studies have made response quality their central topic of examination, but their focus has largely been on measuring response bias through the comparison of early- and late-wave responses. In this study, an innovative online field experiment tests a two-stage highly interesting question, which produces an 8% higher survey completion rate and changes sample representativeness by 12% relative to the usual one-stage highly interesting question placed at the beginning of the questionnaire. In addition to these substantive findings, a distributional and probability analysis is developed that further refines methods for identifying the extent of nonresponse bias.
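The traditional early–late wave comparison mentioned above can be sketched as follows: answers from prompt responders are compared with answers from those who responded only after follow-up, on the logic that late responders resemble nonrespondents. This is an illustrative sketch under that assumption, not the authors' analysis, and all data are made up.

```python
# Illustrative sketch (not the authors' method): the early-vs-late-wave
# check for nonresponse bias. A large gap between waves suggests reluctant
# responders differ from prompt ones, hinting that nonrespondents may
# differ further still. Ratings below are hypothetical 1-5 responses.

early_wave = [4, 5, 3, 4, 5, 4]   # responses to the first mailing
late_wave  = [3, 4, 3, 3, 4, 2]   # responses after a follow-up contact

mean_early = sum(early_wave) / len(early_wave)
mean_late  = sum(late_wave) / len(late_wave)

# The between-wave gap serves as a rough proxy for nonresponse bias.
print(round(mean_early - mean_late, 2))
```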


Author(s):  
S. Okazaki ◽  
A. Katsukura ◽  
M. Nishiyama

The aim of this article is to propose a framework for mobile-based survey methodology. Specifically, we attempt to establish guidelines for questionnaire surveys conducted via mobile devices, in terms of cost, questionnaire format, incentives, target respondents, response rate, and data quality.


2020 ◽  
pp. 016327872095818
Author(s):  
Jennifer Dykema ◽  
John Stevenson ◽  
Nadia Assad ◽  
Chad Kniss ◽  
Catherine A. Taylor

While collecting high quality data from physicians is critical, response rates for physician surveys are frequently low. A proven method for increasing response in mail surveys is to provide a small, prepaid monetary incentive in the initial mailing. More recently, researchers have begun experimenting with adding a second cash incentive in a follow-up contact in order to increase participation among more reluctant respondents. To assess the effects of sequential incentives on response rates, data quality, sample representativeness, and costs, physicians (N = 1,500) were randomly assigned to treatments that crossed the amount of a first ($5 or $10) and second ($0, $5, or $10) incentive to form the following groups: Group $5/$5; Group $5/$10; Group $10/$0; Group $10/$5; and Group $10/$10. Overall, second incentives were associated with higher response rates and lower costs per completed survey, and while they had no effect on item nonresponse, they increased sample representativeness.
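The experimental design above crosses first and second incentive amounts to form five arms. A minimal sketch of the random assignment step follows; the group labels and the N of 1,500 come from the abstract, but the assignment mechanics (uniform draws, seed, function name) are assumptions for illustration.

```python
# Illustrative sketch (not the authors' procedure): assigning N = 1,500
# physicians uniformly at random to the five incentive arms described
# in the abstract (first incentive / second incentive).

import random

GROUPS = ["$5/$5", "$5/$10", "$10/$0", "$10/$5", "$10/$10"]

def assign(n_physicians, seed=0):
    """Return a randomly drawn group label for each physician."""
    rng = random.Random(seed)  # fixed seed for a reproducible example
    return [rng.choice(GROUPS) for _ in range(n_physicians)]

arms = assign(1500)
# Each arm should receive roughly 1500 / 5 = 300 physicians.
counts = {g: arms.count(g) for g in GROUPS}
print(counts)
```

Response rates and cost per complete could then be tabulated within each arm to reproduce the comparisons the abstract reports.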


2020 ◽  
Author(s):  
Jacob Brauner

Introduction: Survey research is often designed around multiple-choice questions, although many other formats exist, also referred to as innovative item formats (IIFs), such as ranking, sorting, and questions with pictures or smileys as response options. Research has suggested that IIFs in a broad sense can strengthen data quality, but research is needed at a more specific level. The purpose of this article is therefore to present, for each separate IIF, the research on the data quality of that type of item. Method: A literature study was conducted to identify articles that test the data quality of IIFs. For each IIF, the research was discussed with regard to aspects of data quality such as reliability, validity, and response rate. Results: A total of 166 research articles were identified, with data from 218,532 participants, covering aspects of 22 IIFs with 13 subcategories. The type of evidence on data quality varies considerably: for 6 IIFs the evidence was favourable, for 11 it was inconclusive, for 1 it was unfavourable, and for 3 there was no evidence. For 6 IIFs, potential confounders were identified. Discussion: The study suggests that further research is needed where evidence is scarce. The present study could initiate more extensive systematic reviews within specific categories of IIFs.

