The effect of a web-push survey on physician survey response rates: a randomized experiment.

Author(s): Cristine D. Delnevo, Binu Singh

Abstract Background: Achieving a high response rate among physicians has been challenging, and with response rates declining in recent years, innovative methods are needed to increase them. An emerging concept in survey methodology is web-push survey delivery, in which contact is made by mail to request a response by web. This study explored the feasibility of a web-push survey in a national sample of physicians. Methods: 1,000 physicians across six specialties were randomly assigned to mail-only or web-push survey delivery. Each mode consisted of four contacts: an initial mailing, a reminder postcard, and two additional follow-ups. Response rates were calculated using AAPOR’s Response Rate 3 formula. Data were collected between February and April 2018 and analyzed in March 2019. Results: Overall response rates for the mail-only and web-push deliveries were comparable (51.2% vs. 52.8%). Response rates were higher in the web-push delivery across all demographic groups, with the exception of pulmonary/critical care physicians and physicians over the age of 65. The web-push survey yielded a greater response after the first mailing, requiring fewer follow-up contacts and resulting in a more cost-effective delivery. Conclusions: A web-push survey achieves a response rate comparable to traditional mail-only delivery for physicians, and it was more efficient in terms of both cost and the timeliness of responses. Future research should explore the efficiency of web-push survey delivery across various health care provider populations.
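
For readers unfamiliar with the formula mentioned above, the sketch below shows one way to compute AAPOR Response Rate 3 in Python. The disposition counts are hypothetical, not the study's data, and the eligibility estimate e is an assumption.

```python
def aapor_rr3(complete, partial, refusal, non_contact, other, unknown, e):
    """AAPOR Response Rate 3: completed interviews divided by all known-eligible
    cases plus the estimated-eligible share (e) of cases of unknown eligibility."""
    known_eligible = complete + partial + refusal + non_contact + other
    return complete / (known_eligible + e * unknown)

# Hypothetical disposition counts for one 500-physician arm (not the study's data)
print(aapor_rr3(complete=250, partial=6, refusal=40,
                non_contact=150, other=4, unknown=50, e=0.8))
```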

2019, Vol. 8(5), pp. 821-831
Author(s): Matthew Debell, Natalya Maisel, Brad Edwards, Michelle Amsbary, Vanessa Meldener

Abstract In mail surveys and in advance letters for surveys in other modes, it is common to include a prepaid incentive of a small amount such as $5. However, when letters are addressed generically (such as to “Resident”), advance letters may be thrown away without being opened, so the enclosed cash is wasted and the invitation or advance letter is ineffective. This research note describes results of an experiment using a nationally representative sample of 4,725 residential addresses to test a new way of letting mail recipients know their letter contains cash and is therefore worth opening: an envelope with a window revealing $5, so the cash is clearly visible from outside the sealed envelope. We also tested the USPS for evidence of theft, and we compared First Class and Priority Mail postage. We found no evidence of theft. We found no difference in response rates between Priority Mail and First Class, making First Class much more cost-effective, and we found that visible money increased the response rate to a mail survey from 42.6 to 46.9 percent, at no significant cost.
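
As a rough illustration of how the 42.6 versus 46.9 percent difference might be tested, the sketch below runs a two-proportion z-test with statsmodels. The arm sizes assume an even split of the 4,725 addresses, which the abstract does not actually specify.

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Assumed even split of the 4,725 addresses between conditions (illustrative only)
n_control, n_window = 2362, 2363
resp_control = round(0.426 * n_control)   # 42.6% response rate without visible cash
resp_window = round(0.469 * n_window)     # 46.9% response rate with visible cash

stat, pval = proportions_ztest(count=np.array([resp_window, resp_control]),
                               nobs=np.array([n_window, n_control]))
print(f"z = {stat:.2f}, p = {pval:.4f}")
```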


Field Methods, 2017, Vol. 29(3), pp. 281-298
Author(s): Milton G. Newberry, Glenn D. Israel

Recent research has shown that mixed-mode surveys are advantageous for organizations collecting data. Previous research explored web/mail mode effects for four-contact waves. This study explores the effect of web/mail mixed-mode systems over a series of contacts on customer satisfaction data from the Florida Cooperative Extension Service during 2012–2013. The experimental design involved clients who provided both e-mail and mail contact information and were randomly assigned to one of two mixed-mode treatment groups or a mail-only control. Demographic and service utilization data were compared to assess response rates and nonresponse bias. Logistic regression showed that the treatment groups had similar response rates and nonresponse bias. The fifth contact was statistically significant in increasing response rates but did not reduce nonresponse bias. Future research should continue exploring the optimal number of contacts in mixed-mode survey methodology.
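
The sketch below illustrates the kind of logistic regression of response on treatment group and demographics described above; the data, variable names, and coefficients are invented for illustration and are not the Extension study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "group": rng.choice(["mail_only", "mixed_a", "mixed_b"], size=n),
    "age": rng.integers(25, 80, size=n),
})
# Simulated response propensity; the coefficients are made up for illustration.
logit_p = -0.5 + 0.01 * (df["age"] - 50) + np.where(df["group"] == "mail_only", 0.0, 0.2)
df["responded"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Model the probability of responding as a function of treatment group and age.
model = smf.logit("responded ~ C(group, Treatment('mail_only')) + age", data=df).fit()
print(model.summary())
```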


2018, Vol. 43(5), pp. 307-330
Author(s): Richard Hendra, Aaron Hill

Background: Federally funded evaluation research projects typically strive for an 80% survey response rate, but the increasing difficulty and expense in reaching survey respondents raises the question of whether such a threshold is necessary for reducing bias and increasing the accuracy of survey estimates. Objectives: This analysis focuses on a particular component of survey methodology: the survey response rate and its relationship to nonresponse bias. Following a review of the literature, new analysis of data from a large, multisite random assignment experiment explores the relationship between survey response rates and measured nonresponse bias. Research Design: With detailed survey disposition data, the analysis simulates nonresponse bias at lower response rates. The subjects included 12,000 individuals who were fielded for 16 identical surveys as part of the Employment Retention and Advancement evaluation. Results: The results suggest scant relationship between survey nonresponse bias and response rates. The results also indicate that the pursuit of high response rates lengthens the fielding period, which can create other measurement problems. Conclusions: The costly pursuit of a high response rate may offer little or no reduction of nonresponse bias. Achieving such a high rate of response requires considerable financial resources that might be better applied to methods and techniques shown to have a greater effect on the reduction of nonresponse bias.
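
A minimal sketch of the simulation idea described above, assuming completed cases can be ordered by how long they took to respond and that the full sample corresponds to an 80% final response rate; the data, outcome, and cutoff logic are illustrative assumptions, not the evaluation's actual procedure.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 2000
completes = pd.DataFrame({"days_to_response": rng.exponential(scale=30, size=n)})
# Late responders are made slightly less likely to be employed, so truncating
# the fielding period induces some nonresponse bias in this toy example.
p_employed = np.clip(0.6 - 0.002 * completes["days_to_response"], 0.05, 0.95)
completes["employed"] = rng.binomial(1, p_employed)
completes = completes.sort_values("days_to_response")

full_estimate = completes["employed"].mean()
for target_rr in (0.4, 0.5, 0.6, 0.7, 0.8):
    # Keep only the completes that would have arrived by the time the simulated
    # response rate reached target_rr of the 80% benchmark.
    keep = int(len(completes) * target_rr / 0.8)
    est = completes["employed"].iloc[:keep].mean()
    print(f"RR {target_rr:.0%}: estimate {est:.3f}, "
          f"simulated bias {est - full_estimate:+.3f}")
```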


Author(s): T.N. TRAN, G. VAN HAL, M. PEETERS, S. JIDKOVA, S. HOECK

Municipal characteristics associated with response rate to organised colorectal cancer screening in Flanders

Introduction: In Flanders (Belgium), the response rate to organised colorectal cancer (CRC) screening is still suboptimal (~50%). We studied the characteristics of municipalities in the Flemish provinces with the highest and lowest response rates to explore factors that might be associated with the response rate to organised CRC screening. Methods: The response rates of municipalities in the 5 Flemish provinces, and the characteristics of municipalities in the provinces with the highest and lowest response rates, were compared with the average measures for Flanders (2017 data) using an unpaired two-sample Wilcoxon test. Results: The municipal response rates in Limburg and Antwerp were significantly higher, and those in West Flanders and Flemish Brabant significantly lower, than in Flanders overall. Further analyses of Limburg (highest response rate) and Flemish Brabant (lowest response rate) suggested that municipalities with higher response rates had more men and more people aged 60-64 in the target population, more jobseekers, more people who contacted GPs/specialists frequently, fewer people aged 70-74 in the target population, and a lower average income than Flanders as a whole. In contrast, municipalities with lower response rates had fewer men in the target population, fewer people with a partner, fewer jobseekers, fewer people with a global medical file, more people with a non-Dutch or non-Belgian nationality, and a higher average income (p-values < 0.01). Conclusion: This exploratory study identifies demographic, socioeconomic and health-related municipal characteristics that may be related to the response rate to CRC screening in Flanders. These findings can guide future research aimed at improving the response rate to CRC screening.
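
The unpaired two-sample Wilcoxon test used here is equivalent to the Mann-Whitney U test; the sketch below shows how such a comparison of municipal response rates might be run, using made-up percentages rather than the Flemish screening data.

```python
from scipy.stats import mannwhitneyu

# Hypothetical municipal response rates (%) for two provinces
limburg = [56.1, 57.3, 54.8, 58.0, 55.2, 56.9]
flemish_brabant = [44.5, 46.2, 43.9, 45.7, 44.1, 45.0]

stat, pval = mannwhitneyu(limburg, flemish_brabant, alternative="two-sided")
print(f"U = {stat:.1f}, p = {pval:.4f}")
```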


1980, Vol. 17(4), pp. 498-502
Author(s): Chris T. Allen, Charles D. Schewe, Gösta Wijk

A field experiment conducted in Sweden compared the effectiveness of two types of telephone pre-calls in influencing response rates in a mail survey. Response rates for a questioning foot-in-the-door manipulation were evaluated against responses produced by a simple solicitation call and a blind mailing control. The results demonstrate that pre-calling in general enhances response rate. However, the results furnish, at best, qualified support for a self-perception theory prediction. Alternative explanations for the lack of the self-perception foot effect are offered. Conclusions are drawn for the practitioner and academic researcher.
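
One conventional way to test whether response rates differ across the three conditions is a chi-square test on the responded/did-not-respond counts, sketched below with hypothetical counts rather than the experiment's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

#                 responded, did not respond  (hypothetical counts)
table = np.array([[90, 110],    # foot-in-the-door pre-call
                  [85, 115],    # simple solicitation pre-call
                  [60, 140]])   # blind mailing control

chi2, pval, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {pval:.4f}")
```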


2008, Vol. 2(1), pp. 94-103
Author(s): Leslie A. McCallister, Bobette Otto

What techniques effectively and consistently impact response rates to a mail survey? No clear answer to this question exists, largely because variability in response rates occurs depending on the population of interest, questionnaire type, and procedures used by researchers. This article examines the impact of e-mail and postcard prenotification on response rates to a mail survey by using a population of university full-time faculty and staff. Comparisons were made among respondents who received a postcard prenotification, those who received an e-mail prenotification, and those who received no prenotification prior to the initial mailing of a questionnaire. Data show that e-mail prenotification had the largest impact on response rate, while postcard prenotification had the least impact. In addition, the use of e-mail prenotification reduced overall project costs (both time and money). We suggest that the uses and applicability of e-mail prenotification be further explored to examine both its initial and overall impact on response rate in populations utilizing an electronic environment.
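
As a back-of-the-envelope illustration of the cost comparison mentioned above, the sketch below computes cost per completed questionnaire for each prenotification condition; all unit costs and counts are invented, not taken from the study.

```python
# (prenotification cost, mailing cost, number sent, number returned) per condition
conditions = {
    "email prenote":    (0.00, 1.50, 300, 165),
    "postcard prenote": (0.45, 1.50, 300, 140),
    "no prenote":       (0.00, 1.50, 300, 150),
}

for name, (prenote, mailing, sent, returned) in conditions.items():
    total = (prenote + mailing) * sent
    print(f"{name:18s} cost per completed return: ${total / returned:.2f}")
```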


2003, Vol. 28(2), pp. 163-172
Author(s): Rick Newby, John Watson, David Woodliff

Cost effective data collection is an important methodological issue for small and medium enterprise (SME) researchers. There is a generally held view that mail surveys are the most efficient means of collecting empirical data, despite the potential difficulties associated with low response rates. To enhance the usefulness of mail surveys, researchers have suggested a variety of strategies aimed at improving response rates. While previous studies have examined the effect on response rates of many of these strategies, their impact on data quality and on the cost effectiveness of data collection is less well understood. This study evaluates four response-inducing strategies (printing the survey instrument on colored paper, telephone pre-notification, payment of a monetary incentive, and a follow-up mailing) in terms of their effect on data quality, response rates, and cost effectiveness for a population of SMEs.


2021
Author(s): Rachel Margaret Esson

Introduction: Medical libraries very often base decisions about library services on information gathered from user surveys. Is the quality of information obtained in this way sufficient to enable evidence-based practice? Aim: To determine which aspects of user survey design and presentation obtain the best response rates and therefore high external validity, and to provide guidance for medical librarians who may wish to carry out user surveys. Methods: Library and information studies databases and Medline were searched to identify studies that reported the results of library user surveys measuring user perceptions of an existing or potential library service. Studies that evaluated information skills training or clinical librarianship interventions were excluded, as they have been examined in separate systematic reviews; studies that reported the results of LibQUAL or SERVQUAL were also excluded. Results: 54 studies were included. The quality of the majority of the surveys was unclear because the methodology of the user surveys was poorly reported. However, as demonstrated in previous research, paper-format surveys reported higher response rates than online-only surveys. It was not possible to extract any relevant data from the identified studies to draw conclusions about the presentation of the survey instrument. Conclusions: Unless survey methodology is reported in detail, it is not possible to judge the quality of the evidence surveys contain. Good survey design is key to obtaining a good response rate, and a good response rate means the results can be used for evidence-based practice. A Reporting Survey results Guideline (Resurge) is recommended to help improve the reporting quality of medical library survey research.


2020, pp. 193896552094309
Author(s): Faizan Ali, Olena Ciftci, Luana Nanu, Cihan Cobanoglu, Kisang Ryu

In this paper, we examine published research in six top-tier hospitality journals to explore response rates for different survey distribution methods across characteristics such as research context, respondent type, and geographical region. Data were analyzed from 1,389 papers published between January 2001 and December 2019. By examining a large set of published response rates, distribution and response-enhancing methods, and respondent types, the findings from this study will aid researchers in designing more effective surveys and collecting the data they need. Implications for response rates in hospitality research are also presented.


2019, Vol. 12(2), pp. 205979911986210
Author(s): Emily Grubert

Mail surveys remain a popular method of eliciting attitudinal information, but declining response rates motivate inquiry into new, lower cost methods of contacting potential respondents. This work presents methodological findings from a medium-sized (~12,000 addresses) mail survey testing a United States Postal Service direct mail product called Every Door Direct Mail as a low-cost approach to anonymous mail survey distribution. The results suggest that under certain conditions, Every Door Direct Mail can be a useful approach for mail survey distribution, with response rates similar to those observed with analogous first-class mailing approaches but lower cost per response. As a tool for postal carrier-route saturation mailing that does not use names or addresses, Every Door Direct Mail is potentially useful for researchers who work in small, specific geographies or value or require anonymity. The results from this work suggest good performance on demographics and socially undesirable answers for Every Door Direct Mail relative to addressed mailings. The major disadvantages include an inability to conduct household-level probability sampling, an inability to customize nonresponse follow-up, and minimum mailing sizes associated with the postal carrier route saturation requirement. Every Door Direct Mail is unlikely to become a major tool for survey researchers, but it could be useful in niche applications. This study introduces Every Door Direct Mail to the survey methodology literature and presents empirical data intended to help researchers considering using Every Door Direct Mail.

