An analysis of methodologic quality in survey research reported in the Journal of Clinical Oncology.

2013 ◽  
Vol 31 (15_suppl) ◽  
pp. 6616-6616
Author(s):  
Carrie M Tamarelli ◽  
David D. Howell

Background: Survey research (SR) has been increasing as a percentage of published manuscripts in medical journals. SR plays an important role in studies of quality of life and patient preferences in treatment. Appropriate methodological quality in SR is critical both to ensure the reliability and validity of survey results and to derive sound generalizations for larger populations from the subsets surveyed. Surveys with deficient methodology may suffer from significant flaws. A complete description and discussion of survey methodology, analysis, and results is essential for a thorough understanding and evaluation of published SR. Methods: Between January 2006 and December 2010, 227 articles in JCO were identified as having either “survey” or “questionnaire” in the title or abstract. The most recent 52 consecutive articles fulfilling these criteria were reviewed for reporting of survey methodology. A modification of Bennett et al.’s checklist for reporting SR was used for this analysis (Bennett C, et al. Reporting guidelines for survey research: An analysis of published guidance and reporting practices. PLoS Med 8(8): e1001069, 2011). Thirty-five metrics were used to analyze each survey, grouped into the following categories: 1) title and abstract, 2) introduction, 3) methods (research tool, sample selection, survey administration, and analysis), 4) results, 5) discussion, and 6) ethical quality indicators. Results: Of the 52 survey articles reviewed, the top quartile reported more than 72% of the analyzed metrics. Half of the articles contained 63% or more of the desired metrics, and the other half reported between 42% and 62%. Some metrics were usually reported, such as ethics board review (85% of articles), but others were not consistently reported, such as calculation or justification of sample size (neither was reported in 71% of articles).
Conclusions: A substantial number of the survey research articles reviewed in JCO did not report critical components of survey methodology. More rigorous reporting recommendations should be offered to guide authors in reporting survey research results.
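The per-article tallies described above reduce to a simple coverage computation over a 35-item checklist; a minimal sketch, using toy flag data rather than the study's actual articles:

```python
from statistics import median, quantiles

N_METRICS = 35  # size of the modified Bennett et al. checklist

def coverage_pct(flags):
    """Percentage of the checklist metrics an article reports."""
    return 100 * sum(flags) / N_METRICS

# One boolean per checklist metric per article (illustrative, four articles)
articles = [
    [True] * 26 + [False] * 9,
    [True] * 22 + [False] * 13,
    [True] * 15 + [False] * 20,
    [True] * 28 + [False] * 7,
]
scores = sorted(coverage_pct(a) for a in articles)
print(median(scores))           # median coverage across articles
print(quantiles(scores, n=4))   # quartile cut points
```

With 52 real articles, the top-quartile cut point from `quantiles` corresponds to the ">72% of metrics" figure reported above.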

HortScience ◽  
1998 ◽  
Vol 33 (3) ◽  
pp. 554c-554
Author(s):  
Sonja M. Skelly ◽  
Jennifer Campbell Bradley

Survey research has a long history of use in the social sciences. With growing interest in social science research in horticulture, survey methodology needs to be explored. To conduct proper and accurate survey research, a valid and reliable instrument must be used. In many cases, however, an existing measurement tool designed for the specific research variables is unavailable; thus, an understanding of how to design and evaluate a survey instrument is necessary. Currently, there are no guidelines in horticulture research for developing survey instruments for use with human subjects. This presents a problem when attempting to compare and reference similar research. This workshop will explore the methodology involved in preparing a survey instrument; topics covered will include defining objectives for the survey, constructing questions, pilot testing the survey, and obtaining reliability and validity information. In addition to these topics, examples will be provided to illustrate how to complete each step. At the conclusion of the session, a discussion will be initiated for others to share information and experiences in creating survey instruments.


2017 ◽  
Vol 7 (1) ◽  
Author(s):  
Gopi K Khanal

This descriptive-analytical paper on ensuring quality in survey research discusses the management of errors in administering surveys. It aims to help social science researchers ensure quality in both the process and the outcomes of survey research. It begins with brief conceptual underpinnings of survey research, discusses reliability and validity tests in surveys, elaborates the notion of the total survey error approach, and suggests measures for handling survey errors. Given the wide application and substantial costs of survey research, sampling and non-sampling errors have always been major concerns for the quality of survey research. Survey research can be instrumental in generating knowledge provided survey errors are handled properly. Though a variety of measures are in practice to ensure the quality of survey data, this paper emphasizes the total survey error approach, which stresses total quality management in the collection, analysis, and interpretation of data. Approaching survey data from the total survey error perspective would yield fruitful results from survey research.


2010 ◽  
Vol 6 (1) ◽  
Author(s):  
Mieke Beckers ◽  
Jaak Billiet

Direct democratic participation through referenda is often contested because of the problem of determining referendum questions that avoid confusion or subjectivity. However, detailed knowledge concerning so-called ‘question wording effects’ is available within the domain of survey research. In this body of literature, several wording effects, such as the use of suggestive wording and the ambiguity of yes/no questions, have been well documented. Yet, despite the similarities between referendum and survey questions, knowledge from survey methodology is rarely employed within the literature on referenda. The present study discusses a number of question wording effects studied in survey research and shows their relevance in referendum settings. Moreover, this article explores these effects in twelve local referenda in Flanders. Building on this empirical evidence, we conclude with a number of precise guidelines regarding the quality of referendum questions.


PLoS ONE ◽  
2021 ◽  
Vol 16 (9) ◽  
pp. e0257344
Author(s):  
Rafael Saltos-Rivas ◽  
Pavel Novoa-Hernández ◽  
Rocío Serrano Rodríguez

In this study, we report on a Systematic Mapping Study (SMS) of how the quality of the quantitative instruments used to measure digital competencies in higher education is assured. Seventy-three primary studies were selected from the literature published in the last 10 years in order to 1) characterize the literature, 2) evaluate the reporting practice of quality assessments, and 3) analyze which variables explain such reporting practices. The results indicate that most of the studies focused on medium to large samples of European university students attending social science programs. Ad hoc, self-reported questionnaires measuring various digital competence areas were the most commonly used method of data collection. The studies were mostly published in low-tier journals. Of the studies, 36% did not report any quality assessment, while fewer than 50% covered both reliability and validity assessments at the same time. In general, the studies had a moderate to high depth of evidence on the assessments performed. We found that studies measuring several areas of digital competence were more likely to report quality assessments. In addition, we estimate that the probability of finding studies with acceptable or good reporting practices increases over time.


2019 ◽  
Vol 35 (3) ◽  
pp. 413-417 ◽  
Author(s):  
Helen L. Ball

There is an established methodology for conducting survey research that aims to ensure rigorous research and robust outputs. With the advent of easy-to-use online survey platforms, however, the quality of survey studies has declined. This article summarizes the pros and cons of online surveys and emphasizes the key principles of survey research, for example, questionnaire validation and sample selection. Numerous texts are available to guide researchers in conducting robust survey research online; however, this is neither a quick nor an easy undertaking. While online survey websites and software are useful for assisting in questionnaire design and delivery, they can also introduce sources of bias. Researchers considering conducting online surveys are encouraged to read carefully about how the principles of survey research can be applied to online formats in order to reduce bias and enhance rigor. In addition to alerting researchers to the pitfalls of online surveys, this article aims to equip readers of this journal with the knowledge to critically appraise publications based on online surveys.


Author(s):  
Mitchell Seligson ◽  
Daniel E. Moreno Morales

Controlling field interview quality is a major challenge in survey research. Even in high-quality surveys, interviewers often make mistakes that ultimately add error to the results, including visiting the wrong locations, skipping questions or entire pages, failing to read the complete wording of the questions, or even committing fraud while filling out responses. Survey research conducted in developing countries has to deal with these problems more frequently than research conducted in advanced industrial countries. Computer-assisted personal interview (CAPI) systems provide an ideal opportunity for improving data quality by eliminating many sources of error and allowing unprecedented control of the field process. The Latin American Public Opinion Project’s (LAPOP) experience using ADGYS, an Android-based CAPI system, provides useful information on how this technology reduces interviewer-related error, offers opportunities to control the field process, and ultimately significantly improves the reliability and validity of survey data.
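The interviewer errors a CAPI app can catch in the field (skipped required questions, implausible answers) amount to rule checks against a questionnaire schema. A minimal sketch of that idea; the schema format and function names here are hypothetical illustrations, not ADGYS's actual API:

```python
def validate_interview(record, schema):
    """Flag field errors a CAPI system can catch at entry time:
    skipped required questions and out-of-range answers."""
    problems = []
    for qid, rule in schema.items():
        value = record.get(qid)
        if value is None:
            if rule.get("required", True):
                problems.append(f"{qid}: skipped")
            continue
        lo, hi = rule.get("range", (None, None))
        if lo is not None and not (lo <= value <= hi):
            problems.append(f"{qid}: out of range")
    return problems

schema = {"age": {"range": (16, 110)}, "vote": {"required": True}}
record = {"age": 7}  # implausible age, vote question skipped
print(validate_interview(record, schema))  # ['age: out of range', 'vote: skipped']
```

A paper questionnaire defers these checks to post-hoc data cleaning; running them on the device lets the interviewer correct the record while the respondent is still present.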


2019 ◽  
Vol 118 (11) ◽  
pp. 552-562
Author(s):  
Nguyen Thi Ngan ◽  
Bui Huy Khoi

This research aims to assess the service quality of industrial parks (IP) from the viewpoint of FDI (foreign direct investment) firms in Vietnam. Data were collected from 270 FDI firms in the Vietnam–Singapore Industrial Parks (VSIP) in Vietnam. The proposed research model was based on prior research on service quality. Cronbach's Alpha, Average Variance Extracted (Pvc), rho (ρA), and Composite Reliability (Pc) tested the reliability and validity of the scale. The analysis results showed that four factors affected the service quality of the industrial park in Vietnam: the tangibles of VSIP, the reliability of VSIP, the empathy of FDI investors, and their assurance. The responsiveness of VSIP did not affect the service quality of the industrial park. The article focuses on two main issues: the analysis framework of the quantitative model and the implications of the results for developing industrial park services. A limitation of the research is that it covered only VSIP in Vietnam.
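Of the reliability statistics named above, Cronbach's alpha is the most widely used: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch of the computation on toy Likert data (illustrative values, not the study's 270-firm sample):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a k-item scale.

    responses: list of rows, one per respondent, each with k item scores.
    """
    k = len(responses[0])
    items = list(zip(*responses))                       # one column per item
    item_vars = sum(pvariance(col) for col in items)    # sum of item variances
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Four respondents answering a 3-item Likert scale
data = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 4, 3],
]
print(round(cronbach_alpha(data), 3))  # → 0.975
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, though the threshold is a rule of thumb rather than a test.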


2012 ◽  
Vol 127 (1) ◽  
pp. 15-19 ◽  
Author(s):  
A Mirza ◽  
L McClelland ◽  
M Daniel ◽  
N Jones

Background: Many ENT conditions can be treated in the emergency clinic on an ambulatory basis. Our clinic had traditionally been run by foundation year two and specialty trainee doctors (period one). However, with perceived increasing inexperience, a dedicated registrar was assigned to support the clinic (period two). This study compared admission and discharge rates for periods one and two to assess whether greater registrar input affected the discharge rate; an increase in discharge rate was used as a surrogate marker of efficiency. Method: Data were collected prospectively for patients seen in the ENT emergency clinic between 1 August 2009 and 31 July 2011. Time period one included data from patients seen between 1 August 2009 and 31 July 2010, and time period two included data collected between 1 August 2010 and 31 July 2011. Results: The introduction of greater registrar support increased the number of patients who were discharged, and led to a reduction in the number of children requiring the operating theatre. Conclusion: The findings, which were determined using clinic outcomes as markers of the quality of care, highlighted the benefits of increasing senior input within the ENT emergency clinic.

