Multitasking during an Online Survey: Prevalence, Predictors and Impact on Data Quality

Author(s):  
Carmen Mª León ◽  
Eva Aizpurua ◽  
Vidal Díaz de Rada

2021 ◽  
Vol 7 (1) ◽  
pp. e18956 ◽  
DOI: 10.2196/18956
Author(s):  
Zeinab Gura Roka ◽  
Elvis Omondi Oyugi ◽  
Jane Njoki Githuku ◽  
Evalyne Kanyina ◽  
Mark Obonyo ◽  
...  

Background: In 2014, Kenya’s Field Epidemiology and Laboratory Training Program (FELTP) initiated a 3-month field-based frontline training, the Field Epidemiology Training Program (FETP-F), for local public health workers.
Objective: This study aimed to measure the effect of FETP-F on participants’ workplace practices regarding the quality and consistency of public health data, critical interaction with public health data, and improvements in on-time reporting (OTR).
Methods: Between February and April 2017, FELTP conducted a mixed-methods evaluation via an online survey to examine outcomes achieved among all 215 graduates from 2014 and 2015. Data quality assessment (DQA) and data consistency assessment (DCA) scores, OTR percentages, and ratings of the training experience were the quantitative measures, tracked from baseline and then at 6-month intervals up to 18 months after completion of the training. The qualitative component consisted of semistructured face-to-face interviews and observations. Quantitative data were analyzed using descriptive statistics and one-way analysis of variance (ANOVA). Qualitative data were transcribed and analyzed to identify key themes and dimensions.
Results: In total, 103 (47%) graduates responded to the survey. Quantitative analyses showed that the training significantly increased mean DQA and OTR scores, while the increase in mean DCA scores was not significant. Qualitative analyses found that 68% of respondents acquired new skills, 83% applied those skills to their day-to-day work, and 91% improved their work methods.
Conclusions: FETP-F improved overall data quality and OTR at the agency level but had minimal impact on data consistency between local, county, and national public health agencies. Participants reported that they acquired practical skills that improved data collation, analysis, and OTR.
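For readers unfamiliar with the analysis named in the Methods, a one-way ANOVA across assessment waves can be sketched in a few lines. The scores, wave labels, and group sizes below are hypothetical, not the FELTP data:

```python
import pandas as pd
from scipy import stats

# Hypothetical DQA scores tracked at baseline and at 6-month intervals
# up to 18 months post-training; values are illustrative only.
scores = pd.DataFrame({
    "wave": ["baseline"] * 4 + ["6m"] * 4 + ["12m"] * 4 + ["18m"] * 4,
    "dqa": [55, 60, 58, 52, 68, 72, 70, 66, 75, 78, 74, 71, 80, 82, 79, 77],
})

# One-way ANOVA: do mean DQA scores differ across assessment waves?
groups = [g["dqa"].to_numpy() for _, g in scores.groupby("wave")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```

A significant F statistic indicates that at least one wave's mean differs; follow-up pairwise comparisons would locate where the change occurs.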


2020 ◽  
Vol 13 (1) ◽  
pp. 1-10 ◽  
Author(s):  
Bingbing Zhang ◽  
Sherice Gearhart

2016 ◽  
Vol 25 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Catherine A. Roster ◽  
Gerald Albaum ◽  
Scott M. Smith

Author(s):  
Alexandru Cernat ◽  
Melanie Revilla

Time and cost pressures, the availability of alternative sources of data, and societal changes are driving a move from traditional face-to-face surveys to web or mixed-mode data collection. While we know that there are mode differences between web and face-to-face (the presence or absence of an interviewer, the type of stimuli, etc.), it is not clear to what extent these differences threaten the comparability of data collected in face-to-face and web surveys. In this article, we investigate the differences in measurement quality between the European Social Survey (ESS) Round 8 and the CROss-National Online Survey (CRONOS) panel. We address three main research questions: (1) Do we observe differences in measurement quality between face-to-face and web for the same people and questions? (2) Can we explain individual-level differences in data quality using respondents’ characteristics? (3) Does measurement equivalence (metric and scalar) hold across the ESS Round 8 and the CRONOS panel? The results suggest that: (1) the measurement mode effect between web and face-to-face as implemented in the ESS (i.e., using show cards) is not very large; (2) none of the variables considered consistently explains individual differences in mode effects; and (3) measurement equivalence often holds for the topics studied.
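Formal metric and scalar equivalence testing is done with multi-group confirmatory factor analysis; as a rough, self-contained illustration of the underlying idea (comparable factor loadings across modes), the sketch below compares first-principal-component loadings per mode on simulated data. Everything here is hypothetical and is not the authors' procedure:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulate a 4-item scale measured in two modes; all data are hypothetical.
def simulate(n, noise):
    latent = rng.normal(size=n)
    return pd.DataFrame({f"item{i}": 0.8 * latent + rng.normal(scale=noise, size=n)
                         for i in range(1, 5)})

f2f, web = simulate(500, 0.6), simulate(500, 0.8)

# Crude stand-in for metric equivalence: compare first-principal-component
# loadings per mode; roughly similar loadings are a necessary (not
# sufficient) condition for comparing relationships across modes.
def pc_loadings(df):
    vals, vecs = np.linalg.eigh(df.corr().to_numpy())
    pc = vecs[:, -1]  # eigenvector of the largest eigenvalue
    return pc * np.sqrt(vals[-1]) * np.sign(pc.sum())

print("f2f loadings:", pc_loadings(f2f).round(2))
print("web loadings:", pc_loadings(web).round(2))
```

A proper test would fit a multi-group CFA with loadings (metric) and then intercepts (scalar) constrained equal across modes, comparing model fit at each step.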


2018 ◽  
Vol 37 (3) ◽  
pp. 435-445
Author(s):  
Rebecca Hofstein Grady ◽  
Rachel Leigh Greenspan ◽  
Mingnan Liu

Across two studies, we aimed to determine the row and column sizes in matrix-style questions that best optimize participant experience and data quality for computer and mobile users. In Study 1 (N = 2,492), respondents completed 20 questions (comprising four short scales) presented in a matrix grid (converted to item-by-item format on mobile phones). We varied the number of rows (5, 10, or 20) and columns (3, 5, or 7) of the matrix on each page. Outcomes included both data quality measures (straightlining, item skip rate, and internal reliability of scales) and survey experience measures (dropout rate, rating of survey experience, and completion time). For row size, dropout rate and reported survey difficulty increased as row size increased. For column size, seven columns increased the completion time of the survey, while three columns produced lower scale reliability. There was no interaction between row and column size. The best overall size tested was a 5 × 5 matrix. In Study 2 (N = 2,570), we tested whether the effects of row size replicated with a single 20-item scale that crossed page breaks and found that participant survey ratings were still best in the five-row condition. These results suggest that having around five (or fewer) rows per page and around five columns of answer options gives the optimal survey experience, with equal or better data quality, when using matrix-style questions in an online survey. These recommendations will help researchers gain the benefits of using matrices in their surveys with the fewest downsides of the format.
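The three data quality outcomes named above (straightlining, item skip rate, and internal reliability) are straightforward to compute from a response matrix. A minimal sketch with hypothetical responses and column names:

```python
import numpy as np
import pandas as pd

# Hypothetical responses to a 5-item scale (answers 1-7, NaN = skipped);
# data and column names are illustrative only.
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.integers(1, 8, size=(200, 5)).astype(float),
                     columns=[f"q{i}" for i in range(1, 6)])
items.iloc[0] = 4          # plant a straightliner
items.iloc[1, 2] = np.nan  # plant a skipped item

# Straightlining: identical answers across all items on the page.
straightline_rate = (items.nunique(axis=1) == 1).mean()

# Item skip rate: share of item cells left unanswered.
skip_rate = items.isna().mean().mean()

# Cronbach's alpha: internal consistency of the scale.
def cronbach_alpha(df):
    df = df.dropna()
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(f"straightlining: {straightline_rate:.2%}, skips: {skip_rate:.2%}, "
      f"alpha: {cronbach_alpha(items):.2f}")
```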


2020 ◽  
Vol 25 (4) ◽  
pp. 489-503
Author(s):  
Vitaly Brazhkin

Purpose: The purpose of this paper is to provide a comprehensive review of the respondent-fraud phenomenon in online panel surveys, delineate data quality issues in surveys of broad versus narrow populations, alert fellow researchers to the higher incidence of respondent fraud in online panel surveys of narrow populations, such as logistics professionals, and recommend ways to protect the quality of data received from such surveys.
Design/methodology/approach: This general review paper has two parts, descriptive and instructional. The current state of online survey and panel data use in supply chain research is examined first through a survey-method literature review. A more focused understanding of fraud in surveys is then developed through an analysis of online panel industry literature and academic psychology literature. Common survey design and data cleaning recommendations are critically assessed for their applicability to narrow populations. A survey of warehouse professionals is used to illustrate fraud detection techniques and to glean additional, supply chain specific data protection recommendations.
Findings: Surveys of narrow populations, such as those typically targeted by supply chain researchers, are much more prone to respondent fraud. To protect and clean survey data, supply chain researchers need to use many measures that differ from those commonly recommended in the methodological survey literature.
Research limitations/implications: For the first time, the need to distinguish between narrow- and broad-population surveys has been stated with respect to data quality issues. The confusion and previously reported “mixed results” in literature reviews on the subject are explained, and a clear direction for future research is suggested: the two categories should be considered separately.
Practical implications: Specific fraud protection advice is provided to supply chain researchers on the strategic choices and specific aspects of all phases of surveying narrow populations: survey preparation, administration, and data cleaning.
Originality/value: This paper provides a comprehensive review and analysis of respondent fraud in online surveys, an issue poorly understood and rarely addressed in academic research. Drawing on literature from several fields, it offers, for the first time, a systematic set of recommendations for narrow-population surveys by contrasting them with general-population surveys.
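As an illustration of the kind of data cleaning discussed (not the paper's own procedure), a minimal screening sketch that flags speeders, duplicate IPs, and implausible factual answers for a narrow population; column names and thresholds are hypothetical:

```python
import pandas as pd

# Hypothetical panel responses from a survey of warehouse professionals;
# column names and values are illustrative, not from the paper.
df = pd.DataFrame({
    "resp_id":       [1, 2, 3, 4, 5],
    "duration_sec":  [610, 95, 580, 90, 640],    # survey completion time
    "ip":            ["a", "b", "b", "c", "d"],  # coarse duplicate check
    "facility_sqft": [120_000, 9_000_000, 80_000, 7_500_000, 150_000],
})

median_t = df["duration_sec"].median()
flags = pd.DataFrame({
    # Speeders: completed far faster than the median respondent.
    "speeder": df["duration_sec"] < median_t / 3,
    # Duplicate IPs can indicate one person posing as several respondents.
    "dup_ip": df["ip"].duplicated(keep=False),
    # Implausible factual answers given the claimed professional role.
    "implausible": df["facility_sqft"] > 5_000_000,
})
df["suspect"] = flags.any(axis=1)
print(df[["resp_id", "suspect"]])
```

Flagged cases would typically be reviewed rather than dropped automatically, since any single indicator can also catch legitimate respondents.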


2017 ◽  
Vol 50 (4) ◽  
pp. 1005-1036 ◽  
Author(s):  
Charles Breton ◽  
Fred Cutler ◽  
Sarah Lachance ◽  
Alex Mierke-Zatwarnicki

Election studies must balance sample size, cost, and data quality. The 2015 Canadian Election Study was the first CES to employ a full mixed-mode design, aiming to exploit the strengths of each mode while preserving enough commonality to compare them. This paper examines the phone interviews conducted by ISR-York and the online questionnaires completed by panellists obtained from a sample provider. We compare data quality and representativeness, conduct a comprehensive comparison of the distributions of responses across modes, and carry out a comparative analysis of inferences about voting. We find that the cost and power advantages of the online mode will likely make it the mode of choice for subsequent election studies.
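A basic cross-mode comparison of response distributions can be run as a chi-square test of independence on a mode-by-response contingency table. The counts below are hypothetical, not CES data:

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical distribution of a vote-intention item by survey mode;
# counts are illustrative only.
counts = pd.DataFrame(
    {"phone":  [220, 180, 90, 60],
     "online": [510, 470, 260, 200]},
    index=["Liberal", "Conservative", "NDP", "Other"],
)

# Chi-square test of independence: does the response distribution
# differ between phone and online modes?
chi2, p, dof, _ = chi2_contingency(counts.to_numpy())
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```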

