Careless Responding
Recently Published Documents


TOTAL DOCUMENTS: 48 (last five years: 33)

H-INDEX: 7 (last five years: 2)

2022, pp. 001316442110694
Author(s): Chet Robie, Adam W. Meade, Stephen D. Risavy, Sabah Rasheed

The effects of different response option orders on survey responses have been studied extensively. The typical research design examines differences in response characteristics between conditions with the same item stems but response option orders that differ in valence—either incrementally arranged (e.g., strongly disagree to strongly agree) or decrementally arranged (e.g., strongly agree to strongly disagree). The present study added two further experimental conditions—randomly incremental or decremental, and completely randomized. All items were presented in an item-by-item format. We also extended previous studies by examining response option order effects on careless responding, correlations between focal predictors and criteria, and participant reactions, all while controlling the false discovery rate and focusing on the size of effects. In a sample of 1,198 university students, we found few to no response option order effects on a recognized personality assessment with respect to measurement equivalence, scale mean differences, item-level distributions, or participant reactions. However, the completely randomized response option order condition differed on several careless responding indices, suggesting avenues for future research.
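As a rough illustration of the kind of multiple-comparison correction this abstract refers to, the sketch below implements the Benjamini-Hochberg procedure for controlling the false discovery rate across a set of p-values. This is a generic Python example, not the authors' analysis code, and the example p-values are invented.

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Return a boolean array marking which hypotheses are rejected at FDR level alpha."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)                          # rank p-values in ascending order
    thresholds = alpha * np.arange(1, m + 1) / m   # BH step-up thresholds (k/m) * alpha
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.nonzero(below)[0].max()             # largest rank meeting its threshold
        rejected[order[:k + 1]] = True             # reject it and every smaller p-value
    return rejected

# Example: invented p-values from several condition comparisons
print(benjamini_hochberg([0.001, 0.012, 0.030, 0.200, 0.440]))
```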


2021
Author(s): Charlotte Rebecca Pennington, Andrew Jones, Loukia Tzavella, Christopher D Chambers, Katherine Susan Button

Participant crowdsourcing platforms (e.g., MTurk, Prolific) offer numerous advantages to addiction science, permitting access to hard-to-reach populations and enhancing the feasibility of complex experimental, longitudinal, and intervention studies. Yet these advantages are met with equal concerns about participant non-naivety, motivation, and careless responding, which, if not considered, can greatly compromise data quality. In this article, we discuss an alternative crowdsourcing avenue that overcomes these issues whilst presenting its own unique advantages: crowdsourcing researchers through big team science. First, we review several contemporary efforts within psychology (e.g., ManyLabs, Psychological Science Accelerator) and the benefits these would yield if they were more widely implemented in addiction science. We then outline our own consortium-based approach to empirical dissertations: a grassroots initiative that trains students in reproducible big team addiction science. In doing so, we discuss potential challenges and their remedies, and provide resources to help addiction researchers develop such initiatives. Through researcher crowdsourcing, together we can answer fundamental scientific questions about substance use and addiction, build a literature that is representative of a diverse population of researchers and participants, and ultimately achieve our goal of promoting better global health.


2021
Author(s): Andrew Jones, Charlotte Rebecca Pennington

Crowdsourcing — the process of using the internet to outsource research participation to ‘workers’ — has considerable benefits, enabling research to be conducted quickly, efficiently, and responsively, diversifying participant recruitment, and allowing access to hard-to-reach samples. One of the biggest threats to this method of online data collection, however, is the prevalence of careless responders, who can significantly affect data quality. The aims of this preregistered systematic review and meta-analysis were to: (i) examine the prevalence of screening for careless responding in crowdsourced alcohol-related studies; (ii) estimate the pooled prevalence of careless responding; and (iii) identify potential moderators of careless responding across studies. Our review identified 96 eligible studies (~126,130 participants), of which 51 utilised at least one measure of careless responding (53.2%, 95% CI 42.7% to 63.3%; ~75,334 participants). Of these, 48 reported the number of participants identified by their careless responding method(s), and the pooled prevalence rate was ~11.7% (95% CI 7.6% to 16.5%). Studies using the MTurk platform identified more careless responders than other platforms, and the number of careless response items was positively associated with prevalence rates. The most common measure of careless responding was an attention check question, followed by implausible response times. We suggest that researchers plan for such attrition when crowdsourcing participants, and we provide practical recommendations for handling and reporting careless responding in alcohol research.
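To make the pooled-prevalence idea concrete, the sketch below shows one common way such a figure can be computed: a random-effects (DerSimonian-Laird) meta-analysis of logit-transformed study proportions. This is an illustrative Python example with invented study counts; the review's actual analysis pipeline and software are not reproduced here.

```python
import numpy as np

def pooled_prevalence(events, totals):
    """Random-effects pooled proportion with an approximate 95% CI."""
    events = np.asarray(events, float)
    totals = np.asarray(totals, float)
    y = np.log(events / (totals - events))         # logit-transformed proportions
    v = 1.0 / events + 1.0 / (totals - events)     # approximate sampling variances

    w = 1.0 / v                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)             # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)        # between-study variance (DL estimator)

    w_re = 1.0 / (v + tau2)                        # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    inv = lambda z: 1.0 / (1.0 + np.exp(-z))       # back-transform to a proportion
    return inv(y_re), (inv(y_re - 1.96 * se), inv(y_re + 1.96 * se))

# Hypothetical per-study counts of flagged careless responders
est, ci = pooled_prevalence([30, 55, 12], [300, 400, 150])
print(f"pooled prevalence = {est:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```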


2021, Vol 50 (5), pp. 1401-1434
Author(s): Won-Woo Park, Yoowoo Lee, Sunghyuck Mah, Jayoung Kim, Suhyun Bae, ...

Author(s): Jason L. Huang, Zhonghao Wang

Careless responding, also known as insufficient effort responding, refers to survey/test respondents providing random, inattentive, or inconsistent answers to question items due to a lack of effort in conforming to instructions, interpreting items, and/or providing accurate responses. Researchers often use these two terms interchangeably to describe deviant behaviors in survey/test responding that threaten data quality. Careless responding threatens the validity of research findings by introducing random and systematic errors. Specifically, careless responding can reduce measurement reliability, while under specific circumstances it can also inflate the substantive relations between variables. Numerous factors can explain why careless responding happens (or does not happen), such as individual difference characteristics (e.g., conscientiousness), survey characteristics (e.g., survey length), and transient psychological states (e.g., positive and negative affect). To identify potential careless responding, researchers can use procedural detection methods and post hoc statistical methods. For example, researchers can insert detection items (e.g., infrequency items, instructed response items) into the questionnaire, monitor participants’ response time, and compute statistical indices such as the psychometric antonym/synonym, Mahalanobis distance, individual reliability, individual response variability, and model fit statistics. Applying multiple detection methods can better capture careless responding by providing convergent evidence. Comparing results based on data with and without careless respondents can help evaluate the degree to which the data are influenced by careless responding. To handle data contaminated by careless responding, researchers may choose to filter out identified careless respondents, recode careless responses as missing data, or include careless responding as a control variable in the analysis. To prevent careless responding, researchers have tried various deterrence methods developed from motivational and social interaction theories. These methods include presenting warning, reward, or educational messages; proctoring the response process; and designing user-friendly surveys. Interest in careless responding has been growing not only in business and management but also in other related disciplines. Future research and practice on careless responding in business and management can also benefit from findings in other related disciplines.
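As a concrete, hedged illustration of three of the statistical indices named above (the longstring index, individual response variability, and Mahalanobis distance), the following Python sketch computes them for a small respondents-by-items matrix of Likert responses. The definitions are the commonly used ones and the toy data are invented; this is not a reference implementation from any particular package.

```python
import numpy as np

def longstring(row):
    """Length of the longest run of identical consecutive responses."""
    best = run = 1
    for a, b in zip(row[:-1], row[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def irv(row):
    """Individual response variability: within-person SD across items."""
    return float(np.std(row, ddof=1))

def mahalanobis_sq(data):
    """Squared Mahalanobis distance of each respondent from the sample centroid."""
    x = np.asarray(data, float)
    diff = x - x.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(x, rowvar=False))   # pseudo-inverse handles singular covariance
    return np.einsum('ij,jk,ik->i', diff, cov_inv, diff)

# Toy 5-point Likert data: the last respondent straight-lines every item
data = np.array([[1, 2, 2, 3, 1, 4, 2],
                 [5, 4, 4, 3, 5, 2, 4],
                 [3, 3, 3, 3, 3, 3, 3]])
print([longstring(r) for r in data])          # longest identical run per respondent
print([round(irv(r), 2) for r in data])       # within-person variability per respondent
print(np.round(mahalanobis_sq(data), 2))      # multivariate outlyingness per respondent
```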


2021, Vol 51 (2), pp. 231-256
Author(s): Luka Mandić, Ksenija Klasnić

It is often assumed that survey results reflect only the quality of the sample and of the measuring instruments used in the survey. In reality, various other phenomena can affect the results, yet these influences are often neglected when conducting surveys. This study aimed to test the influence of several method effects on survey results: item wording, confirmatory bias, careless responding, and acquiescence bias. Using a split-ballot survey design with online questionnaires, we collected data from 791 participants. We tested whether these method effects influenced mean values, item correlations, construct correlations, model fit, and construct measurement invariance. The instruments used to test these influences were from the domains of personality and gender inequality, and their items were adapted according to the method effect being tested. All tested method effects, except careless responding, had a statistically significant effect on at least one component of the analysis. Item wording and confirmatory bias affected mean values, model fit, and measurement invariance. Controlling for acquiescence bias improved model fit. This paper confirms that the tested method effects should be carefully considered when using surveys in research and suggests guidelines on how to do so.


Mathematics, 2021, Vol 9 (17), pp. 2035
Author(s): Álvaro Briz-Redón

Respondent burden refers to the effort required of a respondent to answer a questionnaire. Although this concept was introduced decades ago, few studies have focused on its quantitative detection. In this paper, a face-to-face survey and a telephone survey conducted in Valencia (Spain) are analyzed. The presence of respondent burden is studied in terms of both item non-response rates and careless response rates. In particular, two moving-window statistics, based on the coefficient of unalikeability and the average longstring index, are proposed for characterizing careless responding. Item non-response and careless response rates are modeled for each survey using mixed-effects models that include respondent-level and question-level covariates as well as temporal random effects to assess whether respondent burden builds up during the questionnaire. The results suggest that the sociodemographic characteristics of the respondents and the type of question affect item non-response and careless response rates. Moreover, the estimates of the temporal random effects indicate that item non-response and careless response rates are time-varying, suggesting the presence of respondent burden. In particular, an increasing trend in item non-response rates was found in the telephone survey, which supports the burden hypothesis. Regarding careless responding, despite some temporal variation, no clear trend was identified.
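The sketch below illustrates, under stated assumptions, the two moving-window quantities the abstract describes: the coefficient of unalikeability and the longest run of identical answers (a longstring-style measure), each computed over sliding windows of consecutive responses. The exact windowing and averaging choices of the paper may differ, and the answer vector here is invented.

```python
from collections import Counter

def unalikeability(values):
    """1 minus the sum of squared category proportions; 0 means all answers identical."""
    n = len(values)
    counts = Counter(values)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def longest_run(values):
    """Length of the longest run of identical consecutive answers in the window."""
    best = run = 1
    for a, b in zip(values[:-1], values[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

def moving_window(responses, window=5, stat=unalikeability):
    """Apply a statistic to each window of consecutive responses."""
    r = list(responses)
    return [round(stat(r[i:i + window]), 3) for i in range(len(r) - window + 1)]

# Invented answer sequence with a straight-lined stretch in the middle
answers = [1, 2, 1, 3, 3, 3, 3, 3, 3, 2, 4, 1]
print(moving_window(answers, stat=unalikeability))   # dips toward 0 over the straight-lined span
print(moving_window(answers, stat=longest_run))      # peaks over the straight-lined span
```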


2021, pp. 001316442110047
Author(s): Ulrich Schroeders, Christoph Schmidt, Timo Gnambs

Careless responding is a bias in survey responses that disregards the actual item content, constituting a threat to the factor structure, reliability, and validity of psychological measurements. Different approaches have been proposed to detect aberrant responses, such as probing questions that directly assess test-taking behavior (e.g., bogus items), auxiliary or paradata (e.g., response times), or data-driven statistical techniques (e.g., Mahalanobis distance). In the present study, gradient boosted trees, a state-of-the-art machine learning technique, are introduced to identify careless respondents. The performance of the approach was compared with established techniques previously described in the literature (e.g., statistical outlier methods, consistency analyses, and response pattern functions) using simulated data and empirical data from a web-based study in which diligent versus careless response behavior was experimentally induced. In the simulation study, gradient boosting machines outperformed traditional detection mechanisms in flagging aberrant responses. However, this advantage did not transfer to the empirical study. In terms of precision, the results of both the traditional and the novel detection mechanisms were unsatisfactory, even though the latter incorporated response times as additional information. The comparison between the results of the simulation and the online study showed that responses in real-world settings seem to be much more erratic than simulation studies would suggest. We critically discuss the generalizability of currently available detection methods and provide an outlook on future research on the detection of aberrant response patterns in survey research.
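A hedged sketch of the general approach follows: training a gradient boosting classifier on per-respondent features (e.g., screening indices and response times) with labels from an experimentally induced careless versus diligent manipulation. The features, labels, and tuning below are invented placeholders for illustration and do not reproduce the study's actual setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400
# Hypothetical per-respondent features: longstring, response variability, seconds per item
X = np.column_stack([
    rng.integers(1, 15, n).astype(float),   # longstring index
    rng.uniform(0.2, 2.0, n),               # individual response variability
    rng.uniform(1.0, 12.0, n),              # mean seconds per item
])
# Hypothetical label: 1 = experimentally induced careless condition (plus some noise)
y = ((X[:, 0] > 8) & (X[:, 2] < 3.0)) | (rng.random(n) < 0.05)
y = y.astype(int)

# Gradient boosted trees scored with cross-validated ROC AUC
clf = GradientBoostingClassifier(n_estimators=200, learning_rate=0.1, max_depth=3)
print(cross_val_score(clf, X, y, cv=5, scoring='roc_auc').round(3))
```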

