Adjusting Self-Reported Attitudinal Data for Mischievous Respondents

2012 ◽  
Vol 54 (1) ◽  
pp. 129-145 ◽  
Author(s):  
Michael R. Hyman ◽  
Jeremy J. Sierra

For various reasons, survey participants may submit phoney attitudinal self-reports crafted to slip past researcher scepticism. After suggesting reasons for this new category of problematic survey participant, the mischievous respondent (MR), and reviewing the related response bias, faking, inattentive respondent, and outlier literatures, an initial algorithm for removing such respondents from polychotomous attitudinal data sets is posited. Applied to four data sets, this algorithm marginally reduced EFA cross-loadings and improved CFA model fit. Although purging subtly suspicious cases is not standard practice, the extant literature indicates that such algorithms can substantially reduce artifactual statistical findings.
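The abstract does not reproduce the paper's algorithm; purely as an illustrative sketch, a screen for suspicious cases might combine a long-string (straight-lining) index with a reversed-item consistency check. All thresholds, item pairings, and example responses below are invented, not the authors' method:

```python
# Hypothetical sketch of a mischievous-respondent screen (not the paper's
# actual algorithm). A case is flagged if it straight-lines (a long run of
# identical answers) or answers a reversed-item pair at the same extreme.

def long_string_index(responses):
    """Length of the longest run of identical consecutive responses."""
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def flag_suspicious(responses, reversed_pairs, scale_max=5, run_cutoff=8):
    """Return True if the case looks suspicious under either heuristic."""
    if long_string_index(responses) >= run_cutoff:
        return True
    # For a reversed pair (i, j), item j should roughly mirror item i;
    # endorsing both at the same extreme is internally inconsistent.
    for i, j in reversed_pairs:
        if responses[i] == responses[j] and responses[i] in (1, scale_max):
            return True
    return False

clean = [3, 4, 2, 5, 3, 2, 4, 3, 2, 4]
straight = [5] * 10                              # straight-liner
contradictory = [5, 3, 3, 5, 2, 4, 3, 2, 3, 3]   # items 0 and 3 are a reversed pair

print(flag_suspicious(clean, [(0, 3)]))          # False
print(flag_suspicious(straight, [(0, 3)]))       # True
print(flag_suspicious(contradictory, [(0, 3)]))  # True
```

Real screens would combine many more indices (response time, even-odd consistency, Mahalanobis distance) before discarding a case.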

Author(s):  
Alicia A. Stachowski ◽  
John T. Kulas

Abstract. The current paper explores whether self- and observer reports of personality are properly viewed through a contrasting lens (as opposed to a more consonant framework). Specifically, we challenge the assumption that self-reports are more susceptible to certain forms of response bias than informant reports are. We do so by examining whether selves and observers are similarly or differently drawn to socially desirable and/or normative influences in personality assessment. Targets rated their own personalities and recommended another person to do the same along shared sets of items diversely contaminated with socially desirable content. The recommended informant then invited a third individual to also rate the original target. Profile correlations, analyses of variance (ANOVAs), and simple patterns of agreement/disagreement consistently converged on a strong normative effect paralleling item desirability, with all three rater types tending to reject socially undesirable descriptors while endorsing desirable indicators. These tendencies were, in fact, more prominent for informants than for self-raters. Taken together, our results provide a note of caution regarding the strategy of using non-self informants as a comforting comparative benchmark in psychological measurement applications.
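A profile correlation of the kind reported above is simply a Pearson correlation computed across items within one target-rater pair. A minimal sketch, with invented ratings on a 1-5 scale:

```python
import math

# Illustrative profile correlation between one target's self-ratings and an
# informant's ratings of that target. Item values are invented for demonstration.

def pearson_r(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

self_ratings      = [5, 4, 2, 5, 1, 4, 3, 5]
informant_ratings = [4, 4, 2, 5, 2, 5, 3, 4]
r = pearson_r(self_ratings, informant_ratings)
print(round(r, 2))
```

Note that a high profile correlation can reflect normative agreement (both raters endorsing desirable items) as much as distinctive agreement, which is precisely the confound the study probes.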


2021 ◽  
Vol 115 (3) ◽  
pp. 204-214
Author(s):  
Michele C. McDonnall ◽  
Zhen S. McKnight

Introduction: The purpose of this study was to investigate the effect of visual impairment and correctable visual impairment (i.e., uncorrected refractive error) on being out of the labor force and on unemployment. The effect of health on labor force status was also investigated. Method: National Health and Nutrition Examination Survey (NHANES) data from 1999 to 2008 (N = 15,650) were used for this study. Participants were classified into three vision status groups: normal, correctable visual impairment, and visual impairment. The statistical analyses utilized were chi-square tests and logistic regression. Results: Having a visual impairment was significantly associated with being out of the labor force, while having a correctable visual impairment was not. Conversely, having a correctable visual impairment was associated with unemployment, while having a visual impairment was not. Being out of the labor force was not significantly associated with health for those with a visual impairment, although it was for those with correctable visual impairments and normal vision. Discussion: Given previous research, it was surprising to find that health was not associated with being out of the labor force for those with visual impairments. Perhaps other disadvantages identified in this study for people with visual impairments contributed to their higher out-of-the-labor-force rates regardless of health. Implications for practitioners: Researchers utilizing national data sets that rely on self-reports to identify visual impairments should realize that some of those who self-identify as visually impaired may actually have correctable visual impairments. Further research is needed to understand why a majority of people with visual impairments are not seeking employment and have removed themselves from the labor force.
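The chi-square analyses described above test whether labor force status is independent of vision group. A hedged sketch of the statistic on an invented 3 x 2 table (the counts are illustrative, not NHANES values):

```python
# Pearson chi-square test of independence for a contingency table given as
# a list of rows. Counts below are invented for illustration only.

def chi_square_statistic(table):
    """Sum over cells of (observed - expected)^2 / expected."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (observed - expected) ** 2 / expected
    return stat

#            in labor force, out of labor force
table = [[900, 100],   # normal vision
         [80,  20],    # correctable visual impairment
         [50,  50]]    # visual impairment
print(round(chi_square_statistic(table), 1))
```

With (rows - 1) x (cols - 1) = 2 degrees of freedom, a statistic this large would be far beyond conventional critical values, mirroring the strong group differences the study reports.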


2020 ◽  
Vol 12 (7) ◽  
pp. 1148
Author(s):  
Rui Guo ◽  
Jingyu Cui ◽  
Guobin Jing ◽  
Shuangxi Zhang ◽  
Mengdao Xing

Spaceborne synthetic aperture radar (SAR) is a powerful tool for worldwide ocean observation, and ship monitoring is a hot topic in ocean surveillance. China's Gaofen-3 (GF3) satellite provides C-band, multi-polarization SAR data, and ocean ship detection is one of its scientific applications. Compared with single-polarization systems, polarimetric systems enable more effective ship detection. In this paper, a generalized extreme value (GEV)-based constant false alarm rate (CFAR) detector using the dual-polarization reflection symmetry metric is proposed for ocean ship detection. The reflection symmetry property differs markedly between metallic targets at sea and the sea surface. In addition, the GEV statistical model is employed for the reflection symmetry statistic because it fits the reflection symmetry probability density function (pdf) well. Five dual-polarimetric GF3 stripmap ocean data sets are used to show the contrast enhancement obtained with reflection symmetry and to investigate the GEV model's fit to the reflection symmetry metric. Detection experiments on these real GF3 data sets verify the effectiveness and efficiency of the GEV model for reflection symmetry and of the model-based ocean ship detector.
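The CFAR step reduces to a quantile computation: once GEV parameters are fitted to the clutter statistic, the detection threshold is the GEV quantile at probability 1 - PFA. A minimal sketch under the standard GEV parameterization (shape ξ, location μ, scale σ); the parameter values below are invented, not fitted to GF3 data:

```python
import math

# CFAR thresholding against a fitted GEV clutter model. Quantile formula:
#   q(p) = mu + sigma * ((-ln p)^(-xi) - 1) / xi   for xi != 0,
#   q(p) = mu - sigma * ln(-ln p)                  in the Gumbel limit xi = 0.

def gev_quantile(p, shape, loc, scale):
    """Inverse CDF of the generalized extreme value distribution."""
    if shape == 0.0:  # Gumbel limit
        return loc - scale * math.log(-math.log(p))
    return loc + scale * ((-math.log(p)) ** (-shape) - 1.0) / shape

def cfar_threshold(pfa, shape, loc, scale):
    """Pixels whose statistic exceeds this value are declared ship candidates."""
    return gev_quantile(1.0 - pfa, shape, loc, scale)

strict = cfar_threshold(pfa=1e-4, shape=0.1, loc=0.0, scale=1.0)
loose = cfar_threshold(pfa=1e-2, shape=0.1, loc=0.0, scale=1.0)
print(strict > loose)  # a stricter false-alarm rate raises the threshold
```

In practice the shape, location, and scale would come from a fit (e.g., maximum likelihood) to the reflection symmetry values of ship-free sea clutter in the scene.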


2021 ◽  
Vol 17 (1) ◽  
pp. 103
Author(s):  
Maria da Glória Lima Leonardo ◽  
Michelle Morelo Pereira ◽  
Felipe Valentini ◽  
Clarissa Pinto Pizarro de Freitas ◽  
Michael F. Steger

Abstract. Response biases are a problem for inventories in positive organizational psychology. This study aims to control response bias in the assessment of meaning of work through two methods: reverse-keyed items and a forced-choice format. The sample consisted of 351 professionals; women constituted 60.0% of the sample. Participants answered two versions of the meaning-of-work instrument: Likert-type items and forced-choice items. For both versions, the unifactorial model was the most appropriate for the data. The results indicate that the random intercepts model fit the Likert data (CFI = .92), as did the forced-choice model (CFI = .97). In addition, the latent dimension of the forced-choice version did not correlate with the acquiescence index (r < .08; p > .05), and approximately 20% of item variance might be due to the method (Likert or forced-choice). The present study illustrates the importance of response bias control in self-report instruments.
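One common way to build an acquiescence index from reverse-keyed items is to average each raw (regular, reversed) item pair: for a consistent respondent the pair mean sits at the scale midpoint, so systematic upward deviation signals yea-saying. The items, pairings, and responses below are invented, not the study's scale:

```python
# Illustrative acquiescence index from balanced (regular + reverse-keyed)
# item pairs on a 1-5 Likert scale. All values are invented for demonstration.

def acquiescence_index(responses, pairs, midpoint=3.0):
    """Mean deviation of raw (item, reversed-item) pair averages from the midpoint."""
    deviations = [(responses[i] + responses[j]) / 2.0 - midpoint for i, j in pairs]
    return sum(deviations) / len(deviations)

pairs = [(0, 1), (2, 3), (4, 5)]          # each pair: (regular, reversed) items
balanced  = [4, 2, 5, 1, 3, 3]            # reversed partner mirrors each item
yea_sayer = [4, 4, 5, 5, 4, 4]            # agrees with every item regardless of keying

print(acquiescence_index(balanced, pairs))   # 0.0
print(acquiescence_index(yea_sayer, pairs))
```

The study's finding that the forced-choice latent dimension did not correlate with such an index is the key evidence that the format suppresses acquiescence.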


2021 ◽  
Author(s):  
Jessica Röhner ◽  
Philipp Thoss ◽  
Astrid Schütz

Research has shown that even experts cannot detect faking above chance levels, but recent studies have suggested that machine learning may help in this endeavor. However, faking behavior differs across faking conditions, previous efforts have not taken these differences into account, and faking indices have yet to be integrated into such approaches. We reanalyzed seven data sets (N = 1,039) with various faking conditions (high and low scores, different constructs, naïve and informed faking, faking with and without practice, different measures [self-reports vs. implicit association tests; IATs]). We investigated the extent to which, and how, machine learning classifiers could detect faking under these conditions, comparing different input data (response patterns, scores, faking indices) and different classifiers (logistic regression, random forest, XGBoost). We also explored the features that classifiers used for detection. Our results show that machine learning has the potential to detect faking, but detection success varies between conditions from chance levels to 100%. Detection also differed by condition (e.g., low-score faking was detected better than high-score faking). For self-reports, response patterns and scores were comparable with regard to faking detection, whereas for IATs, faking indices and response patterns were superior to scores. Logistic regression and random forest worked about equally well and outperformed XGBoost. In most cases, classifiers used more than one feature (faking occurred over different pathways), and the features varied in their relevance. Our research supports the assumption of different faking processes and explains why detecting faking is a complex endeavor.


Author(s):  
David Izydorczyk ◽  
Arndt Bröder

Abstract. Exemplar models are often used in research on multiple-cue judgments to describe the process underlying participants' responses. In these experiments, participants are repeatedly presented with the same exemplars (e.g., poisonous bugs) and instructed to memorize these exemplars and their corresponding criterion values (e.g., the toxicity of a bug). We propose that there are two possible outcomes when participants judge one of the already learned exemplars in a later block of the experiment. Either they have memorized the exemplar and its criterion value and can thus recall the exact value, or they have not learned the exemplar and thus have to judge its criterion value as if it were a new stimulus. We argue that, psychologically, the judgments of participants in a multiple-cue judgment experiment are a mixture of these two qualitatively distinct cognitive processes: judgment and recall. However, the cognitive modeling procedure usually applied makes no distinction between these processes or the data generated by them. We investigated the potential effects of disregarding this distinction on the parameter recovery and model fit of one exemplar model. We present the results of a simulation as well as a reanalysis of five experimental data sets showing that the current combination of experimental design and modeling procedure can bias parameter estimates, impair their validity, and negatively affect the fit and predictive performance of the model. We also present a latent-mixture extension of the original model as a possible solution to these issues.
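The mixture idea can be made concrete with a toy generative sketch: with some probability a trained exemplar's criterion value is recalled exactly, otherwise a noisy judgment is produced. All numbers below are invented; the paper's latent-mixture model estimates this structure rather than assuming it:

```python
import random

# Toy generative mixture of recall and judgment for a learned exemplar.
random.seed(7)

def respond(criterion, p_recall=0.6, noise_sd=5.0):
    """With probability p_recall, reproduce the learned value exactly;
    otherwise produce a noisy exemplar-based judgment."""
    if random.random() < p_recall:
        return criterion                          # recall: exact reproduction
    return criterion + random.gauss(0, noise_sd)  # judgment: noisy estimate

criterion = 42.0
responses = [respond(criterion) for _ in range(1000)]
exact = sum(1 for r in responses if r == criterion) / len(responses)
print(round(exact, 2))  # should be near p_recall = 0.6
```

Fitting a single continuous-response model to such data, as the standard procedure does, conflates the error-free recall spike with judgment noise, which is exactly the parameter bias the simulation above demonstrates.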


HortScience ◽  
1996 ◽  
Vol 31 (3) ◽  
pp. 349-352 ◽  
Author(s):  
P.R. Fisher ◽  
J.H. Lieth ◽  
R.D. Heins

A model was developed to quantify the response of Easter lily ('Nellie White') flower bud elongation to average air temperature. Plants were grown in greenhouses set at 15, 18, 21, 24, or 27 °C after they had reached the visible bud stage. An exponential model fit the data with an R² of 0.996. The number of days until open flowering could be predicted using the model because buds consistently opened when they were 16 cm long. The model was validated against data sets of plants grown under constant and varying greenhouse temperatures at three locations, and it was more accurate and mathematically simpler than a previous bud elongation model. Bud length can be used by lily growers to predict the average temperature required to achieve a target flowering date, or the flowering date at a given average temperature. The model can be implemented in a computer decision-support system or in a tool termed a bud development meter.
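As a hedged sketch of how such a model is used in practice: if bud length grows exponentially at a temperature-dependent relative rate k(T), the days until the 16-cm opening length follow by inverting the growth curve. The linear k(T) and its coefficients below are invented placeholders, not the published parameters:

```python
import math

# Days-to-flower prediction from an exponential bud elongation model:
#   L(t) = L0 * exp(k(T) * t), flowering when L reaches 16 cm,
# so t = ln(16 / L0) / k(T). k(T) here is an invented placeholder.

def k(temp_c):
    """Relative elongation rate per day (illustrative parameters only)."""
    return 0.012 * (temp_c - 5.0)

def days_to_flower(bud_length_cm, temp_c, open_length_cm=16.0):
    """Predicted days until the bud reaches the opening length."""
    return math.log(open_length_cm / bud_length_cm) / k(temp_c)

print(round(days_to_flower(4.0, 21.0), 1))  # warmer greenhouse
print(round(days_to_flower(4.0, 15.0), 1))  # cooler greenhouse: more days
```

Solved the other way (for T given a target date), the same relation gives growers the average temperature needed to hit a flowering date, which is the decision-support use the abstract describes.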


2018 ◽  
pp. 1271-1293
Author(s):  
Rameshwar Dubey ◽  
Surajit Bag

The purpose of this chapter is to identify green supply chain practices and study their impact on firm performance. In this study, the authors adopted a two-pronged strategy. First, they reviewed extant literature published in academic journals and reports published by reputed agencies. They identified key variables through this literature review and developed an instrument to measure the impact of GSCM practices on firm performance. The authors pretested this instrument with five industry experts experienced in GSCM implementation and two academics who have published in reputed journals in the fields of GSCM and sustainable manufacturing practice. After finalizing the instrument, the study randomly targeted 175 companies from the CII Institute of Manufacturing database and obtained responses from 54, a 30.85% response rate. The authors also performed a non-response bias test to ensure that non-response bias was not a major issue. They further performed PLSR analysis to test the hypotheses. The results of the study are very encouraging and provide further motivation to explore other constructs that are important for successful implementation of GSCM practices.
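A common form of the non-response bias test mentioned above is the extrapolation approach: late respondents are treated as proxies for non-respondents, and early vs. late waves are compared on key measures, for instance with a Welch t statistic. The wave assignment and sample values below are invented:

```python
import math

# Extrapolation-style non-response bias check: compare early vs. late
# respondent waves with a Welch (unequal-variance) t statistic.
# The scores below are invented for illustration.

def welch_t(a, b):
    """Welch t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

early = [3.8, 4.1, 3.9, 4.4, 4.0, 3.7, 4.2]   # first-wave firm scores
late  = [3.9, 4.0, 4.3, 3.8, 4.1, 3.6, 4.2]   # second-wave firm scores
t = welch_t(early, late)
print(abs(t) < 2.0)  # small |t|: no evidence of non-response bias
```

A non-significant difference between waves is taken as evidence that non-respondents would not have changed the conclusions, which is the assurance the chapter reports.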

