The Susceptibility of Overt and Covert Integrity Tests to Coaching and Faking

1996 ◽  
Vol 7 (1) ◽  
pp. 32-39 ◽  
Author(s):  
George M. Alliger ◽  
Scott O. Lilienfeld ◽  
Krystin E. Mitchell

Although previous research has indicated that faking can affect integrity test scores, the effects of coaching on integrity test scores have never been examined. We conducted a between-subjects experiment to assess the effects of coaching and faking instructions on an overt and a covert integrity test. Coaching provided simple rules to follow when answering test items and instructions on how to avoid elevated validity scale scores. There were five instruction conditions: “just take,” “fake good,” “coach overt,” “coach covert,” and “coach both.” All subjects completed both overt and covert tests and a measure of intelligence. Results provided strong evidence for the coachability of the overt integrity test, over and above the much smaller elevation in the faking condition. The covert test apparently could be neither coached nor faked successfully. Scores on both integrity tests tended to be positively correlated with intelligence in the coaching and faking conditions. We discuss the generalizability of these results to other samples and other integrity tests, and the relevance of the coachability of integrity tests to the ongoing debate concerning the prediction of counterproductive behavior.

2002 ◽  
Vol 91 (3) ◽  
pp. 691-702 ◽  
Author(s):  
Reagan D. Brown ◽  
Christopher M. Cothern

The present study assessed whether success at faking a commercially available integrity test relates to individual differences among the test takers. We administered the Reid Report, an overt integrity test, twice to a sample of college students with instructions to answer honestly on one administration and “fake good” on the other. These participants also completed a measure of general cognitive ability, the Raven Advanced Progressive Matrices. Integrity test scores were 1.3 standard deviations higher in the faking condition ( p < .05). There was a weak, but significant, positive relation between general cognitive ability and faking success, calculated as the difference in scores between the honest and faked administrations of the Reid Report ( r = .17, p < .05). An examination of the correlations between faking success and general cognitive ability by item type suggested that the relation is due to the items that pose hypothetical scenarios, e.g., “Should an employee be fired for stealing a few office supplies?” ( r = .22, p < .05) and not the items that ask for admissions of undesirable past behaviors, e.g., “Have you ever stolen office supplies?” ( r = .02, p > .05; t = 2.06, p < .05) for the difference between correlations. These results suggest that general cognitive ability is indeed an individual difference relevant to success at faking an overt integrity test.
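The faking-success index described above, a difference score correlated with cognitive ability, can be sketched in a few lines. This is a minimal illustration with made-up scores (the names `honest`, `faked`, and `ability` are hypothetical, not the study's data):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five test takers under each instruction condition.
honest = [50, 55, 60, 62, 70]        # honest-condition integrity scores
faked = [68, 70, 71, 80, 85]         # fake-good-condition integrity scores
ability = [95, 100, 105, 118, 120]   # general cognitive ability scores

# Faking success = faked-condition score minus honest-condition score.
success = [f - h for h, f in zip(honest, faked)]
r = pearson_r(success, ability)
```

The same `pearson_r` helper could be applied separately to hypothetical-scenario items and past-behavior admission items to reproduce the by-item-type comparison the abstract reports.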


2007 ◽  
Vol 6 (2) ◽  
pp. 85-90 ◽  
Author(s):  
Rüdiger Hossiep ◽  
Sabine Bräutigam

Abstract. This article introduces the IBES (Inventar berufsbezogener Einstellungen und Selbsteinschätzungen; Inventory of Job-Related Attitudes and Self-Assessments), the first published German-language instrument of the integrity-test type. The instrument aims to predict counterproductive behavior in organizations (e.g., absenteeism, theft, aggression). It was constructed in close alignment with the content of prominent American integrity tests. The IBES consists of an attitude-oriented part with 60 items assigned to the four scales “Trust,” “Low Prevalence of Undesirable Behavior,” “Non-Rationalization,” and “Behavioral Intentions,” and a trait-oriented part with 55 items divided into the five scales “Composure/Self-Esteem,” “Reliability/Foresight,” “Caution,” “Reserve,” and “Conflict Avoidance.” The practical applicability of the instrument is critically discussed, particularly with regard to its data basis and item material.


2020 ◽  
Vol 2 (1) ◽  
pp. 34-46
Author(s):  
Siti Fatimah ◽  
Achmad Bernhardo Elzamzami ◽  
Joko Slamet

This research focused on the validity and reliability of the test scores and on item analysis, covering discrimination power and difficulty index, in order to provide detailed information for improving the construction of the test items. The quality of each item was analyzed in terms of item difficulty, item discrimination, and distractor performance. The reliability of the test was computed with the Kuder-Richardson Formula 20 (KR-20), and the analysis of the 50 test items was carried out in Microsoft Office Excel. A descriptive method was applied to describe and examine the data. The findings showed that the test had content validity, though of a low degree. The reliability of the test scores was 0.52, a low value indicating that the test needs revision. Of the 50 items examined, 21 were in need of improvement, and a total of 26 items (52%) fell into the “easy” category for difficulty index and the “poor” category for discriminability. Thus more than 50% of the test items need to be revised, as they do not meet the criteria. To measure students’ performance effectively, items with a “poor” discrimination index should be reviewed and improved.
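The KR-20 reliability and item-difficulty computations described above can be sketched as follows. This is a minimal illustration with made-up 0/1 response data (four examinees, three items, not the study's 50-item test), using the population-variance form of KR-20:

```python
from statistics import pvariance

def item_difficulty(scores):
    """Proportion of examinees answering each item correctly (p per item)."""
    n = len(scores)
    k = len(scores[0])
    return [sum(row[j] for row in scores) / n for j in range(k)]

def kr20(scores):
    """Kuder-Richardson Formula 20 for dichotomous (0/1) item data.

    scores: one row per examinee, one column per item.
    KR-20 = (k / (k - 1)) * (1 - sum(p*q) / variance of total scores)
    """
    k = len(scores[0])
    p = item_difficulty(scores)
    pq_sum = sum(pj * (1 - pj) for pj in p)
    totals = [sum(row) for row in scores]
    return (k / (k - 1)) * (1 - pq_sum / pvariance(totals))

# Tiny illustrative data set: 1 = correct answer, 0 = incorrect.
data = [
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [0, 0, 0],
]
```

For real test data the same functions apply unchanged; items whose difficulty is very high (too “easy”) or whose discrimination is “poor” would then be flagged for revision, as the study recommends.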


2018 ◽  
Vol 122 (4) ◽  
pp. 1529-1549 ◽  
Author(s):  
Zdravko Marjanovic ◽  
Lisa Bajkov ◽  
Jennifer MacDonald

The Conscientious Responders Scale is a five-item embeddable validity scale that differentiates between conscientious responding (CR) and indiscriminate responding (IR) in personality-questionnaire data. This investigation presents further evidence of its validity and generalizability across two experiments. Study 1 tests its sensitivity to questionnaire length, a known cause of IR, and tries to provoke IR by manipulating psychological reactance. As expected, short questionnaires produced higher Conscientious Responders Scale scores than long questionnaires, and Conscientious Responders Scale scores were unaffected by reactance manipulations. Study 2 tests concerns that the Conscientious Responders Scale’s unusual item content could potentially irritate and baffle responders, ironically increasing rates of IR. We administered two nearly identical questionnaires: one with an embedded Conscientious Responders Scale and one without it. Psychometric comparisons revealed no differences across questionnaires’ means, variances, interitem response consistencies, and Cronbach’s alphas. In sum, the Conscientious Responders Scale is highly sensitive to questionnaire length—a known correlate of IR—and can be embedded harmlessly in questionnaires without provoking IR or changing the psychometrics of other measures.


1994 ◽  
Vol 79 (3) ◽  
pp. 1383-1389 ◽  
Author(s):  
Fred J. Thumin

To gain better understanding of a new personality test (The Self-perception Test), scores on its 11 scales were correlated with age, education, and intelligence among 76 candidates for hire or promotion—and with the MMPI-2 among 45 additional candidates. As age increased, subjects perceived themselves to be less wild and sexy but more logical, thorough, and honest. With increasing intelligence, subjects appeared somewhat less inclined to “fake good.” The L and K scales of the MMPI-2 correlated negatively with the unfavorable Self-perception Test scales (Depressed, Crabby, and Shy), but positively with the favorable scales (Good-looking, Sociable, Thorough, Logical, Considerate, and Honest). The reverse was true of the F, Pt, Sc, and Si scales. The Depression scales of the MMPI-2 and the new test were not significantly correlated, probably because they measure depression differently ( viz., indirectly vs directly) and because subjects were job applicants rather than clinical patients.


1997 ◽  
Vol 12 (4) ◽  
pp. 291-291
Author(s):  
R. Bowler ◽  
C. Hartney ◽  
D. Strongin ◽  
S. Muzio ◽  
S. Tarango

1992 ◽  
Vol 70 (2) ◽  
pp. 467-476 ◽  
Author(s):  
Donald A. Murk ◽  
John A. Addleman

This study was conducted to examine the relationships among Rest's Defining Issues Test, Rotter's Internal-External Locus of Control Scale, and demographic variables. 205 undergraduates from two secular universities and one religious liberal arts college from the Middle Atlantic states were given the Defining Issues Test, the Internal-External Locus of Control Scale, and a demographic questionnaire. The Pearson correlations indicated significant associations between the Defining Issues Test scored for percentage of principled reasoning about moral dilemmas and five demographic variables. Analysis of variance indicated significant differences between the group means for the Defining Issues Test scores on three demographic variables and between the group means for the Internal-External Locus of Control Scale scores on two demographic variables. A stepwise multiple regression analysis using five variables predicted a significant amount of the variance (25%) in the Defining Issues Test scores and two variables that predicted a significant amount of the variance (7%) in the Internal-External Locus of Control Scale scores. The Defining Issues Test is both a developmental and cognitive measure. In addition, the Internal-External Locus of Control Scale scores showed a significant relationship with religious affiliation and with Defining Issues Test scores.

