Applicant Faking
Recently Published Documents

Total documents: 49 (five years: 5)
H-index: 13 (five years: 2)

2021 · Goran Pavlov, Dexin Shi

The forced-choice response format has been proposed as a method for preventing applicant faking on self-report non-cognitive measures. This potential benefit of the format depends on how closely the items comprising each forced-choice block are matched in terms of desirability for the job. Current desirability matching procedures rely on differences in items' mean desirability ratings to quantify the similarity of items' desirability. We argue that relying on means, while ignoring individual differences in desirability ratings, may yield inaccurate similarity values and result in inferior item matches. As an alternative, we propose a distance-based measure that considers differences in desirability ratings at the individual level and may thus yield more accurate similarity values and better item matches. We support our arguments using a set of desirability ratings obtained with explicit instructions to rate the desirability of items.
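To make the contrast concrete, here is a minimal sketch in Python of the two matching approaches the abstract contrasts. The rater-level distance measure shown (root mean square difference across raters) is one plausible formulation, not necessarily the authors' exact measure, and the `ratings` matrix is hypothetical.

```python
import numpy as np

# Hypothetical desirability ratings: rows = raters, columns = two items.
# Both items have the same mean desirability (3.0), so a mean-based
# procedure would treat them as perfectly matched.
ratings = np.array([
    [1, 5],
    [5, 1],
    [3, 3],
    [1, 5],
    [5, 1],
], dtype=float)

item_a, item_b = ratings[:, 0], ratings[:, 1]

# Mean-based dissimilarity: absolute difference in mean desirability ratings.
mean_based = abs(item_a.mean() - item_b.mean())
print(f"Mean-based dissimilarity: {mean_based:.2f}")      # 0.00 -> "perfect" match

# Distance-based measure: compares ratings rater by rater, so items that
# different raters find desirable for different reasons no longer look alike.
distance_based = np.sqrt(np.mean((item_a - item_b) ** 2))
print(f"Distance-based dissimilarity: {distance_based:.2f}")  # 3.58 -> poor match
```

In this toy case the mean-based procedure would pair two items whose desirability disagrees sharply for every individual rater, which is exactly the failure mode the abstract describes.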


2021 · Vol 7(1) · Christopher Huber, Nathan Kuncel, Katie Huber, Anthony Boyce

Despite the established validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. However, the lack of studies that combine generalizability with experimental control makes it difficult to determine the effects of applicant faking. This study addressed this deficit in two ways. First, we compared a subtle incentive to fake with the explicit “fake-good” instructions used in most faking experiments. Second, we compared standard Likert scales to multidimensional forced choice (MFC) scales designed to resist deception, including more and less fakable versions of the same MFC inventory. MFC scales substantially reduced motivated score elevation but also appeared to elicit selective faking on work-relevant dimensions. Despite reducing the effectiveness of impression management attempts, MFC scales did not retain more validity than Likert scales when participants faked. However, results suggested that faking artificially bolstered the criterion-related validity of Likert scales while diminishing their construct validity.
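A minimal sketch may help show why MFC formats resist uniform "fake-good" elevation while remaining open to selective faking. It assumes a simple ranking-based MFC scoring with one item per dimension in each block; this is an illustrative model, not the study's actual instrument or scoring procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 4 personality dimensions, one item per dimension
# per forced-choice block. Latent trait standings drive item endorsements.
n_blocks = 50
traits = np.array([0.2, -0.5, 0.8, 0.0])           # latent standings, 4 dims
latent = traits + rng.normal(0, 1, (n_blocks, 4))  # item-level endorsements

def likert_scores(latent, fake_shift=0.0):
    # Likert format: each item is rated independently, so a uniform
    # "fake-good" shift inflates every dimension's mean score.
    return (latent + fake_shift).mean(axis=0)

def mfc_scores(latent, fake_shift=0.0):
    # MFC format: within each block only the rank order is recorded.
    # A shift applied to ALL items leaves rankings (and scores) unchanged;
    # shifting only selected, work-relevant columns would still alter ranks.
    ranks = (latent + fake_shift).argsort(axis=1).argsort(axis=1)
    return ranks.mean(axis=0)

print("Likert honest:", np.round(likert_scores(latent), 2))
print("Likert faked: ", np.round(likert_scores(latent, 1.0), 2))  # all elevated
print("MFC honest:   ", np.round(mfc_scores(latent), 2))
print("MFC faked:    ", np.round(mfc_scores(latent, 1.0), 2))     # unchanged
```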


2020 · Vol 28(2), pp. 123-142 · Klaus G. Melchers, Nicolas Roulin, Anne‐Kathrin Buehl

2018 · Jeromy Anglim, Filip Lievens, Lisa Everton, Sharon L. Grant, Andrew Marty

This study examined the degree to which the predictive validity of personality declines in job applicant settings. Participants completed the 200-item HEXACO Personality Inventory-Revised, either as part of confidential research (347 non-applicants) or an actual job application (260 job applicants). Approximately 18 months later, participants completed a confidential survey measuring organizational citizenship behavior (OCB) and counterproductive work behavior (CWB). There was evidence for a small drop in predictive validity among job applicants; however, honesty-humility, extraversion, agreeableness, and conscientiousness predicted lower levels of CWB and higher levels of OCB in both job applicants and non-applicants. The study also informs the use of the HEXACO model of personality in selection settings, reporting typical levels of applicant faking and facet-level predictive validity.


2018 · Vol 17(3), pp. 143-154 · Nicolas Roulin, Deborah M. Powell

Applicants' use of faking tactics could threaten the validity of employment interviews. We examined criterion-based content analysis (CBCA), an approach used in legal contexts, as a potential indicator of interviewee faking. We also examined the moderating role of storytelling in the faking-CBCA relationship. We conducted one experimental study, in which 100 interviewees received instructions to respond honestly versus to exaggerate/invent responses, and one mock interview study, with self-reported faking from 111 interviewees. Responses were recorded, transcribed, and coded for CBCA and storytelling. Faking was associated with CBCA when interviewees freely engaged in faking tactics, when an overall CBCA indicator was used, and when interviewees' responses contained story features. Additional analyses indicated that CBCA-based assessments of faking versus honesty could reach up to 63.4% accuracy.


2018 · Vol 22(3), pp. 710-739 · Goran Pavlov, Alberto Maydeu-Olivares, Amanda J. Fairchild

2017 · Vol 32(6), pp. 460-468 · Gary N. Burns, Elizabeth A. Shoda, Mark A. Roebke

Purpose: Estimates of the effects of faking on personality scores typically represent the difference between one sample mean and another in terms of standard deviations. While this is technically accurate, it does not put faking effects into the context of the individuals actually engaging in faking behavior. The purpose of this paper is to address this deficiency.

Design/methodology/approach: This paper provides a mathematical proof and a computational simulation manipulating faking effect size, prevalence of faking, and the size of the applicant pool.

Findings: The paper illustrates that reported effects of faking are underestimates of the amount of faking that individual test takers engage in. Results provide researchers and practitioners with more accurate guidance on how to interpret faking effect sizes.

Practical implications: To understand the impact of faking on personality testing, it is important to consider both faking effect sizes and the prevalence of faking.

Originality/value: Researchers and practitioners do not often consider the real implications of faking effect sizes. The current paper presents those results in a new light.
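The core arithmetic behind the underestimation claim is simple: if only a fraction p of applicants fake, the pooled mean shift understates each faker's individual inflation by roughly a factor of p. A minimal simulation sketch of that point follows; the parameter values are hypothetical and this is not the paper's actual simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

n_applicants = 100_000   # size of the applicant pool
prevalence = 0.30        # fraction of applicants who fake
individual_effect = 1.0  # score inflation per faker, in SD units

# Honest scores ~ N(0, 1); fakers add a constant shift.
scores = rng.standard_normal(n_applicants)
fakers = rng.random(n_applicants) < prevalence
scores[fakers] += individual_effect

# The "faking effect" as usually reported: pooled applicant mean minus
# the honest mean, expressed in honest-sample SD units (here, 1.0).
observed_effect = scores.mean() / 1.0
print(f"Observed pooled effect:    d = {observed_effect:.2f}")
# Expected value is prevalence * individual_effect = 0.30, even though
# every faker actually inflated their score by a full standard deviation.
print(f"Implied individual effect: {observed_effect / prevalence:.2f}")
```

Under these assumptions, a reported d of 0.30 is consistent with 30% of applicants each faking by a full standard deviation, which is the interpretive gap the abstract highlights.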

