Personnel Assessment and Decisions
Latest Publications

Total documents: 74 (five years: 48)
H-index: 4 (five years: 2)
Published by Bowling Green State University Libraries
ISSN: 2377-8822

2021 · Vol 7 (2) · Author(s): Jordan Ho, Deborah Powell

Job applicants vary in the extent to which they fake or stay honest in employment interviews, yet the contextual and demographic factors underlying these behaviors are unclear. To address this gap, we drew on Ellingson and McFarland’s (2011) framework of faking, which is grounded in valence-instrumentality-expectancy theory. Study 1 collected normative data from a Canadian municipality and established baseline distributions for instrumentality-expectancy beliefs. Results indicated that most respondents held low instrumentality-expectancy beliefs for faking but high ones for honesty. Moreover, income, education, and age were antecedents of instrumentality-expectancy beliefs. Study 2 extended these findings with a United States sample and sought to determine whether they could be explained by individual differences. Results demonstrated that financial insecurity predicted the instrumentality of faking, whereas age predicted the expectancy of faking. Finally, valence, instrumentality, and expectancy beliefs were all predictors of self-reported faking in a past interview.
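For readers unfamiliar with the underlying framework, valence-instrumentality-expectancy (VIE) theory is commonly summarized (following Vroom's classic formulation; this equation is not taken from the article itself) as:

$$ F = E \times \sum_{j} \left( I_j \times V_j \right) $$

where $F$ is the motivational force to engage in a behavior (here, faking or responding honestly), $E$ is the expectancy that one can carry out the behavior, $I_j$ is the instrumentality of the behavior for attaining outcome $j$, and $V_j$ is the valence of that outcome.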


2021 · Vol 7 (2) · Author(s): Nancy Tippins, Frederick Oswald, S. Morton McPhail

Organizations are increasingly turning toward personnel selection tools that rely on artificial intelligence (AI) technologies and machine learning algorithms that, together, intend to predict the future success of employees better than traditional tools. These new forms of assessment include online games, video-based interviews, and big data pulled from many sources, including test responses, test-taking behavior, applications, resumes, and social media. Speedy processing, lower costs, convenient access, and applicant engagement are often and rightfully cited as the practical advantages of using these selection tools. At the same time, however, these tools raise serious concerns about their effectiveness: their conceptual relevance to the job, their basis in a job analysis to ensure job relevancy, their measurement characteristics (reliability and stability), their validity in predicting employee-relevant outcomes, whether their evidence and normative information are updated appropriately, and the associated ethical concerns around what information is represented to employers and told to job candidates. This paper explores these concerns and concludes with an urgent call for industrial and organizational psychologists to extend existing professional standards for employment testing to these new AI- and machine-learning-based forms of testing, including standards and requirements for their documentation.


2021 · Vol 7 (2) · Author(s): Jacob Fischer, James Breaugh

Although note taking is a key component of a structured interview, relatively few studies have investigated its effects. To address this gap, we conducted a study that examined the effects of note taking in a work setting. As predicted, we found that the total number of notes interviewers took and the level of detail of those notes were positively related to the ratings the interviewers gave job applicants, that interviewer ratings of applicants who were hired predicted their subsequent job performance ratings, and that interviewer ratings mediated the relationships between note taking and performance ratings (i.e., the number of notes and their level of detail did not have a direct effect on performance ratings). We also showed that, if left uncontrolled, interviewer nesting can result in misleading conclusions about the value of taking detailed notes.
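As a rough illustration of the nesting issue raised here, a minimal multilevel-model sketch (hypothetical file and column names, not the authors' analysis) that adds a random intercept per interviewer so that applicants rated by the same interviewer are not treated as independent:

```python
# Minimal sketch: interview ratings with applicants nested within interviewers.
# File name and columns (rating, note_count, note_detail, interviewer) are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("interview_notes.csv")

# Naive model that ignores nesting; can give misleading estimates of note-taking effects
naive = smf.ols("rating ~ note_count + note_detail", data=df).fit()

# Mixed-effects model with a random intercept for each interviewer
nested = smf.mixedlm("rating ~ note_count + note_detail",
                     data=df, groups=df["interviewer"]).fit()

print(naive.params)
print(nested.params)
```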


2021 · Vol 7 (1) · Author(s): Christopher Huber, Nathan Kuncel, Katie Huber, Anthony Boyce

Despite the established validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. However, the lack of studies that combine generalizability with experimental control makes it difficult to determine the effects of applicant faking. This study addressed this deficit in two ways. First, we compared a subtle incentive to fake with the explicit “fake-good” instructions used in most faking experiments. Second, we compared standard Likert scales to multidimensional forced choice (MFC) scales designed to resist deception, including more and less fakable versions of the same MFC inventory. MFC scales substantially reduced motivated score elevation but also appeared to elicit selective faking on work-relevant dimensions. Despite reducing the effectiveness of impression management attempts, MFC scales did not retain more validity than Likert scales when participants faked. However, results suggested that faking artificially bolstered the criterion-related validity of Likert scales while diminishing their construct validity.
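A minimal sketch of the kind of criterion-related validity comparison described above (hypothetical file and variable names; not the study's actual analysis):

```python
# Compare criterion-related validity of Likert vs. MFC scores in honest and
# incentivized (faking) conditions; data file and columns are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("faking_study.csv")
# Assumed columns: likert_score, mfc_score, criterion, condition ("honest"/"faking")

for condition in ["honest", "faking"]:
    sub = df[df["condition"] == condition]
    r_likert, _ = pearsonr(sub["likert_score"], sub["criterion"])
    r_mfc, _ = pearsonr(sub["mfc_score"], sub["criterion"])
    print(f"{condition}: Likert validity r = {r_likert:.2f}, MFC validity r = {r_mfc:.2f}")
```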


2021 · Vol 7 (1) · Author(s): Philseok Lee, Seang-Hwane Joo

To address faking issues associated with Likert-type personality measures, multidimensional forced-choice (MFC) measures have recently gained attention as important components of personnel assessment systems. Despite various efforts to investigate the fake resistance of MFC measures, previous research has mainly focused on scale mean differences between honest and faking conditions. Given recent psychometric advancements in MFC measurement (e.g., Brown & Maydeu-Olivares, 2011; Stark et al., 2005; Lee et al., 2019; Joo et al., 2019), there is a need to investigate the fake resistance of MFC measures through a new methodological lens. This research investigates the fake resistance of MFC measures using recently proposed differential item functioning (DIF) and differential test functioning (DTF) methodologies for MFC measures (Lee, Joo, & Stark, 2020). Overall, our results show that MFC measures are more fake resistant than Likert-type measures at both the item and test levels. However, MFC measures may still be susceptible to faking if they include many mixed blocks consisting of positively and negatively keyed statements within a block. Future research may need to identify an optimal strategy for designing mixed blocks in MFC measures that satisfies the goals of validity and scoring accuracy. Practical implications and limitations are discussed in the paper.
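The DIF/DTF procedures used here are specific to MFC item response models; purely to illustrate the general logic of a DIF check, a generic logistic-regression screen on a single dichotomized block response might look like the following (hypothetical data and variable names, not the Lee, Joo, & Stark, 2020 method):

```python
# Generic logistic-regression DIF screen (Swaminathan & Rogers style), shown only to
# illustrate the idea of DIF; NOT the MFC-specific procedure used in the study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("mfc_block_responses.csv")  # hypothetical file
# Assumed columns: block1 (0/1 preference), trait_score (matching variable),
# condition (0 = honest, 1 = faking instructions).

base    = smf.logit("block1 ~ trait_score", data=df).fit(disp=False)
uniform = smf.logit("block1 ~ trait_score + condition", data=df).fit(disp=False)
nonunif = smf.logit("block1 ~ trait_score * condition", data=df).fit(disp=False)

# Likelihood-ratio statistics: large values flag uniform / nonuniform DIF, respectively.
print("Uniform DIF LR:   ", 2 * (uniform.llf - base.llf))
print("Nonuniform DIF LR:", 2 * (nonunif.llf - uniform.llf))
```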


2020 · Vol 6 (3) · Author(s): Allan Bateson, William Dardick

Multiple-choice test items typically consist of the key and three or four distractors. However, research has supported the efficacy of using fewer alternatives. Haladyna and Downing (1993) found that it is difficult to write test items with more than one plausible distractor, resulting in items with a correct answer and one alternative, also known as the alternate-choice (AC) format. We constructed two 32-item tests, one with four alternatives (MC4) and one with two (AC), using an inter-judge agreement approach to eliminate distractors. The tests were administered to 138 personnel working for a U.S. Government agency. Testing time was significantly shorter and scores were higher for the AC test. However, the score differences disappeared when both forms were corrected for guessing. There were no significant differences in test difficulty (mean p-values). The corrected KR-20 reliabilities for the two forms, after applying the Spearman-Brown formula, were AC = .816 and MC4 = .893. We discuss the results with respect to the resources spent writing and reviewing test items and the opportunity to sample a content domain more broadly with the AC format, given its reduced testing time.
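For readers who want to see the adjustments mentioned above, the textbook correction-for-guessing and Spearman-Brown formulas, with purely illustrative numbers (the sketch below is not the authors' code):

```python
# Textbook correction-for-guessing and Spearman-Brown prophecy formulas;
# the example scores are made up for illustration.

def correct_for_guessing(rights: int, wrongs: int, k: int) -> float:
    """Formula score R - W/(k-1), where k is the number of alternatives per item."""
    return rights - wrongs / (k - 1)

def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when test length is multiplied by length_factor."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# A hypothetical examinee with 24 right and 8 wrong out of 32 items:
print(correct_for_guessing(24, 8, k=2))  # AC format (2 alternatives)  -> 16.0
print(correct_for_guessing(24, 8, k=4))  # MC4 format (4 alternatives) -> ~21.33

# Projecting a KR-20 of .70 to a test twice as long:
print(round(spearman_brown(0.70, 2.0), 3))  # -> 0.824
```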


2020 · Vol 6 (3) · Author(s): Kristin Allen, Mathijs Affourtit, Craig Reddock

Criterion-related validation (CRV) studies are used to demonstrate the effectiveness of selection procedures. However, traditional CRV studies require a significant investment of time and resources, as well as large sample sizes, which often create practical challenges. New techniques, which use machine learning to develop classification models from limited amounts of data, have emerged as a more efficient alternative. This study empirically compares the effectiveness of traditional CRV against a variety of profiling approaches and machine learning techniques, using repeated cross-validation. Results show that the traditional approach generally performs best, both in predicting performance and in yielding larger group differences between candidates identified as top versus non-top performers. In addition to empirical effectiveness, other practical implications are discussed.
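As a rough sketch of the repeated cross-validation approach mentioned above (a generic scikit-learn example with hypothetical data; not the profiling or machine learning models evaluated in the study):

```python
# Repeated, stratified k-fold cross-validation of a simple classifier on
# hypothetical assessment data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

df = pd.read_csv("assessment_scores.csv")   # hypothetical file
X = df.drop(columns=["top_performer"])      # predictor (assessment) scores
y = df["top_performer"]                     # 1 = top performer, 0 = not

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=42)
scores = cross_val_score(RandomForestClassifier(random_state=42), X, y,
                         cv=cv, scoring="roc_auc")

print(f"Mean AUC over {len(scores)} train/test splits: {scores.mean():.3f}")
```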


2020 · Vol 6 (3) · Author(s): Alexandra Harris, Jeremiah McMillan, Benjamin Listyg, Laura Matzen, Nathan Carter

The Sandia Matrices are a free alternative to the Raven’s Progressive Matrices (RPMs). This study offers a psychometric review of Sandia Matrices items focused on two of the most commonly investigated issues regarding the RPMs: (a) dimensionality and (b) sex differences. Model-data fit of three alternative factor structures is compared using confirmatory multidimensional item response theory (IRT) analyses, and measurement equivalence analyses are conducted to evaluate potential sex bias. Although results are somewhat inconclusive regarding factor structure, they show no evidence of bias or mean differences by sex. Finally, although the Sandia Matrices software can generate a virtually unlimited number of items, editing and validating those items may be infeasible for many researchers. To aid implementation of the Sandia Matrices, we provide scoring materials for two brief static tests and a computer adaptive test. Implications and suggestions for future research using the Sandia Matrices are discussed.


2020 · Vol 6 (3) · Author(s): Scott Highhouse

2020 · Vol 6 (3) · Author(s): Jacob Bradburn, Ann Marie Ryan, Anthony Boyce, Tamera McKinniss, Jason Way

Research on personality within the organizational sciences and for employee selection typically focuses on main effects rather than interactive effects between personality variables. Large, multi-organizational datasets involving two different measures of personality were examined to test theoretically driven trait-by-trait interactions in predicting job performance. Interactive effects of Agreeableness and Conscientiousness, Agreeableness and Extraversion, Extraversion and Conscientiousness, and Emotional Stability and Conscientiousness were hypothesized to predict overall job performance. However, these hypothesized effects were generally not supported. Implications for personality assessment are discussed.
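A moderated-regression sketch of one of the hypothesized trait-by-trait interactions (hypothetical file and variable names, not the authors' datasets or measures):

```python
# Test an Agreeableness x Conscientiousness interaction in predicting job performance.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("personality_performance.csv")  # hypothetical file

# Mean-center the traits so the main effects stay interpretable next to the product term
for trait in ["agreeableness", "conscientiousness"]:
    df[trait] = df[trait] - df[trait].mean()

model = smf.ols("performance ~ agreeableness * conscientiousness", data=df).fit()
print(model.summary())  # the interaction coefficient is the test of the hypothesized effect
```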

