Test-Taking Motivation and Personality Test Validity

2010 ◽  
Vol 9 (3) ◽  
pp. 117-125 ◽  
Author(s):  
Thomas A. O’Neill ◽  
Richard D. Goffin ◽  
Ian R. Gellatly

In this study, we assessed whether the predictive validity of personality scores is stronger when respondent test-taking motivation (TTM) is higher rather than lower. Results from a field sample comprising 269 employees provided evidence for this moderation effect for one trait, Steadfastness. However, for Conscientiousness, valid criterion prediction was obtained only at low levels of TTM. Thus, it appears that TTM relates to the criterion validity of personality testing differently depending on the personality trait assessed. Overall, these and additional findings regarding the nomological net of TTM suggest that it is a unique construct that may have significant implications when personality assessment is used in personnel selection.
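
The moderation effect reported above is conventionally tested with moderated regression, in which the trait-criterion relationship is allowed to vary as a function of TTM. The sketch below illustrates that logic on simulated data; the variable names and effect sizes are hypothetical and do not reproduce the study's measures.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 269  # matches the field sample size reported in the abstract
trait = rng.normal(size=n)                # standardized trait score (simulated)
ttm = rng.normal(size=n)                  # standardized test-taking motivation
criterion = 0.3 * trait + 0.2 * trait * ttm + rng.normal(size=n)  # simulated criterion

# Moderated regression: criterion ~ trait + ttm + trait:ttm
X = sm.add_constant(np.column_stack([trait, ttm, trait * ttm]))
fit = sm.OLS(criterion, X).fit()
# A significant product term indicates that TTM moderates trait validity.
print(fit.summary(xname=["const", "trait", "ttm", "trait_x_ttm"]))
```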

2012 ◽  
Vol 11 (4) ◽  
pp. 169-175 ◽  
Author(s):  
Katherine A. Sliter ◽  
Neil D. Christiansen

The present study evaluated the impact of reading self-coaching book excerpts on success at faking a personality test. Participants (N = 207) completed an initial honest personality assessment and a subsequent assessment with faking instructions under one of the following self-coaching conditions: no coaching, chapters from a commercial book on how to fake preemployment personality scales, and personality coaching plus a chapter on avoiding lie-detection scales. Results showed that those receiving coaching materials had greater success in raising their personality scores, primarily on the traits that had been targeted in the chapters. In addition, those who read the chapter on avoiding lie-detection scales scored significantly lower on a popular impression management scale while simultaneously increasing their personality scores. Implications for the use of personality tests in personnel selection are discussed.


Author(s):  
JiSoo Ock ◽  
HyeRyeon An

As we enter the digital age, new methods of personality testing, namely machine learning-based personality assessment scales, are quickly gaining traction. Because machine learning-based personality assessments are based on algorithms that analyze the digital footprints of people's online behaviors, they are supposedly less prone to the human biases or cognitive fallacies often cited as limitations of traditional personality tests. As a result, machine learning-based assessment tools are becoming increasingly popular in operational settings across the globe, with the anticipation that they can effectively overcome the limitations of traditional personality testing. However, scientific evidence regarding the psychometric soundness and fairness of machine learning-based assessment tools has lagged behind their use in practice. The current paper provides a brief review of empirical studies that have examined the validity of machine learning-based personality assessment, focusing primarily on social media text mining methods. Based on this review, we offer some suggestions about future research directions, particularly the important and immediate need to examine the compliance of machine learning-based personality assessment tools with practical and legal standards for use in practice (such as inter-algorithm reliability, test-retest reliability, and differential prediction across demographic groups). Additionally, we emphasize that the goal of machine learning-based personality assessment tools should not be simply to maximize the prediction of personality ratings. Rather, we should explore ways to use this new technology to deepen our fundamental understanding of human personality and to contribute to the development of personality theory.
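
One of the standards named above, differential prediction across demographic groups, is commonly evaluated with the Cleary moderated regression approach: the criterion is regressed on the assessment score, a group indicator, and their interaction. The sketch below uses simulated placeholder data; in practice the score would come from the machine learning-based assessment under evaluation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500                                       # hypothetical applicant pool
score = rng.normal(size=n)                    # ML-derived personality score (simulated)
group = rng.integers(0, 2, size=n)            # demographic group indicator (0/1)
criterion = 0.4 * score + rng.normal(size=n)  # simulated job performance

# Cleary model: test for intercept and slope differences across groups.
X = sm.add_constant(np.column_stack([score, group, score * group]))
fit = sm.OLS(criterion, X).fit()
# Significant 'group' or 'score_x_group' coefficients would indicate
# differential prediction (intercept or slope bias, respectively).
print(fit.summary(xname=["const", "score", "group", "score_x_group"]))
```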


2017 ◽  
Vol 1 (2) ◽  
Author(s):  
Rachelle Visser ◽  
Pieter Schaap

Orientation: Growing research has shown that not only test validity considerations but also the test-taking attitudes of job applicants are important in the choice of selection instruments, as these can contribute to test performance and the perceived fairness of the selection process.
Research purpose: The main purpose of this study was to determine the test-taking attitudes of a diverse group of job applicants towards personality and cognitive ability tests administered conjointly online as part of employee selection in a financial services company in South Africa.
Motivation for the study: If users understand how job applicants view specific test types, they will know which assessments are perceived more negatively and how this situation can potentially be rectified.
Research design, approach and method: A non-experimental, cross-sectional survey design was used. An adapted version of the Test Attitude Survey was used to determine job applicants' attitudes towards tests administered online as part of an employee selection process. The sample consisted of a group of job applicants (N = 160) who were diverse in terms of ethnicity, age, and the educational level applicable for sales and supervisory positions.
Main findings: On average, the job applicants responded equally positively to the cognitive ability and personality tests. The African job applicants had a statistically significantly more positive attitude towards the tests than the other groups, and candidates applying for the sales position viewed the cognitive ability tests significantly less positively than the personality test.
Practical and managerial implications: The choice of selection tests used in combination, as well as the applicable testing conditions, should be considered carefully, as these factors can potentially influence the test-taking motivation and general test-taking attitudes of job applicants.
Contribution: This study consolidated the research findings on the determinants of attitudinal responses to cognitive ability and personality testing and produced valuable empirical findings on job applicants' attitudes towards both test types when administered conjointly.


Author(s):  
Kathrine Møller Solgaard ◽  
Morten Nissen

Personality testing is highly disputed yet widely used as a personnel selection tool. Most research takes it for granted that personality tests are used with the purpose of achieving a more objective assessment of job candidates. However, in Danish organizations the personality test is often framed as a 'dialogue tool'. This paper explores the potential of a dialogical reframing of the use of personality testing in personnel selection by analyzing empirical material from an ethnographic study of the hiring processes in a Danish trade union that avowedly uses personality tests as a dialogue tool. Through an affirmative critique, we identify five framings that interact during the test-based dialogue: the 'meritocratic', 'disciplinary', 'dialogical', 'pastoral', and 'con-test' framings. Our study suggests that commitment to a dialogical reframing nurtures the possibility of focusing on what we call the 'con-test': either as exploring the meta-competences of the candidate or as co-creating embryos through joint reflections on organizational issues. We argue that the long-standing debates in the field of selection-related personality testing should be much more interested in how personality tests are used in hiring, rather than in whether or not they should be used.


2020 ◽  
pp. 009102602093558
Author(s):  
David M. Fisher ◽  
Christopher R. Milane ◽  
Sarah Sullivan ◽  
Robert P. Tett

Prominent standards/guidelines concerning test validation provide contradictory information about whether content-based evidence should be used as a means of validating personality test inferences for employee selection. This unresolved discrepancy is problematic considering the prevalence of personality testing, the importance of gathering sound validity evidence, and the deference given to these standards/guidelines in contemporary employee selection practice. As a consequence, test users and practitioners are likely to be hesitant or uncertain about gathering content-based evidence for personality measures, which, in turn, may cause such evidence to be underutilized when personality testing is of interest. The current investigation critically examines whether (and how) content validity evidence should be used for measures of personality in relation to employee selection. The ensuing discussion, which is especially relevant in highly litigious contexts such as personnel selection in the public sector, sheds new light on test validation practices.


2000 ◽  
Vol 5 (1) ◽  
pp. 44-51 ◽  
Author(s):  
Peter Greasley

It has been estimated that graphology is used by over 80% of European companies as part of their personnel recruitment process. And yet, after over three decades of research into the validity of graphology as a means of assessing personality, we are left with a legacy of equivocal results. For every experiment that has provided evidence to show that graphologists are able to identify personality traits from features of handwriting, there are just as many to show that, under rigorously controlled conditions, graphologists perform no better than chance expectations. In light of this confusion, this paper takes a different approach to the subject by focusing on the rationale and modus operandi of graphology. When we take a closer look at the academic literature, we note that there is no discussion of the actual rules by which graphologists make their assessments of personality from handwriting samples. Examination of these rules reveals a practice founded upon analogy, symbolism, and metaphor in the absence of empirical studies that have established the associations between particular features of handwriting and personality traits proposed by graphologists. These rules guide both popular graphology and that practiced by professional graphologists in personnel selection.


2021 ◽  
pp. 109442812110029
Author(s):  
Tianjun Sun ◽  
Bo Zhang ◽  
Mengyang Cao ◽  
Fritz Drasgow

With the increasing popularity of noncognitive inventories in personnel selection, organizations typically wish to be able to tell when a job applicant purposefully manufactures a favorable impression. Past faking research has primarily focused on how to reduce faking via instrument design, warnings, and statistical corrections for faking. This article took a new approach by examining the effects of faking (experimentally manipulated and contextually driven) on response processes. We modified a recently introduced item response theory tree modeling procedure, the three-process model, to identify faking in two studies. Study 1 examined self-reported vocational interest assessment responses using an induced faking experimental design. Study 2 examined self-reported personality assessment responses when some people were in a high-stakes situation (i.e., selection). Across the two studies, individuals instructed or expected to fake were found to engage in more extreme responding. By identifying the underlying differences between fakers and honest respondents, the new approach improves our understanding of faking. Percentage cutoffs based on extreme responding produced a faker classification precision of 85% on average.
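
The percentage-cutoff classification described above can be illustrated simply: compute each respondent's proportion of endpoint (extreme) responses and flag those exceeding a threshold. The data and cutoff value in the sketch below are illustrative assumptions, not the study's.

```python
import numpy as np

rng = np.random.default_rng(2)
# 100 simulated respondents answering 40 five-point Likert items.
responses = rng.integers(1, 6, size=(100, 40))

# Proportion of endpoint (1 or 5) responses per respondent.
extreme_rate = np.isin(responses, [1, 5]).mean(axis=1)

cutoff = 0.60  # hypothetical threshold, not the study's value
flagged = extreme_rate > cutoff
print(f"{flagged.sum()} of {len(flagged)} respondents flagged as possible fakers")
```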

