fake good
Recently Published Documents


TOTAL DOCUMENTS: 56 (five years: 10)
H-INDEX: 13 (five years: 2)

PLoS ONE, 2021, Vol. 16 (10), pp. e0258603
Author(s): Adrian Hoffmann, Julia Meisters, Jochen Musch

In self-reports, socially desirable responding threatens the validity of prevalence estimates for sensitive personal attitudes and behaviors. Indirect questioning techniques such as the crosswise model attempt to control for the influence of social desirability bias. The crosswise model has repeatedly been found to provide more valid prevalence estimates than direct questions. We investigated whether crosswise model estimates are also less susceptible to deliberate faking than direct questions. To this end, we investigated the effect of “fake good” instructions on responses to direct and crosswise model questions. In a sample of 1,946 university students, 12-month prevalence estimates for a sensitive road traffic behavior were higher and thus presumably more valid in the crosswise model than in a direct question. Moreover, “fake good” instructions severely impaired the validity of the direct questioning estimates, whereas the crosswise model estimates were unaffected by deliberate faking. Participants also reported higher levels of perceived confidentiality and a lower perceived ease of faking in the crosswise model compared to direct questions. Our results corroborate previous studies finding the crosswise model to be an effective tool for counteracting the detrimental effects of positive self-presentation in surveys on sensitive issues.
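For readers unfamiliar with the technique, the crosswise model's prevalence estimator follows from asking each respondent whether their answers to the sensitive item and to a non-sensitive item with known prevalence p are the same or different, so that P(same) = πp + (1 − π)(1 − p). Below is a minimal Python sketch of this standard estimator; the sample figures are illustrative, not taken from the study.

```python
def crosswise_estimate(n_same, n_total, p):
    """Crosswise-model prevalence estimate.

    Respondents report whether their answers to a sensitive item and a
    non-sensitive item with known prevalence p are the same or different.
    From P(same) = pi*p + (1 - pi)*(1 - p), solving for pi gives the
    estimator below; p must differ from 0.5.
    """
    lam = n_same / n_total                      # observed proportion of "same"
    pi_hat = (lam + p - 1.0) / (2.0 * p - 1.0)  # estimated prevalence
    se = (lam * (1 - lam) / n_total) ** 0.5 / abs(2.0 * p - 1.0)  # standard error
    return pi_hat, se

# Illustrative numbers only (not the study's data):
pi_hat, se = crosswise_estimate(n_same=1168, n_total=1946, p=0.25)
print(f"estimated prevalence: {pi_hat:.3f} (SE = {se:.3f})")
```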


2021, Vol. ahead-of-print
Author(s): Luka Tomat, Peter Trkman, Anton Manfreda

Purpose: The importance of information systems (IS) professions is increasing. As personality–job fit theory claims, employees must have suitable personality traits for particular IS professions. However, candidates can try to fake good on personality tests towards the desired personality type. Thus, the purpose of this study is to identify archetypal IS professions and their associated personality types, and to examine the reliability of the Myers–Briggs Type Indicator (MBTI) personality test in IS recruitment decisions.

Design/methodology/approach: The authors reviewed academic literature related to IS professions to identify job archetypes and personality traits for IS professions. Then, the authors conducted an experiment with 452 participants to investigate whether candidates can fake good on personality tests when being tested for a particular IS profession.

Findings: The identified job archetypes were IS project manager, IS marketing specialist, IS consultant, IS security specialist, data scientist and business process analyst. The experimental results show that the participants were not able to fake good considerably regarding their personality traits for a particular archetype.

Research limitations/implications: The taxonomy of IS professions should be validated further. The experiment was executed in an educational organisation and not in a real-life environment. Actual work performance was not measured.

Practical implications: This study enables a better identification of suitable candidates for a particular IS profession. Personality tests are good indicators of the candidate's true personality type but must be properly interpreted.

Originality/value: This study enhances the existing body of knowledge on IS professions' archetypes, proposes suitable MBTI personality types for each profession and provides experimental support for the appropriateness of using personality tests to identify potentially suitable candidates.


Author(s): Jessica Röhner, Ronald R. Holden

Abstract. Faking detection is an ongoing challenge in psychological assessment. A notable approach for detecting fakers involves the inspection of response latencies and is based on the congruence model of faking. According to this model, respondents who fake good will provide favorable responses (i.e., congruent answers) faster than they provide unfavorable (i.e., incongruent) responses. Although the model has been validated in various experimental faking studies, to date, research supporting the congruence model has focused on scales with large numbers of items. Furthermore, in this previous research, fakers have usually been warned that faking could be detected. In view of the trend to use increasingly shorter scales in assessment, it becomes important to investigate whether the congruence model also applies to self-report measures with small numbers of items. In addition, it is unclear whether warning participants about faking detection is necessary for a successful application of the congruence model. To address these issues, we reanalyzed data sets of two studies that investigated faking good and faking bad on extraversion (n = 255) and need for cognition (n = 146) scales. Reanalyses demonstrated that having only a few items per scale and not warning participants represent a challenge for the congruence model. The congruence model of faking was only partly confirmed under such conditions. Although faking good on extraversion was associated with the expected longer latencies for incongruent answers, all other conditions remained nonsignificant. Thus, properties of the measurement and properties of the procedure affect the successful application of the congruence model.
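As a rough illustration of the congruence model's core prediction, one can compare (log-transformed) response latencies of congruent and incongruent answers within a fake-good condition. The data below are simulated and the setup is hypothetical, not the authors' materials.

```python
import numpy as np
from scipy import stats

# Simulated latencies (ms) for a fake-good group: congruent answers are the
# favorable ones, incongruent answers the unfavorable ones. The congruence
# model predicts slower incongruent responses.
rng = np.random.default_rng(0)
congruent = rng.lognormal(mean=7.0, sigma=0.3, size=200)
incongruent = rng.lognormal(mean=7.15, sigma=0.3, size=200)

# Latencies are typically log-transformed before testing; halving the
# two-sided p gives a one-tailed test, valid here when t > 0 as predicted.
t, p = stats.ttest_ind(np.log(incongruent), np.log(congruent))
print(f"t = {t:.2f}, one-tailed p = {p / 2:.4f}")
```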


2021, Vol. 7 (1)
Author(s): Christopher Huber, Nathan Kuncel, Katie Huber, Anthony Boyce

Despite the established validity of personality measures for personnel selection, their susceptibility to faking has been a persistent concern. However, the lack of studies that combine generalizability with experimental control makes it difficult to determine the effects of applicant faking. This study addressed this deficit in two ways. First, we compared a subtle incentive to fake with the explicit “fake-good” instructions used in most faking experiments. Second, we compared standard Likert scales to multidimensional forced choice (MFC) scales designed to resist deception, including more and less fakable versions of the same MFC inventory. MFC scales substantially reduced motivated score elevation but also appeared to elicit selective faking on work-relevant dimensions. Despite reducing the effectiveness of impression management attempts, MFC scales did not retain more validity than Likert scales when participants faked. However, results suggested that faking artificially bolstered the criterion-related validity of Likert scales while diminishing their construct validity.


2021, pp. 088626052110014
Author(s): Tracey McDonagh, Áine Travers, Siobhan Murphy, Ask Elklit

Self-report personality inventories may be useful in directing perpetrators of intimate partner violence (IPV) to appropriate intervention programs. They may also have predictive capabilities in assessing the likelihood of desistance or persistence of IPV. However, validity problems are inherent in self-report clinical tools, particularly in forensic settings. Scores on the modifying indices (subsections of the scale designed to detect biases in responding) of the Millon Clinical Multiaxial Inventory-III (MCMI-III) are often not reported in research. This study analyses the response sets of a sample of 492 IPV perpetrators at intake to a Danish perpetrator program. Profiles were grouped into levels of severity, and the proportion of exaggerated or minimized profiles at each severity level was analyzed. Findings suggested that 30% of the present sample were severely disturbed or exaggerating their symptoms. As expected, there were significant levels of exaggerated profiles in the severe pathology group and significant levels of minimized profiles in the low pathology group. Self-referred participants were more likely to exaggerate their pathology, but minimization was not associated with referral status, and there was no association between gender and the modifying indices. It is suggested that so-called "fake good" or "fake bad" profiles should not necessarily be treated as invalid; rather, elevations in the modifying indices can be interpreted as clinically and forensically relevant information in their own right and should be reported in research.


2020
Author(s): Eunike Wetzel, Susanne Frick, Anna Brown

A common concern with self-reports of personality traits in selection contexts is faking. The multidimensional forced-choice (MFC) format has been proposed as an alternative to rating scales (RS) that could prevent faking. The goal of this study was to compare the susceptibility of the MFC format and RS format to faking in a simulated high-stakes setting when using normative scoring for both formats. Participants were randomly assigned to three groups (total N = 1,867) and filled out the Big Five Triplets once under an honest instruction and once under a fake-good instruction. Latent mean differences between the honest and fake-good administrations indicated that the Big Five domains were faked in the expected direction. Faking effects for all traits were larger for RS compared to MFC. Faking effects were also larger for the MFC version with mixed triplets compared to the MFC version with triplets that were fully matched regarding their social desirability. The MFC format does not prevent faking completely, but it reduces faking substantially. Faking can be further reduced in the MFC format by matching the items presented in a block regarding their social desirability.
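An observed-score analogue of such faking effects can be sketched as a standardized mean shift between the honest and fake-good administrations. The paper works with latent means, so this is only a loose sketch; all data and numbers below are simulated.

```python
import numpy as np

def faking_effect(honest, faked):
    """Standardized mean shift between honest and fake-good administrations
    of the same scale (within-person design). Scores are standardized on the
    honest-condition SD; positive values mean fake-good inflation."""
    honest, faked = np.asarray(honest, float), np.asarray(faked, float)
    return (faked - honest).mean() / honest.std(ddof=1)

# Simulated trait scores: a larger fake-good shift for rating scales (RS)
# than for multidimensional forced choice (MFC), mirroring the direction
# (not the size) of the reported pattern.
rng = np.random.default_rng(1)
honest = rng.normal(3.5, 0.5, 500)
rs_faked = honest + rng.normal(0.6, 0.4, 500)
mfc_faked = honest + rng.normal(0.2, 0.4, 500)
print(f"RS d = {faking_effect(honest, rs_faked):.2f}, "
      f"MFC d = {faking_effect(honest, mfc_faked):.2f}")
```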


2020, Vol. 22 (1), pp. 1-8

A receiver operating characteristic (ROC) analysis was conducted to assess the efficiency of six validity scales included in the Personality Clinical Form (PCF) in detecting response distortion. Undergraduate students were randomly assigned to simulate malingering, simulate defensiveness, or complete the PCF under standard instructions (no faking). Fake-good participants scored significantly higher than standard participants on all underreporting scales, and the difference was even larger when fake-good participants were compared with fake-bad participants. A reverse trend was observed for the overreporting scales: participants in the fake-bad condition scored the highest and participants in the fake-good condition the lowest on all overreporting scales. Large effect sizes were found in most cases. Responses from the malingering condition were also compared with those obtained from psychiatric inpatients, and responses from the defensiveness group were compared with those obtained from employees in a high-stakes assessment condition. The area under the ROC curve (AUC) provided an index of discriminative power. The validity scales discriminated better between the normal and the faking conditions than between malingerers and psychiatric inpatients, but most AUC values were in the good or excellent range. Cut-off scores and their corresponding sensitivity and specificity are presented for each validity scale based on this explorative endeavour.
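A sketch of the kind of ROC computation described above, using simulated validity-scale scores; the score distributions and the Youden cut-off rule are illustrative assumptions, not details from the paper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Simulated overreporting-scale scores: fakers should score higher than
# honest responders.
rng = np.random.default_rng(2)
scores = np.concatenate([rng.normal(50, 10, 300),   # standard instructions
                         rng.normal(65, 10, 300)])  # fake-bad instructions
is_faker = np.concatenate([np.zeros(300), np.ones(300)])

auc = roc_auc_score(is_faker, scores)
fpr, tpr, thresholds = roc_curve(is_faker, scores)

# Youden's J picks the cut-off maximizing sensitivity + specificity - 1.
best = (tpr - fpr).argmax()
print(f"AUC = {auc:.2f}; cut-off = {thresholds[best]:.1f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```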


Psihologija, 2019, Vol. 52 (3), pp. 303-321
Author(s): Goran Opacic, Tatjana Mentus

The aim of this study was to examine the extent to which socially desirable responding (SDR) distorts results of the HEDONICA personality inventory (an acronym based on the eight dimensions of this inventory: Honesty, Disintegration, Impulsiveness, Openness, Extraversion, Neuroticism, Conscientiousness, and Agreeableness). The HEDONICA inventory was merged with components of the Balanced Inventory of Desirable Responding (BIDR) as a control inventory and was administered to a sample of 227 students under two experimental situations/contexts, operationalized by two instructions: the standard (S) one ("be honest") and the "fake good" (FG) one ("portray yourself in the most positive way"). Comparing scores in the S and FG situations using MANOVA, a clear distortion of all personality traits in socially desirable directions was evidenced. When, however, the BIDR subscales in the FG situation were entered into the MANOVA as covariates, differences between personality scores in the S and FG situations were considerably reduced and became statistically insignificant on five personality dimensions. When the variance of the BIDR dimensions was removed from the variance of the HEDONICA traits in the FG situation, the change between intercorrelations of personality dimensions in the S and FG situations did not attain statistical significance. This led to the conclusion that SDR bias, even if it does affect test results (i.e., enhances scores in the FG situation), does not affect the scale structure and predictive validity of the examined personality inventory.
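The covariate-adjusted comparison described above can be sketched with statsmodels' MANOVA. The data, group coding, and effect sizes below are simulated stand-ins, not the study's variables; the design is treated as between-groups for simplicity.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Simulated scores on two traits under standard (S) vs. fake-good (FG)
# instructions, plus a BIDR-style social-desirability covariate that drives
# part of the FG inflation.
rng = np.random.default_rng(3)
n = 227
group = np.repeat(["S", "FG"], n)
bidr = np.concatenate([rng.normal(0, 1, n), rng.normal(1, 1, n)])
df = pd.DataFrame({
    "group": group,
    "bidr": bidr,
    "honesty": 0.8 * bidr + rng.normal(0, 1, 2 * n),
    "conscientiousness": 0.6 * bidr + rng.normal(0, 1, 2 * n),
})

# Group effect alone, then with the SDR covariate partialled out:
print(MANOVA.from_formula("honesty + conscientiousness ~ group", df).mv_test())
print(MANOVA.from_formula("honesty + conscientiousness ~ group + bidr", df).mv_test())
```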


2019, Vol. 35 (1), pp. 86-97
Author(s): Linda E. Lazowski, Brent B. Geary

Abstract. The study objective was to develop a revision of the adult Substance Abuse Subtle Screening Inventory-3 to include new items to identify nonmedical use of prescription medications, as well as additional subtle and symptom-related identifiers of substance use disorders (SUDs) and to evaluate its psychometric properties and screening accuracy against a criterion of DSM-5 diagnoses for SUD. Clinical professionals throughout the nine US Census Bureau regions and two Canadian provinces who used the SASSI Online screening tool submitted 1,284 completed administrations of the provisional SASSI-4 along with their independent DSM-5 diagnoses of SUD. Validation sample findings demonstrated SASSI-4 sensitivity of 93% and specificity of 90%, AUC = .91. Items added to identify respondents who were abusing prescription medications showed 94% overall screening accuracy. Logistic regression showed no significant effects of client demographic characteristics or type of screening setting on the accuracy of SASSI-4 screening outcomes. In Study 2, 120 adults in recovery from SUD completed the SASSI-4 under instructions to fake good. Sensitivity of 79% was demonstrated for the full scoring protocol and was 47% when only face valid scales were utilized. Clinical utility is discussed.


2019, Vol. 35 (1), pp. 3-13
Author(s): Laurențiu P. Maricuțoiu, Paul Sârbescu

Abstract. The purpose of this meta-analysis was to analyze the relationship between faking and response latencies (RL). Research studies included in online databases, as well as papers identified in previous reviews, were considered for selection. Inclusion criteria for the studies were (a) to have an experimental faking condition, (b) to measure RL using a computer, and (c) to provide data for calculating Cohen's d effect sizes. Overall effects were significant in the case of the honest versus fake-good condition (d = 0.20, Z = 3.05, p < .05) and in the case of the honest versus fake-bad condition (d = 0.39, Z = 2.21, p < .05). Subgroup analyses indicated moderator effects of item type, with larger effects computed on RL of positively keyed items, as compared with RL of negatively keyed items.
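Pooled effects like those reported above are typically obtained by inverse-variance weighting of the per-study d values. A minimal fixed-effect sketch follows; the three studies and sample sizes are invented for illustration.

```python
import numpy as np

def pooled_effect(d, n1, n2):
    """Fixed-effect (inverse-variance) pooling of Cohen's d values.
    Returns the pooled d and its Z statistic."""
    d, n1, n2 = map(np.asarray, (d, n1, n2))
    # Approximate sampling variance of d (Borenstein et al., 2009).
    var = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    w = 1.0 / var
    d_pooled = np.sum(w * d) / np.sum(w)
    z = d_pooled / np.sqrt(1.0 / np.sum(w))
    return d_pooled, z

# Invented effect sizes from three hypothetical honest vs. fake-good studies:
d_pooled, z = pooled_effect(d=[0.15, 0.25, 0.20], n1=[60, 80, 100], n2=[60, 80, 100])
print(f"pooled d = {d_pooled:.2f}, Z = {z:.2f}")
```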

