Comparing the Ability of MMPI-2 and MMPI-2-RF Validity Scales to Detect Feigning: A Meta-Analysis

Assessment ◽  
2022 ◽  
pp. 107319112110675
Author(s):  
Maria Aparcero ◽  
Emilie H. Picard ◽  
Alicia Nijdam-Jones ◽  
Barry Rosenfeld

Several meta-analyses of the Minnesota Multiphasic Personality Inventory-2 (MMPI-2) and Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) have examined these instruments’ ability to detect symptom exaggeration or feigning. However, limited research has directly compared whether the scales across these two instruments are equally effective. This study used a moderated meta-analysis to compare 109 MMPI-2 and 41 MMPI-2-RF feigning studies, 83 (56.46%) of which were not included in previous meta-analyses. Although there were differences between the two test versions, with most MMPI-2 validity scales generating larger effect sizes than the corresponding MMPI-2-RF scales, these differences were not significant after controlling for study design and type of symptoms being feigned. Additional analyses showed that the F and Fp-r scales generated the largest effect sizes in identifying feigned psychiatric symptoms, while the FBS and RBS were better at detecting exaggerated medical symptoms. The findings indicate that the MMPI-2 validity scales and their MMPI-2-RF counterparts were similarly effective in differentiating genuine responders from those exaggerating or feigning psychiatric and medical symptoms. These results provide reassurance for the use of both the MMPI-2 and MMPI-2-RF in settings where symptom exaggeration or feigning is likely. Findings are discussed in the context of the recently released MMPI-3.

2020 ◽  
Vol 25 (1) ◽  
pp. 51-72 ◽  
Author(s):  
Christian Franz Josef Woll ◽  
Felix D. Schönbrodt

Abstract. Recent meta-analyses come to conflicting conclusions about the efficacy of long-term psychoanalytic psychotherapy (LTPP). Our first goal was to reproduce the most recent meta-analysis by Leichsenring, Abbass, Luyten, Hilsenroth, and Rabung (2013), who found evidence for the efficacy of LTPP in the treatment of complex mental disorders. Our replicated effect sizes were in general slightly smaller. Second, we conducted an updated meta-analysis of randomized controlled trials comparing LTPP (lasting for at least 1 year and 40 sessions) to other forms of psychotherapy in the treatment of complex mental disorders. We followed a transparent research process according to open science standards and applied a series of elaborate meta-analytic procedures to test and control for publication bias. Our updated meta-analysis, comprising 191 effect sizes from 14 eligible studies, revealed small, statistically significant effect sizes at post-treatment for the outcome domains psychiatric symptoms, target problems, social functioning, and overall effectiveness (Hedges’ g ranging between 0.24 and 0.35). The effect size for the domain personality functioning (0.24) was not significant (p = .08). No signs of publication bias were detected. In light of a heterogeneous study set and some methodological shortcomings in the primary studies, these results should be interpreted cautiously. In conclusion, LTPP might be superior to other forms of psychotherapy in the treatment of complex mental disorders. Notably, our effect sizes represent the additional gain of LTPP versus other forms of primarily long-term psychotherapy. In this case, large differences in effect sizes are not to be expected.
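The publication-bias tests this abstract mentions include funnel-plot asymmetry checks such as Egger’s regression, which reduces to an ordinary least-squares fit of the standardised effect on precision. As a hedged illustration only (the numbers below are invented and this is not the authors’ actual analysis code), a minimal sketch:

```python
import numpy as np

def egger_test(effects, ses):
    """Egger's regression test for funnel-plot asymmetry.

    Regress effect/SE on 1/SE; an intercept far from zero suggests
    small-study (publication-bias-like) asymmetry.
    Returns (intercept, slope).
    """
    effects = np.asarray(effects, dtype=float)
    ses = np.asarray(ses, dtype=float)
    z = effects / ses          # standardised effects
    prec = 1.0 / ses           # precision
    X = np.column_stack([np.ones_like(prec), prec])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope

# Perfectly symmetric toy data: every study estimates the same effect,
# so the intercept should be (numerically) zero.
intercept, slope = egger_test([0.3, 0.3, 0.3, 0.3], [0.1, 0.2, 0.3, 0.4])
```

In practice one would also compute a standard error and p-value for the intercept; this sketch only shows the regression that the test is built on.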


1998 ◽  
Vol 9 (5) ◽  
pp. 402-404 ◽  
Author(s):  
Howard N. Garb ◽  
Colleen M. Florio ◽  
William M. Grove

Results from meta-analyses have been widely cited to defend the validity of the Rorschach. However, the meta-analyses have been flawed. For example, one meta-analysis included results that were obtained by calculating correlations but not results that were obtained by conducting t tests or analyses of variance. When we reanalyzed the data from the most widely cited meta-analysis (Parker, Hanson, & Hunsley, 1988), we found that for confirmatory studies (also called convergent-validity studies), the Minnesota Multiphasic Personality Inventory (MMPI) explained 23% to 30% of the variance, whereas the Rorschach explained only 8% to 13% of the variance. These results indicate that the Rorschach is not as valid as the MMPI.


2018 ◽  
Author(s):  
Christian Franz Josef Woll ◽  
Felix D. Schönbrodt



2018 ◽  
Vol 30 (7) ◽  
pp. 857-869 ◽  
Author(s):  
Kevin J. Bianchini ◽  
Luis E. Aguerrevere ◽  
Kelly L. Curtis ◽  
Tresa M. Roebuck-Spencer ◽  
F. Charles Frey ◽  
...  

2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
Vol 5 (1) ◽  
pp. e100135
Author(s):  
Xue Ying Zhang ◽  
Jan Vollert ◽  
Emily S Sena ◽  
Andrew SC Rice ◽  
Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents and is enhanced when animals are under stress. It is characterised by the preference of a rodent to seek shelter rather than expose itself to the aversive open area. The behaviour has been proposed as a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will conduct searches on three electronic databases to identify studies in which thigmotaxis was used as an outcome measure in a rodent model associated with persistent pain. All studies published up to the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies in two stages: (1) titles and abstracts, and (2) full texts. Data management and reporting: For meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality. Publication bias will be assessed using funnel plots, Egger’s regression, and trim-and-fill analysis. We will also extract stimulus-evoked limb withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of using the thigmotactic outcome measure in animal pain research so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines and disseminate the review findings through publication and conference presentations.
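Combining effect sizes "using a random-effects model", as this protocol plans, is commonly done with the DerSimonian-Laird estimator of the between-study variance. A minimal sketch with invented numbers, not the review's actual analysis code:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling via the DerSimonian-Laird estimator.

    Returns (pooled_effect, standard_error, tau2), where tau2 is the
    estimated between-study variance.
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # truncated at zero
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Three hypothetical studies with heterogeneous effects but equal variances:
pooled, se, tau2 = dersimonian_laird([0.2, 0.5, 0.8], [0.04, 0.04, 0.04])
```

With heterogeneous effects, tau2 comes out positive and the random-effects standard error is wider than the fixed-effect one, which is the behaviour the model is chosen for.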


2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most widely used effect sizes for quantifying the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, which expresses the difference between baseline and post-test within one group (the treatment group). Methods: In this paper, we argue that pre-post SMDs should be avoided in meta-analyses, and we describe why they can result in biased outcomes. Results: One important reason why pre-post SMDs should be avoided is that the baseline and post-test scores are not independent of each other. The pre-post correlation should be used in the calculation of the SMD, but its value is typically not known. We used data from an ‘individual patient data’ meta-analysis of trials comparing cognitive behaviour therapy and antidepressant medication to show that this problem can lead to considerable errors in the estimation of SMDs. An even more important reason why pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be separated from the effects of the intervention. Between-group SMDs are much better because they control for such variables; these variables affect the between-group SMD only when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
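The correlation problem can be made concrete with a short sketch (illustrative numbers only, not data from the paper): the pre-post SMD depends strongly on the usually unreported pre-post correlation r, while the between-group Hedges' g needs no such assumption.

```python
import math

def hedges_g_between(m1, sd1, n1, m2, sd2, n2):
    """Between-group standardised mean difference (Hedges' g)."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # small-sample bias correction
    return d * j

def prepost_smd(m_pre, m_post, sd_pre, sd_post, r):
    """Pre-post SMD standardised by the SD of the change scores.

    The SD of the change scores depends on the pre-post correlation r,
    which primary studies rarely report.
    """
    sd_change = math.sqrt(sd_pre ** 2 + sd_post ** 2 - 2 * r * sd_pre * sd_post)
    return (m_post - m_pre) / sd_change

# Same hypothetical raw data, two assumed correlations:
low_r = prepost_smd(20, 15, 8, 8, r=0.5)
high_r = prepost_smd(20, 15, 8, 8, r=0.9)
```

With these numbers, assuming r = 0.9 instead of r = 0.5 more than doubles the magnitude of the pre-post SMD, which is one way the unknown correlation can bias a meta-analysis.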


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background: Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but they are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose: To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods: We present both fixed and random effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results: We discuss the estimation of fixed and random effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although the results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses, where one compares treatment effects at later time points given observed data at earlier time points.
Limitations: Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions: Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
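At the fixed-effect level, the joint analysis described above amounts to generalised-least-squares pooling of each study's vector of effect sizes with its within-study covariance matrix. The effect vectors and covariances below are invented for illustration; the paper's formulas for deriving those covariances from primary data are not reproduced here.

```python
import numpy as np

# Hypothetical effect sizes (e.g. log odds ratios) at two time points
# from three trials, with assumed within-trial covariance matrices.
y = [np.array([0.30, 0.40]),
     np.array([0.10, 0.25]),
     np.array([0.50, 0.45])]
S = [np.array([[0.04, 0.02], [0.02, 0.05]]),
     np.array([[0.06, 0.03], [0.03, 0.06]]),
     np.array([[0.05, 0.02], [0.02, 0.04]])]

# Fixed-effect multivariate pooling by GLS:
#   pooled = (sum_i S_i^-1)^-1 (sum_i S_i^-1 y_i)
w = [np.linalg.inv(si) for si in S]            # per-study weight matrices
v_pooled = np.linalg.inv(sum(w))               # covariance of the pooled vector
pooled = v_pooled @ sum(wi @ yi for wi, yi in zip(w, y))
```

Because the weight matrices add up, the pooled vector's variances at each time point come out smaller than in any single study, which is the efficiency gain the authors attribute to joint over separate analyses.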

