The effect of perceived threat on human rights: A meta-analysis

2020
pp. 136843022096256
Author(s): Kevin R. Carriere, Anna Hallahan, Fathali M. Moghaddam

Individuals express support for civil liberties and human rights, but when threatened they tend to restrict rights for both others and themselves. However, whether rights are restricted to punish others or to protect ourselves remains unclear. This meta-analysis integrates findings on the effect of perceived threat on support for restrictions of civil liberties from 1997 to 2019. It includes 163 effect-size estimates from 46 different articles involving 91,716 participants. The presence of threat increased support for restrictions against outgroup members significantly more than against ingroup members, providing support for a punitive explanation of restrictions on civil liberties. These findings contribute to the debate on rights and their relationship with deservingness, suggesting that people delineate between those who deserve human rights and those who do not.

2021
pp. 152483802110216
Author(s): Brooke N. Lombardi, Todd M. Jensen, Anna B. Parisi, Melissa Jenkins, Sarah E. Bledsoe

Background: The association between a lifetime history of sexual victimization and the well-being of women during the perinatal period has received increasing attention. However, research investigating this relationship has yet to be systematically reviewed or quantitatively synthesized. Aim: This systematic review and meta-analysis aims to calculate the pooled effect size estimate of the statistical association between a lifetime history of sexual victimization and perinatal depression (PND). Method: Four bibliographic databases were systematically searched, and reference harvesting was conducted to identify peer-reviewed articles that empirically examined associations between a lifetime history of sexual victimization and PND. A random effects model was used to ascertain an overall pooled effect size estimate in the form of an odds ratio and corresponding 95% confidence interval (CI). Subgroup analyses were also conducted to assess whether particular study features and sample characteristics (e.g., race and ethnicity) influenced the magnitude of effect size estimates. Results: This review included 36 studies, with 45 effect size estimates available for meta-analysis. Women with a lifetime history of sexual victimization had 51% greater odds of experiencing PND relative to women with no history of sexual victimization (OR = 1.51, 95% CI [1.35, 1.67]). Effect size estimates varied considerably according to the PND instrument used in each study and the racial/ethnic composition of each sample. Conclusion: Findings provide compelling evidence for an association between a lifetime history of sexual victimization and PND. Future research should focus on screening practices and interventions that identify and support survivors of sexual victimization during the perinatal period.
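
The abstract does not name the specific random effects estimator, so the following is only a minimal sketch of random-effects pooling on the log odds ratio scale, assuming the common DerSimonian-Laird estimator and hypothetical study-level inputs.

```python
import numpy as np

# Hypothetical per-study log odds ratios and their sampling variances
log_or = np.array([0.30, 0.55, 0.41, 0.25, 0.60])
v = np.array([0.020, 0.045, 0.030, 0.015, 0.050])

# Fixed-effect weights and Cochran's Q for between-study heterogeneity
w = 1.0 / v
fe_mean = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - fe_mean) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2
k = len(log_or)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights, pooled log OR, and 95% CI back on the OR scale
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * log_or) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled OR = {np.exp(pooled):.2f}, "
      f"95% CI [{np.exp(pooled - 1.96 * se):.2f}, {np.exp(pooled + 1.96 * se):.2f}]")
```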


2013
Vol 2013
pp. 1-9
Author(s): Liansheng Larry Tang, Michael Caudy, Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, yet yield different or even discordant results. The lack of statistical methods for synthesizing such findings makes it challenging to properly interpret the results of multiple meta-analyses, especially when those results conflict. In this paper, we first introduce a method to synthesize meta-analytic results when multiple meta-analyses use the same type of summary effect estimate. When meta-analyses use different types of effect sizes, their results cannot be directly combined. We propose a two-step frequentist procedure that first converts the effect size estimates to the same metric and then summarizes them with a weighted mean estimate. Our proposed method offers several advantages over the existing method of Hemming et al. (2012). First, different types of summary effect sizes are accommodated. Second, our method provides the same overall effect size as conducting a meta-analysis on all of the individual studies from the multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
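
As an illustration of the two-step procedure, here is a minimal sketch with hypothetical summary estimates: step one converts a log odds ratio into Cohen's d using the Hasselblad-Hedges approximation (one common choice of conversion, not necessarily the paper's), and step two pools the converted estimates with an inverse-variance weighted mean.

```python
import numpy as np

# Step 1: convert each meta-analysis's summary estimate to a common metric.
# Meta-analysis A (hypothetical) reports a log odds ratio; convert it to
# Cohen's d via the Hasselblad-Hedges approximation: d = ln(OR) * sqrt(3) / pi.
log_or, v_log_or = 0.80, 0.040
d_a = log_or * np.sqrt(3) / np.pi
v_a = v_log_or * 3 / np.pi**2

# Meta-analysis B (hypothetical) already reports a standardized mean difference.
d_b, v_b = 0.35, 0.010

# Step 2: summarize the converted estimates with an inverse-variance weighted mean.
d = np.array([d_a, d_b])
v = np.array([v_a, v_b])
w = 1.0 / v
pooled = np.sum(w * d) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"combined d = {pooled:.2f} (SE = {se:.2f})")
```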


1998
Vol 21 (2)
pp. 123-148
Author(s): Tam E. O'Shaughnessy, H. Lee Swanson

The purpose of the present study was to synthesize research that directly compares children with and without learning disabilities in reading on immediate memory performance. Forty-one studies were included in the synthesis, which involved 161 effect sizes. The overall mean effect size estimate in favor of children without learning disabilities in reading was -.61 (SD = .87). Effect size estimates were submitted to a descriptive and a weighted least-squares regression analysis. Results from the full regression model indicated that children with learning disabilities were distinctly disadvantaged compared to average readers when memory manipulations required the naming of visual information and task conditions involved serial recall. Age, IQ, and reading scores were not significant predictors of effect size estimates. Most importantly, nonstrategic (type of task and materials) rather than strategic factors best predicted effect size estimates. The results also indicated that memory difficulties of readers with learning disabilities persisted across age, suggesting that a deficit model best captures the performance of children with learning disabilities. Results are discussed in relation to current developmental models of learning disabilities.
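
The weighted least-squares meta-regression described above can be sketched as follows, with entirely hypothetical effect sizes, sampling variances, and dummy-coded moderators (serial recall, visual naming); the original study's coding scheme is not reproduced here.

```python
import numpy as np

# Hypothetical effect size estimates, their sampling variances, and
# dummy-coded moderators (1 = serial recall task, 1 = visual naming required)
g = np.array([-0.90, -0.75, -0.40, -0.55, -0.20, -0.85])
v = np.array([0.05, 0.04, 0.06, 0.03, 0.07, 0.05])
serial = np.array([1, 1, 0, 1, 0, 1])
visual = np.array([1, 0, 0, 1, 0, 1])

# Weighted least squares: weight each study by inverse sampling variance
# and solve the weighted normal equations (X'WX) beta = X'Wy
X = np.column_stack([np.ones_like(v), serial, visual])
W = np.diag(1.0 / v)
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ g)
print(dict(zip(["intercept", "serial_recall", "visual_naming"], beta.round(3))))
```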


2021
Vol 44
Author(s): Robert M. Ross, Robbie C. M. van Aert, Olmo R. van den Akker, Michiel van Elk

Lee and Schwarz interpret meta-analytic research and replication studies as providing evidence for the robustness of cleansing effects. We argue that the currently available evidence is unconvincing because (a) publication bias and the opportunistic use of researcher degrees of freedom appear to have inflated meta-analytic effect size estimates, and (b) preregistered replications failed to find any evidence of cleansing effects.


2021
Author(s): Man Chen, James E. Pustejovsky

Single-case experimental designs (SCEDs) are used to study the effects of interventions on the behavior of individual cases by comparing repeated measurements of an outcome under different conditions. In research areas where SCEDs are prevalent, there is a need for methods to synthesize results across multiple studies. One approach to synthesis uses a multi-level meta-analysis (MLMA) model to describe the distribution of effect sizes across studies and across cases within studies. However, MLMA relies on having accurate sampling variances for each case's effect size estimate, which may not be available due to auto-correlation in the raw data series. One possible solution is to combine MLMA with robust variance estimation (RVE), which provides valid assessments of uncertainty even if the sampling variances of the effect size estimates are inaccurate. Another possible solution is to forgo MLMA and use simpler, ordinary least squares (OLS) methods with RVE. This study evaluates the performance of effect size estimators and methods of synthesizing SCEDs in the presence of auto-correlation, for several different effect size metrics, via a Monte Carlo simulation designed to emulate the features of real data series. Results demonstrate that the MLMA model with RVE performs well in terms of bias, accuracy, and confidence interval coverage for estimating overall average log response ratios. The OLS estimator combined with RVE performs best for estimating overall average Tau effect sizes. None of the available methods perform adequately for meta-analysis of within-case standardized mean differences.
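
For intuition only, here is a minimal sketch of the basic RVE idea with an intercept-only working model, hypothetical case-level data, and the simplest (CR0-style) sandwich estimator; the paper itself evaluates richer models and small-sample corrections.

```python
import numpy as np

# Hypothetical case-level effect size estimates, working inverse-variance
# weights (possibly inaccurate under auto-correlation), and study membership
y = np.array([0.20, 0.35, 0.28, 0.50, 0.44, 0.15, 0.22])
w = np.array([10.0, 8.0, 12.0, 6.0, 7.0, 11.0, 9.0])
study = np.array([1, 1, 2, 2, 2, 3, 3])

# Weighted mean effect under an intercept-only working model
beta = np.sum(w * y) / np.sum(w)
resid = y - beta

# CR0-style sandwich: sum weighted residuals within each study (cluster),
# so the standard error stays valid even if the working weights are wrong
cluster_sums = np.array([np.sum((w * resid)[study == j]) for j in np.unique(study)])
se_rve = np.sqrt(np.sum(cluster_sums**2)) / np.sum(w)
print(f"average effect = {beta:.3f}, RVE SE = {se_rve:.3f}")
```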


2017
Vol 22 (4)
pp. 347-377
Author(s): Arlin J. Benjamin, Sven Kepes, Brad J. Bushman

A landmark 1967 study showed that simply seeing a gun can increase aggression, a phenomenon called the “weapons effect.” Since 1967, many other studies have attempted to replicate and explain the weapons effect. This meta-analysis integrates the findings of weapons effect studies conducted from 1967 to 2017 and uses the General Aggression Model (GAM) to explain the weapons effect. It includes 151 effect-size estimates from 78 independent studies involving 7,668 participants. As predicted by the GAM, our naïve meta-analytic results indicate that the mere presence of weapons increased aggressive thoughts, hostile appraisals, and aggression, suggesting a cognitive route from weapons to aggression. Weapons did not significantly increase angry feelings. However, a comprehensive sensitivity analysis indicated that not all naïve mean estimates were robust to the presence of publication bias. In general, these results suggest that the published literature tends to overestimate the weapons effect for some outcomes and moderators.
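
The abstract does not say which sensitivity analyses were run; as one standard publication bias diagnostic, here is a minimal sketch of Egger's regression test on hypothetical study-level data.

```python
import numpy as np

# Hypothetical study-level effect sizes and standard errors
d = np.array([0.45, 0.60, 0.20, 0.75, 0.30, 0.55, 0.10])
se = np.array([0.10, 0.20, 0.08, 0.25, 0.12, 0.18, 0.06])

# Egger's test: regress standardized effects on precision; an intercept
# far from zero indicates funnel plot asymmetry, consistent with
# small-study effects or publication bias
z = d / se
precision = 1.0 / se
X = np.column_stack([np.ones_like(precision), precision])
(intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
print(f"Egger intercept = {intercept:.2f} (|value| >> 0 suggests asymmetry)")
```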


2020
Author(s): Molly Lewis, Maya B. Mathur, Tyler VanderWeele, Michael C. Frank

What is the best way to estimate the size of important effects? Should we aggregate across disparate findings using statistical meta-analysis, or instead run large, multi-lab replications (MLR)? A recent paper by Kvarven, Strømland, and Johannesson (2020) compared effect size estimates derived from these two different methods for 15 different psychological phenomena. The authors report that, for the same phenomenon, the meta-analytic estimate tends to be about three times larger than the MLR estimate. These results pose an important puzzle: What is the relationship between these two estimates? Kvarven et al. suggest that their results undermine the value of meta-analysis. In contrast, we argue that both meta-analysis and MLR are informative, and that the discrepancy between estimates obtained via the two methods is in fact still unexplained. Informed by re-analyses of Kvarven et al.’s data and by other empirical evidence, we discuss possible sources of this discrepancy and argue that understanding the relationship between estimates obtained from these two methods is an important puzzle for future meta-scientific research.


1998
Vol 24 (5)
pp. 577-592
Author(s): Herman Aguinis, Charles A. Pierce

We propose and illustrate a three-step procedure for testing moderator variable hypotheses meta-analytically. The procedure is based on Hedges and Olkin's (1985) meta-analytic approach, yet it incorporates study-level corrections for methodological and statistical artifacts that are typically advocated and used within psychometric approaches to meta-analysis (e.g., Hunter & Schmidt, 1990). The three-step procedure entails: (a) correcting study-level effect size estimates for across-study variability due to methodological and statistical artifacts, (b) testing the overall homogeneity of study-level effect size estimates after the artifactual sources of variance have been removed, and (c) testing the effects of hypothesized moderator variables.
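
Step (b) is the standard Hedges-Olkin homogeneity test; here is a minimal sketch applied to hypothetical artifact-corrected estimates.

```python
import numpy as np
from scipy import stats

# Hypothetical artifact-corrected effect size estimates (from step a)
# and their sampling variances
d = np.array([0.30, 0.42, 0.25, 0.55, 0.38])
v = np.array([0.02, 0.03, 0.02, 0.05, 0.03])

# Homogeneity test: under H0 of a single common effect, Q follows a
# chi-square distribution with k - 1 degrees of freedom
w = 1.0 / v
d_bar = np.sum(w * d) / np.sum(w)
q = np.sum(w * (d - d_bar) ** 2)
k = len(d)
p_value = stats.chi2.sf(q, df=k - 1)
print(f"Q = {q:.2f}, df = {k - 1}, p = {p_value:.3f}")
```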

