Sex differences in friendship expectations: A meta-analysis

2010 ◽  
Vol 28 (6) ◽  
pp. 723-747 ◽  
Author(s):  
Jeffrey A. Hall

Friendship expectations are prescriptive normative behaviors and highly valued qualities in ideal same-sex friends. This paper reports the results of five meta-analyses of sex differences from 37 manuscripts (36 samples, N = 8,825). A small difference favoring females was detected in overall friendship expectations (d = .17). Friendship expectations were higher for females in three of four categories: symmetrical reciprocity (e.g., loyalty, genuineness; d = .17), communion (e.g., self-disclosure, intimacy; d = .39), and solidarity (e.g., mutual activities, companionship; d = .03), whereas agency (e.g., physical fitness, status; d = -.34) was higher in males. Overall expectations and symmetrical reciprocity showed small effect sizes. Medium effect sizes for communion favoring females and for agency favoring males support predictions of evolutionary theory.

PLoS ONE ◽  
2021 ◽  
Vol 16 (1) ◽  
pp. e0245061
Author(s):  
Monica C. O’Neill ◽  
Shaylea Badovinac ◽  
Rebecca Pillai Riddell ◽  
Jean-François Bureau ◽  
Carla Rumeo ◽  
...  

The present study aimed to systematically review and meta-analyze the concurrent and longitudinal relationship between caregiver sensitivity and preschool attachment measured using the Main and Cassidy (1988) and Cassidy and Marvin (1992) attachment classification systems. This review was pre-registered with the International Prospective Register of Systematic Reviews (PROSPERO; Registration Number CRD42017073417) and completed according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. The present review identified 36 studies made up of 21 samples (N = 3,847) examining the relationship between caregiver sensitivity and preschool attachment. Eight primary meta-analyses were conducted separately according to the proximity of the assessment of sensitivity to attachment (i.e., concurrent versus longitudinal), operationalization of caregiver sensitivity (i.e., unidimensional versus multidimensional) and attachment categorizations (i.e., secure-insecure versus organized-disorganized). Overall, the meta-analyses revealed higher levels of caregiver sensitivity among caregivers with secure and organized preschoolers, relative to insecure and disorganized preschoolers, respectively. Medium effect sizes (g = .46 to .59) were found for both longitudinal and concurrent associations between caregiver sensitivity and preschool attachment when a unidimensional measure of caregiver sensitivity was employed, compared to small to medium effect sizes (g = .34 to .49) when a multidimensional measure of caregiver sensitivity was employed. Child age at attachment measurement was a significant moderator of the longitudinal association between unidimensional caregiver sensitivity and preschool attachment. Future directions for the literature and clinical implications are discussed.
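The effect sizes above are reported as Hedges' g, a standardized mean difference with a small-sample correction. As an illustrative sketch (not the authors' code; the group summaries passed in are hypothetical), g can be computed from group means, standard deviations, and sizes:

```python
from math import sqrt

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference with Hedges' small-sample
    correction J ~= 1 - 3 / (4*df - 1), df = n1 + n2 - 2."""
    df = n1 + n2 - 2
    # pooled standard deviation across the two groups
    sp = sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    d = (m1 - m2) / sp          # Cohen's d
    j = 1 - 3 / (4 * df - 1)    # small-sample correction factor
    return j * d

# Hypothetical sensitivity scores for secure vs. insecure groups:
print(hedges_g(10.5, 2.0, 30, 9.5, 2.0, 30))  # ~0.49
```

With equal SDs of 2.0 and a mean difference of 1.0, the uncorrected d is 0.50; the correction shrinks it slightly, which matters most at small sample sizes.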


2018 ◽  
Author(s):  
Michele B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde Augusteijn ◽  
Elise Anne Victoire Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2,442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of .26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We conclude that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.


2020 ◽  
Vol 8 (4) ◽  
pp. 36
Author(s):  
Michèle B. Nuijten ◽  
Marcel A. L. M. van Assen ◽  
Hilde E. M. Augusteijn ◽  
Elise A. V. Crompvoets ◽  
Jelte M. Wicherts

In this meta-study, we analyzed 2442 effect sizes from 131 meta-analyses in intelligence research, published from 1984 to 2014, to estimate the average effect size, median power, and evidence for bias. We found that the average effect size in intelligence research was a Pearson’s correlation of 0.26, and the median sample size was 60. Furthermore, across primary studies, we found a median power of 11.9% to detect a small effect, 54.5% to detect a medium effect, and 93.9% to detect a large effect. We documented differences in average effect size and median estimated power between different types of intelligence studies (correlational studies, studies of group differences, experiments, toxicology, and behavior genetics). On average, across all meta-analyses (but not in every meta-analysis), we found evidence for small-study effects, potentially indicating publication bias and overestimated effects. We found no differences in small-study effects between different study types. We also found no convincing evidence for the decline effect, US effect, or citation bias across meta-analyses. We concluded that intelligence research does show signs of low power and publication bias, but that these problems seem less severe than in many other scientific fields.
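The power figures above can be illustrated with a Fisher-z approximation for testing a Pearson correlation against zero. This is a sketch, not the authors' analysis code: it plugs in the single median n of 60, whereas the paper's medians are computed across studies with varying sample sizes, so only the small-effect figure should land close to the reported value.

```python
from math import atanh, sqrt
from statistics import NormalDist

def corr_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0
    for a Pearson correlation, via the Fisher z transformation."""
    nd = NormalDist()
    crit = nd.inv_cdf(1 - alpha / 2)      # two-sided critical value
    delta = sqrt(n - 3) * atanh(r)        # noncentrality on the z scale
    return nd.cdf(delta - crit) + nd.cdf(-delta - crit)

# Cohen's small / medium / large correlations at the median n of 60:
for r in (0.1, 0.3, 0.5):
    print(f"r = {r}: power ~ {corr_power(r, 60):.1%}")
```

At n = 60 this gives roughly 12% power for r = .1, close to the reported median of 11.9% for small effects.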


2018 ◽  
Author(s):  
Olmo Van den Akker ◽  
Marcel A. L. M. van Assen ◽  
Mark van Vugt ◽  
Jelte M. Wicherts

Do men and women differ in trusting behavior? This question is directly relevant to social, economic, and political domains, yet the answer remains elusive. In this paper, we present a meta-analytic review of the literature on sex differences in the trust game and a variant, the gift-exchange game. Informed by both evolutionary and cultural perspectives, we predicted men to be more trusting and women to be more trustworthy in these games. The trust game meta-analyses encompassed 77 papers yielding 174 effect sizes based on 17,082 participants from 23 countries, while the gift-exchange game meta-analyses covered 15 papers reporting 35 effect sizes based on 1,362 participants from 19 countries. In the trust game, we found men to be more trusting than women, g = 0.22, but we found no significant sex difference in trustworthiness, g = 0.09. In the gift-exchange game, we found no significant sex difference in trust, g = 0.15, yet we did find that men are more trustworthy than women, g = 0.33. The results of the two meta-analyses show that behavior is inconsistent across the games. It seems that when monetary transfers are multiplied, men behave more cooperatively than women, but that there are no sex differences when such a multiplier is absent. This “male multiplier effect” is consistent with an evolutionary account emphasizing men’s historical role as resource provider. However, future research needs to substantiate this effect and provide a theoretical framework to explain it.


2018 ◽  
Vol 25 (2) ◽  
pp. 171-187 ◽  
Author(s):  
Ivo Marx ◽  
Thomas Hacker ◽  
Xue Yu ◽  
Samuele Cortese ◽  
Edmund Sonuga-Barke

Objective: Impulsive choices can lead to suboptimal decision making, a tendency that is especially marked in individuals with ADHD. We compared two different paradigms assessing impulsive choice: the simple choice paradigm (SCP) and the temporal discounting paradigm (TDP). Method: Random-effects meta-analyses on 37 group comparisons (22 SCP; 15 TDP) comprising 3,763 participants (53% ADHD). Results: Small-to-medium effect sizes emerged for both paradigms, confirming that participants with ADHD choose small immediate over large delayed rewards more frequently than controls. Moderation analyses showed that offering real rewards in the SCP almost doubled the odds ratio for participants with ADHD. Conclusion: We suggest that a stronger-than-normal aversion toward delay interacts with a demotivating effect of hypothetical rewards, both factors promoting impulsive choice in participants with ADHD. Furthermore, we suggest the SCP as the paradigm of choice due to its greater ecological validity, contextual sensitivity, and reliability.
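Random-effects pooling of group comparisons like these is commonly done with the DerSimonian–Laird estimator. The sketch below is an illustrative implementation, not the authors' code; `yi` and `vi` stand for hypothetical study-level effect estimates and their within-study variances.

```python
import numpy as np

def dersimonian_laird(yi, vi):
    """Pool effect sizes yi (with within-study variances vi) under a
    random-effects model, using the DerSimonian-Laird tau^2 estimator."""
    yi, vi = np.asarray(yi, float), np.asarray(vi, float)
    w = 1.0 / vi                               # fixed-effect weights
    mu_fe = np.sum(w * yi) / np.sum(w)         # fixed-effect pooled mean
    q = np.sum(w * (yi - mu_fe) ** 2)          # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(yi) - 1)) / c)   # between-study variance
    w_re = 1.0 / (vi + tau2)                   # random-effects weights
    mu = np.sum(w_re * yi) / np.sum(w_re)      # pooled effect
    se = np.sqrt(1.0 / np.sum(w_re))           # its standard error
    return mu, se, tau2
```

When the studies are homogeneous (Q no larger than its degrees of freedom), tau² is truncated at zero and the result coincides with the fixed-effect estimate.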


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2021 ◽  
Vol 5 (1) ◽  
pp. e100135
Author(s):  
Xue Ying Zhang ◽  
Jan Vollert ◽  
Emily S Sena ◽  
Andrew SC Rice ◽  
Nadia Soliman

Objective: Thigmotaxis is an innate predator-avoidance behaviour of rodents and is enhanced when animals are under stress. It is characterised by the preference of a rodent to seek shelter rather than expose itself to the aversive open area. The behaviour has been proposed to be a measurable construct that can address the impact of pain on rodent behaviour. This systematic review will assess whether thigmotaxis can be influenced by experimental persistent pain and attenuated by pharmacological interventions in rodents. Search strategy: We will search three electronic databases to identify studies in which thigmotaxis was used as an outcome measure contextualised to a rodent model associated with persistent pain. All studies published until the date of the search will be considered. Screening and annotation: Two independent reviewers will screen studies based on (1) titles and abstracts, and (2) full texts. Data management and reporting: For meta-analysis, we will extract thigmotactic behavioural data and calculate effect sizes. Effect sizes will be combined using a random-effects model. We will assess heterogeneity and identify its sources. A risk-of-bias assessment will be conducted to evaluate study quality. Publication bias will be assessed using funnel plots, Egger’s regression and trim-and-fill analysis. We will also extract stimulus-evoked limb-withdrawal data to assess its correlation with thigmotaxis in the same animals. The evidence obtained will provide a comprehensive understanding of the strengths and limitations of using the thigmotactic outcome measure in animal pain research so that future experimental designs can be optimised. We will follow the Preferred Reporting Items for Systematic Reviews and Meta-Analyses reporting guidelines and disseminate the review findings through publication and conference presentation.
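Egger's regression, one of the publication-bias checks named in such analysis plans, regresses each standardized effect on its precision; an intercept far from zero suggests funnel-plot asymmetry. A minimal sketch, not the protocol's code, with hypothetical effect estimates `yi` and standard errors `sei`:

```python
import numpy as np

def eggers_test(yi, sei):
    """Egger's regression test for funnel-plot asymmetry: regress the
    standardized effect y/se on precision 1/se (ordinary least squares).
    An intercept far from zero indicates small-study effects."""
    yi, sei = np.asarray(yi, float), np.asarray(sei, float)
    z = yi / sei                   # standardized effects
    prec = 1.0 / sei               # precisions
    X = np.column_stack([np.ones_like(prec), prec])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope
```

With perfectly symmetric data (every study estimating the same effect), the intercept is zero and the slope recovers the common effect; a full analysis would also report a significance test on the intercept.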


2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most used effect sizes to indicate the effects of treatments. It indicates the difference between a treatment and comparison group after treatment has ended, in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, indicating the difference between baseline and post-test within one (treatment) group. Methods: In this paper, we argue that these pre-post SMDs should be avoided in meta-analyses, and we describe why pre-post SMDs can result in biased outcomes. Results: One important reason why pre-post SMDs should be avoided is that the scores at baseline and post-test are not independent of each other. The pre-post correlation should be used in the calculation of the SMD, but its value is typically not known. We used data from an ‘individual patient data’ meta-analysis of trials comparing cognitive behaviour therapy and anti-depressive medication to show that this problem can lead to considerable errors in the estimation of the SMDs. Another, even more important, reason why pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these cannot be discerned from the effects of the intervention. Between-group SMDs are much better because they control for such variables, which only affect the between-group SMD when they are related to the effects of the intervention. Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
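The dependence on the usually unreported pre-post correlation can be made concrete with a small sketch (hypothetical summary statistics, not the paper's data): standardizing the same mean change by the SD of change scores yields very different SMDs as the assumed correlation r varies.

```python
from math import sqrt

def prepost_smd(m_pre, m_post, sd_pre, sd_post, r):
    """Pre-post SMD standardized by the SD of change scores, which
    requires the (usually unreported) pre-post correlation r."""
    sd_change = sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    return (m_post - m_pre) / sd_change

# Same raw change (1 point on a scale with SD = 2), different assumed r:
for r in (0.3, 0.5, 0.7, 0.9):
    print(f"r = {r}: SMD = {prepost_smd(20, 21, 2, 2, r):.2f}")
```

Here the identical raw improvement reads as an SMD of 0.42 when r = .3 but 1.12 when r = .9, which is exactly the kind of estimation error the abstract warns about.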


2012 ◽  
Vol 9 (5) ◽  
pp. 610-620 ◽  
Author(s):  
Thomas A Trikalinos ◽  
Ingram Olkin

Background: Many comparative studies report results at multiple time points. Such data are correlated because they pertain to the same patients, but they are typically meta-analyzed as separate quantitative syntheses at each time point, ignoring the correlations between time points. Purpose: To develop a meta-analytic approach that estimates treatment effects at successive time points and takes account of the stochastic dependencies of those effects. Methods: We present both fixed- and random-effects methods for multivariate meta-analysis of effect sizes reported at multiple time points. We provide formulas for calculating the covariance (and correlations) of the effect sizes at successive time points for four common metrics (log odds ratio, log risk ratio, risk difference, and arcsine difference) based on data reported in the primary studies. We work through an example of a meta-analysis of 17 randomized trials of radiotherapy and chemotherapy versus radiotherapy alone for the postoperative treatment of patients with malignant gliomas, where in each trial survival is assessed at 6, 12, 18, and 24 months post randomization. We also provide software code for the main analyses described in the article. Results: We discuss the estimation of fixed- and random-effects models and explore five options for the structure of the covariance matrix of the random effects. In the example, we compare separate (univariate) meta-analyses at each of the four time points with joint analyses across all four time points using the proposed methods. Although the results of univariate and multivariate analyses are generally similar in the example, there are small differences in the magnitude of the effect sizes and the corresponding standard errors. We also discuss conditional multivariate analyses, where one compares treatment effects at later time points given observed data at earlier time points. Limitations: Simulation and empirical studies are needed to clarify the gains of multivariate analyses compared with separate meta-analyses under a variety of conditions. Conclusions: Data reported at multiple time points are multivariate in nature and are efficiently analyzed using multivariate methods. The latter are an attractive alternative or complement to performing separate meta-analyses.
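In its simplest fixed-effect form, the joint analysis across time points reduces to generalized-least-squares pooling of per-study effect vectors with known within-study covariance matrices. The sketch below is an illustrative implementation under that assumption, not the authors' software; each `y` might be a vector of log odds ratios at 6, 12, 18, and 24 months, and each `V` its covariance matrix computed from the formulas the paper provides.

```python
import numpy as np

def mv_fixed_effect(effects, covs):
    """GLS pooling of multivariate effect sizes: one effect vector per
    study (e.g. log odds ratios at several time points) together with
    its known within-study covariance matrix."""
    p = len(effects[0])
    A = np.zeros((p, p))    # accumulates inverse-covariance weights
    b = np.zeros(p)         # accumulates weighted effects
    for y, V in zip(effects, covs):
        Vinv = np.linalg.inv(V)
        A += Vinv
        b += Vinv @ np.asarray(y, float)
    Sigma = np.linalg.inv(A)      # covariance of the pooled estimate
    return Sigma @ b, Sigma       # pooled effect vector, its covariance
```

With identity covariance matrices the pooled vector is just the element-wise mean across studies; off-diagonal covariance terms are what let the joint analysis borrow strength across time points.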

