Self-Esteem, Self-Disclosure, Self-Expression, and Connection on Facebook: A Collaborative Replication Meta-Analysis

Author(s):  
Dana Charles Leighton ◽  
Nicole Legate ◽  
Sara LePine ◽  
Samantha Anderson ◽  
Jon E. Grahe

This replication meta-analysis explored the robustness of a highly cited study showing that people with low self-esteem perceived benefits of self-disclosing on Facebook compared to face-to-face interactions (i.e., Forest & Wood, 2012, Study 1). Seven pre-registered direct replication attempts of this study were conducted by research teams as part of the Collaborative Replication and Education Project (CREP), and the results were meta-analyzed to better understand the strength and consistency of the effects reported in the original study. Half of the original results were clearly supported: Self-esteem negatively predicted perceived safety of self-disclosure on Facebook as compared to face-to-face interactions (meta-analytic effect size = -.28, original effect size = -.31), and self-esteem did not relate to perceived opportunities for self-expression (across the seven replications, all 95% confidence intervals (CIs) for effect sizes included zero). However, two other findings received less support: Self-esteem only weakly and inconsistently predicted perceived advantages of self-disclosure on Facebook (meta-analytic effect size = -.16, original effect size = -.30), and, contrary to the original study, there was no evidence for self-esteem predicting perceived opportunities for connection with others on Facebook (six of the seven replication effect size CIs contained zero). The results provided further evidence regarding the original study's generalizability and robustness. The implications of the research and its relevance to social compensation theory are presented, and considerations for future multi-site replications are proposed.
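
To make the pooling step concrete, here is a minimal sketch of how per-site correlation effect sizes like these could be combined with a random-effects model via the Fisher z transform; the correlations and sample sizes in the sketch are invented placeholders, not the CREP teams' data.

```python
# Minimal sketch of pooling per-site correlations with a random-effects model
# via the Fisher z transform. All inputs are hypothetical, not the CREP data.
import math

r_values = [-0.31, -0.25, -0.30, -0.22, -0.35, -0.27, -0.26]   # hypothetical per-site correlations
n_values = [120, 95, 140, 110, 130, 100, 105]                   # hypothetical per-site sample sizes

# Fisher z transform puts correlations on an approximately normal scale.
z = [0.5 * math.log((1 + r) / (1 - r)) for r in r_values]
v = [1.0 / (n - 3) for n in n_values]                            # within-study variance of z

# DerSimonian-Laird estimate of the between-study variance tau^2.
w = [1.0 / vi for vi in v]
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
Q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(z) - 1)) / c)

# Random-effects pooled estimate, back-transformed to a correlation.
w_re = [1.0 / (vi + tau2) for vi in v]
z_pooled = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
print(round(math.tanh(z_pooled), 3))
```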

2018 ◽  
Author(s):  
Robbie Cornelis Maria van Aert

More and more scientific research is published nowadays, calling for statistical methods that enable researchers to get an overview of the literature in a particular research field. For that purpose, meta-analysis methods were developed that can be used for statistically combining the effect sizes of independent primary studies on the same topic. My dissertation focuses on two issues that are crucial when conducting a meta-analysis: publication bias and heterogeneity in primary studies' true effect sizes. Accurate estimation of both the meta-analytic effect size and the between-study variance in true effect size is crucial, since the results of meta-analyses are often used for policy making. Publication bias refers to situations where publication of a primary study depends on its results, and it distorts the results of a meta-analysis. We developed new meta-analysis methods, p-uniform and p-uniform*, which estimate effect sizes corrected for publication bias and also test for publication bias. Although the methods perform well in many conditions, these and the other existing methods are shown not to perform well when researchers use questionable research practices. Additionally, when publication bias is absent or limited, traditional methods that do not correct for publication bias outperform p-uniform and p-uniform*. Surprisingly, we found no strong evidence for the presence of publication bias in our pre-registered study of a large-scale data set consisting of 83 meta-analyses and 499 systematic reviews published in the fields of psychology and medicine.

We also developed two methods for meta-analyzing a statistically significant published original study and a replication of that study, a situation often encountered by researchers. One is a frequentist method, whereas the other is a Bayesian method. Both are shown to perform better than traditional meta-analytic methods that do not take the statistical significance of the original study into account. Analytical studies of both methods also show that the original study is sometimes better discarded for optimal estimation of the true effect size. We furthermore developed a program for determining the required sample size of a replication, analogous to power analysis in null hypothesis testing. Computing the required sample size with this method revealed that large sample sizes (approximately 650 participants) are required to distinguish a zero from a small true effect.

In the last two chapters, we derived a new multi-step estimator for the between-study variance in primary studies' true effect sizes, and examined the statistical properties of two methods (the Q-profile and generalized Q-statistic methods) for computing a confidence interval for the between-study variance in true effect size. We proved that the multi-step estimator converges to the Paule-Mandel estimator, which is nowadays one of the recommended estimators of the between-study variance in true effect sizes. Two Monte Carlo simulation studies showed that the coverage probabilities of the Q-profile and generalized Q-statistic methods can be substantially below the nominal coverage rate when the assumptions underlying the random-effects meta-analysis model are violated.
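
As a concrete illustration of the between-study variance estimation discussed above, here is a minimal sketch of the Paule-Mandel estimator (the estimator the multi-step procedure is shown to converge to); the effect sizes and sampling variances passed in at the bottom are invented for illustration.

```python
# Minimal sketch of the Paule-Mandel estimator of the between-study variance tau^2:
# find tau^2 >= 0 such that the generalized Q statistic equals k - 1.
def paule_mandel(y, v, tol=1e-8, max_iter=1000):
    k = len(y)

    def gen_q(tau2):
        w = [1.0 / (vi + tau2) for vi in v]
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
        return sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))

    if gen_q(0.0) <= k - 1:          # no excess heterogeneity beyond sampling error
        return 0.0
    lo, hi = 0.0, 1.0
    while gen_q(hi) > k - 1:         # expand the upper bracket until Q drops below k - 1
        hi *= 2.0
    for _ in range(max_iter):        # bisection on the monotonically decreasing gen_q
        mid = (lo + hi) / 2.0
        if gen_q(mid) > k - 1:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2.0

# Invented effect sizes and sampling variances, purely for illustration.
print(paule_mandel([0.30, 0.10, 0.55, 0.42], [0.02, 0.03, 0.025, 0.04]))
```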


2020 ◽  
pp. 152483802096734
Author(s):  
Mengtong Chen ◽  
Ko Ling Chan

Digital technologies are increasingly used in health-care delivery and are being introduced into efforts to prevent unintentional injury, violence, and suicide in order to reduce mortality. To understand the potential of digital health interventions (DHIs) to prevent and reduce these problems, we conducted a meta-analysis providing an overview of their effectiveness and of the characteristics related to their effects. We searched electronic databases and the reference lists of relevant reviews to identify randomized controlled trials (RCTs) published in or before March 2020 that evaluated DHIs for injury, violence, or suicide reduction. Based on the 34 RCTs included in the meta-analysis, the overall random-effects effect size was 0.21, and the effect sizes for reducing suicidal ideation, interpersonal violence, and unintentional injury were 0.17, 0.24, and 0.31, respectively, which can be regarded as comparable to the effect sizes of traditional face-to-face interventions. However, there was considerable heterogeneity between the studies. In conclusion, DHIs have great potential to reduce unintentional injury, violence, and suicide. Future research should explore the components that make DHIs successful in order to facilitate implementation and wider access.
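
As a rough illustration of how such heterogeneity is typically quantified, the sketch below computes Cochran's Q and Higgins' I-squared for a set of effect sizes; the values used are invented, not the 34 trials analyzed here.

```python
# Minimal sketch of quantifying between-study heterogeneity with Cochran's Q and
# Higgins' I^2. Effect sizes and sampling variances are illustrative placeholders.
def heterogeneity(y, v):
    w = [1.0 / vi for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)     # fixed-effect pooled estimate
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0    # % of variation due to heterogeneity
    return q, i2

q, i2 = heterogeneity([0.17, 0.24, 0.31, 0.05, 0.45], [0.01, 0.02, 0.015, 0.03, 0.02])
print(round(q, 2), round(i2, 1))
```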


2016 ◽  
Vol 26 (4) ◽  
pp. 364-368 ◽  
Author(s):  
P. Cuijpers ◽  
E. Weitz ◽  
I. A. Cristea ◽  
J. Twisk

Aims: The standardised mean difference (SMD) is one of the most commonly used effect sizes for indicating the effects of treatments. It expresses the difference between a treatment and a comparison group after treatment has ended in terms of standard deviations. Some meta-analyses, including several highly cited and influential ones, use the pre-post SMD, which expresses the difference between baseline and post-test within one (treatment) group.

Methods: In this paper, we argue that pre-post SMDs should be avoided in meta-analyses, and we describe the reasons why pre-post SMDs can result in biased outcomes.

Results: One important reason why pre-post SMDs should be avoided is that the scores at baseline and post-test are not independent of each other. The correlation between them should be used in the calculation of the SMD, but this value is typically not known. We used data from an 'individual patient data' meta-analysis of trials comparing cognitive behaviour therapy and anti-depressive medication to show that this problem can lead to considerable errors in the estimation of SMDs. Another, even more important reason why pre-post SMDs should be avoided in meta-analyses is that they are influenced by natural processes and by characteristics of the patients and settings, and these influences cannot be discerned from the effects of the intervention. Between-group SMDs are much better because they control for such variables; these variables only affect the between-group SMD when they are related to the effects of the intervention.

Conclusions: We conclude that pre-post SMDs should be avoided in meta-analyses, as using them probably results in biased outcomes.
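
A small worked example makes the correlation problem concrete: the sketch below shows how the same raw pre-post change translates into very different pre-post SMDs depending on the assumed baseline-post-test correlation. All numbers are invented for illustration.

```python
# Minimal sketch of why the pre-post SMD depends on the (usually unreported)
# correlation between baseline and post-test scores. Numbers are illustrative.
import math

m_pre, m_post = 24.0, 16.0      # hypothetical symptom means at baseline and post-test
sd_pre, sd_post = 8.0, 9.0      # hypothetical standard deviations

for r in (0.2, 0.5, 0.8):       # plausible values of the unknown pre-post correlation
    sd_change = math.sqrt(sd_pre**2 + sd_post**2 - 2 * r * sd_pre * sd_post)
    smd_prepost = (m_post - m_pre) / sd_change
    print(f"assumed r = {r}: pre-post SMD = {smd_prepost:.2f}")
# The same raw change of -8 points yields very different standardized effects
# depending on the assumed correlation, which is the bias problem described above.
```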


2002 ◽  
Vol 6 (1) ◽  
pp. 59-71 ◽  
Author(s):  
Jean M. Twenge ◽  
W. Keith Campbell

Socioeconomic status (SES) has a small but significant relationship with self-esteem (d = .15, r = .08) in a meta-analysis of 446 samples (total participant N = 312,940). Higher-SES individuals report higher self-esteem. The effect size is very small in young children, increases substantially during young adulthood, remains higher through middle age, and is smaller for adults over the age of 60. Gender interacts with birth cohort: the effect size increased over time for women but decreased over time for men. Asians and Asian Americans show a larger effect size, and occupation and education produce higher correlations with self-esteem than income does. The results are most consistent with a social indicator or salience model.
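
For readers wondering how the two reported metrics relate, the sketch below applies the standard d-to-r conversion (assuming roughly equal group sizes), which links a d of about .15 to an r of about .08.

```python
# Minimal sketch of converting between Cohen's d and the correlation r,
# assuming roughly equal group sizes (the usual approximation).
import math

def d_to_r(d):
    return d / math.sqrt(d**2 + 4)

def r_to_d(r):
    return 2 * r / math.sqrt(1 - r**2)

print(round(d_to_r(0.15), 3))   # about 0.075, i.e. an r of roughly .08
print(round(r_to_d(0.08), 3))   # about 0.16, i.e. a d of roughly .15
```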


2013 ◽  
Vol 2013 ◽  
pp. 1-9 ◽  
Author(s):  
Liansheng Larry Tang ◽  
Michael Caudy ◽  
Faye Taxman

Multiple meta-analyses may use similar search criteria and focus on the same topic of interest, yet they may yield different or even discordant results. The lack of statistical methods for synthesizing these findings makes it challenging to properly interpret the results of multiple meta-analyses, especially when their results conflict. In this paper, we first introduce a method for synthesizing meta-analytic results when multiple meta-analyses use the same type of summary effect estimate. When meta-analyses use different types of effect sizes, their results cannot be directly combined. We propose a two-step frequentist procedure that first converts the effect size estimates to the same metric and then summarizes them with a weighted mean estimate. Our proposed method offers several advantages over the existing methods of Hemming et al. (2012). First, different types of summary effect sizes are considered. Second, our method provides the same overall effect size as conducting a meta-analysis on all of the individual studies from the multiple meta-analyses. We illustrate the application of the proposed methods in two examples and discuss their implications for the field of meta-analysis.
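
The sketch below illustrates the general two-step idea in its simplest form: convert a log odds ratio to a standardized mean difference with the Hasselblad-Hedges conversion, then combine the estimates with an inverse-variance weighted mean. It is a simplified stand-in for, not a reproduction of, the procedure proposed in the paper, and the numbers are invented.

```python
# Minimal sketch of the two-step idea: (1) convert effect sizes reported in
# different metrics to a common one, (2) combine them with an inverse-variance
# weighted mean. Numbers are illustrative, not from the paper's examples.
import math

def log_or_to_d(log_or, var_log_or):
    """Hasselblad-Hedges conversion of a log odds ratio to a standardized mean difference."""
    d = log_or * math.sqrt(3) / math.pi
    var_d = var_log_or * 3 / math.pi**2
    return d, var_d

# Step 1: one meta-analysis reported a log odds ratio, the other already reported d.
d1, v1 = log_or_to_d(0.65, 0.04)
d2, v2 = 0.30, 0.02

# Step 2: inverse-variance weighted mean of the converted estimates.
w1, w2 = 1.0 / v1, 1.0 / v2
d_combined = (w1 * d1 + w2 * d2) / (w1 + w2)
se_combined = math.sqrt(1.0 / (w1 + w2))
print(round(d_combined, 3), round(se_combined, 3))
```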


2020 ◽  
pp. 1-9
Author(s):  
Devin S. Kielur ◽  
Cameron J. Powden

Context: Impaired dorsiflexion range of motion (DFROM) has been established as a predictor of lower-extremity injury. Compression tissue flossing (CTF) may address tissue restrictions associated with impaired DFROM; however, a consensus on these effects is lacking. Objectives: To summarize the available literature regarding the effect of CTF on DFROM in physically active individuals. Evidence Acquisition: PubMed and EBSCOhost (CINAHL, MEDLINE, and SPORTDiscus) were searched from 1965 to July 2019 for related articles using combined terms related to CTF and DFROM. Articles were included if they measured the immediate effects of CTF on DFROM. Methodological quality was assessed using the Physiotherapy Evidence Database scale, and the level of evidence was assessed using the Strength of Recommendation Taxonomy. The magnitude of CTF effects, both from pre-CTF to post-CTF and compared with a control of range-of-motion activities only, was examined using Hedges g effect sizes and 95% confidence intervals. Random-effects meta-analysis was performed to synthesize DFROM changes. Evidence Synthesis: A total of 6 studies were included in the analysis. The average Physiotherapy Evidence Database score was 60% (range = 30%–80%), with 4 of the 6 studies considered high quality and 2 considered low quality. Meta-analysis indicated no DFROM improvement for CTF compared with range-of-motion activities only (effect size = 0.124; 95% confidence interval, −0.137 to 0.384; P = .352) and moderate improvement from pre-CTF to post-CTF (effect size = 0.455; 95% confidence interval, 0.022 to 0.889; P = .040). Conclusions: There is grade B evidence that CTF may have no effect on DFROM when compared with a control of range-of-motion activities only and that it results in moderate improvements from pre-CTF to post-CTF. This suggests that the DFROM improvements were most likely due to the exercises completed rather than to the band application.
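
For reference, here is a minimal sketch of how a Hedges g effect size and its 95% confidence interval are computed from two-group summaries; the group means, SDs, and sample sizes are invented, not taken from the included studies.

```python
# Minimal sketch of a Hedges g effect size with its 95% confidence interval.
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    sd_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    j = 1 - 3 / (4 * (n1 + n2) - 9)                        # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    ci = (g - 1.96 * math.sqrt(var_g), g + 1.96 * math.sqrt(var_g))
    return g, ci

# Hypothetical post-intervention DFROM (in degrees) for CTF vs. control groups.
print(hedges_g(42.0, 4.5, 15, 40.5, 5.0, 15))
```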


Author(s):  
Helen G. M. Vossen ◽  
Maria Koutamanis ◽  
Joseph B. Walther

This study investigated how receiving confirming vs. disconfirming feedback to one's self-disclosure affects self-esteem, the role of giving reciprocal feedback in this relationship, and how these effects differ between online and face-to-face communication. Using a two (communication mode: online vs. face-to-face) by two (feedback valence: confirming vs. disconfirming) between-subjects experiment, we found that feedback had a significant indirect effect on self-esteem through the receiver's reciprocal feedback. This indirect effect differed between online and face-to-face communication: in online communication, participants reciprocated negative feedback when they received it more than they did face-to-face, and this reciprocal feedback enhanced their self-esteem online but not face-to-face. Thus, although people tend to respond more negatively to negative comments in online conversations, the process as a whole boosts rather than hinders their self-esteem.
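
As a rough sketch of the kind of indirect effect reported here (feedback valence affecting self-esteem through reciprocal feedback), the code below estimates an a*b indirect effect from simulated data with plain least-squares fits; it is not the study's actual analysis, and all variable names and numbers are illustrative.

```python
# Minimal sketch of an indirect (mediated) effect: feedback valence -> reciprocal
# feedback -> self-esteem. Simulated data, plain least-squares fits.
import random
random.seed(1)

n = 200
x = [random.choice([0.0, 1.0]) for _ in range(n)]                        # feedback valence (0 = disconfirming, 1 = confirming)
m = [0.6 * xi + random.gauss(0, 1) for xi in x]                          # mediator: reciprocal feedback
y = [0.5 * mi + 0.1 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]   # outcome: self-esteem

def slope(pred, out):
    """Simple least-squares slope of out on pred."""
    mp, mo = sum(pred) / len(pred), sum(out) / len(out)
    num = sum((p - mp) * (o - mo) for p, o in zip(pred, out))
    return num / sum((p - mp) ** 2 for p in pred)

def partial_slope(m, x, y):
    """Slope of y on m, controlling for x (two-predictor least squares)."""
    mm, mx, my = sum(m) / len(m), sum(x) / len(x), sum(y) / len(y)
    cm = [v - mm for v in m]; cx = [v - mx for v in x]; cy = [v - my for v in y]
    s_mm = sum(v * v for v in cm); s_xx = sum(v * v for v in cx)
    s_mx = sum(u * w for u, w in zip(cm, cx))
    s_my = sum(u * w for u, w in zip(cm, cy))
    s_xy = sum(u * w for u, w in zip(cx, cy))
    return (s_my * s_xx - s_xy * s_mx) / (s_mm * s_xx - s_mx ** 2)

a = slope(x, m)               # path a: feedback -> reciprocal feedback
b = partial_slope(m, x, y)    # path b: reciprocal feedback -> self-esteem, controlling for feedback
print(round(a * b, 3))        # indirect effect a * b
```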


Author(s):  
Michael S. Rosenberg ◽  
Hannah R. Rothstein ◽  
Jessica Gurevitch

One of the fundamental concepts in meta-analysis is that of the effect size. An effect size is a statistical parameter that can be used to compare, on the same scale, the results of different studies in which a common effect of interest has been measured. This chapter describes the conventional effect sizes most commonly encountered in ecology and evolutionary biology, and the types of data associated with them. While the choice of a specific measure of effect size may influence the interpretation of results, it does not influence the actual inference methods of meta-analysis. One critical point to remember is that different measures of effect size cannot be combined in a single meta-analysis: once you have chosen how to estimate effect size, you need to use that measure for all of the studies being analyzed.
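
The sketch below illustrates the point about non-interchangeable metrics: the same invented two-group summary yields clearly different values on the log response ratio and standardized mean difference scales, so mixing the two in one analysis would be meaningless.

```python
# Minimal sketch showing that different effect size metrics are not interchangeable:
# the same two-group summary gives numerically different values on the log response
# ratio and standardized mean difference scales. Group summaries are invented.
import math

m_trt, sd_trt, n_trt = 12.0, 3.0, 20    # hypothetical treatment mean, SD, n
m_ctl, sd_ctl, n_ctl = 9.0, 2.5, 20     # hypothetical control mean, SD, n

ln_rr = math.log(m_trt / m_ctl)         # log response ratio (common in ecology)

sd_pooled = math.sqrt(((n_trt - 1) * sd_trt**2 + (n_ctl - 1) * sd_ctl**2) / (n_trt + n_ctl - 2))
smd = (m_trt - m_ctl) / sd_pooled       # standardized mean difference (Cohen's d)

print(round(ln_rr, 3), round(smd, 3))   # ~0.288 vs ~1.087: same data, different scales
```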


Author(s):  
Noémie Laurens

This chapter illustrates meta-analysis, which is a specific type of literature review, and more precisely a type of research synthesis, alongside traditional narrative reviews. Unlike in primary research, the unit of analysis in a meta-analysis is the result of an individual study. And unlike traditional reviews, meta-analysis applies only to empirical research studies with quantitative findings that are conceptually comparable and configured in similar statistical forms. What further distinguishes meta-analysis from other research syntheses is its method of synthesizing the results of studies, namely the use of statistics and, in particular, of effect sizes. An effect size represents the degree to which the phenomenon under study exists.


2019 ◽  
Vol 34 (6) ◽  
pp. 876-876
Author(s):  
A Walker ◽  
A Hauson ◽  
S Sarkissians ◽  
A Pollard ◽  
C Flora-Tostado ◽  
...  

Objective: The Category Test (CT) has consistently been found to be sensitive in detecting the effects of alcohol on the brain. However, this test has not been as widely used in examining the effects of methamphetamine. The current meta-analysis compared effect sizes of studies that have examined performance on the CT in alcohol-dependent versus methamphetamine-dependent participants.

Data selection: Three researchers independently searched nine databases (e.g., PsycINFO, PubMed, ProceedingsFirst), extracted the required data, and calculated effect sizes. Inclusion criteria identified studies that (a) compared alcohol- or methamphetamine-dependent groups to healthy controls and (b) matched groups on age, education, or IQ (at least 2 of the 3). Studies were excluded if participants were reported to have Axis I diagnoses (other than alcohol or methamphetamine dependence) or comorbidities known to impact neuropsychological functioning. Sixteen articles were coded and analyzed for the current study.

Data synthesis: Alcohol studies showed a large effect size (g = 0.745, p < 0.001), while methamphetamine studies evidenced a moderate effect size (g = 0.406, p = 0.001), both without statistically significant heterogeneity (I2 = 0). Subgroup analysis revealed a statistically significant difference between the effect sizes from alcohol versus methamphetamine studies (Q-between = 5.647, p = 0.017).

Conclusions: The CT is sensitive to the effects of both alcohol and methamphetamine and should be considered when examining dependent patients who might exhibit problem-solving, concept-formation, and set-loss difficulties in everyday living.
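
For context, a Q-between statistic comparing two subgroups can be sketched as the squared difference between the pooled effects divided by the sum of their variances, referred to a chi-square distribution with 1 df. In the sketch below, only the two pooled g values come from the abstract; the standard errors are invented.

```python
# Minimal sketch of a two-subgroup Q-between comparison.
g_alcohol, se_alcohol = 0.745, 0.105   # pooled g for alcohol studies (SE hypothetical)
g_meth, se_meth = 0.406, 0.095         # pooled g for methamphetamine studies (SE hypothetical)

q_between = (g_alcohol - g_meth) ** 2 / (se_alcohol ** 2 + se_meth ** 2)
print(round(q_between, 2))             # compare against the chi-square(1) critical value of 3.84
```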

