Predictive physiological anticipation preceding seemingly unpredictable stimuli: An update of Mossbridge et al’s meta-analysis

F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 407
Author(s):  
Michael Duggan ◽  
Patrizio Tressoldi

Background: This is an update of Mossbridge et al.’s meta-analysis of physiological anticipation preceding seemingly unpredictable stimuli, which reported an overall effect size of 0.21 (95% Confidence Interval: 0.13-0.29). Methods: Eighteen new peer-reviewed and non-peer-reviewed studies completed from January 2008 to October 2017 were retrieved, describing a total of 26 experiments and 34 associated effect sizes. Results: The overall weighted effect size, estimated with a frequentist multilevel random-effects model, was 0.29 (95% Confidence Interval: 0.19-0.38); the overall weighted effect size, estimated with a multilevel Bayesian model, was 0.29 (95% Credible Interval: 0.18-0.39). The effect size of peer-reviewed studies was slightly higher (0.38; 95% Confidence Interval: 0.27-0.48) than that of non-peer-reviewed articles (0.22; 95% Confidence Interval: 0.05-0.39). Statistical estimation of publication bias using the Copas model suggests that the main findings are not contaminated by publication bias. Conclusions: In summary, this update confirms the main findings reported in Mossbridge et al.’s meta-analysis.

F1000Research ◽  
2018 ◽  
Vol 7 ◽  
pp. 407 ◽  
Author(s):  
Michael Duggan ◽  
Patrizio Tressoldi

Background: This is an update of Mossbridge et al.’s meta-analysis of physiological anticipation preceding seemingly unpredictable stimuli, which reported an overall effect size of 0.21 (95% Confidence Interval: 0.13-0.29). Methods: Nineteen new peer-reviewed and non-peer-reviewed studies completed from January 2008 to June 2018 were retrieved, describing a total of 27 experiments and 36 associated effect sizes. Results: The overall weighted effect size, estimated with a frequentist multilevel random-effects model, was 0.28 (95% Confidence Interval: 0.18-0.38); the overall weighted effect size, estimated with a multilevel Bayesian model, was 0.28 (95% Credible Interval: 0.18-0.38). The weighted mean effect size of peer-reviewed studies was higher than that of non-peer-reviewed studies, but with overlapping confidence intervals: peer-reviewed, 0.36 (95% Confidence Interval: 0.26-0.47); non-peer-reviewed, 0.22 (95% Confidence Interval: 0.05-0.39). Similarly, the weighted mean effect size of preregistered studies was higher than that of non-preregistered studies: preregistered, 0.31 (95% Confidence Interval: 0.18-0.45); non-preregistered, 0.24 (95% Confidence Interval: 0.08-0.41). Statistical estimation of publication bias using the Copas selection model suggests that the main findings are not contaminated by publication bias. Conclusions: In summary, this update confirms the main findings reported in Mossbridge et al.’s meta-analysis.
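As context for the estimates above, the sketch below shows how a weighted random-effects summary and its 95% confidence interval are commonly computed. It is a simplified DerSimonian-Laird calculation on hypothetical effect sizes and standard errors, not the multilevel frequentist or Bayesian models actually fitted in the update.

```python
import numpy as np
from scipy import stats

# Hypothetical study-level data: standardized effect sizes and standard errors.
# (The actual study-level values live in the papers' datasets, not in the abstract.)
yi = np.array([0.35, 0.10, 0.42, 0.28, 0.15, 0.31])   # effect sizes
sei = np.array([0.12, 0.15, 0.20, 0.10, 0.18, 0.14])  # standard errors
vi = sei ** 2

# Fixed-effect weights and Cochran's Q (heterogeneity)
w_fe = 1.0 / vi
q = np.sum(w_fe * (yi - np.sum(w_fe * yi) / np.sum(w_fe)) ** 2)
df = len(yi) - 1

# DerSimonian-Laird estimate of the between-study variance tau^2
c = np.sum(w_fe) - np.sum(w_fe ** 2) / np.sum(w_fe)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights, pooled estimate, and 95% confidence interval
w_re = 1.0 / (vi + tau2)
mu = np.sum(w_re * yi) / np.sum(w_re)
se_mu = np.sqrt(1.0 / np.sum(w_re))
ci = mu + np.array([-1, 1]) * stats.norm.ppf(0.975) * se_mu

print(f"Pooled effect = {mu:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], tau^2 = {tau2:.3f}")
```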


2020 ◽  
Vol 35 (6) ◽  
pp. 817-817
Author(s):  
Eilenberger D

Objective: This meta-analysis examined the potential for executive function, episodic memory, and motor function to differentiate HIV-associated neurocognitive disorder (HAND) from Alzheimer’s disease (AD), in an attempt to aid accurate differential diagnosis. Data Selection: The literature search identified records investigating neuropsychological test performance associated with HAND and AD. Databases used were PsycINFO, Academic Search Complete, and Medline with Full Text. Eligibility was assessed using the following inclusion criteria: (a) the study examines HAND or AD, (b) diagnosis is determined using standard diagnostic criteria, (c) the study contains data regarding executive function, episodic memory, and/or motor function, (d) the study is published in English, (e) the study is quantitative, and (f) the study contains statistical information for effect size calculations. A total of 947 relevant studies were initially identified; twenty were included. Data Synthesis: Group-difference effect sizes were converted or calculated using Cohen’s d and Cohen’s (1988) conventions. Three weighted effect sizes were calculated for the constructs of interest for each disorder. The weighted effect size for executive function was large in each group (HAND d = 1.28; AD d = 1.57). A large weighted effect size was found for episodic memory in AD (d = −2.17) and a medium effect size in HAND (d = −0.65). A large weighted effect size was found for motor function in AD (d = 3.60), while a small effect size was found for HAND (d = 0.27). Conclusions: Level of impairment in episodic memory and motor function can be used to differentiate HAND from AD. Executive function lacked the differences needed for diagnostic differentiation. Future research should directly compare neuropsychological performance between HAND and AD.
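The group-difference effect sizes above are Cohen’s d values pooled across studies. The sketch below illustrates the usual calculation on hypothetical means, standard deviations, and sample sizes; the sample-size weighting is an assumption for illustration, not necessarily the scheme used in this meta-analysis.

```python
import numpy as np

def cohens_d(m1, sd1, n1, m2, sd2, n2):
    """Cohen's d for two independent groups, using the pooled standard deviation."""
    pooled_sd = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical scores: patient group vs. healthy controls on one cognitive measure
d1 = cohens_d(m1=42.0, sd1=9.5, n1=30, m2=55.0, sd2=10.0, n2=32)

# Pooling several study-level d values into one weighted mean
# (sample-size weighting is one common, simple choice)
ds = np.array([d1, -1.10, -1.45])   # hypothetical per-study effect sizes
ns = np.array([62, 48, 80])         # corresponding total sample sizes
weighted_d = np.sum(ns * ds) / np.sum(ns)

print(f"single-study d = {d1:.2f}, weighted mean d = {weighted_d:.2f}")
```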


2013 ◽  
Vol 280 (1768) ◽  
pp. 20131615 ◽  
Author(s):  
Adrian V. Jaeggi ◽  
Michael Gurven

Helping, i.e. behaviour increasing the fitness of others, can evolve when directed towards kin or reciprocating partners. These predictions have been tested in the context of food sharing both in human foragers and non-human primates. Here, we performed quantitative meta-analyses on 32 independent study populations to (i) test for overall effects of reciprocity on food sharing while controlling for alternative explanations, methodological biases, publication bias and phylogeny and (ii) compare the relative effects of reciprocity, kinship and tolerated scrounging, i.e. sharing owing to costs imposed by others. We found a significant overall weighted effect size for reciprocity of r = 0.20–0.48 for the most and least conservative measure, respectively. Effect sizes did not differ between humans and other primates, although there were species differences in in-kind reciprocity and trade. The relative effect of reciprocity in sharing was similar to those of kinship and tolerated scrounging. These results indicate a significant independent contribution of reciprocity to human and primate helping behaviour. Furthermore, similar effect sizes in humans and primates speak against cognitive constraints on reciprocity. This study is the first to use meta-analyses to quantify these effects on human helping and to directly compare humans and other primates.


F1000Research ◽  
2020 ◽  
Vol 9 ◽  
pp. 826
Author(s):  
Patrizio E. Tressoldi ◽  
Lance Storm

This meta-analysis is an investigation into anomalous perception (i.e., conscious identification of information without any conventional sensorial means). The technique used for eliciting an effect is the ganzfeld condition (a form of sensory homogenization that eliminates distracting peripheral noise). The database consists of peer-reviewed studies published between January 1974 and June 2020 inclusive. The overall effect size will be estimated using a frequentist model and a Bayesian random model. Moderator analysis will be used to examine the influence of level of experience of participants and the type of task. Publication bias will be estimated by using three different tests. Trend analysis will be conducted on the cumulative database.


2021 ◽  
Vol 12 ◽  
Author(s):  
Hanna Suh ◽  
Jisun Jeong

Objectives: Self-compassion functions as a psychological buffer in the face of negative life experiences. Considering that suicidal thoughts and behaviors (STBs) and non-suicidal self-injury (NSSI) are often accompanied by intense negative feelings about the self (e.g., self-loathing, self-isolation), self-compassion may have the potential to alleviate these negative attitudes and feelings toward oneself. This meta-analysis investigated the associations of self-compassion with STBs and NSSI. Methods: A literature search finalized in August 2020 identified 18 eligible studies (13 STB effect sizes and 7 NSSI effect sizes) including 8,058 participants. Two studies were longitudinal; the remainder were cross-sectional. A random-effects meta-analysis was conducted using CMA 3.0. Subgroup analyses, meta-regression, and publication bias analyses were conducted to probe potential sources of heterogeneity. Results: With regard to STBs, a moderate effect size was found for self-compassion (r = −0.34, k = 13). The positively worded subscales exhibited statistically significant effect sizes: self-kindness (r = −0.21, k = 4), common humanity (r = −0.20, k = 4), and mindfulness (r = −0.15, k = 4). For NSSI, a small effect size was found for self-compassion (r = −0.29, k = 7). Heterogeneity was large (I² = 80.92% for STBs; I² = 86.25% for NSSI), and publication bias was minimal. Subgroup analyses showed that sample type was a moderator: a larger effect size was observed in clinical patients than in sexually/racially marginalized individuals, college students, and healthy-functioning community adolescents. Conclusions: Self-compassion was negatively associated with STBs and NSSI, and the effect size was larger for STBs than for NSSI. More evidence from future longitudinal or intervention studies is needed to gauge the clinically significant protective role that self-compassion may play.
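Correlational effect sizes such as those above are typically pooled on the Fisher’s z scale and back-transformed to r. The sketch below uses hypothetical correlations and simple inverse-variance (fixed-effect) pooling for brevity; the study itself used a random-effects model in CMA 3.0.

```python
import numpy as np
from scipy import stats

# Hypothetical correlations between self-compassion and an outcome, with sample sizes
r = np.array([-0.40, -0.28, -0.35, -0.31])
n = np.array([250, 410, 180, 320])

# Fisher's z transformation; the variance of z is 1 / (n - 3)
z = np.arctanh(r)
v = 1.0 / (n - 3)

# Inverse-variance pooling on the z scale, then back-transform to r
w = 1.0 / v
z_pooled = np.sum(w * z) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci_z = z_pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se

r_pooled, ci_r = np.tanh(z_pooled), np.tanh(ci_z)
print(f"pooled r = {r_pooled:.2f}, 95% CI [{ci_r[0]:.2f}, {ci_r[1]:.2f}]")
```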


2020 ◽  
Vol 6 (2) ◽  
pp. 112-127
Author(s):  
Laurențiu Maricuțoiu

The present paper discusses the fundamental principles of meta-analysis as a statistical method for summarising the results of correlational studies. We address fundamental issues such as the purpose of meta-analysis and the problems associated with study artefacts. The paper also offers recommendations for selecting studies for a meta-analysis, identifying the relevant information within those studies, and computing mean effect sizes, confidence intervals, and heterogeneity indexes for the mean effect size. Finally, we present guidance on reporting meta-analysis results.
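As a minimal numerical illustration of the mean effect size, confidence interval, and heterogeneity indexes discussed in the paper (with made-up study inputs, not data from the paper):

```python
import numpy as np
from scipy import stats

# Hypothetical study effect sizes (e.g., correlations on the Fisher's z scale) and variances
yi = np.array([0.25, 0.10, 0.40, 0.32, 0.18])
vi = np.array([0.010, 0.008, 0.020, 0.012, 0.015])

# Inverse-variance weights, pooled (fixed-effect) mean, and its 95% confidence interval
w = 1.0 / vi
pooled = np.sum(w * yi) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
ci = pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se

# Cochran's Q and the I^2 heterogeneity index
q = np.sum(w * (yi - pooled) ** 2)
df = len(yi) - 1
i2 = max(0.0, (q - df) / q) * 100  # % of total variability due to between-study heterogeneity

print(f"pooled = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], "
      f"Q = {q:.2f} (df = {df}), I^2 = {i2:.1f}%")
```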


Author(s):  
Michael J. Constantino ◽  
Alice E. Coyne ◽  
James F. Boswell ◽  
Brittany R. Iles ◽  
Andreea Vîslă

Patients’ perception of treatment credibility represents their belief about a treatment’s personal logicality, suitability, and efficaciousness. Although credibility has long been considered an important common factor bearing on clinical outcome, there have been no systematic reviews of the credibility–outcome association. In this chapter, the authors first discuss definitions of credibility and similar constructs, common measures of credibility, clinical examples of treatment credibility perception, and several landmark studies. The chapter then presents a meta-analysis of the association between patients’ credibility perception and their posttreatment outcomes. The meta-analysis was conducted on 24 independent samples with 1,504 patients. The overall weighted effect size was r = .12, or d = .24. Next, the authors present moderators and mediators of the treatment credibility–outcome link (the former in the context of the meta-analysis), evidence supporting causality in the association, patient factors contributing to treatment credibility perception, and limitations of the research base. Finally, the chapter reviews diversity considerations, training implications, and therapeutic practices with regard to patient-perceived treatment credibility and its association with therapy outcome.
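The r-to-d conversion reported above (r = .12, d = .24) is consistent with the standard formula for converting a correlation into a standardized mean difference, d = 2r / sqrt(1 − r²); a quick check, assuming that standard conversion was used:

```python
import math

r = 0.12
d = 2 * r / math.sqrt(1 - r ** 2)  # standard conversion from a correlation to Cohen's d
print(round(d, 2))  # 0.24
```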


2016 ◽  
Vol 77 (4) ◽  
pp. 690-715 ◽  
Author(s):  
Stefan Wiens ◽  
Mats E. Nilsson

Because of the continuing debates about statistics, many researchers may feel confused about how to analyze and interpret data. Current guidelines in psychology advocate the use of effect sizes and confidence intervals (CIs). However, researchers may be unsure about how to extract effect sizes from factorial designs. Contrast analysis is helpful because it can be used to test specific questions of central interest in studies with factorial designs. It weights several means and combines them into one or two sets that can be tested with t tests. The effect size produced by a contrast analysis is simply the difference between means. The CI of the effect size informs directly about direction, hypothesis exclusion, and the relevance of the effects of interest. However, any interpretation in terms of precision or likelihood requires the use of likelihood intervals or credible intervals (Bayesian). These various intervals and even a Bayesian t test can be obtained easily with free software. This tutorial reviews these methods to guide researchers in answering the following questions: When I analyze mean differences in factorial designs, where can I find the effects of central interest, and what can I learn about their effect sizes?
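A minimal sketch of the kind of contrast analysis described above, using hypothetical data from a balanced 2 × 2 factorial design: the contrast weights combine the four cell means into a single effect (here, the interaction), and its standard error comes from the pooled within-cell variance (MSE). Cell names and values are made up for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical 2 x 2 factorial data: four cells, n = 20 per cell
cells = {name: rng.normal(loc=m, scale=1.0, size=20)
         for name, m in [("A1B1", 5.0), ("A1B2", 5.5), ("A2B1", 6.0), ("A2B2", 7.5)]}

order = ("A1B1", "A1B2", "A2B1", "A2B2")
means = np.array([cells[k].mean() for k in order])
ns = np.array([len(cells[k]) for k in order])

# Contrast of central interest, e.g. the interaction: (A2B2 - A2B1) - (A1B2 - A1B1)
weights = np.array([1.0, -1.0, -1.0, 1.0])

# Pooled within-cell variance (MSE) and its error degrees of freedom
ss_within = sum(((cells[k] - cells[k].mean()) ** 2).sum() for k in order)
df_error = int(ns.sum() - len(ns))
mse = ss_within / df_error

# Contrast estimate (a difference between means), its SE, t test, and 95% CI
estimate = np.sum(weights * means)
se = np.sqrt(mse * np.sum(weights ** 2 / ns))
t = estimate / se
p = 2 * stats.t.sf(abs(t), df_error)
ci = estimate + np.array([-1, 1]) * stats.t.ppf(0.975, df_error) * se

print(f"contrast = {estimate:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}], "
      f"t({df_error}) = {t:.2f}, p = {p:.3f}")
```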


2018 ◽  
Author(s):  
Robbie Cornelis Maria van Aert ◽  
Marcel A. L. M. van Assen

Publication bias is a major threat to the validity of a meta-analysis, as it results in overestimated effect sizes. P-uniform is a meta-analysis method that corrects estimates for publication bias, but it overestimates the average effect size when heterogeneity in true effect sizes (i.e., between-study variance) is present. We propose an extension and improvement of p-uniform called p-uniform*. P-uniform* improves upon p-uniform in three important ways: it (i) entails a more efficient estimator, (ii) eliminates the overestimation of effect size in the presence of between-study variance in true effect sizes, and (iii) enables estimating and testing for the presence of between-study variance. We compared the statistical properties of p-uniform* with those of p-uniform, the selection model approach of Hedges (1992), and the random-effects model. The statistical properties of p-uniform* and the selection model approach were comparable, and both generally outperformed p-uniform and the random-effects model when publication bias was present. We demonstrate that p-uniform* and the selection model approach estimate the average effect size and the between-study variance rather well with ten or more studies in the meta-analysis, provided publication bias is not extreme. P-uniform* generally provides more accurate estimates of the between-study variance in meta-analyses containing many studies (e.g., 60 or more) when publication bias is present. However, neither method performs well if the meta-analysis includes only statistically significant studies; p-uniform performed best in this case, but only when the between-study variance was zero or small. We offer recommendations for applied researchers, and provide an R package and an easy-to-use web application for applying p-uniform*.
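As a toy illustration of the problem p-uniform* is designed to address (this is not an implementation of p-uniform* itself), the simulation below generates many small-effect studies, keeps only those reaching p < .05, and shows how averaging only the significant studies inflates the estimate:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

true_d, n_per_group, n_studies = 0.2, 40, 5000

# Simulate two-group studies: observed d and its approximate standard error
se = np.sqrt(2 / n_per_group)               # approximate SE of d for equal group sizes
d_obs = rng.normal(true_d, se, n_studies)   # sampling variability around the true effect
z = d_obs / se
significant = 2 * stats.norm.sf(np.abs(z)) < 0.05

print(f"true effect:                   {true_d:.2f}")
print(f"mean over all studies:         {d_obs.mean():.2f}")
print(f"mean over significant studies: {d_obs[significant].mean():.2f}  (inflated)")
```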

