Accuracy in Facial Trustworthiness Impressions: Kernel of Truth or Modern Physiognomy? A Meta-Analysis

2021 ◽  
pp. 014616722110481
Author(s):  
Y. Z. Foo ◽  
C. A. M. Sutherland ◽  
N. S. Burton ◽  
S. Nakagawa ◽  
G. Rhodes

Being able to identify trustworthy strangers is a critical social skill. However, whether such impressions are accurate is debatable. Critically, the field currently lacks a quantitative summary of the evidence. To address this gap, we conducted two meta-analyses. We tested whether there is a correlation between perceived and actual trustworthiness across faces, and whether perceivers show above-chance accuracy at assessing trustworthiness. Both meta-analyses revealed significant, modest accuracy (face level, r = .14; perceiver level, r = .27). Perceiver-level effects depended on domain, with aggressiveness and sexual unfaithfulness having stronger effects than agreeableness, criminality, financial reciprocity, and honesty. We also applied research weaving to map the literature, revealing potential biases, including a preponderance of Western studies, a lack of “cross-talk” between research groups, and clarity issues. Overall, this modest accuracy is unlikely to be of practical utility. Moreover, we strongly urge the field to improve reporting standards and generalizability of the results.
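Accuracy estimates like the face-level r = .14 and perceiver-level r = .27 above are typically obtained by pooling study-level correlations on the Fisher z scale. A minimal sketch of that pooling step, using hypothetical correlations and sample sizes (not the data from this meta-analysis), under a simple fixed-effect model:

```python
import math

def fisher_z(r):
    """Fisher r-to-z transform; stabilizes the variance of correlations."""
    return 0.5 * math.log((1 + r) / (1 - r))

def inv_fisher_z(z):
    """Back-transform from the z scale to a correlation."""
    return math.tanh(z)

def pool_correlations(rs, ns):
    """Fixed-effect pooled correlation: inverse-variance weights on the
    z scale, where Var(z) = 1/(n - 3), so the weights are n - 3."""
    zs = [fisher_z(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    return inv_fisher_z(z_bar)

# Three hypothetical face-level accuracy correlations
print(round(pool_correlations([0.10, 0.15, 0.18], [120, 80, 60]), 3))
```

A random-effects model, as used in most published meta-analyses, would add a between-study variance component to each weight; the transform-pool-backtransform structure stays the same.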

2020 ◽  
Vol 24 (3) ◽  
pp. 195-209
Author(s):  
Richard E. Hohn ◽  
Kathleen L. Slaney ◽  
Donna Tafreshi

As meta-analytic studies have come to occupy a sizable contingent of published work in the psychological sciences, clarity in the research and reporting practices of such work is crucial to the interpretability and reproducibility of research findings. The present study examines the state of research and reporting practices within a random sample of 384 published psychological meta-analyses across several important dimensions (e.g., search methods, exclusion criteria, statistical techniques). In addition, we surveyed the first authors of the meta-analyses in our sample to ask them directly about the research practices employed and reporting decisions made in their studies, including the assessments and procedures they conducted and the guidelines or materials they relied on. Upon cross-validating the first author responses with what was reported in their published meta-analyses, we identified numerous potential gaps in reporting and research practices. In addition to providing a survey of recent reporting practices, our findings suggest that (a) there are several research practices conducted by meta-analysts that are ultimately not reported; (b) some aspects of meta-analysis research appear to be conducted at disappointingly low rates; and (c) the adoption of the reporting standards, including the Meta-Analytic Reporting Standards (MARS), has been slow to nonexistent within psychological meta-analytic research.


2018 ◽  
Vol 34 (2) ◽  
pp. 412 ◽  
Author(s):  
María Rubio-Aparicio ◽  
Julio Sánchez-Meca ◽  
Fulgencio Marín-Martínez ◽  
José Antonio López-López

Meta-analysis is an essential methodology that allows researchers to synthesize the scientific evidence available on a given research question. Due to its wide applicability in most applied research fields, it is important that meta-analyses be written and reported appropriately. In this paper we propose guidelines for reporting the results of a meta-analysis in a scientific journal such as Annals of Psychology. Concretely, the structure for reporting a meta-analysis, following its different stages, is detailed. In addition, recommendations related to the usual tasks in conducting a meta-analysis are provided. A recent meta-analysis from the psychological field is used to illustrate the proposed guidelines. Finally, some concluding remarks are presented.


2019 ◽  
Author(s):  
Thiago C. Moulin ◽  
Olavo B. Amaral

Meta-analytic methods are powerful resources for summarizing the existing evidence concerning a given research question and are widely used in many academic fields. However, meta-analyses can be vulnerable to various sources of bias, which should be considered to avoid inaccuracies. Many of these sources can be related to study authorship, as both methodological choices and researcher bias may lead to deviations in results between different research groups. In this work, we describe a method to objectively attribute the studies within a given meta-analysis to different research groups by using graph cluster analysis of collaboration networks. We then provide empirical examples of how the research group of origin can impact effect size in distinct types of meta-analyses, demonstrating how non-independence among within-group results can bias effect size estimates if uncorrected. Finally, we show that multilevel random-effects models using research group as a level of analysis can be a simple tool for correcting biases related to study authorship.
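The attribution step can be illustrated with a toy version of the idea: link any two studies that share an author and treat the connected components of the resulting collaboration graph as research groups. This is a simplified sketch with hypothetical author lists, not the paper's exact clustering procedure:

```python
from collections import defaultdict

def author_groups(studies):
    """Cluster studies into research groups: two studies sharing any
    author are linked, and connected components of the co-authorship
    graph define the groups (found here with a small union-find)."""
    parent = list(range(len(studies)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index studies by author, then union all studies sharing an author
    by_author = defaultdict(list)
    for idx, authors in enumerate(studies):
        for a in authors:
            by_author[a].append(idx)
    for idxs in by_author.values():
        for other in idxs[1:]:
            union(idxs[0], other)

    groups = defaultdict(list)
    for idx in range(len(studies)):
        groups[find(idx)].append(idx)
    return sorted(groups.values())

# Hypothetical author lists for five studies
studies = [["Silva", "Costa"], ["Costa", "Mendes"],
           ["Li", "Wang"], ["Wang", "Chen"], ["Okafor"]]
print(author_groups(studies))
```

Once each study carries a group label, fitting a multilevel random-effects model with group as a clustering level (e.g., in metafor or similar software) is what corrects the non-independence the abstract describes.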


2013 ◽  
Vol 12 (4) ◽  
pp. 157-169 ◽  
Author(s):  
Philip L. Roth ◽  
Allen I. Huffcutt

The topic of what interviews measure has received a great deal of attention over the years. One line of research has investigated the relationship between interviews and the construct of cognitive ability. A previous meta-analysis reported an overall corrected correlation of .40 (Huffcutt, Roth, & McDaniel, 1996). A more recent meta-analysis reported a noticeably lower corrected correlation of .27 (Berry, Sackett, & Landers, 2007). After reviewing both meta-analyses, it appears that the two studies posed different research questions. Further, there were a number of coding judgments in Berry et al. that merit review, and there was no moderator analysis for educational versus employment interviews. As a result, we reanalyzed the work by Berry et al. and found a corrected correlation of .42 for employment interviews (.15 higher than Berry et al., a 56% increase). Further, educational interviews were associated with a corrected correlation of .21, supporting interview type as a moderator. We suggest that a better estimate of the correlation between employment interviews and cognitive ability is .42, which takes us “back to the future” in that the better overall estimate of the employment interview–cognitive ability relationship is roughly .40. This difference has implications for what interviews measure and for their incremental validity.
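The "corrected" correlations in this literature are observed correlations adjusted for statistical artifacts such as measurement unreliability. A minimal sketch of the classical attenuation correction, with illustrative reliability values that are placeholders rather than figures from the cited meta-analyses (the actual reanalyses also involve range-restriction corrections not shown here):

```python
import math

def correct_attenuation(r_obs, rel_predictor=1.0, rel_criterion=0.8):
    """Classical disattenuation: divide the observed correlation by the
    square root of the product of the two measures' reliabilities.
    The default reliabilities are illustrative placeholders only."""
    return r_obs / math.sqrt(rel_predictor * rel_criterion)

# A hypothetical observed interview-cognitive ability correlation of .24,
# corrected for criterion unreliability of .80
print(round(correct_attenuation(0.24, rel_criterion=0.8), 2))
```

Differences in which artifacts are corrected, and with what assumed values, are exactly the kind of coding judgment that can move a meta-analytic estimate from .27 to .42.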


2020 ◽  
Vol 228 (1) ◽  
pp. 43-49 ◽  
Author(s):  
Michael Kossmeier ◽  
Ulrich S. Tran ◽  
Martin Voracek

Currently, dedicated graphical displays to depict study-level statistical power in the context of meta-analysis are unavailable. Here, we introduce the sunset (power-enhanced) funnel plot to visualize this relevant information for assessing the credibility, or evidential value, of a set of studies. The sunset funnel plot highlights the statistical power of primary studies to detect an underlying true effect of interest in the well-known funnel display, with color-coded power regions and a second power axis. This graphical display allows meta-analysts to incorporate power considerations into classic funnel plot assessments of small-study effects. Nominally significant, but low-powered, studies might be seen as less credible and as more likely to be affected by selective reporting. We exemplify the application of the sunset funnel plot with two published meta-analyses from medicine and psychology. Software to create this variation of the funnel plot is provided via a tailored R function. In conclusion, the sunset (power-enhanced) funnel plot is a novel and useful graphical display to critically examine and to present study-level power in the context of meta-analysis.
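The quantity the sunset funnel plot color-codes for each study is standard two-sided test power given the study's standard error and an assumed true effect. A self-contained sketch of that computation (hypothetical effect and standard error, normal approximation):

```python
from statistics import NormalDist

def study_power(theta, se, alpha=0.05):
    """Power of a two-sided Wald z-test to detect a true effect `theta`
    in a study with standard error `se`: the probability that the study's
    estimate falls outside the +/- z_crit * se significance bounds."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    z = theta / se
    return (1 - NormalDist().cdf(z_crit - z)) + NormalDist().cdf(-z_crit - z)

# A study with SE = 0.25 chasing an assumed true effect of 0.3
print(round(study_power(0.3, 0.25), 2))
```

Mapping each study's power to a color band, with precision on the vertical axis as in an ordinary funnel plot, yields the power regions the abstract describes; at theta = 0 the function correctly returns the alpha level.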


2019 ◽  
Vol 227 (1) ◽  
pp. 64-82 ◽  
Author(s):  
Martin Voracek ◽  
Michael Kossmeier ◽  
Ulrich S. Tran

Which data to analyze, and how, are fundamental questions of all empirical research. As there are always numerous flexibilities in data-analytic decisions (a “garden of forking paths”), this poses perennial problems for all empirical research. Specification-curve analysis and multiverse analysis have recently been proposed as solutions to these issues. Building on the structural analogies between primary data analysis and meta-analysis, we transform and adapt these approaches to the meta-analytic level, in tandem with combinatorial meta-analysis. We explain the rationale of this idea, suggest descriptive and inferential statistical procedures as well as graphical displays, provide code for meta-analytic practitioners to generate and use these, and present a fully worked real example from digit ratio (2D:4D) research, totaling 1,592 meta-analytic specifications. Specification-curve and multiverse meta-analysis hold promise to resolve conflicting meta-analyses, contested evidence, controversial empirical literatures, and polarized research, and to mitigate the associated detrimental effects of these phenomena on research progress.
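The core mechanic of a specification-curve meta-analysis is enumerating every combination of defensible analytic choices and re-running the synthesis under each. A toy sketch with two hypothetical choices (study setting and a publication-year cutoff) and a simple fixed-effect estimator, yielding a 2 x 2 multiverse; the data are invented for illustration:

```python
from itertools import product

# Hypothetical studies: (estimate, variance, setting, year)
studies = [
    (0.30, 0.02, "field", 2010), (0.10, 0.01, "lab", 2015),
    (0.25, 0.03, "field", 2018), (0.05, 0.02, "lab", 2012),
]

def fixed_effect(subset):
    """Inverse-variance weighted mean of (estimate, variance) pairs."""
    w = [1 / v for _, v in subset]
    return sum(wi * e for wi, (e, _) in zip(w, subset)) / sum(w)

# Two analytic choices -> 2 x 2 = 4 specifications (a toy multiverse)
spec_curve = []
for setting, min_year in product(["all", "lab"], [2000, 2013]):
    subset = [(e, v) for e, v, s, y in studies
              if (setting == "all" or s == setting) and y >= min_year]
    spec_curve.append(((setting, min_year), round(fixed_effect(subset), 3)))

# Sorting by estimate gives the specification curve
for spec, est in sorted(spec_curve, key=lambda t: t[1]):
    print(spec, est)
```

A real application replaces the toy choices with every coded inclusion and modeling decision (hence the 1,592 specifications in the worked example) and plots the sorted estimates with their confidence intervals.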


2019 ◽  
Author(s):  
Shinichi Nakagawa ◽  
Malgorzata Lagisz ◽  
Rose E O'Dea ◽  
Joanna Rutkowska ◽  
Yefeng Yang ◽  
...  

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution, meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution, finding that only 11% use the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall. The PI therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchaRd, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes regardless of the field of study.
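The prediction interval that distinguishes an orchard plot from a plain CI display is wider than the CI because it folds in the between-study variance tau^2. A minimal sketch with hypothetical inputs, using a normal approximation (meta-analysis software often uses a t-distribution with the number of studies in the degrees of freedom):

```python
import math
from statistics import NormalDist

def prediction_interval(mu, se_mu, tau2, level=0.95):
    """Prediction interval for the effect in a new study under a
    random-effects model: the uncertainty combines the standard error
    of the pooled mean with the between-study heterogeneity tau^2."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * math.sqrt(tau2 + se_mu ** 2)
    return (mu - half, mu + half)

# Hypothetical pooled mean 0.2 (SE 0.05) with tau^2 = 0.04
lo, hi = prediction_interval(mu=0.2, se_mu=0.05, tau2=0.04)
print(round(lo, 3), round(hi, 3))
```

Even with a precisely estimated mean, substantial tau^2 makes the PI span zero here, which is exactly the heterogeneity message the orchard plot is designed to convey at a glance.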


2020 ◽  
Vol 45 (6) ◽  
pp. 589-597
Author(s):  
BGS Casado ◽  
EP Pellizzer ◽  
JR Souto Maior ◽  
CAA Lemos ◽  
BCE Vasconcelos ◽  
...  

Clinical Relevance The use of laser light during bleaching will not reduce the incidence or severity of sensitivity and will not increase the degree of color change compared with nonlaser light sources. SUMMARY Objective: To evaluate whether the use of laser during in-office bleaching promotes a reduction in dental sensitivity after bleaching compared with other light sources. Methods: The present review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and is registered with PROSPERO (CRD42018096591). Searches were conducted in the PubMed/Medline, Web of Science, and Cochrane Library databases for relevant articles published up to August 2018. Only randomized clinical trials among adults that compared the use of laser during in-office whitening and other light sources were considered eligible. Results: After analysis of the texts retrieved during the database search, six articles met the eligibility criteria and were selected for the present review. For the outcome dental sensitivity, no significant difference was found favoring any type of light either for intensity (mean difference [MD]: −1.60; confidence interval [CI]: −3.42 to 0.22; p=0.09) or incidence (MD: 1.00; CI: 0.755 to 1.33; p=1.00). Regarding change in tooth color, no significant differences were found between the use of the laser and other light sources (MD: −2.22; CI: −6.36 to 1.93; p=0.29). Conclusions: Within the limitations of the present study, laser exerts no influence on tooth sensitivity compared with other light sources when used during in-office bleaching. The included studies demonstrated that laser use during in-office bleaching may have no influence on tooth color change.
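Reported results like "MD: −1.60; CI: −3.42 to 0.22; p=0.09" can be sanity-checked by backing the standard error out of the confidence interval. A sketch assuming a symmetric Wald interval; the recovered p-value is approximate, since meta-analysis software may use slightly different reference distributions or rounding:

```python
from statistics import NormalDist

def se_from_ci(lower, upper, level=0.95):
    """Recover a standard error from a reported confidence interval,
    assuming a symmetric Wald interval: width = 2 * z * SE."""
    z = NormalDist().inv_cdf(0.5 + level / 2)
    return (upper - lower) / (2 * z)

def wald_p(estimate, se):
    """Two-sided p-value for a Wald z-test of the estimate against zero."""
    z = abs(estimate) / se
    return 2 * (1 - NormalDist().cdf(z))

# Sensitivity-intensity result reported above: MD -1.60, 95% CI -3.42 to 0.22
se = se_from_ci(-3.42, 0.22)
print(round(se, 3), round(wald_p(-1.60, se), 2))
```

The recovered p lands close to the reported 0.09, consistent with a non-significant difference between laser and other light sources.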


2019 ◽  
Author(s):  
Amanda Kvarven ◽  
Eirik Strømland ◽  
Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analysis for publication bias. We use the Andrews-Kasy estimator to adjust the result of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes, which do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, the approach still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.


2020 ◽  
Vol 10 (10) ◽  
pp. 3607
Author(s):  
Hoofar Shokravi ◽  
Hooman Shokravi ◽  
Norhisham Bakhary ◽  
Mahshid Heidarrezaei ◽  
Seyed Saeid Rahimian Koloor ◽  
...  

A large number of research studies in structural health monitoring (SHM) have presented, extended, and used subspace system identification. However, there is a lack of systematic literature reviews and surveys of studies in this field. Therefore, the current study was undertaken to systematically review the literature published on the development and application of subspace system identification methods. In this regard, major databases in SHM, including Scopus, Google Scholar, and Web of Science, were selected, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were applied to ensure complete and transparent reporting of the review. Along this line, the presented review addresses the available studies that employed subspace-based techniques in the vibration-based damage detection (VDD) of civil structures. The selected papers were categorized by authors, publication year, journal, applied techniques, research objectives, research gap, proposed solutions and models, and findings. This study can assist practitioners and academicians in the condition assessment of structures and provide insight into the literature.

