Statistical and measurement pitfalls in the use of meta-regression in meta-analysis

2017, Vol 22 (5), pp. 469-476
Author(s): Frank L. Schmidt

Purpose: Meta-regression is widely used, and misused, in meta-analyses in psychology, organizational behavior, marketing, management, and other social sciences as an approach to identifying and calibrating moderators, with most users unaware of serious problems in its use. The purpose of this paper is to describe nine serious methodological problems that plague applications of meta-regression.

Design/methodology/approach: The paper is methodological in nature and is based on well-established principles of measurement and statistics. These principles are used to illuminate the potential pitfalls in typical applications of meta-regression.

Findings: The analysis demonstrates that many of the nine statistical and measurement pitfalls in the use of meta-regression are nearly universal in published applications, leading to the conclusion that few meta-regressions in the literature today are trustworthy. A second conclusion is that, in almost all cases, hierarchical subgrouping of studies is superior to meta-regression as a method of identifying and calibrating moderators. A third conclusion is that, contrary to popular belief among researchers, accurately identifying and calibrating moderators, even with the best available methods, is complex, difficult, and data demanding.

Practical implications: The paper provides guidance to meta-analytic researchers that will improve the practice of moderator identification and calibration in social science research literatures.

Social implications: Many important decisions are made on the basis of meta-analytic results, including decisions in medicine, pharmacology, applied psychology, management, marketing, social policy, and other social sciences. The guidance provided here will improve the quality of such decisions by improving the accuracy and trustworthiness of meta-analytic results.

Originality/value: There is no comparable listing and discussion of the pitfalls of meta-regression in the literature, and knowledge of these problems is currently scarce among meta-analytic researchers in all disciplines.
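
The contrast Schmidt draws between meta-regression and hierarchical subgrouping can be made concrete with a toy calculation. The sketch below uses hypothetical effect sizes and a made-up "lab vs. field" moderator, and it omits the artifact corrections (measurement error, range restriction) central to Schmidt's own psychometric approach; it only illustrates the basic subgrouping idea of pooling studies within each moderator level by inverse-variance weighting and comparing the subgroup estimates.

```python
# Minimal sketch: subgroup (hierarchical) moderator analysis with
# inverse-variance weighting. Effect sizes, variances, and the moderator
# are hypothetical; this is not Schmidt's full psychometric procedure.
import numpy as np

d = np.array([0.30, 0.25, 0.42, 0.10, 0.05, 0.12])   # study effect sizes
v = np.array([0.02, 0.03, 0.04, 0.02, 0.05, 0.03])   # sampling variances
moderator = np.array(["lab", "lab", "lab", "field", "field", "field"])

def pooled(d, v):
    """Fixed-effect inverse-variance pooled estimate and its standard error."""
    w = 1.0 / v
    est = np.sum(w * d) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return est, se

results = {}
for level in np.unique(moderator):
    mask = moderator == level
    results[level] = pooled(d[mask], v[mask])
    est, se = results[level]
    print(f"{level:>5}: d = {est:.3f} (SE = {se:.3f})")

# A z-test on the difference between subgroup estimates gauges whether the
# moderator matters; as the abstract stresses, far more studies per subgroup
# are needed in practice for this to be informative.
(est_lab, se_lab), (est_field, se_field) = results["lab"], results["field"]
z = (est_lab - est_field) / np.sqrt(se_lab**2 + se_field**2)
print(f"z for subgroup difference = {z:.2f}")
```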

2005, Vol 10 (1), pp. 67-78
Author(s): Alice Stuhlmacher, Treena Gillespie

No longer on the fringes of research design, meta-analysis has established a methodological foothold in social science research. The use of meta-analysis as a research method to study social conflict, however, remains limited. This article is designed to increase the accessibility of meta-analyses, while identifying issues and controversies. To this end, we offer examples from our own experiences in an overview of the development, choices, and challenges of a meta-analysis, as well as more technical references for further instruction.
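
One of the "choices" any meta-analysis must make is between fixed- and random-effects pooling. As a minimal sketch on hypothetical effect sizes and variances (not data from the article), the snippet below computes both, using the DerSimonian-Laird estimator for the between-study variance.

```python
# Minimal sketch of fixed- vs random-effects pooling on hypothetical data.
import numpy as np

d = np.array([0.41, 0.15, 0.62, 0.28, 0.05])  # study effect sizes
v = np.array([0.03, 0.02, 0.05, 0.04, 0.02])  # sampling variances
w = 1.0 / v
k = len(d)

# Fixed-effect (inverse-variance weighted) estimate
fixed = np.sum(w * d) / np.sum(w)

# DerSimonian-Laird between-study variance
Q = np.sum(w * (d - fixed) ** 2)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects estimate with weights 1 / (v + tau^2)
w_re = 1.0 / (v + tau2)
random_effects = np.sum(w_re * d) / np.sum(w_re)

print(f"fixed-effect d = {fixed:.3f}, tau^2 = {tau2:.3f}, "
      f"random-effects d = {random_effects:.3f}")
```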


2020, Vol 7 (2), pp. 190806
Author(s): Tom E. Hardwicke, Joshua D. Wallach, Mallory C. Kidwell, Theiss Bendixen, Sophia Crüwell, ...

Serious concerns about research quality have catalysed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency and enhance research credibility. Meta-research has evaluated the merits of some individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts. In this study, we manually examined a random sample of 250 articles in order to estimate the prevalence of a range of transparency and reproducibility-related indicators in the social sciences literature published between 2014 and 2017. Few articles indicated availability of materials (16/151, 11% [95% confidence interval, 7% to 16%]), protocols (0/156, 0% [0% to 1%]), raw data (11/156, 7% [2% to 13%]) or analysis scripts (2/156, 1% [0% to 3%]), and no studies were pre-registered (0/156, 0% [0% to 1%]). Some articles explicitly disclosed funding sources (or lack of; 74/236, 31% [25% to 37%]) and some declared no conflicts of interest (36/236, 15% [11% to 20%]). Replication studies were rare (2/156, 1% [0% to 3%]). Few studies were included in evidence synthesis via systematic review (17/151, 11% [7% to 16%]) or meta-analysis (2/151, 1% [0% to 3%]). Less than half the articles were publicly available (101/250, 40% [34% to 47%]). Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.
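
The prevalence figures above are proportions with 95% confidence intervals (e.g., materials available: 16/151). The abstract does not say which interval method the authors used; the Wilson score interval in the sketch below is one standard choice, so its output may differ slightly from the published figures.

```python
# Minimal sketch: 95% Wilson score interval for a prevalence estimate
# such as "materials available: 16/151".
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(16, 151)
print(f"16/151 = {16/151:.1%}, 95% CI [{lo:.1%}, {hi:.1%}]")  # roughly [6.6%, 16.5%]
```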


2021, Vol 38 (10), pp. 5-9
Author(s): Namita Mahapatra, Jyotshna Sahoo

Purpose: This paper analyzes the distinctive characteristics of highly cited articles (HCAs) in the domain of social sciences with respect to chronological growth pattern, productive journals, authorship pattern, prolific authors, top institutions and leading countries, networks among institutions, and top-ranked keywords in social science research.

Design/methodology/approach: The required data were retrieved from the Scopus indexing database and refined using limits such as document type, subject coverage, and total citations; 839 articles were finally selected for detailed analysis. A set of bibliometric indicators was used for the quantitative analysis, and the VOSviewer software tool was used to visualize the institutional network and keyword mapping of the HCAs.

Findings: The study revealed that the highest number of HCAs (371) was published during the decade 2001–2010. The degree of collaboration, collaborative index, and collaborative coefficient were 0.513, 1.98, and 0.988, respectively. The highly cited papers emanated from 397 journals, contributed by 1,556 authors from 1,326 institutions in 46 countries. Social Science and Medicine was the most productive journal; J. Urry of Lancaster University, UK, was the most influential author; and the USA, the UK, and Canada are the torchbearers in social science research. The paper entitled "Five misunderstandings about case-study research," authored by B. Flyvbjerg and published in 2006 in Qualitative Inquiry, received the most citations (4,730).

Originality/value: The primary value of this paper lies in extending the understanding of the characteristics of HCAs in the domain of social sciences. It gives researchers insight into the most influential authors, journals, institutions, and countries, and the major thrust areas of research in social sciences.
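
The three collaboration indicators named in the findings follow standard bibliometric definitions: degree of collaboration (Subramanyam), collaborative index (Lawani), and collaborative coefficient (Ajiferuke et al.). The sketch below computes them from a hypothetical distribution of authors per paper (the study itself uses the 839 HCAs), purely to show how the indicators are derived.

```python
# Minimal sketch of the three collaboration indicators, computed from a
# hypothetical authors-per-paper distribution (not the study's data).
import numpy as np

# authors_per_paper[i] = number of authors on paper i (hypothetical)
authors_per_paper = np.array([1, 1, 2, 2, 3, 3, 4, 5, 1, 2])
N = len(authors_per_paper)

multi = np.sum(authors_per_paper > 1)
degree_of_collaboration = multi / N                      # Nm / (Nm + Ns)
collaborative_index = authors_per_paper.mean()           # mean authors per paper
collaborative_coefficient = 1 - np.mean(1.0 / authors_per_paper)

print(f"DC = {degree_of_collaboration:.3f}, "
      f"CI = {collaborative_index:.2f}, "
      f"CC = {collaborative_coefficient:.3f}")
```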


2019, Vol 34 (1), pp. 76-95
Author(s): David Hay

Purpose: The purpose of this paper is to discuss the increasing potential demand for meta-analysis studies in auditing. The paper reviews a newer technique, meta-regression analysis, and explains its advantages over the meta-analysis techniques used in prior auditing research. It also discusses opportunities for applying meta-analysis to auditing topics, as well as potential pitfalls.

Design/methodology/approach: The paper provides a review of and commentary on meta-analysis techniques used in auditing research, especially meta-analyses of empirical archival studies that use regression models.

Findings: There is now considerable potential for meta-analysis to have an impact on auditing policy and regulation. Researchers using meta-analysis should adopt the most current techniques (e.g., meta-regression), which are more reliable and allow researchers to explore more issues about the underlying research.

Originality/value: The paper informs auditing researchers about methods that can advance their research and increase its usefulness.
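
In its simplest form, the meta-regression that Hay recommends is a weighted least-squares regression of study effect sizes on study-level covariates, with inverse-variance weights. The sketch below uses hypothetical effect sizes and a hypothetical "publication year" covariate; real applications would also model between-study variance and draw on many more studies.

```python
# Minimal sketch of a meta-regression: effect sizes regressed on a
# study-level covariate via inverse-variance weighted least squares.
# All data are hypothetical.
import numpy as np

effect = np.array([0.12, 0.30, 0.25, 0.45, 0.50, 0.41])      # study effect sizes
var = np.array([0.010, 0.020, 0.015, 0.030, 0.025, 0.020])   # sampling variances
year = np.array([2005, 2008, 2010, 2014, 2016, 2018])        # study-level covariate

X = np.column_stack([np.ones_like(effect), year - year.mean()])  # intercept + centred covariate
w = 1.0 / var

# WLS via transformed OLS: multiply rows by sqrt(weights)
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], effect * sw, rcond=None)
print(f"intercept = {beta[0]:.3f}, slope per year = {beta[1]:.4f}")
```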


Author(s): Gary Goertz, James Mahoney

Some in the social sciences argue that the same logic applies to both qualitative and quantitative research methods. This book demonstrates that these two paradigms constitute different cultures, each internally coherent yet marked by contrasting norms, practices, and toolkits. The book identifies and discusses major differences between these two traditions that touch nearly every aspect of social science research, including design, goals, causal effects and models, concepts and measurement, data analysis, and case selection. Although focused on the differences between qualitative and quantitative research, the book also seeks to promote toleration, exchange, and learning by enabling scholars to think beyond their own culture and see an alternative scientific worldview. The book is written in an easily accessible style and features a host of real-world examples to illustrate methodological points.


2021, Vol 7, pp. 237802312110244
Author(s): Katrin Auspurg, Josef Brüderl

In 2018, Silberzahn, Uhlmann, Nosek, and colleagues published an article in which 29 teams analyzed the same research question with the same data: Are soccer referees more likely to give red cards to players with dark skin tone than light skin tone? The results obtained by the teams differed extensively. Many concluded from this widely noted exercise that the social sciences are not rigorous enough to provide definitive answers. In this article, we investigate why results diverged so much. We argue that the main reason was an unclear research question: Teams differed in their interpretation of the research question and therefore used diverse research designs and model specifications. We show by reanalyzing the data that with a clear research question, a precise definition of the parameter of interest, and theory-guided causal reasoning, results vary only within a narrow range. The broad conclusion of our reanalysis is that social science research needs to be more precise in its “estimands” to become credible.
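
The authors' central point is that results converge once the estimand is stated precisely before any modeling. As a minimal illustration on made-up data (the column meanings are hypothetical and this is not the crowdsourced referee dataset), the sketch below states one precise estimand, a crude rate ratio of red cards per appearance, and computes exactly that quantity; the article's own reanalysis additionally uses theory-guided adjustment.

```python
# Minimal sketch of fixing an "estimand" before analysis, on hypothetical
# player-level data: the ratio of red cards per appearance for dark- vs
# light-skin-toned players (crude, unadjusted version).
import numpy as np

red_cards   = np.array([1, 0, 2, 0, 1, 0, 3, 1])
appearances = np.array([120, 90, 200, 80, 150, 60, 210, 100])
dark_skin   = np.array([1, 0, 1, 0, 1, 0, 1, 0], dtype=bool)

rate_dark  = red_cards[dark_skin].sum()  / appearances[dark_skin].sum()
rate_light = red_cards[~dark_skin].sum() / appearances[~dark_skin].sum()

print(f"estimand (crude rate ratio) = {rate_dark / rate_light:.2f}")
```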


2021, Vol 22 (1)
Author(s): Kjell Asplund, Kerstin Hulter Åsberg

Background: Previous studies have indicated that failure to report ethical approval is common in health science articles; in the social sciences, its prevalence is unknown. The Swedish Ethics Review Act requires that research involving sensitive personal data, as defined in the EU General Data Protection Regulation (GDPR), undergo independent ethical review, irrespective of academic discipline. We explored adherence to this regulation.

Methods: Using the Web of Science databases, we reviewed 600 consecutive articles based on identifiable personal data, published in 2020, from three domains: health sciences with somatic focus, health sciences with non-somatic focus, and social sciences.

Results: Information on ethical review was lacking in 12 of 200 health science articles with somatic focus (6%), 21 of 200 health science articles with non-somatic focus (11%), and 54 of 200 social science articles (27%; p < 0.001 vs. both groups of health science articles). Failure to report ethical approval was more common in (a) observational than in interventional studies (p < 0.01), (b) articles with only 1–2 authors (p < 0.001), and (c) health science articles from universities without a medical school (p < 0.001). There was no significant association between journal impact factor and failure to report ethical approval.

Conclusions: Reporting of research ethics approval is reasonably good, but not strict, in health science articles. Failure to report ethical approval is about three times more frequent in the social sciences than in the health sciences. Improved adherence seems needed particularly in observational studies, in articles with few authors, and in social science research.
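
The abstract compares proportions across domains (e.g., 27% of social science articles vs. 6% of somatic health science articles lacking ethics statements, p < 0.001) without naming the test. A chi-square test of independence on the underlying counts, shown in the sketch below, is one standard way to make such a comparison and may differ from the authors' exact analysis.

```python
# Minimal sketch: chi-square test comparing the share of articles lacking
# ethics-approval statements in two of the groups reported in the abstract.
from scipy import stats

#                 missing, reported
table = [[54, 200 - 54],   # social science articles
         [12, 200 - 12]]   # health science articles with somatic focus

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")
```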

