Meta-Analysis, Pooling Sample Data, and Statistical Decision Rules

2021 ◽  
Author(s):  
Deepanshu Sharma ◽  
Surya Priya Ulaganathan ◽  
Vinay Sharma ◽  
Sakshi Piplani ◽  
Ravi Ranjan Kumar Niraj

Abstract
Background and objectives: Meta-analysis is a statistical procedure that enables researchers to integrate the results of multiple studies conducted for the same purpose. However, more often than not, researchers find themselves unable to proceed because of the complexity of the mathematics involved and the unavailability of raw data. To alleviate this difficulty, we present a tool that enables researchers to process raw data.
Methods: The GUI tool is written in Python. It automates the conversion of median and interquartile range into mean and standard deviation (SD), using the methods of Hozo et al. 2005 and Bland 2015.
Results: The tool was tested on sample data, and the Bland method was validated against the data provided in the Bland method publication (14).
Conclusions: The tool is an easy alternative for preparing input data for clinical meta-analysis in the required format.
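The conversion the abstract describes can be illustrated with the well-known estimators from Hozo et al. (2005), which recover an approximate mean and SD from a sample's minimum, median, maximum, and size. This is a minimal sketch of those published formulas, not the tool's own code; the function names are illustrative.

```python
import math

def hozo_mean(a: float, m: float, b: float) -> float:
    """Approximate mean from min a, median m, max b: (a + 2m + b) / 4."""
    return (a + 2 * m + b) / 4

def hozo_sd(a: float, m: float, b: float, n: int) -> float:
    """Approximate SD, using the sample-size rules given in Hozo et al. (2005)."""
    if n <= 15:
        variance = ((a - 2 * m + b) ** 2 / 4 + (b - a) ** 2) / 12
        return math.sqrt(variance)
    if n <= 70:
        return (b - a) / 4   # range/4 for moderate sample sizes
    return (b - a) / 6       # range/6 for large sample sizes

# Example: min 10, median 20, max 40, n = 30
print(hozo_mean(10, 20, 40))    # 22.5
print(hozo_sd(10, 20, 40, 30))  # 7.5
```

The Bland (2015) method additionally uses the quartiles and sample size; the same interface could accept those extra inputs.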


Author(s):  
Charles F. Manski

This chapter considers reasonable decision making with sample data from randomized trials. It continues discussion of reasonable patient care under uncertainty. Because of its centrality to evidence-based medicine, the chapter focuses on the use of sample trial data in treatment choice. Moreover, having already addressed identification, the chapter considers only statistical imprecision, as has been the case in the statistical literature on trials. The Wald (1950) development of statistical decision theory provides a coherent framework for use of sample data to make decisions. A body of recent research applies statistical decision theory to determine treatment choices that achieve adequate performance in all states of nature, in the sense of maximum regret. This chapter describes the basic ideas and findings, which provide an appealing practical alternative to the use of hypothesis tests.
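The regret criterion the chapter builds on can be shown with a toy example (all numbers below are invented for illustration): for each candidate treatment rule, compute its regret in every state of nature, i.e. the welfare lost relative to the best rule for that state, then pick the rule whose worst-case regret is smallest.

```python
# Toy minimax-regret calculation (illustrative numbers only).
# Rows: candidate treatment rules; columns: two states of nature.
# Entries: welfare achieved by the rule in that state.
welfare = {
    "treat all":  [0.70, 0.40],
    "treat none": [0.50, 0.60],
    "treat half": [0.60, 0.50],
}

states = range(2)
# Best achievable welfare in each state, across all rules.
best = [max(w[s] for w in welfare.values()) for s in states]

# Worst-case regret of each rule: max over states of (best - achieved).
max_regret = {
    rule: max(best[s] - w[s] for s in states)
    for rule, w in welfare.items()
}

# The minimax-regret rule has the smallest worst-case regret.
choice = min(max_regret, key=max_regret.get)
print(max_regret)
print(choice)  # "treat half"
```

In this invented example the diversified rule wins: its regret is 0.10 in both states, while either all-or-nothing rule risks a regret of 0.20, echoing the finding that regret-based criteria can favor fractional treatment allocations.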


1978 ◽  
Vol 69 (4) ◽  
pp. 375-382 ◽  
Author(s):  
George G. Klee ◽  
Eugene Ackerman ◽  
Lila R. Elveback ◽  
Laël C. Gatewood ◽  
Robert V. Pierre ◽  
...  

2016 ◽  
Vol 34 (7) ◽  
pp. 1042-1068 ◽  
Author(s):  
Mohammad Nejad

Purpose: The purpose of this paper is to present a systematic overview of the current state of research on innovations in financial services and to identify the areas that have received less attention and hence offer opportunities for future research.
Design/methodology/approach: An extensive search identified 121 research papers that studied innovations in financial services from January 1990 to March 2015. A thorough content analysis objectively organized and coded the studies by publication year, focus of study, methodology, unit of analysis, sample, data analysis method, and geographical region. Analysis of the resulting data presents an overview of the research and identifies areas for future research.
Findings: The findings indicate that research on innovations in financial services is diverse and has explored various topics. They summarize the research papers with regard to each of the aforementioned aspects and offer researchers directions for future research.
Research limitations/implications: The sample of 121 articles is adequate for the purpose of the study and in line with similar studies on innovations in other areas. However, future research can expand the study to include more academic journals, in addition to reviewing and synthesizing the qualitative aspects of the studies and meta-analyzing the identified relationships.
Originality/value: The study is the first to present a holistic overview of the current state of research on innovations in financial services. The findings offer clear directions for future research and hence can be used to promote research in these areas.


1982 ◽  
Vol 7 (2) ◽  
pp. 487-516 ◽  
Author(s):  
David Kaye

The preponderance-of-the-evidence standard usually is understood to mean that the plaintiff must show that the probability that the defendant is in fact liable exceeds 1/2. Several commentators and at least one court have suggested that in some situations it may be preferable to make each defendant pay plaintiff's damages discounted by the probability that the defendant in question is in fact liable. This article analyzes these and other decision rules from the standpoint of statistical decision theory. It argues that in most cases involving only one potential defendant, the conventional interpretation of the preponderance standard is appropriate, but it notes an important exception. The article also considers cases involving many defendants, only one of whom could have caused the injury to plaintiff. It argues that ordinarily the single defendant most likely to have been responsible should be liable for all the damages, even when the probability associated with this defendant is less than 1/2. At the same time, it identifies certain multiple-defendant cases in which the rule that weights each defendant's damages by the probability of that defendant's liability should apply.
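The conventional reading of the preponderance standard can be motivated by a standard expected-loss calculation (a textbook sketch, not taken from the article): with equal costs for the two kinds of error, finding for the plaintiff minimizes expected loss exactly when the probability of liability exceeds 1/2.

```python
# Expected-loss sketch behind the p > 1/2 threshold (illustrative only).
def expected_loss(p: float, find_liable: bool,
                  cost_false_liability: float = 1.0,
                  cost_false_exoneration: float = 1.0) -> float:
    """p is the probability the defendant is in fact liable."""
    if find_liable:
        # Error occurs only if the defendant is not liable.
        return (1 - p) * cost_false_liability
    # Error occurs only if the defendant is liable.
    return p * cost_false_exoneration

def decide(p: float) -> bool:
    """Find liable iff doing so has strictly lower expected loss."""
    return expected_loss(p, True) < expected_loss(p, False)

print(decide(0.6))  # True: p > 1/2, find for the plaintiff
print(decide(0.4))  # False: p < 1/2, find for the defendant
```

Unequal error costs shift the threshold away from 1/2, which is one way to frame the exceptions and multiple-defendant rules the article analyzes.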

