Introduction to Data Analysis and Statistical Inference.

1984 ◽  
Vol 79 (385) ◽  
pp. 242 ◽
Author(s):  
William F. Taylor ◽  
Carl N. Morris ◽  
John E. Rolph
1990 ◽  
Vol 3 (1) ◽  
pp. 3-12 ◽  
Author(s):  
Frank H. Duffy ◽  
Kenneth Jones ◽  
Peter Bartels ◽  
Marilyn Albert ◽  
Gloria B. McAnulty ◽  
...  

One aim of data analysis is condensation: capturing the gist of the data in an apposite way. This paper addresses the problem of constructing and assessing such condensations without reference to mechanisms that might have generated the data. The results lead to non-probabilistic interpretations of some well-known inferential procedures of classical statistics and thereby shed new light on the structure of statistical inference and the theory of probability.


2004 ◽  
Vol 12 (1) ◽  
pp. 97-104 ◽  
Author(s):  
Stephen M. Shellman

While many areas of research in political science draw inferences from temporally aggregated data, researchers have rarely explored how temporal aggregation biases parameter estimates. With some notable exceptions (Freeman 1989, Political Analysis 1:61–98; Alt et al. 2001, Political Analysis 9:21–44; Thomas 2002, “Event Data Analysis and Threats from Temporal Aggregation”), political science studies largely ignore how temporal aggregation affects our inferences. This article expands upon others' work on this issue by assessing the effect of temporal aggregation decisions on vector autoregressive (VAR) parameter estimates, significance levels, Granger causality tests, and impulse response functions. While the study is relevant to all fields in political science, the results directly apply to event data studies of conflict and cooperation. The findings imply that political scientists should be wary of the impact that temporal aggregation has on statistical inference.
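The aggregation effect described here can be illustrated with a minimal simulation. The sketch below is not from the paper: it is a generic AR(1) example (coefficient, series length, and a 30-day "month" are all arbitrary choices) showing how summing a daily series into monthly totals attenuates the estimated dynamics.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a daily AR(1) series: y_t = phi * y_{t-1} + e_t
phi, n_days = 0.7, 30 * 200  # 200 "months" of 30 days each
e = rng.standard_normal(n_days)
y = np.empty(n_days)
y[0] = e[0]
for t in range(1, n_days):
    y[t] = phi * y[t - 1] + e[t]

def ar1_estimate(x):
    """OLS estimate of the lag-1 coefficient (no intercept)."""
    return x[:-1] @ x[1:] / (x[:-1] @ x[:-1])

# Aggregate to monthly totals, as event-data studies often do
monthly = y.reshape(-1, 30).sum(axis=1)

print(ar1_estimate(y))        # close to 0.7 at the daily frequency
print(ar1_estimate(monthly))  # much smaller after aggregation
```

The monthly estimate is far below the daily one because most of the day-to-day persistence is absorbed within each aggregation window, leaving only weak dependence between adjacent monthly sums.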


Extremes ◽  
2013 ◽  
Vol 17 (1) ◽  
pp. 127-155 ◽  
Author(s):  
Francesca Greselin ◽  
Leo Pasquazzi ◽  
Ričardas Zitikis

2021 ◽  
Author(s):  
Nivedita Rethnakar

This paper investigates the mortality statistics of the COVID-19 pandemic from the United States perspective. Using empirical data analysis and statistical inference tools, we bring out several important aspects of the pandemic that would otherwise remain hidden. Specific patterns seen in demographics such as race/ethnicity and age are discussed both qualitatively and quantitatively. We also study the role played by factors such as population density. Connections between COVID-19 and other respiratory diseases are covered in detail. The temporal dynamics of the COVID-19 outbreak and the impact of vaccines in controlling the pandemic are also examined rigorously. It is hoped that statistical inferences such as those gathered in this paper will aid scientific understanding and policy preparation, so that we are better prepared should a similar situation arise in the future.


1982 ◽  
Vol 1 (3) ◽  
pp. 430 ◽
Author(s):  
Stephen M. Meyer ◽  
Carl Morris ◽  
John Rolph

2015 ◽  
Vol 2015 ◽  
pp. 1-13 ◽  
Author(s):  
Junsheng Ma ◽  
Brian P. Hobbs ◽  
Francesco C. Stingo

Using statistical inference to establish personalized treatment strategies requires specific data-analysis techniques that optimize the assignment of competing therapies given candidate genetic features and characteristics of the patient and disease. A wide variety of methods have been developed; however, the usefulness of these recent advances has not yet been fully recognized by the oncology community, and the scope of their applications has not been summarized. In this paper, we provide an overview of statistical methods for establishing optimal treatment rules for personalized medicine and discuss specific examples in various medical contexts, with an emphasis on oncology. We also point the reader to statistical software for implementing these methods where available.
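One common and simple way to estimate an optimal treatment rule of the kind surveyed here is regression with a treatment-by-covariate interaction. The sketch below is an illustrative toy, not a method from the paper: the biomarker, effect sizes, and sample size are all hypothetical, and it assumes a randomized binary treatment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: outcome y benefits from treatment a=1 only when
# the (standardized) biomarker x is high.
n = 2000
x = rng.uniform(-1, 1, n)          # biomarker
a = rng.integers(0, 2, n)          # randomized treatment, 0 or 1
y = 0.5 * x + a * (2.0 * x) + 0.5 * rng.standard_normal(n)

# Fit E[y | x, a] = b0 + b1*x + b2*a + b3*a*x by least squares.
X = np.column_stack([np.ones(n), x, a, a * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The estimated rule treats a patient when the predicted benefit of
# treatment, b2 + b3*x, is positive.
def treat(x_new, b=beta):
    return bool(b[2] + b[3] * x_new > 0)

print(treat(0.8))   # high biomarker: True (treat)
print(treat(-0.8))  # low biomarker: False (do not treat)
```

More refined approaches in this literature (e.g. outcome-weighted learning or Bayesian adaptive designs) replace the linear outcome model, but the structure of the decision rule, treat when the estimated individual benefit is positive, is the same.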
