Entropy, Theory Construction and Spatial Analysis

1976 ◽ Vol 8 (7) ◽ pp. 741-752 ◽ Author(s): E S Sheppard

The framework of Bayesian inference is proposed as a structure for unifying those highly disparate approaches to entropy modelling that have appeared in geography to date, and is used to illuminate the possibilities and shortcomings of some of these models. The inadequacy of most descriptive entropy statistics for measuring the information in a spatially-autocorrelated map is described. The contention that entropy maximization in itself provides theoretical justification for spatial models is critically evaluated. It is concluded that entropy should, first and foremost, be regarded as a technique to expand our methods of statistical inference and hypothesis testing, rather than one of theory construction.
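To make the point about descriptive entropy statistics concrete, here is a minimal illustrative sketch (in Python, not from the article): two hypothetical maps with identical cell frequencies but opposite spatial arrangements receive exactly the same Shannon entropy, so the statistic conveys nothing about spatial autocorrelation.

```python
import numpy as np

def shannon_entropy(counts):
    """Shannon entropy (in nats) of a discrete distribution given raw counts."""
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # ignore empty categories
    return -np.sum(p * np.log(p))

# Two hypothetical 4x4 maps with identical cell values but different
# spatial arrangement: one clustered (high autocorrelation), one dispersed.
clustered = np.array([[1, 1, 0, 0],
                      [1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [0, 0, 1, 1]])
dispersed = np.array([[1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1]])

# Both maps contain 8 ones and 8 zeros, so an entropy statistic based only
# on the frequency distribution cannot tell them apart.
for name, grid in [("clustered", clustered), ("dispersed", dispersed)]:
    counts = np.bincount(grid.ravel())
    print(name, shannon_entropy(counts))   # identical value for both maps
```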

2021 ◽ Vol 2 (3) ◽ Author(s): Oliver Schulz, Frederik Beaujean, Allen Caldwell, Cornelius Grunwald, Vasyl Hafych, ...

Abstract We describe the development of a multi-purpose software package for Bayesian statistical inference, BAT.jl, written in the Julia language. The major design considerations and implemented algorithms are summarized here, together with a test suite that ensures the proper functioning of the algorithms. We also give an extended example from the realm of physics that demonstrates the functionalities of BAT.jl.
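BAT.jl itself exposes a Julia API, which is not reproduced here. As a language-neutral illustration of the kind of algorithm such a toolkit implements, the following Python sketch runs a basic random-walk Metropolis sampler on a simple one-parameter posterior; all names and numbers are hypothetical and none of this corresponds to the BAT.jl interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_posterior(theta, data):
    """Unnormalized log posterior: standard normal prior on theta plus a
    Gaussian likelihood for the mean of the data (known unit variance)."""
    log_prior = -0.5 * theta**2
    log_like = -0.5 * np.sum((data - theta) ** 2)
    return log_prior + log_like

def metropolis(data, n_steps=5000, step=0.5):
    """Random-walk Metropolis sampler for a one-dimensional posterior."""
    theta = 0.0
    samples = np.empty(n_steps)
    for i in range(n_steps):
        proposal = theta + step * rng.normal()
        if np.log(rng.uniform()) < log_posterior(proposal, data) - log_posterior(theta, data):
            theta = proposal          # accept the proposed move
        samples[i] = theta
    return samples

data = rng.normal(1.0, 1.0, size=50)   # simulated observations
draws = metropolis(data)
print("posterior mean ~", draws[1000:].mean())   # discard burn-in
```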


2018 ◽ Vol 1 (2) ◽ pp. 281-295 ◽ Author(s): Alexander Etz, Julia M. Haaf, Jeffrey N. Rouder, Joachim Vandekerckhove

Hypothesis testing is a special form of model selection. Once a pair of competing models is fully defined, their definition immediately leads to a measure of how strongly each model supports the data. The ratio of their support is often called the likelihood ratio or the Bayes factor. Critical in the model-selection endeavor is the specification of the models. In the case of hypothesis testing, it is of the greatest importance that the researcher specify exactly what is meant by a “null” hypothesis as well as the alternative to which it is contrasted, and that these are suitable instantiations of theoretical positions. Here, we provide an overview of different instantiations of null and alternative hypotheses that can be useful in practice, but in all cases the inferential procedure is based on the same underlying method of likelihood comparison. An associated app can be found at https://osf.io/mvp53/. This article is the work of the authors and is reformatted from the original, which was published under a CC-By Attribution 4.0 International license and is available at https://psyarxiv.com/wmf3r/.
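As a hedged illustration of the likelihood-comparison idea (not taken from the article or its app), the following Python sketch computes a Bayes factor for a binomial rate: a point null of theta = 0.5 against an alternative that assigns theta a uniform Beta(1, 1) prior, with the marginal likelihood of the alternative obtained by numerical integration.

```python
from scipy.stats import binom, beta
from scipy.integrate import quad

# Hypothetical data: k successes in n Bernoulli trials.
n, k = 20, 15

# Null hypothesis: theta is exactly 0.5.
marginal_null = binom.pmf(k, n, 0.5)

# Alternative hypothesis: theta unknown, with a uniform Beta(1, 1) prior.
# Its marginal likelihood averages the binomial likelihood over the prior.
marginal_alt, _ = quad(lambda t: binom.pmf(k, n, t) * beta.pdf(t, 1, 1), 0, 1)

bf10 = marginal_alt / marginal_null
print(f"Bayes factor (alternative vs null): {bf10:.2f}")
```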


1970 ◽ Vol 64 (3) ◽ pp. 772-791 ◽ Author(s): Melvin J. Hinich, Peter C. Ordeshook

Spatial models of party competition constitute a recent and incrementally developing literature which seeks to explore the relationships between citizens' decisions and candidates' strategies. Despite the mathematical and deductive rigor of this approach, it is only now that political scientists can begin to see the incorporation of those considerations which less formal analyses identify as salient, and perhaps crucial, features of election contests.

One such consideration concerns the candidates' objectives. Specifically, spatial analysis often blurs the distinction between candidates who maximize votes and candidates who maximize plurality. Downs and Garvey, for example, assume explicitly that candidates maximize votes, though plurality maximization is clearly the assumption which Garvey actually employs, while Downs frequently treats vote maximization, plurality maximization, and the goal of winning as equivalent. Downs, nevertheless, attempts to disentangle these objectives, observing that plurality maximization is the appropriate objective for candidates in a single-member district, while vote maximization is appropriate in proportional representation systems with many parties. All subsequent spatial analysis research, however, assumes either implicitly or explicitly that candidates maximize plurality. If Downs is correct, therefore, this research may not be relevant for a general understanding of electoral competition in diverse constitutional or historical circumstances. The question, then, is whether the strategies that maximize votes differ from the strategies that maximize plurality.
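To make the vote-versus-plurality distinction concrete, here is a hypothetical numerical sketch (not from the article): a one-dimensional electorate in which alienated voters abstain unless some candidate is within a fixed distance. Under these assumed numbers, the position that maximizes candidate A's vote total differs from the position that maximizes A's plurality over a fixed opponent.

```python
import numpy as np

# Toy one-dimensional electorate with abstention from alienation: a voter
# participates only if some candidate is within distance r, and then votes
# for the nearer candidate.  (Illustrative numbers only.)
positions = np.array([0.0, 0.6])        # voter bloc locations
weights   = np.array([35, 30])          # bloc sizes
b, r = 0.7, 0.25                        # opponent's fixed position, alienation radius

def outcome(a):
    dist_a = np.abs(positions - a)
    dist_b = np.abs(positions - b)
    votes_a = weights[(dist_a <= r) & (dist_a < dist_b)].sum()
    votes_b = weights[(dist_b <= r) & (dist_b <= dist_a)].sum()
    return votes_a, votes_a - votes_b   # (vote total, plurality)

grid = np.linspace(0, 1, 201)
results = np.array([outcome(a) for a in grid])
print("vote-maximizing position:     ", grid[results[:, 0].argmax()])
print("plurality-maximizing position:", grid[results[:, 1].argmax()])
# With abstention the two objectives pick different positions here; with full
# turnout and two candidates they would coincide.
```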


1999 ◽ Vol 79 (2) ◽ pp. 186-195 ◽ Author(s): Julius Sim, Norma Reid

Abstract This article examines the role of the confidence interval (CI) in statistical inference and its advantages over conventional hypothesis testing, particularly when data are applied in the context of clinical practice. A CI provides a range of population values with which a sample statistic is consistent at a given level of confidence (usually 95%). Conventional hypothesis testing serves to either reject or retain a null hypothesis. A CI, while also functioning as a hypothesis test, provides additional information on the variability of an observed sample statistic (ie, its precision) and on its probable relationship to the value of this statistic in the population from which the sample was drawn (ie, its accuracy). Thus, the CI focuses attention on the magnitude and the probability of a treatment or other effect. It thereby assists in determining the clinical usefulness and importance of, as well as the statistical significance of, findings. The CI is appropriate for both parametric and nonparametric analyses and for both individual studies and aggregated data in meta-analyses. It is recommended that, when inferential statistical analysis is performed, CIs should accompany point estimates and conventional hypothesis tests wherever possible.
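As an illustrative sketch of the recommendation (simulated data, not from the article), the following Python code reports both a conventional two-sample t test and the corresponding 95% CI for the mean difference, so the point estimate, its precision, and the test decision can be read together.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical treatment and control outcomes (illustrative data only).
treatment = rng.normal(12.0, 4.0, size=30)
control   = rng.normal(10.0, 4.0, size=30)

n1, n2 = len(treatment), len(control)
diff = treatment.mean() - control.mean()

# Pooled-variance standard error, matching the conventional two-sample t test.
sp2 = ((n1 - 1) * treatment.var(ddof=1) + (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2)
se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se

t_stat, p_value = stats.ttest_ind(treatment, control)

# The test gives only a reject/retain decision; the CI also conveys the range
# of effect sizes consistent with the data, which aids clinical judgement.
print(f"mean difference = {diff:.2f}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```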

