statistical decision theory
Recently Published Documents

Total documents: 256 (five years: 24)
H-index: 23 (five years: 1)

Author(s):  
Marc Hallin

Unlike the real line, the real space, in dimension d ≥ 2, is not canonically ordered. As a consequence, extending to a multivariate context fundamental univariate statistical tools such as quantiles, signs, and ranks is anything but obvious. Tentative definitions have been proposed in the literature but do not enjoy the basic properties (e.g., distribution-freeness of ranks, their independence with respect to the order statistic, their independence with respect to signs) they are expected to satisfy. Based on measure transportation ideas, new concepts of distribution and quantile functions, ranks, and signs have been proposed recently that, unlike previous attempts, do satisfy these properties. These ranks, signs, and quantiles have been used, quite successfully, in several inference problems and have triggered, in a short span of time, a number of applications: fully distribution-free testing for multiple-output regression, MANOVA, and VAR models; R-estimation for VARMA parameters; distribution-free testing for vector independence; multiple-output quantile regression; nonlinear independent component analysis; and so on. Expected final online publication date for the Annual Review of Statistics, Volume 9 is March 2022. Please see http://www.annualreviews.org/page/journal/pubdates for revised estimates.
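The distribution-freeness property at stake can be illustrated in the familiar univariate case, where ranks are well defined: whatever the (continuous) generating distribution, the rank vector is a uniformly random permutation of 1..n. The sketch below demonstrates only this univariate property, not the measure-transportation construction the review describes.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

def ranks(x):
    # rank of each observation (1 = smallest)
    return np.argsort(np.argsort(x)) + 1

# whatever the generating distribution, the ranks are exactly {1, ..., n},
# and their joint law does not depend on that distribution
for sample in (rng.normal(size=n), rng.exponential(size=n)):
    r = ranks(sample)
    assert sorted(r) == list(range(1, n + 1))
print("ranks form a permutation of 1..n for both samples")
```

In dimension d ≥ 2 there is no canonical ordering to play the role of `argsort`, which is precisely the gap the measure-transportation definitions fill.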


Author(s):  
V. S. Mukha ◽  
N. F. Kako

In many applications it is desirable to consider not one random vector but several random vectors with a joint distribution. This paper is devoted to the integral and to integral transformations connected with the joint vector Gaussian probability density function. Such integrals and transformations arise in statistical decision theory, particularly in dual control theory, which is based on statistical decision theory. One result presented in the paper is the integral of the joint Gaussian probability density function. The other results are the total probability formula and the Bayes formula expressed in terms of the joint vector Gaussian probability density function. As an example, Bayesian estimates of the coefficients of the multiple regression function are obtained. The proposed integrals can be used as table integrals in various fields of research.
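A minimal sketch of the kind of calculation the paper's Bayes formula enables: conjugate Bayesian estimation of multiple-regression coefficients when prior and noise are both Gaussian. The notation (`m0`, `S0`, `sigma`) and the data are illustrative assumptions, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 3
X = rng.normal(size=(n, p))
theta_true = np.array([2.0, -1.0, 0.5])
sigma = 0.3
y = X @ theta_true + sigma * rng.normal(size=n)

m0 = np.zeros(p)      # prior mean of the coefficients
S0 = np.eye(p)        # prior covariance
S0_inv = np.linalg.inv(S0)

# Gaussian posterior N(m_n, S_n) -- a special case of the Bayes formula
# for joint Gaussian densities
S_n = np.linalg.inv(S0_inv + X.T @ X / sigma**2)
m_n = S_n @ (S0_inv @ m0 + X.T @ y / sigma**2)
print(np.round(m_n, 2))  # posterior mean, close to theta_true
```

Because everything stays Gaussian, the posterior is available in closed form; no numerical integration is needed.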


Doklady BGUIR ◽  
2021 ◽  
Vol 19 (2) ◽  
pp. 58-64
Author(s):  
V. S. Mukha ◽  
N. F. Kako

The total probability formula for continuous random variables is the integral of the product of two probability density functions that defines the unconditional probability density function from the conditional one. The need to calculate such integrals arises in many applications, for instance, in statistical decision theory. Statistical decision theory attracts attention because it allows problems to be formulated in a strict mathematical form. One of the technical problems solved by statistical decision theory is the problem of dual control, which requires the calculation of integrals connected with multivariate probability distributions. The necessary integrals are not available in the literature. One theorem on the total probability formula for vector Gaussian distributions was published by the authors earlier. In this paper we restate this theorem and prove a new theorem that states both the initial data and the result in a more familiar form. The new form of the theorem allows us to obtain the unconditional mathematical expectation and the unconditional variance-covariance matrix very simply. We also confirm the new theorem by direct calculation for the case of simple linear regression.
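The unconditional moments in the linear-Gaussian case take a well-known closed form (our notation, not the paper's): if x ~ N(mu, Gamma) and y | x ~ N(Ax + b, Sigma), the total probability formula yields y ~ N(A mu + b, Sigma + A Gamma Aᵀ). The sketch below checks the mean and covariance by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([1.0, -2.0])
Gamma = np.array([[1.0, 0.3], [0.3, 0.5]])
A = np.array([[2.0, 0.0], [1.0, 1.0]])
b = np.array([0.5, 0.0])
Sigma = 0.1 * np.eye(2)

N = 200_000
x = rng.multivariate_normal(mu, Gamma, size=N)
y = x @ A.T + b + rng.multivariate_normal(np.zeros(2), Sigma, size=N)

# unconditional mean A mu + b and covariance Sigma + A Gamma A^T
assert np.allclose(y.mean(axis=0), A @ mu + b, atol=0.02)
assert np.allclose(np.cov(y.T), Sigma + A @ Gamma @ A.T, atol=0.05)
print("Monte Carlo agrees with the closed-form moments")
```

This is the "very simple" computation the new form of the theorem delivers: no integral needs to be evaluated explicitly.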


Econometrica ◽  
2021 ◽  
Vol 89 (6) ◽  
pp. 2827-2853
Author(s):  
Charles F. Manski

Haavelmo (1944) proposed a probabilistic structure for econometric modeling, aiming to make econometrics useful for decision making. His fundamental contribution has become thoroughly embedded in econometric research, yet it could not answer all the deep issues that the author raised. Notably, Haavelmo struggled to formalize the implications for decision making of the fact that models can at most approximate actuality. In the same period, Wald (1939, 1945) initiated his own seminal development of statistical decision theory. Haavelmo favorably cited Wald, but econometrics did not embrace statistical decision theory. Instead, it focused on study of identification, estimation, and statistical inference. This paper proposes use of statistical decision theory to evaluate the performance of models in decision making. I consider the common practice of as‐if optimization: specification of a model, point estimation of its parameters, and use of the point estimate to make a decision that would be optimal if the estimate were accurate. A central theme is that one should evaluate as‐if optimization or any other model‐based decision rule by its performance across the state space, listing all states of nature that one believes feasible, not across the model space. I apply the theme to prediction and treatment choice. Statistical decision theory is conceptually simple, but application is often challenging. Advancing computation is the primary task to complete the foundations sketched by Haavelmo and Wald.
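A toy illustration of the paper's central theme (our construction, not Manski's example): a planner treats iff a noisy point estimate of the average effect, theta_hat = theta + N(0, sigma²), is positive — as-if optimization — and the rule is evaluated by its maximum regret across the state space of feasible effects.

```python
import math

sigma = 1.0
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

def regret(theta):
    # regret in state theta = |theta| * probability the estimate
    # has the wrong sign, so the planner makes the wrong choice
    return abs(theta) * Phi(-abs(theta) / sigma)

# list all states of nature deemed feasible, then take the worst case
states = [t / 100 for t in range(-300, 301)]
max_regret = max(regret(t) for t in states)
worst = max(states, key=regret)
print(f"max regret {max_regret:.3f} at theta = ±{abs(worst):.2f}")
```

The worst case is not an extreme state but an intermediate one, where the effect is large enough to matter yet small enough to be mis-signed by the estimate — the kind of fact that evaluation across the model space alone would miss.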


Author(s):  
Luis Anunciação ◽  
Marco A. Arruda ◽  
J. Landeira-Fernandez

The clinical utility of a measure involves its ability to support a wide range of decisions that enhance its pragmatism and use. Although several statistics are part of this feature, one centerpiece of this concept is the ability of an instrument to provide cutoff scores that can accurately discriminate between groups that consist of patients and non-patients. This latter aspect leads to such concepts as sensitivity, specificity, positive and negative predictive values and likelihood ratios, accuracy, and receiver operating characteristic curves. This chapter addresses these topics from two perspectives. First, because these features of clinical utility are encompassed as a subfield of statistical decision theory, the authors provide a historical review that links null hypothesis significance testing (NHST), signal detection theory (SDT), and psychological testing. Second, a real-data approach is used to demonstrate these concepts. Additionally, a free software program was developed to present these concepts.
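A minimal worked example of the accuracy statistics the chapter covers, using a made-up 2×2 confusion matrix (the counts are illustrative, not the chapter's data).

```python
# rows = screening result, columns = true status
tp, fp = 40, 10   # screened positive: true / false positives
fn, tn = 5, 45    # screened negative: false / true negatives

sensitivity = tp / (tp + fn)              # P(test+ | patient)
specificity = tn / (tn + fp)              # P(test- | non-patient)
ppv = tp / (tp + fp)                      # P(patient | test+)
npv = tn / (tn + fn)                      # P(non-patient | test-)
lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
accuracy = (tp + tn) / (tp + fp + fn + tn)

print(sensitivity, specificity, ppv, npv, lr_pos, accuracy)
# ≈ 0.889, 0.818, 0.8, 0.9, 4.889, 0.85
```

Sweeping the cutoff score and recomputing sensitivity against (1 − specificity) at each value traces out the receiver operating characteristic curve mentioned above.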


Author(s):  
Aaron Mendon-Plasek

The slow and uneven forging of a novel constellation of practices, concerns, and values that became machine learning occurred in 1950s and 1960s pattern recognition research, through attempts to mechanize contextual significance by building "learning machines" that imitated human judgment by learning from examples. By the 1960s, two crises had emerged: the first was an inability to evaluate, compare, and judge different pattern recognition systems; the second was an inability to articulate what made pattern recognition a distinct discipline. The resolution of both crises through the problem-framing strategies of supervised and unsupervised learning, and through the incorporation of statistical decision theory, changed what it meant to provide an adequate description of the world, even as it caused researchers to reimagine their own scientific self-identities.


2020 ◽  
Vol 3 (1) ◽  
Author(s):  
Dimitris A. Pinotsis ◽  
Earl K. Miller

Neural activity is organized at multiple scales, ranging from the cellular to the whole brain level. Connecting neural dynamics at different scales is important for understanding brain pathology. Neurological diseases and disorders arise from interactions between factors that are expressed in multiple scales. Here, we suggest a new way to link microscopic and macroscopic dynamics through combinations of computational models. This exploits results from statistical decision theory and Bayesian inference. To validate our approach, we used two independent MEG datasets. In both, we found that variability in visually induced oscillations recorded from different people in simple visual perception tasks resulted from differences in the level of inhibition specific to deep cortical layers. This suggests differences in feedback to sensory areas and each subject's hypotheses about sensations due to differences in their prior experience. Our approach provides a new link between non-invasive brain imaging data, laminar dynamics and top-down control.


2020 ◽  
pp. 135481662094650
Author(s):  
Emilio Gómez-Déniz ◽  
José Boza-Chirino ◽  
Nancy Dávila-Cárdenes

In the Canary Islands (Spain), the tourism boom has been paralleled by sharp growth in the car rental sector. However, this economic activity is associated with problems such as rising levels of vehicle emissions. In this article, we discuss, on the one hand, the introduction of a tax to internalise the costs of emissions from car rental fleets and, on the other, the measures to reward users who rent environmentally-friendly cars. For this purpose, we propose a model based on statistical decision theory, from which a Bayesian rule is derived. According to this model, the tax increases with the number of days the car is rented but decreases in line with the environmental efficiency of the vehicle. A data sample of visitors to the Canary Islands is used to compare the covariates involved in computing the number of car rental days and the corresponding tax payable.
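A purely illustrative toy (not the paper's Bayesian rule) capturing the two qualitative properties the abstract states: the tax increases with the number of rental days and decreases with the vehicle's environmental efficiency. The functional form and constants are assumptions for the sketch.

```python
def rental_emissions_tax(days, efficiency, base_rate=2.0):
    """Toy tax in euros; efficiency in (0, 1], with 1 = cleanest vehicle."""
    return base_rate * days / efficiency

# more rental days -> higher tax; cleaner car -> lower tax
assert rental_emissions_tax(10, 0.5) > rental_emissions_tax(5, 0.5)
assert rental_emissions_tax(7, 0.9) < rental_emissions_tax(7, 0.3)
```

Any schedule with these monotonicity properties internalises emission costs while rewarding the choice of environmentally-friendly vehicles.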


2020 ◽  
pp. 073194872093187
Author(s):  
Jack M. Fletcher ◽  
David J. Francis ◽  
Barbara R. Foorman ◽  
Christopher Schatschneider

Many states now mandate early screening for dyslexia, but vary in how they address these mandates. There is confusion about the nature of screening versus diagnostic assessments, risk versus diagnosis, concurrent versus predictive validity, and inattention to indices of classification accuracy as the basis for determining risk. To help define what constitutes a screening assessment, we summarize efforts to develop short (3–5 min), teacher-administered screens that used multivariate strategies for variable selection, item response theory to select items that are most discriminating at a threshold for predicting risk, and statistical decision theory. These methods optimize prediction and lower the burden on teachers by reducing the number of items needed to evaluate risk. A specific goal of these efforts was to minimize decision errors that would result in the failure to identify a child as at risk of dyslexia/reading problems (false negatives) despite the inevitable increase in identifications of children who eventually perform in the typical range (false positives). Five screens, developed for different periods during kindergarten, Grade 1, and Grade 2, predicted outcomes measured later in the same school year (Grade 2) or in the subsequent year (Grade 1). The results of this approach to development are applicable to other screening methods, especially those that attempt to predict those children at risk of dyslexia prior to the onset of reading instruction. Without reliable and valid early predictive screening measures that reduce the burden on teachers, early intervention and prevention of dyslexia and related reading problems will be difficult.
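The trade-off the authors describe — accepting more false positives to minimize false negatives — can be sketched by weighting the two error types asymmetrically when choosing a cutoff score. The score distributions below are simulated, not the article's data, and the 5× cost ratio is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
at_risk = rng.normal(loc=-1.0, scale=1.0, size=5000)   # lower screen scores
typical = rng.normal(loc=+1.0, scale=1.0, size=5000)

def error_rates(cutoff):
    fn = np.mean(at_risk > cutoff)   # at-risk child screened as not at risk
    fp = np.mean(typical <= cutoff)  # typical child flagged as at risk
    return fn, fp

cutoffs = np.linspace(-3, 3, 121)
fn_weight = 5.0  # assumed: a false negative costs 5x a false positive
costs = [fn_weight * error_rates(c)[0] + error_rates(c)[1] for c in cutoffs]
best = cutoffs[int(np.argmin(costs))]
fn, fp = error_rates(best)
print(f"cutoff {best:.2f}: FN rate {fn:.3f}, FP rate {fp:.3f}")
```

Weighting false negatives heavily pushes the cutoff upward, driving missed identifications near zero at the cost of a substantially higher false-positive rate — exactly the inevitable increase the abstract notes.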

