Diffeological Statistical Models, the Fisher Metric and Probabilistic Mappings

Mathematics ◽  
2020 ◽  
Vol 8 (2) ◽  
pp. 167
Author(s):  
Hông Vân Lê

We introduce the notion of a C^k-diffeological statistical model, which allows us to apply the theory of diffeological spaces to (possibly singular) statistical models. In particular, we introduce a class of almost 2-integrable C^k-diffeological statistical models that encompasses all known statistical models for which the Fisher metric is defined. This class contains a statistical model which does not appear in the Ay–Jost–Lê–Schwachhöfer theory of parametrized measure models. Then, we show that, for any positive integer k, the class of almost 2-integrable C^k-diffeological statistical models is preserved under probabilistic mappings. Furthermore, the monotonicity theorem for the Fisher metric also holds for this class. As a consequence, the Fisher metric on an almost 2-integrable C^k-diffeological statistical model P ⊂ P(X) is preserved under any probabilistic mapping T : X ⇝ Y that is sufficient w.r.t. P. Finally, we extend the Cramér–Rao inequality to the class of 2-integrable C^k-diffeological statistical models.
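For orientation, the two classical statements being extended here read as follows in standard finite-dimensional notation (a sketch only; the paper's diffeological formulation is more general):

```latex
% Monotonicity of the Fisher metric under a probabilistic mapping T,
% with equality when T is sufficient w.r.t. the model P:
\[
  \mathrm{g}_{T_{*}P}\bigl(T_{*}\xi\bigr) \;\le\; \mathrm{g}_{P}(\xi),
  \qquad \text{with equality for all } \xi \text{ when } T \text{ is sufficient w.r.t. } P.
\]
% Cramér–Rao inequality: the covariance of an unbiased estimator
% \hat{\theta} is bounded below by the inverse of the Fisher metric:
\[
  \operatorname{Var}_{\xi}\bigl(\hat{\theta}\bigr) \;\succeq\; \mathrm{g}(\xi)^{-1}.
\]
```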

Modelling ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 78-104
Author(s):  
Vasili B. V. Nagarjuna ◽  
R. Vishnu Vardhan ◽  
Christophe Chesneau

Every day, new data must be analysed as well as possible in all areas of applied science, which requires the development of attractive statistical models, that is, models adapted to the context, easy to use and efficient. In this article, we innovate in this direction by proposing a new statistical model based on the sinusoidal transformation and the power Lomax distribution. We thus introduce a new three-parameter survival distribution called the sine power Lomax distribution. First, we present it theoretically and provide some of its significant properties. Then the practicality, utility and flexibility of the sine power Lomax model are demonstrated through a comprehensive simulation study and the analysis of nine real datasets, mainly from medicine and engineering. Based on relevant goodness-of-fit criteria, the sine power Lomax model is shown to fit better than several existing Lomax-like distributions.


2001 ◽  
Vol 281 (3) ◽  
pp. H1148-H1155 ◽  
Author(s):  
C. Cerutti ◽  
M. P. Gustin ◽  
P. Molino ◽  
C. Z. Paultre

Several methods for estimating stroke volume (SV) were tested in conscious, freely moving rats in which ascending aortic pressure and cardiac flow were simultaneously (beat-to-beat) recorded. We compared two pulse-contour models to two new statistical models including eight parameters extracted from the pressure waveform in a multiple linear regression. Global as well as individual statistical models gave higher correlation coefficients between estimated and measured SV (model 1, r = 0.97; model 2, r = 0.96) than pulse-contour models (model 1, r = 0.83; model 2, r = 0.91). The latter models, as well as statistical model 1, used the pulsatile systolic area and thus could be applied to only 47 ± 17% of the cardiac beats. In contrast, statistical model 2 used the pressure-increase characteristics and was therefore established for all of the cardiac beats. The global statistical model 2, applied to data sets independent of those used to establish the model, gave reliable SV estimates: r = 0.54 ± 0.07, a small bias between −8% and +10%, and a mean precision of 7%. This work demonstrated the limits of pulse-contour models for estimating SV in conscious, unrestrained rats. A multivariate statistical model using eight parameters easily extracted from the aortic waveform could be applied to all cardiac beats with good precision.
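The regression step can be sketched as follows, on synthetic data with placeholder features; the paper's eight waveform parameters (pressures, pressure-increase characteristics, durations, etc.) are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the eight pressure-waveform features.
n_beats, n_features = 500, 8
X = rng.normal(size=(n_beats, n_features))
true_beta = rng.normal(size=n_features)
sv = X @ true_beta + 2.0 + 0.1 * rng.normal(size=n_beats)  # "measured" SV + noise

# Multiple linear regression: SV ~ intercept + 8 waveform parameters.
A = np.column_stack([np.ones(n_beats), X])
coef, *_ = np.linalg.lstsq(A, sv, rcond=None)
sv_hat = A @ coef

# Correlation between estimated and "measured" SV.
r = np.corrcoef(sv, sv_hat)[0, 1]
```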


Author(s):  
Paolo Giudici ◽  
Emanuela Raffinetti

Abstract: In a world that is increasingly connected on-line, cyber risks become critical. Cyber risk management is very difficult, as cyber loss data are typically not disclosed. To mitigate the reputational risks associated with their disclosure, loss data may be collected in terms of ordered severity levels. However, to date, there are no risk models for ordinal cyber data. We fill the gap, proposing a rank-based statistical model aimed at predicting the severity levels of cyber risks. The application of our approach to a real-world case shows that the proposed models are statistically sound yet simple to implement and interpret.


2018 ◽  
Vol 02 (02) ◽  
pp. 1850015 ◽  
Author(s):  
Joseph R. Barr ◽  
Joseph Cavanaugh

It is not unusual that efforts to validate a statistical model exceed those used to build the model. Multiple techniques are used to validate, compare and contrast competing statistical models: some are concerned with a model's ability to predict new data, while others are concerned with how well the model describes the data. Without claiming to provide a comprehensive view of the landscape, in this paper we touch on both aspects of model validation. There is much more to the subject, and the reader is referred to any of the many classical statistical texts, including the revised two volumes of Bickel and Doksum (2016), the one by Hastie, Tibshirani, and Friedman [The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd edn. (Springer, 2009)], and several others listed in the bibliography.


2020 ◽  
Vol 495 (3) ◽  
pp. 2738-2753 ◽  
Author(s):  
May G Pedersen ◽  
Ana Escorza ◽  
Péter I Pápics ◽  
Conny Aerts

ABSTRACT We provide three statistical model prescriptions for the bolometric corrections appropriate for B-type stars as a function of (i) Teff, (ii) Teff and log g, and (iii) Teff, log g and [M/H]. These statistical models have been calculated for 27 different filters, including those of the Gaia space mission, and were derived from two different grids of bolometric corrections assuming LTE and LTE+NLTE, respectively. Previous such work has mainly been limited to a single photometric passband, without taking into account non-local thermodynamic equilibrium (NLTE) effects on the bolometric corrections. Using these statistical models, we calculate the luminosities of 34 slowly pulsating B-type (SPB) stars with available spectroscopic parameters, to place them in the Hertzsprung–Russell diagram and to compare their position to the theoretical SPB instability strip. We find that excluding NLTE effects has no significant effect on the derived luminosities for the temperature range 11 500–21 000 K. We conclude that spectroscopic parameters are needed in order to achieve meaningful luminosities of B-type stars. The three prescriptions for the bolometric corrections are valid for any Galactic B-type star with effective temperature and surface gravity in the ranges 10 000–30 000 K and 2.5–4.5 dex, respectively, covering regimes below the Eddington limit.
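The step from a bolometric correction to a luminosity can be sketched as follows, using the IAU 2015 zero point M_bol,Sun = 4.74; the BC values themselves would come from prescriptions such as those in the paper and are not reproduced here:

```python
M_BOL_SUN = 4.74  # IAU 2015 Resolution B2 zero point

def log_luminosity(abs_mag, bc):
    """log10(L / Lsun) from the absolute magnitude in some passband and
    the bolometric correction BC for that passband: M_bol = M + BC."""
    m_bol = abs_mag + bc
    return -0.4 * (m_bol - M_BOL_SUN)
```

For example, a star with M_bol = −0.26 (i.e. 5 magnitudes brighter bolometrically than the Sun) has log10(L/Lsun) = 2.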


2011 ◽  
Vol 30 (2) ◽  
pp. 77 ◽  
Author(s):  
Marko Bukovec ◽  
Boštjan Likar ◽  
Franjo Pernuš

This paper presents a framework for the segmentation of anatomical structures in medical images by connected statistical models. The framework is based on three types of models: first, generic models, which operate directly on image intensities; second, connecting models, which impose restrictions on the spatial relationships of the generic models; and third, a supervising model, which represents an arbitrary number of generic and connecting models. In this paper, the statistical model of appearance is used as the generic model, while the statistical model of topology, obtained by applying principal component analysis (PCA) to the aligned pose and shape parameters of the generic models, is used as the connecting model. The performance of such a connected statistical model is demonstrated on anterior-posterior (AP) X-ray images of the hips and pelvis and compared to modelling by one and by six unconnected generic models. The most accurate and robust results were obtained by two-level hierarchical modelling, in which connected statistical models were used first, followed by unconnected statistical models.
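The PCA step of the connecting model can be sketched as follows (a generic PCA over stacked pose-and-shape parameter vectors, one row per training image; names are illustrative):

```python
import numpy as np

def fit_connecting_model(params, n_components):
    """PCA of aligned pose+shape parameter vectors (rows = training images).

    Returns the mean vector, the leading principal axes (as rows), and the
    corresponding per-component variances; together these define a
    statistical model of topology linking the generic models.
    """
    mean = params.mean(axis=0)
    centered = params - mean
    # SVD of the centered data: rows of vt are the principal axes.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    variances = s**2 / (len(params) - 1)
    return mean, vt[:n_components], variances[:n_components]
```

New configurations are then constrained by projecting onto the retained axes, which restricts the relative pose and shape of the generic models to plausible combinations seen in training.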


Author(s):  
Jan Sprenger ◽  
Stephan Hartmann

How does Bayesian inference handle the highly idealized nature of many (statistical) models in science? The standard interpretation of probability as degree of belief in the truth of a model does not seem to apply in such cases since all candidate models are most probably wrong. Similarly, it is not clear how chance-credence coordination works for the probabilities generated by a statistical model. We solve these problems by developing a suppositional account of degree of belief where probabilities in scientific modeling are decoupled from our actual (unconditional) degrees of belief. This explains the normative pull of chance-credence coordination in Bayesian inference, uncovers the essentially counterfactual nature of reasoning with Bayesian models, and squares well with our intuitive judgment that statistical models provide “objective” probabilities.


Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 703 ◽  
Author(s):  
Jun Suzuki

In this paper, we classify quantum statistical models, based on their information geometric properties and the estimation error bound known as the Holevo bound, into four classes: classical, quasi-classical, D-invariant, and asymptotically classical models. We then characterize each class by several equivalent conditions and discuss its properties. This result enables us to explore the relationships among these four classes and reveals a geometrical understanding of quantum statistical models. In particular, we show that each class of model can be identified by comparing quantum Fisher metrics and the properties of the tangent spaces of the quantum statistical model.
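One of the quantum Fisher metrics being compared, the SLD metric, can be computed numerically by solving the Lyapunov-type equation ρL + Lρ = 2∂ρ for the symmetric logarithmic derivative L. A minimal sketch for a one-parameter qubit family (rotating a Bloch vector of length r, for which the SLD quantum Fisher information is known to equal r²):

```python
import numpy as np
from scipy.linalg import solve_sylvester

# Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def sld_qfi(theta, r=0.8):
    """SLD quantum Fisher information of rho(theta) = (I + r n(theta).sigma)/2,
    with Bloch direction n(theta) = (sin theta, 0, cos theta), 0 < r < 1."""
    rho = 0.5 * (I2 + r * (np.sin(theta) * sx + np.cos(theta) * sz))
    drho = 0.5 * r * (np.cos(theta) * sx - np.sin(theta) * sz)
    # The SLD L solves rho @ L + L @ rho = 2 * drho (a Sylvester equation).
    L = solve_sylvester(rho, rho, 2 * drho)
    return float(np.real(np.trace(rho @ L @ L)))
```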


2016 ◽  
Vol 27 (2) ◽  
pp. 480-489 ◽  
Author(s):  
Moonseong Heo ◽  
Namhee Kim ◽  
Michael L Rinke ◽  
Judith Wylie-Rosett

Stepped-wedge (SW) designs have been steadily implemented in a variety of trials. A SW design typically assumes a three-level hierarchical data structure, where participants are nested within times or periods, which are in turn nested within clusters. Therefore, statistical models for the analysis of SW trial data need to consider two correlations, the first- and second-level correlations. Existing power functions and sample size determination formulas have been derived from statistical models for two-level data structures; consequently, the second-level correlation has not been incorporated in conventional power analyses. In this paper, we derive a closed-form explicit power function based on a statistical model for three-level continuous outcome data. The power function is based on a pooled overall estimate of stratified cluster-specific estimates of an intervention effect, and the sampling distribution of the pooled estimate is derived by applying a fixed-effect meta-analytic approach. Simulation studies verified that the derived power function is unbiased and applicable to a varying number of participants per period per cluster. In addition, when data structures are assumed to have two levels, we compare three types of power functions in additional simulation studies under a two-level statistical model. In this case, the power function based on the sampling distribution of a marginal, as opposed to pooled, estimate of the intervention effect performed best. Extensions of the power functions to binary outcomes are also suggested.
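A minimal sketch of the fixed-effect (inverse-variance) pooling and the resulting two-sided power, using generic meta-analytic formulas; the paper's closed-form power function additionally models the two intracluster correlations, which are not represented here:

```python
from statistics import NormalDist

def pooled_power(cluster_ses, delta, alpha=0.05):
    """Power to detect an intervention effect `delta` using a fixed-effect
    pooled estimate of cluster-specific estimates with standard errors
    `cluster_ses` (inverse-variance weighting)."""
    weights = [1.0 / se**2 for se in cluster_ses]
    se_pooled = (1.0 / sum(weights)) ** 0.5   # SE of the pooled estimate
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(delta) / se_pooled - z_crit)
```

For example, quadrupling the number of equally precise clusters halves the pooled standard error and raises the power accordingly.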


2021 ◽  
Vol 54 (48) ◽  
pp. 485301
Author(s):  
Alessandro Candeloro ◽  
Matteo G A Paris ◽  
Marco G Genoni

Abstract We address the use of asymptotic incompatibility (AI) to assess the quantumness of a multiparameter quantum statistical model. AI is a recently introduced measure which quantifies the difference between the Holevo and the symmetric logarithmic derivative (SLD) scalar bounds, and can be evaluated using only the SLD operators of the model. First, we evaluate analytically the AI of the most general quantum statistical models involving two-level (qubit) and single-mode Gaussian continuous-variable quantum systems, and prove that AI is a simple monotonic function of the state purity. Then, we numerically investigate the same problem for qudits (d-dimensional quantum systems, with 2 < d ⩽ 4), showing that, while AI is not in general a function of purity, we have enough numerical evidence to conclude that the maximum amount of AI is attainable only for quantum statistical models characterized by a purity larger than μ_min = 1/(d − 1). In addition, by parametrizing qudit states as thermal (Gibbs) states, numerical results suggest that, once the spectrum of the Hamiltonian is fixed, the AI measure is in one-to-one correspondence with the fictitious temperature parameter β characterizing the family of density operators. Finally, by studying in detail the definition and properties of the AI measure, we find that: (i) given a quantum statistical model, one can readily identify the maximum number of asymptotically compatible parameters; and (ii) the AI of a quantum statistical model bounds from above the AI of any sub-model obtained by fixing one or more of the original unknown parameters (or functions thereof), leading to possibly useful bounds on the AI of models involving noisy quantum dynamics.

