statistical formalism
Recently Published Documents

TOTAL DOCUMENTS: 16 (five years: 3)
H-INDEX: 5 (five years: 1)

Author(s): Lionel Roques, Etienne Klein, Julien Papaïx, Antoine Sar, Samuel Soubeyrand

Abstract: The COVID-19 epidemic started in the Hubei province of China in December 2019 and then spread around the world, reaching the pandemic stage at the beginning of March 2020. Since then, several countries have gone into lockdown. We estimate the effect of the lockdown in France on the contact rate and on the effective reproduction number Re of COVID-19. We obtain a reduction by a factor of 7 (Re = 0.47, 95%-CI: 0.45-0.50) compared with the estimates carried out in France at the early stage of the epidemic. We also estimate the fraction of the population that would be infected by the beginning of May, the official date at which the lockdown should be relaxed. We find a fraction of 3.7% (95%-CI: 3.0-4.8%) of the total French population, without taking into account the number of individuals who recovered before April 1st, which is not known. This proportion is seemingly too low to reach herd immunity. Thus, even though the lockdown strongly mitigated the first epidemic wave, keeping Re low is crucial to avoid an uncontrolled second wave (initiated with many more infectious cases than the first wave) and hence to avoid saturating hospital facilities. Our approach is based on the mechanistic-statistical formalism, which uses a probabilistic model to connect the data-collection process with the latent epidemiological process, described here by a SIR-type differential equation model.
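The mechanistic-statistical formalism couples an observation model to a latent SIR-type epidemic model. As a minimal illustration of the latent SIR part only (not the authors' full estimation procedure), the sketch below integrates a SIR system using the abstract's point estimate Re = 0.47 together with an assumed recovery rate; with Re < 1 the infectious fraction decays rather than producing a second wave:

```python
# Minimal SIR sketch (illustration only, not the paper's mechanistic-statistical
# model). Parameter values other than Re are assumptions for this example.
from scipy.integrate import solve_ivp

def sir(t, y, beta, gamma):
    """SIR dynamics on population fractions: S + I + R = 1."""
    S, I, R = y
    dS = -beta * S * I
    dI = beta * S * I - gamma * I
    dR = gamma * I
    return [dS, dI, dR]

gamma = 0.1            # assumed recovery rate (per day)
Re = 0.47              # effective reproduction number under lockdown (paper's estimate)
beta = Re * gamma      # contact rate implied by Re when S is close to 1

# Start with 1% of the population infectious and integrate over 200 days.
sol = solve_ivp(sir, (0.0, 200.0), [0.99, 0.01, 0.0], args=(beta, gamma))
S, I, R = sol.y[:, -1]
print(f"final infectious fraction: {I:.2e}")  # decays, since Re < 1
```

Because Re < 1, each infectious case generates fewer than one new case on average, so the infectious compartment shrinks roughly exponentially at rate gamma·(Re − 1).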


Pramana, 2019, Vol 92 (5)
Author(s): Chol Jong, Byong-Il Ri, Gwang-Dong Yu, Song-Guk Kim, Son-Il Jo, ...

2019, Vol 6 (1), pp. 433-460
Author(s): James O. Berger, Leonard A. Smith

The use of models to try to better understand reality is ubiquitous. Models have proven useful in testing our current understanding of reality; for instance, the climate models of the 1980s were built for scientific discovery, to achieve a better understanding of the general dynamics of climate systems. Scientific insights often take the form of general qualitative predictions (e.g., "under these conditions, the Earth's poles will warm more than the rest of the planet"); such use of models differs from making quantitative forecasts of specific events (e.g., "high winds at noon tomorrow at London's Heathrow Airport"). It is sometimes hoped that, after sufficient model development, any model can be used to make quantitative forecasts for any target system. Even if that were the case, there would always be some uncertainty in the prediction. Uncertainty quantification aims to provide a framework within which that uncertainty can be discussed and, ideally, quantified in a manner relevant to practitioners using the forecast system. A statistical formalism has been developed that claims to be able to accurately assess the uncertainty in prediction. This article discusses whether and when the formalism can do so. It arose from an ongoing discussion between the authors on this issue, the second author being considerably more skeptical about the utility of the formalism in providing quantitative, decision-relevant information.


2015, Vol 5 (1), pp. 135-152
Author(s): Jan Vanhove

I discuss three common practices that obfuscate or invalidate the statistical analysis of randomized controlled interventions in applied linguistics. These are (a) checking whether randomization produced groups that are balanced on a number of possibly relevant covariates, (b) using repeated measures ANOVA to analyze pretest-posttest designs, and (c) using traditional significance tests to analyze interventions in which whole groups were assigned to the conditions (cluster randomization). The first practice is labeled superfluous, and taking full advantage of important covariates regardless of balance is recommended. The second is needlessly complicated, and analysis of covariance is recommended as a more powerful alternative. The third produces dramatic inferential errors, which are largely, though not entirely, avoided when mixed-effects modeling is used. This discussion is geared towards applied linguists who need to design, analyze, or assess intervention studies or other randomized controlled trials. Statistical formalism is kept to a minimum throughout.
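The ANCOVA recommended above for pretest-posttest designs can be sketched with simulated data. Everything in this example is an illustrative assumption (sample size, effect size, noise levels, and the statsmodels-based analysis), not material from the article:

```python
# Sketch of an ANCOVA for a pretest-posttest design: model the posttest score
# with the pretest as a covariate, rather than running a repeated-measures
# ANOVA on the pre/post scores. All numbers here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
group = np.repeat(["control", "treatment"], n // 2)
pretest = rng.normal(50, 10, n)
true_gain = np.where(group == "treatment", 5.0, 0.0)  # assumed true effect
posttest = 0.7 * pretest + true_gain + rng.normal(0, 8, n)

df = pd.DataFrame({"group": group, "pretest": pretest, "posttest": posttest})

# ANCOVA via ordinary least squares: posttest ~ pretest + group.
model = smf.ols("posttest ~ pretest + C(group)", data=df).fit()
print(model.params["C(group)[T.treatment]"])  # estimated treatment effect
```

Adjusting for the pretest as a covariate removes pretest-related variance from the outcome regardless of whether randomization happened to balance the groups, which is what makes this analysis more powerful than a gain-score or repeated-measures approach.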


2004, Vol 336 (3-4), pp. 376-390
Author(s): A.R. Plastino, C. Giordano, A. Plastino, M. Casas
