Joint Confidence Intervals for all Linear Functions of Means in the One-Way Layout with Unknown Group Variances

Biometrika ◽  
1972 ◽  
Vol 59 (3) ◽  
pp. 683
Author(s):  
Emil Spjotvoll
2019 ◽  
Vol 7 (1) ◽  
pp. 1-23
Author(s):  
Stanislav Anatolyev

Abstract The kurtosis of the distribution of financial returns characterized by high volatility persistence and thick tails is notoriously difficult to estimate precisely. We propose a simple but effective procedure for estimating the kurtosis coefficient (and variance) based on volatility filtering that uses a simple GARCH model. In addition to an estimate, the proposed algorithm signals whether the kurtosis (or variance) is finite or infinite. We also show how to construct confidence intervals around the proposed estimates. Simulations indicate that the proposed estimates are much less median biased than the usual method-of-moments estimates, and their confidence intervals have much more accurate coverage probabilities. The procedure also works well when the underlying volatility process is not the one the filtering technique is based on. We illustrate how the algorithm works using several actual series of returns.
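The volatility-filtering idea can be illustrated with a short sketch. The Python code below is a minimal illustration under assumptions not stated in the abstract (zero-mean returns, a Gaussian-QML GARCH(1,1) filter, and the textbook GARCH(1,1) fourth-moment condition); it is not the authors' exact algorithm, and the function names are hypothetical.

```python
# Minimal sketch (not the authors' exact procedure): estimate return kurtosis by
# filtering volatility with a Gaussian-QML GARCH(1,1), then combining the kurtosis
# of the standardized residuals with the fitted GARCH parameters.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kurtosis

def garch11_filter(params, r):
    """Run the GARCH(1,1) variance recursion and return conditional variances."""
    omega, alpha, beta = params
    sigma2 = np.empty(len(r))
    sigma2[0] = r.var()                          # initialize at the sample variance
    for t in range(1, len(r)):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    return sigma2

def garch11_qml(r):
    """Gaussian quasi-ML estimates of (omega, alpha, beta)."""
    def negloglik(params):
        omega, alpha, beta = params
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return np.inf
        s2 = garch11_filter(params, r)
        return 0.5 * np.sum(np.log(s2) + r ** 2 / s2)
    x0 = np.array([r.var() * 0.05, 0.05, 0.90])
    return minimize(negloglik, x0, method="Nelder-Mead").x

def filtered_kurtosis(r):
    """Volatility-filtered kurtosis estimate with a finiteness signal."""
    omega, alpha, beta = garch11_qml(r)
    z = r / np.sqrt(garch11_filter((omega, alpha, beta), r))   # standardized residuals
    kz = kurtosis(z, fisher=False)                              # innovation kurtosis
    # E[sigma^4] (hence the return kurtosis) is finite iff kz*alpha^2 + 2*alpha*beta + beta^2 < 1
    if kz * alpha ** 2 + 2 * alpha * beta + beta ** 2 >= 1:
        return np.inf, False
    ab = alpha + beta
    kr = kz * (1 - ab ** 2) / (1 - ab ** 2 - (kz - 1) * alpha ** 2)
    return kr, True
```

With a demeaned return series `r`, a call such as `kr, is_finite = filtered_kurtosis(r)` returns the plug-in kurtosis estimate together with a finiteness flag; the confidence-interval construction described in the abstract is not reproduced here.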


Author(s):  
Marco Carricato ◽  
Joseph Duffy ◽  
Vincenzo Parenti-Castelli

Abstract In this article the inverse static analysis of a two-degree-of-freedom planar mechanism equipped with spiral springs is presented. The analysis aims to detect the entire set of equilibrium configurations of the mechanism once the external load is assigned. While the presence of flexural pivots represents a novelty, it also greatly complicates the problem, since the two state variables appear in the solving equations as arguments of both trigonometric and linear functions. The proposed procedure eliminates one variable and leads to two equations in one unknown only. The union of the root sets of these equations constitutes the global solution set of the problem. Particular attention has been devoted to the "reliability" of the final equations, seeking critical situations in which the solving equations hide solutions or yield false ones. A numerical example is provided. The Appendix also presents a particular design of the mechanism that offers computational advantages.
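To illustrate why equilibrium equations mixing linear spring terms with trigonometric load terms admit several coexisting solutions, the following Python sketch finds the equilibria of a hypothetical planar 2R linkage with spiral springs at its joints under a tip force. It is a toy model with assumed parameter values, solved by multi-start root finding rather than by the elimination procedure proposed in the article.

```python
# Hypothetical toy model (not the article's mechanism or its elimination procedure):
# equilibria of a planar 2R linkage with spiral springs at the joints under a tip force.
# The equilibrium equations mix linear spring torques k*theta with trigonometric load
# terms, so multiple equilibria can coexist; they are found here by seeding a Newton
# solver from a coarse grid and removing duplicate roots.
import numpy as np
from scipy.optimize import fsolve

l1, l2 = 1.0, 0.8          # link lengths (assumed values)
k1, k2 = 2.0, 1.5          # spiral-spring stiffnesses (assumed values)
Fx, Fy = 1.0, -0.5         # external tip force (assumed values)

def residual(q):
    t1, t2 = q
    # Jacobian-transpose mapping of the tip force to joint torques
    tau1 = Fx * (-l1*np.sin(t1) - l2*np.sin(t1+t2)) + Fy * (l1*np.cos(t1) + l2*np.cos(t1+t2))
    tau2 = Fx * (-l2*np.sin(t1+t2)) + Fy * (l2*np.cos(t1+t2))
    # spring torque (linear in the joint angle) must balance the load torque
    return [k1*t1 - tau1, k2*t2 - tau2]

solutions = []
for t1 in np.linspace(-np.pi, np.pi, 25):
    for t2 in np.linspace(-np.pi, np.pi, 25):
        root, _, ok, _ = fsolve(residual, [t1, t2], full_output=True)
        if ok == 1 and not any(np.allclose(root, s, atol=1e-6) for s in solutions):
            solutions.append(root)

for s in solutions:
    print(f"equilibrium: theta1 = {s[0]:.4f}, theta2 = {s[1]:.4f}")
```

The grid-plus-Newton search is a pragmatic check; unlike the article's analytical procedure it offers no guarantee of finding every equilibrium configuration.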


2019 ◽  
Author(s):  
Guillaume A Rousselet ◽  
Cyril R Pernet ◽  
Rand R. Wilcox

The bootstrap is a versatile technique that relies on data-driven simulations to make statistical inferences. When combined with robust estimators, the bootstrap can afford much more powerful and flexible inferences than is possible with standard approaches such as t-tests on means. In this R tutorial, we use detailed illustrations of bootstrap simulations to give readers an intuition of what the bootstrap does and how it can be applied to solve many practical problems, such as building confidence intervals for many aspects of the data. In particular, we illustrate how to build confidence intervals for measures of location, including measures of central tendency, in the one-sample case, for two independent and two dependent groups. We also demonstrate how to compare correlation coefficients using the bootstrap and to perform simulations to determine if the bootstrap is fit for purpose for a particular application. The tutorial also addresses two widespread misconceptions about the bootstrap: that it makes no assumptions about the data, and that it leads to robust inferences on its own. The tutorial focuses on detailed graphical descriptions, with data and code available online to reproduce the figures and analyses in the article (https://osf.io/8b4t5/).
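The tutorial itself is written in R; the following sketch is merely a Python analogue of the kind of percentile-bootstrap confidence interval it discusses, here for a 20% trimmed mean (a robust measure of central tendency) in the one-sample case. The function name and settings are illustrative assumptions, not code from the tutorial.

```python
# Percentile bootstrap CI for a robust estimator (20% trimmed mean), one-sample case:
# resample with replacement, re-estimate, and take quantiles of the bootstrap estimates.
import numpy as np
from scipy.stats import trim_mean

rng = np.random.default_rng(0)

def percentile_boot_ci(x, estimator=lambda v: trim_mean(v, 0.2),
                       n_boot=2000, alpha=0.05):
    x = np.asarray(x)
    boot = np.array([estimator(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return estimator(x), (lo, hi)

# example with a skewed sample
sample = rng.lognormal(mean=0.0, sigma=1.0, size=60)
est, (lo, hi) = percentile_boot_ci(sample)
print(f"20% trimmed mean = {est:.3f}, 95% percentile bootstrap CI = [{lo:.3f}, {hi:.3f}]")
```

As the tutorial stresses, the bootstrap by itself does not make inferences robust; pairing it with a robust estimator such as the trimmed mean is what protects against outliers and skewness.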


1985 ◽  
Vol 10 (3) ◽  
pp. 211-221
Author(s):  
Gottfried E. Noether

The paper presents a unified approach to some of the more popular nonparametric methods in current use. The approach provides the reader with new insights by exhibiting relationships to relevant population parameters, such as location and scale parameters for the one- and two-sample problems and regression parameters for bivariate data. For each parameter, a set of so-called elementary estimates is defined. The elementary estimates are then used to determine point estimates, confidence intervals, and test statistics for testing relevant nonparametric hypotheses. Among the tests discussed are the sign test, the Wilcoxon one- and two-sample tests, and Kendall’s test of independence.
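For the one-sample location problem, the elementary estimates are the Walsh averages (pairwise means), whose median is the Hodges-Lehmann estimate associated with the Wilcoxon signed-rank test, and whose order statistics yield a distribution-free confidence interval. The Python sketch below is a minimal illustration using a textbook normal approximation for the confidence-interval ranks; it is not code from the paper.

```python
# Elementary estimates for the one-sample location problem: the Walsh averages
# (x_i + x_j)/2, i <= j. Their median is the Hodges-Lehmann point estimate, and
# ordered Walsh averages give a distribution-free confidence interval for the center
# (normal approximation to the Wilcoxon signed-rank distribution used for the ranks).
import numpy as np
from itertools import combinations_with_replacement
from scipy.stats import norm

def hodges_lehmann_ci(x, conf=0.95):
    x = np.asarray(x, dtype=float)
    n = x.size
    walsh = np.sort([(a + b) / 2 for a, b in combinations_with_replacement(x, 2)])
    m = walsh.size                       # m = n(n+1)/2 elementary estimates
    point = np.median(walsh)             # Hodges-Lehmann point estimate
    z = norm.ppf(1 - (1 - conf) / 2)
    k = max(int(np.floor(m / 2 - z * np.sqrt(n * (n + 1) * (2 * n + 1) / 24))), 0)
    return point, (walsh[k], walsh[m - 1 - k])

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=30)
est, (lo, hi) = hodges_lehmann_ci(data)
print(f"Hodges-Lehmann estimate = {est:.3f}, ~95% CI = [{lo:.3f}, {hi:.3f}]")
```

The same pattern carries over to the two-sample and regression problems discussed in the paper, with pairwise differences and pairwise slopes playing the role of the elementary estimates.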

