Confidence intervals with a priori parameter bounds

2015, Vol 46 (3), pp. 347-365
Author(s): A. V. Lokhov, F. V. Tkachov


Econometrics, 2019, Vol 7 (2), pp. 26
Author(s): David Trafimow

There has been much debate about null hypothesis significance testing, p-values without null hypothesis significance testing, and confidence intervals. The first major section of the present article addresses some of the main reasons these procedures are problematic. The conclusion is that none of them is satisfactory. However, there is a new procedure, termed the a priori procedure (APP), that validly aids researchers in obtaining sample statistics that have acceptable probabilities of being close to their corresponding population parameters. The second major section provides a description and review of APP advances. Not only does the APP avoid the problems that plague other inferential statistical procedures, but it is also easy to perform. Although the APP can be performed in conjunction with other procedures, the present recommendation is that it be used alone.
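To make the APP concrete for its simplest case, a single normal mean, the required sample size follows from the sampling distribution of the mean: to have probability c that the sample mean falls within f population standard deviations of the population mean, n = (z_{(1+c)/2} / f)^2. The sketch below is a minimal illustration under that normal model; the function name and example values are mine, not the article's.

```python
# Minimal APP sketch for a single normal mean, assuming the standard
# result: P(|xbar - mu| < f*sigma) = c  =>  n = (z / f)^2, where z is
# the two-sided normal critical value for confidence c.
from math import ceil

from scipy.stats import norm

def app_sample_size(f: float, c: float) -> int:
    """Smallest n giving probability >= c that xbar is within f*sigma of mu."""
    z = norm.ppf((1 + c) / 2)
    return ceil((z / f) ** 2)

# e.g. to be 95% confident the sample mean lands within 0.1 sigma of mu:
print(app_sample_size(f=0.10, c=0.95))  # -> 385
```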


2019, Vol 104, pp. 02003
Author(s): Vladimir Marchuk, Dmitry Chernyshov, Ilya Sadrtdinov, Alexander Minaev

The paper presents results from studies of the probability of a "flip" of the approximating function when processing measurement results under a priori uncertainty about the signal function and the statistical characteristics of the additive noise. It is proved analytically that the confidence intervals for the probabilities of the absence and the presence of a "flip" are equal, which the experimental results confirm. Dependences of the "flip" of the approximating function on the sample length, the variance of the additive noise, and the rate of change of the function itself are obtained.
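The abstract does not spell out its "flip" criterion, so the Monte Carlo sketch below adopts one plausible reading purely for illustration: a flip occurs when additive noise reverses the sign of the fitted trend. It exhibits the same dependences the paper reports, on sample length, noise variance, and rate of change of the function.

```python
# Hedged Monte Carlo sketch: probability that additive noise "flips" the
# sign of a fitted linear trend. The flip criterion here (sign reversal
# of the fitted slope) is an assumption, not the paper's definition.
import numpy as np

rng = np.random.default_rng(0)

def flip_probability(n: int, sigma: float, slope: float, trials: int = 10_000) -> float:
    t = np.linspace(0.0, 1.0, n)
    flips = 0
    for _ in range(trials):
        y = slope * t + rng.normal(0.0, sigma, size=n)  # signal + additive noise
        fitted_slope = np.polyfit(t, y, deg=1)[0]
        flips += fitted_slope * slope < 0               # sign reversal = "flip"
    return flips / trials

# Flip probability falls with sample length and rises with noise dispersion:
for n in (10, 50, 200):
    print(n, flip_probability(n=n, sigma=1.0, slope=0.5))
```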


Psychology, 2020
Author(s): David Trafimow

There are two main inferential statistical camps in psychology: frequentists and Bayesians. Within the frequentist camp, most researchers support the null hypothesis significance testing procedure, but support is growing for using confidence intervals. The Bayesian camp holds a diversity of views that cannot be covered adequately here. Many researchers advocate power analysis to determine sample sizes. Finally, the a priori procedure is a promising new way to think about inferential statistics.
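As an illustration of the power-analysis approach mentioned above, here is a sketch using the textbook normal-approximation formula for a two-group mean comparison; the function and numbers are illustrative rather than drawn from this entry.

```python
# Conventional power analysis (normal approximation) for a two-sample
# comparison: per-group n for effect size d (Cohen's d), significance
# level alpha, and desired power.
from math import ceil

from scipy.stats import norm

def n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    z_alpha = norm.ppf(1 - alpha / 2)   # two-sided critical value
    z_beta = norm.ppf(power)            # quantile for desired power
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(d=0.5))  # medium effect -> roughly 63 per group
```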


Methodology, 2020, Vol 16 (2), pp. 112-126
Author(s): David Trafimow, Joshua Uhalt

Confidence intervals (CIs) constitute the most popular alternative to widely criticized null hypothesis significance tests. CIs provide more information than significance tests and lend themselves well to visual displays. Although CIs are no better than significance tests when used solely as significance tests, researchers need not limit themselves to this use of CIs. Rather, CIs can be used to estimate the precision of the data, and it is the precision argument that may set CIs in a superior position to significance tests. We tested two versions of the precision argument by performing computer simulations that assess how well sample-based CIs estimate a priori CIs. One version pertains to precision of width, whereas the other pertains to precision of location. Using both versions, sample-based CIs poorly estimate a priori CIs at typical sample sizes and perform better as sample sizes increase.
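A hedged sketch of the width version of such a simulation, under assumptions of my own (normal data, 95% t-based sample CIs, and an a priori CI computed from the known sigma): it reproduces the qualitative finding that sample-based widths track the a priori width poorly at small n and improve as n grows.

```python
# How well does the width of a sample-based 95% CI (sigma estimated,
# t critical value) track the a priori width computed from the known
# population sigma? Reports mean relative width error over many samples.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def mean_width_error(n: int, sigma: float = 1.0, trials: int = 5_000) -> float:
    t_crit = stats.t.ppf(0.975, df=n - 1)
    z_crit = stats.norm.ppf(0.975)
    apriori_width = 2 * z_crit * sigma / np.sqrt(n)   # known-sigma width
    samples = rng.normal(0.0, sigma, size=(trials, n))
    sample_widths = 2 * t_crit * samples.std(axis=1, ddof=1) / np.sqrt(n)
    return float(np.mean(np.abs(sample_widths - apriori_width) / apriori_width))

for n in (10, 50, 200, 1000):
    print(n, round(mean_width_error(n), 3))  # error shrinks as n increases
```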


2016, Vol 77 (5), pp. 831-854
Author(s): David Trafimow

There has been much controversy over the null hypothesis significance testing procedure, with much of the criticism centered on the problem of inverse inference. Specifically, p gives the probability of the finding (or one more extreme) given the null hypothesis, whereas the null hypothesis significance testing procedure involves drawing a conclusion about the null hypothesis given the finding. Many critics have called for null hypothesis significance tests to be replaced with confidence intervals. However, confidence intervals also suffer from a version of the inverse inference problem. The only known solution to the inverse inference problem is Bayes' famous theorem, but this involves commitments that many researchers are not willing to make. However, it is possible to ask a useful question for which inverse inference is not a problem and that leads to the computation of the coefficient of confidence. In turn, and more importantly, using the coefficient of confidence implies the desirability of switching from the current emphasis on a posteriori inferential statistics to an emphasis on a priori inferential statistics.
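For the simplest case of a normal mean, the coefficient of confidence inverts the APP sample-size question: given n and a precision f in population standard deviations, c = 2*Phi(f*sqrt(n)) - 1. A minimal sketch under that model, with illustrative numbers:

```python
# Coefficient of confidence for a normal mean: the probability that the
# sample mean lands within f*sigma of mu for a given sample size n.
from math import sqrt

from scipy.stats import norm

def coefficient_of_confidence(n: int, f: float) -> float:
    return 2 * norm.cdf(f * sqrt(n)) - 1

print(coefficient_of_confidence(n=100, f=0.2))  # -> about 0.954
```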


PeerJ, 2017, Vol 5, pp. e4105
Author(s): James Steele, Andreas Endres, James Fisher, Paulo Gentil, Jürgen Giessing

‘Repetitions in Reserve’ (RIR) scales in resistance training (RT) are used to control effort but assume people can accurately predict performance a priori (i.e., the number of possible repetitions to momentary failure (MF)). This study examined the ability of trainees with different experience levels to predict the number of repetitions to MF. One hundred and forty-one participants underwent a full-body RT session involving single sets to MF and were asked to predict the number of repetitions they could complete before reaching MF on each exercise. Participants underpredicted the number of repetitions they could perform to MF (standard errors of measurement [95% confidence intervals] for the combined sample ranged between 2.64 [2.36–2.99] and 3.38 [3.02–3.83]). There was a tendency towards improved accuracy with greater experience. The ability to predict repetitions to MF is not perfectly accurate among most trainees, though it may improve with experience. Thus, RIR should be used cautiously in the prescription of RT. Trainers and trainees should be aware of this, as it may have implications for the attainment of training goals, particularly muscular hypertrophy.
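One common way to obtain a standard error of measurement from paired predicted and actual repetition counts is SEM = SD(differences) / sqrt(2); whether this matches the paper's estimator is an assumption on my part, and the data below are invented for illustration.

```python
# SEM sketch from paired predicted vs. actual repetitions to failure,
# using SEM = SD(differences) / sqrt(2). Data are hypothetical.
import numpy as np

predicted = np.array([8, 10, 12, 9, 11, 10, 7, 13])    # predicted reps (made up)
actual    = np.array([10, 12, 13, 12, 11, 13, 9, 15])  # reps to failure (made up)

diff = actual - predicted
sem = diff.std(ddof=1) / np.sqrt(2)
print(f"mean underprediction: {diff.mean():.2f} reps, SEM: {sem:.2f}")
```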


Author(s): D. E. Luzzi, L. D. Marks, M. I. Buckett

As the HREM becomes increasingly used for the study of dynamic localized phenomena, the development of techniques to recover the desired information from a real image is important. Often, the important features scatter only weakly in comparison to the matrix material and are further masked by statistical and amorphous noise. The desired information will usually involve accurate knowledge of the position and intensity of the contrast. In order to decipher the desired information from a complex image, cross-correlation (xcf) techniques can be utilized. Unlike other image processing methods which rely on data massaging (e.g. high/low pass filtering or Fourier filtering), the cross-correlation method is a rigorous data reduction technique with no a priori assumptions. We have examined basic cross-correlation procedures using images of discrete Gaussian peaks and have developed an iterative procedure that greatly enhances the capabilities of these techniques when the contrast from the peaks overlaps.
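A minimal sketch of the basic xcf idea, not the authors' iterative procedure: correlate a noisy image with a Gaussian template and take the argmax as the peak position. Sizes and noise levels are illustrative.

```python
# Locate a Gaussian peak in a noisy image by cross-correlation (computed
# as FFT convolution with the flipped template).
import numpy as np
from scipy.signal import fftconvolve

def gaussian_peak(size: int, sigma: float) -> np.ndarray:
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2))

rng = np.random.default_rng(2)
image = rng.normal(0.0, 0.5, size=(128, 128))   # statistical noise
true_pos = (40, 90)
image[32:49, 82:99] += gaussian_peak(17, 3.0)   # peak centred at true_pos

template = gaussian_peak(17, 3.0)
xcf = fftconvolve(image, template[::-1, ::-1], mode="same")
print("true:", true_pos, "found:", np.unravel_index(xcf.argmax(), xcf.shape))
```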


Author(s): H.S. von Harrach, D.E. Jesson, S.J. Pennycook

Phase contrast TEM has been the leading technique for high resolution imaging of materials for many years, whilst STEM has been the principal method for high-resolution microanalysis. However, it was demonstrated many years ago that low angle dark-field STEM imaging is a priori capable of almost 50% higher point resolution than coherent bright-field imaging (i.e. phase contrast TEM or STEM). This advantage was not exploited until Pennycook developed the high-angle annular dark-field (ADF) technique, which can provide an incoherent image showing both high image resolution and atomic number contrast. This paper describes the design and first results of a 300 kV field-emission STEM (VG Microscopes HB603U) which has improved ADF STEM image resolution towards the 1 angstrom target. The instrument uses a cold field-emission gun, generating a 300 kV beam of up to 1 μA from an 11-stage accelerator. The beam is focussed on to the specimen by two condensers and a condenser-objective lens with a spherical aberration coefficient of 1.0 mm.
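The roughly 50% figure can be checked against the standard point-resolution limits, d_coh ≈ 0.66(Cs·λ³)^(1/4) for Scherzer-focus phase contrast and d_incoh ≈ 0.43(Cs·λ³)^(1/4) for incoherent dark-field imaging (coefficients vary slightly between references), using the instrument's stated Cs = 1.0 mm at 300 kV:

```python
# Worked check of the ~50% resolution advantage of incoherent imaging,
# assuming d = k * (Cs * lambda^3)**0.25 with k ~ 0.66 (coherent,
# Scherzer) and k ~ 0.43 (incoherent); exact coefficients vary.
from math import sqrt

h, m0, e, c = 6.626e-34, 9.109e-31, 1.602e-19, 2.998e8

def electron_wavelength(V: float) -> float:
    """Relativistic electron wavelength (m) at accelerating voltage V."""
    return h / sqrt(2 * m0 * e * V * (1 + e * V / (2 * m0 * c**2)))

lam = electron_wavelength(300e3)   # about 1.97 pm at 300 kV
Cs = 1.0e-3                        # spherical aberration coefficient, 1.0 mm
d_coh = 0.66 * (Cs * lam**3) ** 0.25
d_incoh = 0.43 * (Cs * lam**3) ** 0.25
print(f"coherent: {d_coh*1e10:.2f} Å, incoherent: {d_incoh*1e10:.2f} Å, "
      f"ratio: {d_coh/d_incoh:.2f}")
```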


2019, Vol 4 (5), pp. 878-892
Author(s): Joseph A. Napoli, Linda D. Vallino

Purpose: The 2 most commonly used operations to treat velopharyngeal inadequacy (VPI) are superiorly based pharyngeal flap and sphincter pharyngoplasty, both of which may result in hyponasal speech and airway obstruction. The purpose of this article is to (a) describe the bilateral buccal flap revision palatoplasty (BBFRP) as an alternative technique to manage VPI while minimizing these risks and (b) conduct a systematic review of the evidence of BBFRP on speech and other clinical outcomes. A report comparing the speech of a child with hypernasality before and after BBFRP is presented.
Method: A review of databases was conducted for studies of buccal flaps to treat VPI. Using the principles of a systematic review, the articles were read, and data were abstracted for study characteristics that were developed a priori. With respect to the case report, speech and instrumental data from a child with repaired cleft lip and palate and hypernasal speech were collected and analyzed before and after surgery.
Results: Eight articles were included in the analysis. The results were positive, and the evidence is in favor of BBFRP in improving velopharyngeal function, while minimizing the risk of hyponasal speech and obstructive sleep apnea. Before surgery, the child's speech was characterized by moderate hypernasality, and after surgery, it was judged to be within normal limits.
Conclusion: Based on clinical experience and results from the systematic review, there is sufficient evidence that the buccal flap is effective in improving resonance and minimizing obstructive sleep apnea. We recommend BBFRP as another approach in selected patients to manage VPI.
Supplemental Material: https://doi.org/10.23641/asha.9919352


Addiction, 1997, Vol 92 (12), pp. 1671-1698
Author(s): Project Match Research Group
Keyword(s): A Priori
