From Finite Sample to Asymptotic Methods in Statistics

Author(s):  
Pranab K. Sen ◽  
Julio M. Singer ◽  
Antonio C. Pedroso de Lima
2002 ◽  
Vol 111 (2) ◽  
pp. 135-140
Author(s):  
Richard J. Smith ◽  
H. Peter Boswijk

Methodology ◽  
2012 ◽  
Vol 8 (1) ◽  
pp. 23-38 ◽  
Author(s):  
Manuel C. Voelkle ◽  
Patrick E. McKnight

The use of latent curve models (LCMs) has increased almost exponentially during the last decade. Researchers often regard the LCM as a “new” method for analyzing change, with little attention paid to the fact that the technique was originally introduced as an “alternative to standard repeated measures ANOVA and first-order auto-regressive methods” (Meredith & Tisak, 1990, p. 107). The first part of the paper reviews this close relationship and demonstrates how “traditional” methods, such as repeated measures ANOVA and MANOVA, can be formulated as LCMs. Given that latent curve modeling is essentially a large-sample technique, in contrast to the “traditional” finite-sample approaches, the second part of the paper uses a Monte Carlo simulation to examine to what degree the more flexible LCMs can actually replace some of the older tests. In addition, a structural equation modeling alternative to Mauchly’s (1940) test of sphericity is explored. Although “traditional” methods can be expressed as special cases of more general LCMs, we found that the equivalence holds only asymptotically. For practical purposes, no approach consistently outperformed the others in terms of power and Type I error, so the best method depends on the situation. We provide detailed recommendations on when to use which method.
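The ANOVA–LCM equivalence the authors exploit can be sketched numerically: under normality, a linear LCM with random intercept and slope factors has the same likelihood as a mixed-effects regression on time. The following Python sketch is illustrative only; the paper works in SEM software, statsmodels' MixedLM stands in here, and the simulated data and variable names are hypothetical.

```python
# Minimal sketch: a linear latent curve model (random intercept + slope)
# expressed as an equivalent mixed-effects model. Simulated data and
# variable names are hypothetical; the paper itself uses SEM software.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n, waves = 200, 4

# Simulate a linear growth process: y_it = a_i + b_i * t + e_it
intercepts = rng.normal(10.0, 2.0, n)   # latent intercept factor
slopes = rng.normal(0.5, 0.3, n)        # latent slope factor
records = []
for i in range(n):
    for t in range(waves):
        y = intercepts[i] + slopes[i] * t + rng.normal(0.0, 1.0)
        records.append({"id": i, "time": t, "y": y})
df = pd.DataFrame(records)

# Random-intercept, random-slope mixed model: its ML estimates coincide
# with those of the corresponding linear LCM.
model = smf.mixedlm("y ~ time", df, groups=df["id"], re_formula="~time")
fit = model.fit()
print(fit.summary())  # fixed effects ~ latent factor means,
                      # random-effect (co)variances ~ factor (co)variances
```

Fitting the same data as an LCM in SEM software would, in large samples, reproduce these estimates; the paper's point is precisely that this correspondence is exact only asymptotically.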


2006 ◽  
Vol 54 (3) ◽  
pp. 343-350 ◽  
Author(s):  
C. F. H. Longin ◽  
H. F. Utz ◽  
A. E. Melchinger ◽  
J. C. Reif

The optimum allocation of breeding resources is crucial for the efficiency of breeding programmes. The objectives were to (i) compare the selection gain ΔG_k for finite and infinite sample sizes, (ii) compare ΔG_k with the probability of identifying superior hybrids, P_k, and (iii) determine the optimum allocation of the number of hybrids and test locations in hybrid maize breeding using doubled haploids. Infinite compared with finite sample sizes led to almost identical optimum allocations of test resources, but to an inflation of ΔG_k. This inflation decreased as the budget and the number of finally selected hybrids increased. A reasonable P_k was reached for hybrids belonging to the q = 1% best of the population. The optimum allocations for P_k(q) and ΔG_k were similar, indicating that P_k(q) is promising for optimizing breeding programmes.
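As a rough illustration of the two quantities being compared, the following Monte Carlo sketch estimates a selection gain ΔG_k and a probability P_k(q) of retaining at least one of the q best hybrids under a deliberately simplified one-stage selection model. The variance components, sample sizes, and error model are my assumptions, not the paper's allocation model for doubled haploids.

```python
# Minimal Monte Carlo sketch of selection gain and the probability of
# identifying superior hybrids in one-stage selection. All parameters
# (normal genotypic values, simple error model) are illustrative
# assumptions, not the paper's richer budget/location allocation model.
import numpy as np

rng = np.random.default_rng(1)

def simulate(n_hybrids=500, n_loc=5, sigma_g=1.0, sigma_e=2.0,
             k=10, q=0.01, n_rep=2000):
    gains, hits = [], []
    for _ in range(n_rep):
        g = rng.normal(0.0, sigma_g, n_hybrids)    # true genotypic values
        # Phenotypic means over n_loc locations shrink the error variance.
        p = g + rng.normal(0.0, sigma_e / np.sqrt(n_loc), n_hybrids)
        sel = np.argsort(p)[-k:]                   # select top k on phenotype
        gains.append(g[sel].mean())                # realized selection gain
        threshold = np.quantile(g, 1.0 - q)        # the q best of the population
        hits.append(np.any(g[sel] >= threshold))   # >= 1 truly superior hybrid kept?
    return np.mean(gains), np.mean(hits)

dG, Pk = simulate()
print(f"mean selection gain ~= {dG:.3f}, P_k(q) ~= {Pk:.3f}")
```

Varying the number of hybrids and test locations under a fixed budget in such a model and tracing how ΔG_k and P_k(q) respond is the kind of trade-off the paper optimizes.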


2006 ◽  
Vol 37 (5) ◽  
pp. 407-419
Author(s):  
A. I. Filippov ◽  
P. N. Mikhailov ◽  
K. A. Filippov

2008 ◽  
Vol 47 (2) ◽  
pp. 167-173 ◽  
Author(s):  
A. Pfahlberg ◽  
O. Gefeller ◽  
R. Weißbach

Summary Objectives: In oncological studies, the hazard rate can be used to differentiate subgroups of the study population according to their patterns of survival risk over time. Nonparametric curve estimation has been suggested as an exploratory means of revealing such patterns. The decision about the type of smoothing parameter is critical for performance in practice. In this paper, we study data-adaptive smoothing. Methods: A decade ago, the nearest-neighbor bandwidth was introduced for censored data in survival analysis. It is specified by a single parameter, the number of nearest neighbors. Bandwidth selection in this setting has rarely been investigated, although its heuristic advantages over the frequently studied fixed bandwidth are quite obvious. The asymptotic relationship between the fixed and the nearest-neighbor bandwidth can be used to generate novel approaches. Results: We develop a new selection algorithm, termed double-smoothing, for the nearest-neighbor bandwidth in hazard rate estimation. Our approach uses a finite-sample approximation of the asymptotic relationship between the fixed and the nearest-neighbor bandwidth. In doing so, we identify the nearest-neighbor bandwidth as an additional smoothing step and achieve further data adaptation after fixed-bandwidth smoothing. We illustrate the application of the new algorithm in a clinical study and compare the outcome to the traditional fixed-bandwidth result, demonstrating the practical performance of the technique. Conclusion: The double-smoothing approach enlarges the methodological repertoire for selecting smoothing parameters in nonparametric hazard rate estimation. The slight increase in computational effort is rewarded with a substantial gain in estimation stability, demonstrating the benefit of the technique for biostatistical applications.
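To make the nearest-neighbor bandwidth concrete, here is a minimal Python sketch of a kernel hazard estimator in which the bandwidth at each grid point is the distance to the k-th nearest uncensored event time, so smoothing adapts to the local density of events. This illustrates the general nearest-neighbor principle only; it is not the authors' double-smoothing selection algorithm, and the estimator form and toy data are my assumptions.

```python
# Minimal sketch of kernel hazard-rate estimation with a nearest-neighbor
# bandwidth: at each grid point the bandwidth is the distance to the k-th
# nearest uncensored event time. Illustrates the general principle only,
# not the authors' double-smoothing selection algorithm.
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

def nn_hazard(times, events, grid, k=20):
    """Smoothed Nelson-Aalen-type estimator with a k-NN bandwidth."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    n = len(t)
    at_risk = n - np.arange(n)          # number still at risk at each t_i
    event_t = t[d == 1]                 # uncensored event times
    est = np.empty(len(grid))
    for j, g in enumerate(grid):
        b = np.sort(np.abs(event_t - g))[k - 1]   # k-NN bandwidth at g
        w = epanechnikov((g - t) / b) / b          # kernel weights K_b(g - t_i)
        est[j] = np.sum(w * d / at_risk)           # sum over events of K_b / Y(t_i)
    return est

# Toy example: exponential lifetimes (true hazard = 1) with independent censoring.
rng = np.random.default_rng(0)
n = 400
life = rng.exponential(1.0, n)
cens = rng.exponential(2.0, n)
times = np.minimum(life, cens)
events = (life <= cens).astype(int)
grid = np.linspace(0.05, 2.0, 50)
print(nn_hazard(times, events, grid)[:5])   # should hover near the true rate 1.0
```

A fixed-bandwidth variant would replace the per-point k-NN distance with a constant b; in the paper's framing, the nearest-neighbor bandwidth acts as an additional smoothing step on top of such fixed-bandwidth smoothing.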

