Impact of different conditions on accuracy of five rules for principal components retention

Psihologija ◽  
2013 ◽  
Vol 46 (3) ◽  
pp. 331-347
Author(s):  
Aleksandar Zoric ◽  
Goran Opacic

Polemics about criteria for retaining nontrivial principal components are still present in the literature. Many papers have found that the most frequently used criterion, the Guttman-Kaiser criterion, performs very poorly. In the last three years some new criteria have been proposed. In this Monte Carlo experiment we investigated the impact that sample size, number of analyzed variables, number of supposed factors, and proportion of error variance have on the accuracy of the analyzed criteria for principal components retention. We compared the following criteria: Bartlett's χ² test, Horn's Parallel Analysis, the Guttman-Kaiser eigenvalue-over-one rule, Velicer's MAP, and CHull, originally proposed by Ceulemans & Kiers. Factors were systematically combined, resulting in 690 different combinations; a total of 138,000 simulations were performed. A novelty of this research is the systematic variation of the error variance. The simulations showed that, in favorable research conditions, all analyzed criteria work properly. Bartlett's and Horn's criteria were robust in most of the analyzed situations. Velicer's MAP had the best accuracy in situations with a small number of subjects and a high number of variables. The results confirm earlier findings that the Guttman-Kaiser criterion has the worst performance.
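For readers unfamiliar with the compared rules, a minimal sketch of two of them follows (illustrative Python, not the authors' code): the Guttman-Kaiser eigenvalue-over-one rule and Horn's Parallel Analysis, with the simulation count and percentile chosen arbitrarily.

```python
import numpy as np

def kaiser_rule(data):
    """Retain components whose correlation-matrix eigenvalues exceed 1."""
    eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))
    return int(np.sum(eigvals > 1.0))

def parallel_analysis(data, n_sims=100, percentile=95, seed=0):
    """Retain components whose eigenvalues exceed the chosen percentile
    of eigenvalues obtained from random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sim = np.empty((n_sims, p))
    for s in range(n_sims):
        noise = rng.standard_normal((n, p))
        sim[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(noise, rowvar=False)))[::-1]
    threshold = np.percentile(sim, percentile, axis=0)
    return int(np.sum(obs > threshold))

# Example: 12 variables loading on 3 factors, n = 200 subjects
rng = np.random.default_rng(1)
factors = rng.standard_normal((200, 3))
loadings = np.kron(np.eye(3), np.ones((1, 4)))   # 4 variables per factor
X = factors @ loadings + 0.5 * rng.standard_normal((200, 12))
print(kaiser_rule(X), parallel_analysis(X))
```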

2012 ◽  
Vol 2012 ◽  
pp. 1-17 ◽  
Author(s):  
Lucia Cassettari ◽  
Roberto Mosca ◽  
Roberto Revetria

The idea of a methodology capable of determining the optimal sample size in a precise and practical way came from studying Monte Carlo simulation models of financial problems, risk analysis, and supply chain forecasting. In these cases the number of extractions from the frequency distributions characterizing the model is inadequate, or limited to just one, so simulation runs must be replicated many times in order to obtain a complete statistical description of the model variables. Generally, as shown in the literature, the sample size is fixed by the experimenter on empirical grounds, without considering its impact on result accuracy in terms of the tolerance interval. In this paper, the authors propose a methodology by means of which the evolution of the experimental error variance can be highlighted graphically as a function of the sample size. The experimenter can then choose the best trade-off between experimental cost and expected accuracy.
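A minimal sketch of the underlying idea, with a hypothetical toy model standing in for the authors' financial and supply-chain cases: tabulate how the error variance of a Monte Carlo estimate shrinks as the number of replications grows, so a sample size can be read off the cost/precision trade-off.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_run():
    """One run of a toy supply-chain cost model (purely illustrative)."""
    demand = rng.lognormal(mean=3.0, sigma=0.4)
    lead_time = rng.gamma(shape=2.0, scale=1.5)
    return demand * lead_time          # the model output of interest

for n in (10, 50, 100, 500, 1000, 5000):
    runs = np.array([one_run() for _ in range(n)])
    # variance of the estimated mean; its square root is the standard error
    var_of_mean = runs.var(ddof=1) / n
    half_width = 1.96 * np.sqrt(var_of_mean)   # approx. 95% interval
    print(f"n={n:5d}  mean={runs.mean():7.2f}  +/-{half_width:6.2f}")
```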


1994 ◽  
Vol 78 (1) ◽  
pp. 323-330 ◽  
Author(s):  
Bradley E. Huitema ◽  
Joseph W. McKean

Among the problems associated with the application of time-series analysis to typical psychological data are difficulties in parameter estimation. For example, estimates of autocorrelation coefficients are known to be biased in the small-sample case. Previous work by the present authors has shown that, in the case of conventional autocorrelation estimators of ρ1, the degree of bias is more severe than is predicted by formulas based on large-sample theory. Two new autocorrelation estimators, rF1 and rF2, were proposed; a Monte Carlo experiment was carried out to evaluate the properties of these statistics. The results demonstrate that both estimators provide a major reduction in bias. The average absolute bias of rF2 is somewhat smaller than that of rF1 at all sample sizes, but both are far less biased than the conventional estimator found in most time-series software. The reduction in bias comes at the price of an increase in error variance. A comparison of the properties of these estimators with those of other estimators suggested in 1991 shows advantages and disadvantages for each. It is recommended that the choice among autocorrelation estimators be based on the nature of the application. The new estimator rF2 is especially appropriate when pooling estimates from several samples.
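The formulas for rF1 and rF2 are not given in the abstract, but the small-sample bias of the conventional lag-1 estimator is easy to reproduce; the sketch below (illustrative, not the authors' code) evaluates it by Monte Carlo on AR(1) data.

```python
import numpy as np

rng = np.random.default_rng(42)

def r1_conventional(y):
    """Conventional lag-1 autocorrelation estimator."""
    d = y - y.mean()
    return np.sum(d[1:] * d[:-1]) / np.sum(d * d)

def simulate_ar1(n, rho):
    """Generate an AR(1) series with lag-1 autocorrelation rho."""
    y = np.empty(n)
    y[0] = rng.standard_normal() / np.sqrt(1 - rho**2)  # stationary start
    for t in range(1, n):
        y[t] = rho * y[t - 1] + rng.standard_normal()
    return y

rho = 0.6
for n in (10, 20, 50, 200):
    est = [r1_conventional(simulate_ar1(n, rho)) for _ in range(5000)]
    print(f"n={n:4d}  mean r1={np.mean(est):+.3f}  bias={np.mean(est)-rho:+.3f}")
```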


Methodology ◽  
2019 ◽  
Vol 15 (2) ◽  
pp. 45-55 ◽  
Author(s):  
Eduardo Garcia-Garzon ◽  
Francisco José Abad ◽  
Luis Eduardo Garrido

Abstract. Bi-factor exploratory modeling has recently emerged as a promising approach to multidimensional psychological measurement. However, state-of-the-art methods relying on target rotation require researchers to select an arbitrary cut-off to define the target matrix. Unfortunately, the consequences of this choice for factor recovery remain uninvestigated under realistic conditions (e.g., factors differing in their average loadings). Building on the iterative target rotation based on the Schmid-Leiman algorithm (SLi), a novel method (SLiD) is introduced here. SLiD sets an empirical, factor-specific cut-off based on the first prominent one-lagged difference of the sorted squared normalized factor loadings. The performance of SLiD and of SLi with arbitrary cut-offs (ranging from .05 to .20) was evaluated via Monte Carlo simulation, manipulating sample size, number of specific factors, number of indicators, and cross-loading magnitude. Results indicate that SLiD performed best across all conditions. For SLi, owing to the presence of minor factors, smaller cut-offs (i.e., .05) outperformed higher ones (i.e., .20).
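The abstract describes the cut-off rule only loosely; the sketch below illustrates how such a factor-specific cut-off might be located, with the "prominence" rule used here (the largest one-lagged gap) being an illustrative guess rather than the published SLiD definition.

```python
import numpy as np

def slid_like_cutoff(loadings):
    """Return a factor-specific cut-off from one column of a loading matrix."""
    total = np.sum(loadings**2)
    sq = np.sort(loadings**2 / total)[::-1]  # squared, normalized, descending
    gaps = -np.diff(sq)                      # one-lagged differences
    k = int(np.argmax(gaps))                 # the most prominent gap (a guess)
    # place the cut-off between the k-th and (k+1)-th sorted squared loading,
    # converted back to the raw loading scale
    return np.sqrt((sq[k] + sq[k + 1]) / 2 * total)

# Example column: three salient loadings, the rest near zero
col = np.array([0.70, 0.65, 0.55, 0.10, 0.08, 0.05, 0.02])
print(round(slid_like_cutoff(col), 3))      # cut-off falls between .55 and .10
```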


Author(s):  
Ojo O. Oluwadare ◽  
Enesi O. Lateifat ◽  
Owonipa R. Oluremi

Over time, finite mixtures of Normals in regression have gained popularity and have been shown to be useful in modelling heterogeneous data. This study examines the effects of the prior and the sample size in regression mixtures of Normals under a Bayesian approach. A Monte Carlo experiment was carried out on the Normal mixture model in order to examine the strength of the priors and to determine the sample size needed to produce stable results. The results indicate that an informative prior gives more reliable estimates than a non-informative prior, while large sample sizes may be needed to obtain stable results.
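As a minimal illustration of the prior-versus-sample-size interplay (not the authors' mixture model), the sketch below compares posterior means under an informative and a diffuse prior in a conjugate Normal regression with known noise variance; in a full mixture of Normals the same shrinkage operates within each component.

```python
import numpy as np

rng = np.random.default_rng(7)
beta_true, sigma2 = 2.0, 1.0

def posterior_mean(x, y, prior_mean, prior_var):
    """Posterior mean of beta in y = beta*x + noise, noise variance known."""
    post_prec = 1.0 / prior_var + np.sum(x * x) / sigma2
    return (prior_mean / prior_var + np.sum(x * y) / sigma2) / post_prec

for n in (5, 20, 100, 1000):
    x = rng.standard_normal(n)
    y = beta_true * x + rng.standard_normal(n)
    informative = posterior_mean(x, y, prior_mean=2.0, prior_var=0.25)
    diffuse = posterior_mean(x, y, prior_mean=0.0, prior_var=100.0)
    print(f"n={n:5d}  informative={informative:.3f}  diffuse={diffuse:.3f}")
```

At small n the informative prior dominates and stabilizes the estimate; as n grows the two posteriors converge, mirroring the study's finding that large samples are needed for stable results under weak priors.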


2018 ◽  
Vol 5 (338) ◽  
pp. 115-131
Author(s):  
Anna Staszewska-Bystrova

The goal of the paper is to investigate the estimation precision of forecast error variance decomposition (FEVD) based on stable structural vector autoregressive models identified using short-run and long-run restrictions. The analysis is performed by means of Monte Carlo experiments. It is demonstrated that for processes with roots close to one, selected FEVD parameters can be estimated more accurately using recursive restrictions on the long-run multipliers than under recursive restrictions on the impact effects of shocks. This finding contributes to the discussion of the pros and cons of alternative identification schemes by providing counterexamples to the notion that short-run identifying restrictions lead to smaller estimation errors than long-run restrictions.
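A minimal sketch of an FEVD under the recursive short-run (Cholesky) scheme, using statsmodels on simulated data with roots close to one; the paper's long-run identification would require a different rotation of the residual covariance and is not shown.

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)

# Simulate a persistent bivariate VAR(1): y_t = A y_{t-1} + e_t
A = np.array([[0.95, 0.10],
              [0.00, 0.80]])            # largest root close to one
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.standard_normal(2)

results = VAR(y).fit(1)                 # fit a VAR(1) by OLS
fevd = results.fevd(10)                 # Cholesky-based FEVD, 10 horizons
print(fevd.decomp[0])                   # shock shares for the first variable
```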


2017 ◽  
Vol 21 (1) ◽  
pp. 34-50
Author(s):  
Muhammad Dwirifqi Kharisma Putra ◽  
Jahja Umar ◽  
Bahrul Hayat ◽  
Agung Priyo Utomo

THE IMPACT OF SAMPLE SIZE AND INTRACLASS CORRELATION COEFFICIENTS (ICC) ON THE BIAS OF PARAMETER ESTIMATION IN MULTILEVEL LATENT VARIABLE MODELING: A MONTE CARLO STUDY

Abstract. A Monte Carlo study was conducted to investigate the effect of sample size and intraclass correlation coefficients (ICC) on the bias of parameter estimates in multilevel latent variable modeling. The design factors were the ICC (0.05, 0.10, 0.15, 0.20, 0.25), the number of groups in the between-level model (NG: 30, 50, 100, and 150), and the cluster size (CS: 10, 20, and 50), estimated with five different estimators: ML, MLF, MLR, WLSMV, and BAYES. The factors were integrated into 300 conditions (4 NG × 3 CS × 5 ICC × 5 estimators). For each condition, replications with convergence problems were excluded until at least 1,000 replications were generated and analyzed using Mplus 7.4; an absolute percent bias below 10% was considered acceptable. We find that the degree of bias depends on sample size and ICC. We also show that the WLSMV and BAYES estimators performed better than the ML-based estimators across the varying sample size and ICC conditions.

Keywords: multilevel latent variable modeling, intraclass correlation coefficients, Markov Chain Monte Carlo method
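A minimal sketch (not the study's Mplus setup) of the data-generating side: simulate balanced two-level data with a target ICC and recover it from the one-way ANOVA variance components, where ICC = between-group variance / (between + within).

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_two_level(n_groups, cluster_size, icc):
    """Group effects ~ N(0, icc); within-group noise ~ N(0, 1 - icc)."""
    group_effects = rng.normal(0.0, np.sqrt(icc), size=n_groups)
    return (group_effects[:, None]
            + rng.normal(0.0, np.sqrt(1 - icc), size=(n_groups, cluster_size)))

def icc_anova(y):
    """One-way ANOVA estimator of the ICC for balanced clusters."""
    n_groups, k = y.shape
    group_means = y.mean(axis=1)
    msb = k * group_means.var(ddof=1)
    msw = ((y - group_means[:, None])**2).sum() / (n_groups * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

for icc in (0.05, 0.10, 0.25):
    y = simulate_two_level(n_groups=150, cluster_size=50, icc=icc)
    print(f"target ICC={icc:.2f}  estimated={icc_anova(y):.3f}")
```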

