Identification of facet models by means of factor rotation: A simulation study and data analysis of a test for the Berlin Model of Intelligence Structure

2019 ◽  
Author(s):  
André Beauducel ◽  
Martin Kersting

Until now, there has been no successful exploration of an a priori unknown faceted structure by means of exploratory factor analysis (EFA) of the measured variables (items or tasks). For this reason, we investigate by means of a simulation study how well methods for factor rotation can identify a two-facet orthogonal simple structure. Samples were generated from orthogonal two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet) and submitted to factor analysis with subsequent Varimax, Equamax, Parsimax, Factor Parsimony, Tandem I, Tandem II, Infomax, and McCammon’s Minimum Entropy rotation. As a benchmark, orthogonal target rotation of the sample loadings toward the corresponding faceted population loadings was also investigated. The conditions were sample size (n = 400, 1,000), number of factors (q = 4-12), and main loading size (l = .40, .50, .60). Mean congruence coefficients and root mean squared errors between sample loading matrices and the corresponding population loading matrices were used as dependent measures. For fewer than six factors, Infomax and McCammon’s Minimum Entropy rotation yielded the highest similarity of sample loading matrices with faceted population loading matrices; for six and more factors, Tandem II rotation did. Analysis of data from 393 participants who performed a test for the Berlin Model of Intelligence Structure revealed that the faceted structure of this model could be found by means of target rotation of task aggregates corresponding to the cross-products of the facets. Moreover, McCammon’s Minimum Entropy rotation resulted in a loading pattern corresponding to the model, although the factor for figural intelligence was only weakly represented. Implications for the identification of faceted models by means of factor rotation are discussed.
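
The two dependent measures are straightforward to reproduce. A minimal NumPy sketch, assuming the sample and population loading matrices are p × q arrays whose columns have already been matched and reflected to the same factor order (a step the study's benchmark target rotation takes care of):

```python
import numpy as np

def tucker_congruence(a, b):
    """Tucker's congruence coefficient between two loading columns."""
    return (a @ b) / np.sqrt((a @ a) * (b @ b))

def loading_recovery(sample, population):
    """Mean congruence and RMSE between column-matched loading matrices."""
    q = sample.shape[1]
    mean_phi = np.mean([tucker_congruence(sample[:, j], population[:, j])
                        for j in range(q)])
    rmse = np.sqrt(np.mean((sample - population) ** 2))
    return mean_phi, rmse
```

Congruence coefficients near 1 and RMSE near 0 indicate close recovery of the faceted population structure.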

2020 ◽  
Vol 80 (5) ◽  
pp. 995-1019
Author(s):  
André Beauducel ◽  
Martin Kersting

We investigated by means of a simulation study how well methods for factor rotation can identify a two-facet simple structure. Samples were generated from orthogonal and oblique two-facet population factor models with 4 (2 factors per facet) to 12 factors (6 factors per facet). Samples drawn from orthogonal populations were submitted to factor analysis with subsequent Varimax, Equamax, Parsimax, Factor Parsimony, Tandem I, Tandem II, Infomax, and McCammon’s minimum entropy rotation. Samples drawn from oblique populations were submitted to factor analysis with subsequent Geomin rotation and a Promax-based Tandem II rotation. As a benchmark, we investigated a target rotation of the sample loadings toward the corresponding faceted population loadings. The three conditions were sample size (n = 400, 1,000), number of factors (q = 4-12), and main loading size (l = .40, .50, .60). For fewer than six orthogonal factors, Infomax and McCammon’s minimum entropy rotation yielded the highest congruence of sample loading matrices with faceted population loading matrices; for six and more factors, Tandem II rotation did. For six and more oblique factors, Geomin rotation and a Promax-based Tandem II rotation yielded the highest congruence with faceted population loadings. Analysis of data from 393 participants who performed a test for the Berlin Model of Intelligence Structure revealed that the faceted structure of this model could be identified by means of a Promax-based Tandem II rotation of task aggregates corresponding to the cross-products of the facets. Implications for the identification of faceted models by means of factor rotation are discussed.
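
In the orthogonal case, the benchmark target rotation toward known population loadings is a Procrustes problem with a closed-form solution. A minimal sketch, assuming the unrotated sample loadings and the faceted population target are available as same-shaped arrays:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

def target_rotate(loadings, target):
    """Rotate sample loadings toward a faceted population target.

    T minimizes ||loadings @ T - target||_F over orthogonal matrices T,
    giving the benchmark orthogonal target-rotated solution.
    """
    T, _ = orthogonal_procrustes(loadings, target)
    return loadings @ T
```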


Methodology ◽  
2019 ◽  
Vol 15 (Supplement 1) ◽  
pp. 43-60 ◽  
Author(s):  
Florian Scharf ◽  
Steffen Nestler

Abstract. It is challenging to apply exploratory factor analysis (EFA) to event-related potential (ERP) data because such data are characterized by substantial temporal overlap (i.e., large cross-loadings) between the factors, and because researchers are typically interested in the results of subsequent analyses (e.g., experimental condition effects on the level of the factor scores). In this context, relatively small deviations in the estimated factor solution from the unknown ground truth may result in substantially biased estimates of condition effects (rotation bias). Thus, in order to apply EFA to ERP data, researchers need rotation methods that are able both to recover perfect simple structure where it exists and to tolerate substantial cross-loadings between the factors where appropriate. We had two aims in the present paper. First, to extend previous research, we wanted to better understand the behavior of the rotation bias for typical ERP data. To this end, we compared the performance of a variety of factor rotation methods under conditions of varying amounts of temporal overlap between the factors. Second, we wanted to investigate whether the recently proposed component loss rotation is better able to decrease the bias than traditional simple structure rotation. The results showed that no single rotation method was generally superior across all conditions, but component loss rotation showed the best all-round performance across the investigated conditions. We conclude that component loss rotation is a suitable alternative to simple structure rotation. We discuss this result in the light of recently proposed sparse factor analysis approaches.
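
For orientation, the simple-structure baseline that component loss rotation is compared against can be written compactly. A minimal sketch of Kaiser's varimax criterion via the standard SVD iteration (component loss rotation itself minimizes a loss on the absolute loadings and needs a general gradient-projection routine, which is not reproduced here):

```python
import numpy as np

def varimax(loadings, tol=1e-8, max_iter=500):
    """Varimax rotation of a p x q loading matrix (no row normalization)."""
    p, q = loadings.shape
    R = np.eye(q)
    crit_old = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation
        G = loadings.T @ (L ** 3 - L @ np.diag(np.mean(L ** 2, axis=0)))
        u, s, vt = np.linalg.svd(G)
        R = u @ vt
        crit_new = s.sum()
        if crit_new - crit_old < tol:
            break
        crit_old = crit_new
    return loadings @ R
```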


Econometrics ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 40
Author(s):  
Erhard Reschenhofer ◽  
Manveer K. Mangat

For typical sample sizes occurring in economic and financial applications, the squared bias of estimators for the memory parameter is small relative to the variance. Smoothing is therefore a suitable way to improve the performance in terms of the mean squared error. However, in an analysis of financial high-frequency data, where the estimates are obtained separately for each day and then combined by averaging, the variance decreases with the sample size but the bias remains fixed. This paper proposes a method of smoothing that does not entail an increase in the bias. This method is based on the simultaneous examination of different partitions of the data. An extensive simulation study is carried out to compare it with conventional estimation methods. In this study, the new method outperforms its unsmoothed competitors with respect to the variance and its smoothed competitors with respect to the bias. Using the results of the simulation study for the proper interpretation of the empirical results obtained from a financial high-frequency dataset, we conclude that significant long-range dependencies are present only in the intraday volatility but not in the intraday returns. Finally, the robustness of these findings against daily and weekly periodic patterns is established.
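
The paper's partition-based smoothing is specific to it, but the kind of semiparametric memory-parameter estimator it competes with can be sketched. A minimal log-periodogram (GPH-style) estimate of the memory parameter d, with the bandwidth m = √n as an illustrative choice; averaging such daily estimates shrinks the variance with the number of days while leaving any bias common to the daily estimates untouched:

```python
import numpy as np

def gph_estimate(x, m=None):
    """Log-periodogram (GPH-style) estimate of the memory parameter d."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    m = int(np.sqrt(n)) if m is None else m   # illustrative bandwidth choice
    w = 2 * np.pi * np.arange(1, m + 1) / n   # first m Fourier frequencies
    dft = np.fft.fft(x - x.mean())[1:m + 1]
    periodogram = np.abs(dft) ** 2 / (2 * np.pi * n)
    regressor = -np.log(4 * np.sin(w / 2) ** 2)
    return np.polyfit(regressor, np.log(periodogram), 1)[0]  # slope ~ d

# Day-by-day use on high-frequency data (hypothetical `daily_returns`):
# d_hat = np.mean([gph_estimate(day) for day in daily_returns])
```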


Geophysics ◽  
1971 ◽  
Vol 36 (2) ◽  
pp. 261-265 ◽  
Author(s):  
James N. Galbraith

Prediction error filtering has been widely used for deconvolution. The mean squared error in prediction is a monotonically nonincreasing function of operator length, and the value of the error is readily available from the Wiener‐Levinson algorithm. In general, the value of this error for the infinitely long operator is not known a priori. It is shown that the final value of the error can be obtained by considering the Kolmogorov spectrum factorization. Simple criteria can then be established for operator effectiveness and length.
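
Both quantities are easy to compute numerically. A minimal sketch, assuming an autocovariance sequence r(0), r(1), … and a spectral density f normalized so that r(0) equals the integral of f over [−π, π]: the Levinson recursion yields the monotonically nonincreasing error sequence, and the Szegő/Kolmogorov formula yields its limit for the infinitely long operator:

```python
import numpy as np

def levinson_errors(r, order):
    """Prediction error variances E_0 >= E_1 >= ... via the Levinson recursion."""
    a = np.zeros(order + 1)
    e, errs = r[0], [r[0]]
    for k in range(1, order + 1):
        ref = (r[k] - sum(a[j] * r[k - j] for j in range(1, k))) / e
        new_a = a.copy()
        new_a[k] = ref
        for j in range(1, k):
            new_a[j] = a[j] - ref * a[k - j]
        a, e = new_a, e * (1 - ref ** 2)
        errs.append(e)
    return np.array(errs)

def kolmogorov_limit(f, n_grid=4096):
    """Szego/Kolmogorov limit of the error for the infinitely long operator."""
    w = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    return 2 * np.pi * np.exp(np.mean(np.log(f(w))))

# Check on an AR(1) with unit innovation variance: r(k) = a**k / (1 - a**2),
# f(w) = (1 / (2 pi)) / |1 - a exp(-iw)|**2, and the limiting error is 1.
a = 0.8
r = a ** np.arange(0, 30) / (1 - a ** 2)
print(levinson_errors(r, 10)[-1])
print(kolmogorov_limit(lambda w: (1 / (2 * np.pi)) / np.abs(1 - a * np.exp(-1j * w)) ** 2))
```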


2021 ◽  
Vol 19 (1) ◽  
pp. 2-21
Author(s):  
Talha Omer ◽  
Zawar Hussain ◽  
Muhammad Qasim ◽  
Said Farooq Shah ◽  
Akbar Ali Khan

Shrinkage estimators are introduced for the scale parameter of the Rayleigh distribution by using two different shrinkage techniques. The mean squared error properties of the proposed estimators are derived. The proposed classes of estimators are compared with the respective conventional unbiased estimators in terms of mean squared error in a simulation study. Simulation results show that the proposed shrinkage estimators yield smaller mean squared error than the existing unbiased estimators.
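
The design of such a comparison is simple to reproduce in outline. A minimal Monte Carlo sketch of generic linear shrinkage toward a prior guess (the paper's two specific shrinkage techniques are not reproduced; the weight k and the prior value are illustrative assumptions), using the fact that E[X²] = 2σ² for the Rayleigh distribution, so Σx²/(2n) is unbiased for σ²:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma2_true, sigma2_prior = 4.0, 3.5  # truth and a nearby prior guess
n, reps, k = 20, 100_000, 0.6         # k: illustrative shrinkage weight

x = rng.rayleigh(scale=np.sqrt(sigma2_true), size=(reps, n))
unb = (x ** 2).sum(axis=1) / (2 * n)    # unbiased for sigma^2
shr = k * unb + (1 - k) * sigma2_prior  # shrink toward the prior guess

print("MSE unbiased: ", np.mean((unb - sigma2_true) ** 2))
print("MSE shrinkage:", np.mean((shr - sigma2_true) ** 2))
```

When the prior guess is close to the truth, the variance reduction outweighs the induced bias and the shrinkage estimator has smaller MSE.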


2020 ◽  
pp. 1-27
Author(s):  
Erik-Jan van Kesteren ◽  
Rogier A. Kievit

Dimension reduction is widely used and often necessary to make network analyses and their interpretation tractable by reducing high-dimensional data to a small number of underlying variables. Techniques such as exploratory factor analysis (EFA) are used by neuroscientists to reduce measurements from a large number of brain regions to a tractable number of factors. However, dimension reduction often ignores relevant a priori knowledge about the structure of the data. For example, it is well established that the brain is highly symmetric. In this paper, we (a) show the adverse consequences of ignoring a priori structure in factor analysis, (b) propose a technique to accommodate structure in EFA by using structured residuals (EFAST), and (c) apply this technique to three large and varied brain-imaging network datasets, demonstrating the superior fit and interpretability of our approach. We provide an R software package to enable researchers to apply EFAST to other suitable datasets.
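
The problem the package addresses can be illustrated without it. A hypothetical sketch: simulate one common factor plus a shared residual component within each homologous left/right pair of "regions", fit a plain EFA, and inspect the residual correlations that the simple-structure model leaves unexplained (EFAST instead models this symmetry block explicitly; all sizes and loadings below are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n, half = 500, 4  # 4 left-hemisphere "regions" and their right homologues

# One common factor plus a shared residual part within each homologous
# left/right pair (columns i and i + half).
f = rng.normal(size=(n, 1))
loadings = np.full((1, 2 * half), 0.6)
pair = rng.normal(size=(n, half))
eps = 0.5 * np.hstack([pair, pair]) + 0.6 * rng.normal(size=(n, 2 * half))
X = f @ loadings + eps

fa = FactorAnalysis(n_components=1).fit(X)
resid = (X - fa.mean_) - fa.transform(X) @ fa.components_
print(np.round(np.corrcoef(resid, rowvar=False)[:half, half:], 2))
# The diagonal of this block stays large: symmetry the plain EFA ignores.
```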


2020 ◽  
pp. 004912411988246
Author(s):  
Marcin Hitczenko

Researchers interested in studying the frequency of events or behaviors among a population must rely on count data provided by sampled individuals. Often, this involves a decision between live event counting, such as a behavioral diary, and recalled aggregate counts. Diaries are generally more accurate, but their greater cost and respondent burden generally yield less data. The choice of survey mode, therefore, involves a potential trade-off between bias and variance of estimators. We use a case study comparing inferences about payment instrument use based on different survey designs to illustrate this dilemma. We then use a simulation study to show how and under what conditions a hybrid survey design can improve efficiency of estimation, in terms of mean-squared error. Overall, our work suggests that such a hybrid design can have considerable benefits, as long as there is nontrivial overlap in the diary and recall samples.
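
The trade-off can be made concrete with a small simulation. A hypothetical sketch (all numbers are illustrative assumptions: a Poisson count process, a cheap recall mode that undercounts by a fixed amount, and a costly diary mode with a much smaller sample):

```python
import numpy as np

rng = np.random.default_rng(7)
mu = 10.0                      # true mean count
n_diary, n_recall = 25, 900    # diary is accurate but costly, recall is cheap
recall_bias = -1.0             # recall undercounts on average
reps = 50_000

diary = rng.poisson(mu, size=(reps, n_diary)).mean(axis=1)
recall = (rng.poisson(mu, size=(reps, n_recall)) + recall_bias).mean(axis=1)

for w in (0.0, 0.5, 0.8, 1.0):  # weight on the diary-based estimate
    est = w * diary + (1 - w) * recall
    print(w, np.mean((est - mu) ** 2))
# An intermediate weight beats both diary-only (w = 1) and recall-only (w = 0).
```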


Author(s):  
Tarek Mahmoud Omara

In this paper, we introduce a new biased estimator to deal with the problem of multicollinearity. This estimator is a modification of the Two-Parameter Ridge-Liu estimator based on ridge estimation. Furthermore, the superiority of the new estimator over the Ridge, Liu, and Two-Parameter Ridge-Liu estimators is discussed. We use the mean squared error matrix (MSEM) criterion to verify the superiority of the new estimator. In addition, we illustrate the performance of the new estimator under several factors through a simulation study.
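
For context, the estimators being compared have simple closed forms. A minimal sketch of the classical Ridge and Liu estimators and one common two-parameter ridge-Liu combination (the paper's modified estimator is a further variant and is not reproduced here; k and d are tuning parameters):

```python
import numpy as np

def ols(X, y):
    return np.linalg.solve(X.T @ X, X.T @ y)

def ridge(X, y, k):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y)

def liu(X, y, d):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(p), X.T @ y + d * ols(X, y))

def two_param(X, y, k, d):
    # One common two-parameter form (assumed here for illustration,
    # in the style of Ozkale & Kaciranlar), not the paper's estimator.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + k * np.eye(p), X.T @ y + k * d * ols(X, y))
```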

