The Effects of Manifest Residual Variances, Indicator Communality, and Sample Size on the χ2-Test Statistic of the Metric Invariance Model

2019 ◽  
Author(s):  
Eric Klopp ◽  
Stefan Klößner

In this contribution, we investigate the effects of manifest residual variance, indicator communality, and sample size on the χ2-test statistic of the metric measurement invariance model, i.e., the model with equality constraints on all loadings. We demonstrate by means of Monte Carlo studies that the χ2-test statistic relates inversely to manifest residual variance, whereas sample size and the χ2-test statistic show the well-known proportional relation. Moreover, we consider indicator communality as a key factor for the size of the χ2-test statistic. In this context, we introduce the concept of signal-to-noise ratio as a tool for studying the effects of manifest residual variance and indicator communality and demonstrate its use with some examples. Finally, we discuss the limitations of this contribution and its practical implications for the analysis of metric measurement invariance models.
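
As a worked sketch in our own notation (an assumption; the paper may define its signal-to-noise ratio differently), consider a one-factor model for indicator $x_j$:

$$x_j = \lambda_j \xi + \varepsilon_j, \qquad h_j^2 = \frac{\lambda_j^2 \phi}{\lambda_j^2 \phi + \theta_j}, \qquad \mathrm{SNR}_j = \frac{\lambda_j^2 \phi}{\theta_j} = \frac{h_j^2}{1 - h_j^2},$$

where $\phi = \operatorname{Var}(\xi)$ is the factor variance and $\theta_j = \operatorname{Var}(\varepsilon_j)$ is the manifest residual variance. On this reading, lowering the residual variance or raising the communality both raise the signal-to-noise ratio, consistent with the inverse relation between residual variance and the χ2-test statistic reported above.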

2020 ◽  
Author(s):  
Eric Klopp ◽  
Stefan Klößner

We investigate the effects of manifest residual variances, indicator communalities, and sample size on the χ2-test statistic of the metric measurement invariance model when the model is misspecified, i.e., when at least one population loading violates metric measurement invariance. First, we demonstrate that the choice of the scaling method does not affect the model's χ2-test statistic. Afterward, we demonstrate that the χ2-test statistic relates inversely to manifest residual variances, whereas sample size and the χ2-test statistic show a positive relation. Moreover, we consider indicator communality as a key factor for the size of the χ2-test statistic. In this context, we introduce the concept of signal-to-noise ratio as a tool for studying the effects of manifest residual variance and indicator communality and demonstrate its use with an example. Finally, we discuss the limitations and the practical implications for the analysis of metric measurement invariance models.


Biometrika ◽  
2019 ◽  
Vol 106 (3) ◽  
pp. 619-634 ◽  
Author(s):  
Ping-Shou Zhong ◽  
Runze Li ◽  
Shawn Santo

Summary: This paper deals with the detection and identification of changepoints among covariances of high-dimensional longitudinal data, where the number of features is greater than both the sample size and the number of repeated measurements. The proposed methods are applicable under general temporal-spatial dependence. A new test statistic is introduced for changepoint detection, and its asymptotic distribution is established. If a changepoint is detected, an estimate of the location is provided. The rate of convergence of the estimator is shown to depend on the data dimension, sample size, and signal-to-noise ratio. Binary segmentation is used to estimate the locations of possibly multiple changepoints, and the corresponding estimator is shown to be consistent under mild conditions. Simulation studies provide the empirical size and power of the proposed test and the accuracy of the changepoint estimator. An application to a time-course microarray dataset identifies gene sets with significant gene interaction changes over time.
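
As a sketch in our own notation (the paper's exact formulation may differ), the single-changepoint problem can be phrased as testing

$$H_0:\ \Sigma_1 = \Sigma_2 = \cdots = \Sigma_T \quad \text{versus} \quad H_1:\ \Sigma_1 = \cdots = \Sigma_\tau \neq \Sigma_{\tau+1} = \cdots = \Sigma_T \ \text{for some } \tau,$$

where $\Sigma_t$ denotes the $p \times p$ covariance matrix of the $p$ features at repeated measurement $t$, with $p$ larger than both the sample size and $T$; binary segmentation then applies such a test recursively to handle multiple changepoints.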


2019 ◽  
Vol 47 (10) ◽  
pp. 1-9
Author(s):  
Eun-Young Park ◽  
Joungmin Kim

We aimed to verify the factor model and measurement invariance of the abbreviated Center for Epidemiologic Studies Depression Scale by conducting a confirmatory factor analysis using data from 761 parents of individuals with intellectual disabilities who completed the scale as part of the 2011 Survey on the Actual Conditions of Individuals with Developmental Disabilities, South Korea, and 7,301 participants from the general population who completed the scale as part of the 2011 Welfare Panel Study and Survey by the Ministry of Health and Welfare, South Korea. We used fit indices to assess model fit and Amos 22.0 for data analysis. According to the results, the four-factor model had an appropriate fit to the data and the regression coefficients were significant. The chi-square difference test result was nonsignificant; therefore, the metric invariance model was the most appropriate measurement invariance model for the data. Implications of the findings are discussed.
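
As a generic sketch of the chi-square difference test used to compare nested invariance models (the numbers below are hypothetical, not the study's values; SciPy is assumed):

    # Hypothetical chi-square fit values for nested invariance models;
    # the metric model adds equality constraints, hence more degrees of freedom.
    from scipy.stats import chi2

    chisq_configural, df_configural = 245.3, 98
    chisq_metric, df_metric = 251.1, 104

    d_chisq = chisq_metric - chisq_configural
    d_df = df_metric - df_configural
    p = chi2.sf(d_chisq, d_df)  # upper-tail probability of the difference
    print(f"Delta chi2 = {d_chisq:.1f}, Delta df = {d_df}, p = {p:.3f}")
    # A nonsignificant p supports the more constrained (metric) model.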


Author(s):  
Markus Ekvall ◽  
Michael Höhle ◽  
Lukas Käll

Abstract
Motivation: Permutation tests offer a straightforward framework for assessing the significance of differences in sample statistics. A significant advantage of permutation tests is that they require relatively few assumptions about the distribution of the test statistic, relying only on the exchangeability of the group labels. They are also valuable because they enable a sensitivity analysis to determine the extent to which an assumed sampling distribution of the test statistic applies. However, permutation tests are rarely applied in this way because naïve implementations are too slow, with running times that grow exponentially with the sample size. Nevertheless, developments in the 1980s introduced dynamic programming algorithms that compute exact permutation tests in polynomial time. Despite this significant reduction in running time, the exact test has not yet become one of the predominant statistical tests for medium sample sizes. Here, we propose a computational parallelization of one such dynamic-programming-based permutation test, the Green algorithm, which makes the permutation test more attractive.
Results: Parallelization of the Green algorithm was made possible by a non-trivial rearrangement of the structure of the algorithm. A speed-up by orders of magnitude is achievable by executing the parallelized algorithm on a GPU. We demonstrate that the execution time essentially becomes a non-issue, even for sample sizes as high as hundreds of samples. This improvement makes our method an attractive alternative to, e.g., the widely used asymptotic Mann-Whitney U test.
Availability and implementation: Python 3 code is available from the GitHub repository https://github.com/statisticalbiotechnology/parallelPermutationTest under an Apache 2.0 license.
Supplementary information: Supplementary data are available at Bioinformatics online.
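
As a minimal sketch of a dynamic-programming exact permutation test in the spirit of the Green algorithm (a serial CPU illustration, not the authors' parallel GPU implementation; it assumes integer-valued observations such as ranks):

    from math import comb

    def exact_perm_pvalue(x, y):
        """One-sided exact p-value that sum(x) is at least as large as observed,
        over all reassignments of the pooled integer data to a group of size len(x)."""
        pooled = list(x) + list(y)
        m, n = len(x), len(pooled)
        # dp[k][s] = number of size-k subsets of the pooled data with sum s
        dp = [{} for _ in range(m + 1)]
        dp[0][0] = 1
        for v in pooled:
            for k in range(m, 0, -1):  # descend so each value is used at most once
                for s, c in dp[k - 1].items():
                    dp[k][s + v] = dp[k].get(s + v, 0) + c
        observed = sum(x)
        hits = sum(c for s, c in dp[m].items() if s >= observed)
        return hits / comb(n, m)

    # Applied to the ranks of the pooled data, this reproduces the exact
    # Mann-Whitney U test, since U is a monotone function of the rank sum.
    print(exact_perm_pvalue([7, 9, 10], [1, 2, 3, 4, 5, 6, 8]))  # 2/120 ~ 0.017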


2014 ◽  
Vol 22 (1) ◽  
pp. 45-60 ◽  
Author(s):  
Daniel L. Oberski

Latent variable models can only be compared across groups when these groups exhibit measurement equivalence or “invariance,” since otherwise substantive differences may be confounded with measurement differences. This article suggests examining directly whether measurement differences present could confound substantive analyses, by examining the expected parameter change (EPC)-interest. The EPC-interest approximates the change in parameters of interest that can be expected when freeing cross-group invariance restrictions. Monte Carlo simulations suggest that the EPC-interest approximates these changes well. Three empirical applications show that the EPC-interest can help avoid two undesirable situations: first, it can prevent unnecessarily concluding that groups are incomparable, and second, it alerts the user when comparisons of interest may still be invalidated even when the invariance model appears to fit the data. R code and data for the examples discussed in this article are provided in the electronic appendix (http://hdl.handle.net/1902.1/21816).


Author(s):  
Abdul Bashiru Jibril ◽  
Michael Adu Kwarteng ◽  
Miloslava Chovancova

Purpose – The aim of this research is to understand and present the strength of association between consumers' demographics and the use of green (herbal) products. By extension, it measures the degree of dependence between demographic factors and the use of green products in a developing country. Research methodology – To evaluate consumers' demographics in relation to the use of green (herbal) products, 207 participants took part in a survey using a structured questionnaire. Data were obtained from users of green products (specifically herbs) in Ghana. Nonparametric tests, namely the chi-square test (χ2) and Spearman's correlation (rs), were employed for the empirical analysis. Findings – The paper identifies the youthful population as the largest group of users of green products in the herbal market. Results from the nonparametric test (Spearman's rho) revealed that the demographic factors (gender, age, education, and occupation) have an inverse relationship with the use of green products, while the chi-square test disclosed nonsignificant relationships among the observed attributes. This suggests that there is no empirical evidence to support the claim that the use of green products depends on consumers' demographic factors. Research limitations – The study's scope was limited by its small sample size; future researchers should expand the sample size as well as the demographic variables considered in similar studies. Practical implications – The study gives practitioners and marketers in the herbal industry insights into how best to sustain their businesses. Originality/Value – The present study widens the scope of research on consumer behaviour towards green products in the marketing discipline, taking into consideration the widespread competition in the herbal (green product) market nowadays.
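
As a hedged illustration of the two tests reported above (the data are hypothetical, not the authors' Ghanaian sample; SciPy is assumed):

    import numpy as np
    from scipy.stats import chi2_contingency, spearmanr

    # Hypothetical contingency table: rows = age bands, columns = user / non-user
    table = np.array([[60, 25],
                      [45, 30],
                      [20, 27]])
    chi2_stat, p_chi2, dof, expected = chi2_contingency(table)
    print(f"chi-square = {chi2_stat:.2f}, dof = {dof}, p = {p_chi2:.3f}")

    # Hypothetical ordinal codes: age band vs. frequency of green-product use
    age = [1, 1, 2, 2, 3, 3, 1, 2, 3, 2]
    use = [3, 2, 2, 1, 1, 1, 3, 2, 1, 2]
    rho, p_rho = spearmanr(age, use)
    print(f"Spearman's rho = {rho:.2f}, p = {p_rho:.3f}")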


Author(s):  
Patrick Royston ◽  
Abdel Babiker

We present a menu-driven Stata program for the calculation of sample size or power for complex clinical trials with a survival time or a binary outcome. The features supported include up to six treatment arms, an arbitrary time-to-event distribution, fixed or time-varying hazard ratios, unequal patient allocation, loss to follow-up, staggered patient entry, and crossover of patients from their allocated treatment to an alternative treatment. The computations of sample size and power are based on the logrank test and are done according to the asymptotic distribution of the logrank test statistic, adjusted appropriately for the design features.
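
As a minimal sketch of the standard logrank-based calculation such programs build on (Schoenfeld's events formula for a two-arm trial under proportional hazards; a simplification, not the menu-driven program itself):

    from math import log, ceil
    from scipy.stats import norm

    def events_schoenfeld(hr, alpha=0.05, power=0.80, alloc=0.5):
        """Required number of events for a two-sided logrank test.
        hr: hazard ratio under the alternative; alloc: proportion in arm 1."""
        z_a = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
        z_b = norm.ppf(power)          # quantile for the desired power
        return ceil((z_a + z_b) ** 2 / (alloc * (1 - alloc) * log(hr) ** 2))

    print(events_schoenfeld(0.75, power=0.90))  # about 508 events to detect HR = 0.75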


2014 ◽  
Vol 26 (5) ◽  
pp. 499-509 ◽  
Author(s):  
Uche Nwabueze

Purpose – The purpose of this paper is to delineate the factors responsible for the decline of total quality management (TQM) in the National Health Service (NHS). It is suggested that if these factors had been identified and eliminated prior to implementation, the decline of TQM as a strategy for improving the provision and delivery of quality patient care could have been prevented. Design/methodology/approach – The case study approach was chosen because it is the preferred method when "how" or "what" questions are being posed. It is applicable here, as is evident in this paper, where the researcher has little control over events and the focus is on a contemporary phenomenon within a real-life context. The case study enables the researcher to give an accurate rendition of actual events; it contributes uniquely to the knowledge of individual, organisational, social, and political phenomena. Semi-structured face-to-face interviews constituted the main data collection technique of the research. Interviews were held with 23 quality management managers in the British NHS. The central focus of the interviews was on "what" factors contributed to the rapid decline of TQM in the NHS. The respondents were chosen because they were directly involved with the implementation of TQM and were in the vantage position to offer a full insight into the TQM initiative. The analysis of the case is based on Yin's analytic technique of explanation building. Findings – The decline of TQM in the NHS could have been prevented if top executives in hospitals had adopted the sequential steps to quality improvement. In the author's opinion, landing a man on the moon required a belief in the possibility and a breakthrough in the attitudes that viewed space travel as pure science fiction rather than a practical reality, and so it should have been with TQM in the NHS. However, the attitude of many NHS managers was that TQM was all right for "other institutions" because "they need it", whereas in the NHS, "we don't". This negative attitude needed to be overcome if TQM was to be accepted as a corporate, all-encompassing philosophy. Research limitations/implications – The limitation of the research may be the sample size, which was restricted to 23 quality managers who had hands-on experience and the leadership role to implement TQM in the NHS. Future research may consider a broader sample; it may also use surveys to identify a broader set of reasons why TQM declined in the NHS. Practical implications – This paper is the first constructive insight into the reasons for the decline of TQM in the NHS from the individuals who had sole responsibility for its implementation. Any other group would have amounted to hearsay. Therefore, to constructively delineate the reasons for failure, it was pertinent to learn from the quality managers directly and to ensure that the reasons were representative of their experiences with TQM. The practical implication is to prepare future managers for how to avoid failure. Originality/value – The paper clearly suggests the systematic process required for effective implementation of TQM in a healthcare setting by identifying factors that must be avoided to ensure the successful and sustainable implementation of TQM.


2018 ◽  
Vol 51 (6) ◽  
pp. 1605-1615
Author(s):  
Zhiyuan Wang ◽  
Huarui Wu ◽  
Liang Chen ◽  
Liangwei Sun ◽  
Xuewu Wang

The neutron flux of the Compact Pulsed Hadron Source (CPHS) is about 2–3 orders of magnitude lower than that of large neutron sources, which means that the beam intensity should be improved to achieve good statistics. Multi-pinhole collimation can be used to obtain a lower Q with an acceptable beam intensity in a very small angle neutron scattering (VSANS) instrument and a higher beam intensity for a larger sample size in a small-angle neutron scattering (SANS) instrument. A new nine-pinhole structure is used in a SANS instrument at CPHS to achieve an acceptable range and resolution of Q and a higher beam intensity compared to single-pinhole collimation. The crosstalk issue associated with multi-pinhole collimation is addressed using an optimized algorithm to achieve a higher safety margin and a larger pinhole size with a higher beam intensity at the sample. Different collimator aperture structures are compared on the basis of their noise production. Experiments are performed to verify the theory for calculating reflection noise from the inner surface of the collimator's aperture and parasitic noise from the beveled collimator structure. A simulated SANS experiment using cold neutrons shows that multi-pinhole collimators with an opening angle on the downstream side perform better than those with an opening angle on the upstream side and than straight-cut collimators. Compared with a single-pinhole collimation system, a nine-pinhole collimation system increases the intensity at the sample by approximately sevenfold when the sample size is increased by 20-fold for CPHS-SANS, and the signal-to-noise ratio is improved by exploiting a specific collimator aperture structure. Our goal is to install a multi-pinhole-collimator-based SANS instrument at CPHS in the future, and it is hoped that these results will serve to promote the utilization of multi-pinhole collimation systems at other facilities.

