latent variable model
Recently Published Documents

TOTAL DOCUMENTS: 333 (FIVE YEARS: 96)
H-INDEX: 29 (FIVE YEARS: 5)
2021 ◽ Vol 15 (4)
Author(s): Silvia D’Angelo, Lorraine Brennan, Isobel Claire Gormley

2021 ◽ pp. 1471082X2110592
Author(s): Jian-Wei Gou, Ye-Mao Xia, De-Peng Jiang

The two-part model (TPM) is a widely used statistical method for analyzing semi-continuous data. Semi-continuous data can be viewed as arising from two distinct stochastic processes: one governs the occurrence (binary part) of the data and the other determines the intensity (continuous part). In the regression setting with the semi-continuous outcome as a function of covariates, the binary part is commonly modelled via logistic regression and the continuous component via a log-normal model. The conventional TPM still imposes assumptions (such as a log-normal distribution for the continuous part, no unobserved heterogeneity among the responses, and no collinearity among covariates) that are often unrealistic in practical applications. In this article, we develop a two-part nonlinear latent variable model (TPNLVM) with mixed multiple semi-continuous and continuous variables. The semi-continuous variables are treated as indicators of the latent factors along with other manifest variables. This reduces the dimensionality of the regression model and alleviates potential multicollinearity problems. Our TPNLVM can accommodate nonlinear relationships among the latent variables extracted from the factor analysis. To downweight the influence of distributional deviations and extreme observations, we develop a Bayesian semiparametric analysis procedure. The conventional parametric assumptions on the related distributions are relaxed, and a Dirichlet process (DP) prior is used to improve model fitting. By taking advantage of the discreteness of the DP, our method is effective in capturing the heterogeneity underlying the population. Within the Bayesian paradigm, posterior inferences, including parameter estimates and model assessment, are carried out through Markov chain Monte Carlo (MCMC) sampling. To facilitate posterior sampling, we adopt the Pólya-Gamma stochastic representation for the logistic model. Using simulation studies, we examine the properties and merits of the proposed methods, and we illustrate the approach by evaluating the effect of treatment on cocaine use and examining whether the treatment effect is moderated by psychiatric problems.
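The two-part structure described in this abstract can be illustrated with a minimal sketch: a logistic model for occurrence and a log-normal model for intensity, whose log-likelihood factorizes into a binary contribution and a continuous contribution. This is a generic TPM, not the authors' TPNLVM; all function names and parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_two_part(n, beta_bin, beta_cont, sigma):
    """Simulate semi-continuous outcomes: zero with prob 1-p, else log-normal."""
    x = rng.normal(size=(n, len(beta_bin)))
    p = 1.0 / (1.0 + np.exp(-x @ beta_bin))    # occurrence probability (binary part)
    occurs = rng.random(n) < p
    log_mu = x @ beta_cont                     # mean of log-intensity (continuous part)
    y = np.where(occurs, np.exp(rng.normal(log_mu, sigma)), 0.0)
    return x, y

def two_part_loglik(y, x, beta_bin, beta_cont, sigma):
    """TPM log-likelihood: logistic part for zeros/positives, log-normal for positives."""
    p = 1.0 / (1.0 + np.exp(-(x @ beta_bin)))
    zero = y == 0
    ll = np.sum(np.log(1.0 - p[zero] + 1e-12))       # zeros: failure in the binary part
    ypos, mupos = y[~zero], (x @ beta_cont)[~zero]
    ll += np.sum(np.log(p[~zero] + 1e-12))           # positives: success in the binary part...
    ll += np.sum(-np.log(ypos * sigma * np.sqrt(2 * np.pi))
                 - (np.log(ypos) - mupos) ** 2 / (2 * sigma ** 2))  # ...times log-normal density
    return ll

x, y = simulate_two_part(500, np.array([0.5, -1.0]), np.array([1.0, 0.3]), 0.4)
```

The factorization is what makes the two parts separately estimable; the paper's Bayesian semiparametric machinery (DP prior, Pólya-Gamma augmentation) replaces the fixed log-normal and logistic assumptions used here.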


2021
Author(s): Koshi Watanabe, Keisuke Maeda, Takahiro Ogawa, Miki Haseyama

Machines ◽ 2021 ◽ Vol 9 (10) ◽ pp. 229
Author(s): Ning Chen, Fuhai Hu, Jiayao Chen, Zhiwen Chen, Weihua Gui, ...

Due to the ubiquitous dynamics of industrial processes, variable time lags pose a great challenge to high-precision industrial process monitoring. To this end, a process monitoring method based on a dynamic autoregressive latent variable model is proposed in this paper. First, from the perspective of process data, a dynamic autoregressive latent variable model (DALM) with process variables as input and quality variables as output is constructed to adapt to the variable time-lag characteristic. In addition, a fused Bayesian filtering, smoothing and expectation-maximization algorithm is used to identify the model parameters. Then, a process monitoring method based on the DALM is constructed, in which the process data are filtered online to obtain the latent-space distribution of the current state, and T2 statistics are constructed. Finally, by comparison with an existing method, the feasibility and effectiveness of the proposed method are tested on the sintering process of ternary cathode materials. Detailed comparisons show the superiority of the proposed method.
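The T2 monitoring step mentioned here can be sketched generically: fit the latent-score distribution under normal operation, then flag points whose Hotelling's T2 (squared Mahalanobis distance) exceeds a control limit. This is a plain T2 chart on simulated scores, not the paper's DALM filter; the data, the 99% empirical control limit, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent scores under normal operating conditions (illustrative training data).
scores_train = rng.normal(size=(200, 3))

mean = scores_train.mean(axis=0)
cov = np.cov(scores_train, rowvar=False)
cov_inv = np.linalg.inv(cov)

def t2_statistic(z):
    """Hotelling's T^2: squared Mahalanobis distance from the training mean."""
    d = z - mean
    return float(d @ cov_inv @ d)

# Empirical 99% control limit from the training statistics
# (a chi-square or F approximation is the textbook alternative).
limit = np.percentile([t2_statistic(z) for z in scores_train], 99)

normal_point = rng.normal(size=3)
fault_point = normal_point + np.array([10.0, 0.0, 0.0])  # simulated mean shift
is_fault = t2_statistic(fault_point) > limit
```

In the paper's setting the mean and covariance of the latent state come from the online Bayesian filter rather than a static training sample, but the alarm rule has the same shape.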


Psychometrika ◽ 2021
Author(s): Jules L. Ellis

Abstract
It is argued that the generalizability theory interpretation of coefficient alpha is important. In this interpretation, alpha is a slightly biased but consistent estimate of the coefficient of generalizability in a subjects × items design where both subjects and items are randomly sampled. This interpretation is based on "domain sampling" true scores. It is argued that these true scores have a more solid empirical basis than the true scores of Lord and Novick (1968), which are based on "stochastic subjects" (Holland, 1990), while only a single observation is available for each within-subject distribution. Therefore, the generalizability interpretation of coefficient alpha is to be preferred, unless the true scores can be defined by a latent variable model that has undisputed empirical validity for the test and that is sufficiently restrictive to entail a consistent estimate of the reliability (as, for example, McDonald's omega). If this model implies that the items are essentially tau-equivalent, both the generalizability and the reliability interpretations of alpha can be defensible.
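Coefficient alpha itself is a simple function of the item and total-score variances of a subjects × items matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal numpy sketch, with simulated essentially tau-equivalent items (data and names are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for a subjects x items score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # sample variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

rng = np.random.default_rng(2)
true_score = rng.normal(size=(300, 1))
# Four items sharing one true score with independent errors:
# essentially tau-equivalent, so alpha estimates the reliability.
items = true_score + 0.8 * rng.normal(size=(300, 4))
```

Under the generalizability interpretation argued for in the abstract, the same number estimates the coefficient of generalizability when both the 300 subjects and the 4 items are treated as random samples.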

