Graph Summarization with Latent Variable Probabilistic Models

Author(s):  
Shintaro Fukushima ◽  
Ryoga Kanai ◽  
Kenji Yamanishi


2015 ◽ 
Vol 57 (5) ◽  
pp. 701-725 ◽  
Author(s):  
Hervé Guyon ◽  
Jean-François Petiot

Ratings-based conjoint analysis suffers from two problems: the distortion introduced by consumer perceptions of brand equity, and the lack of efficiency of probabilistic models for estimating preference shares. This article proposes two new approaches: scaling customer-based brand equity using repeated measures and structural equation modeling, and estimating preference shares on the basis of a randomized first choice. The outcome is a new tool for predicting accurate preference shares that takes into account product utilities (estimated by ratings-based conjoint analysis) and the brand equity related to product attributes (estimated as a latent variable with structural equation modeling). An example with three products illustrates this new approach.
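The randomized first choice step can be illustrated with a short simulation sketch. This is a simplified illustration: the utilities, error scale, and single error level below are hypothetical, and full randomized first choice typically adds error at both the attribute and the product level.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical total utilities for three products, e.g., conjoint
# part-worths plus a brand-equity term estimated as a latent variable.
utilities = np.array([2.1, 1.8, 1.5])

def randomized_first_choice(u, n_draws=10_000, error_scale=1.0):
    """Estimate preference shares: perturb each product's utility with
    random error on every draw, record the first-choice winner, and
    average the winners over draws."""
    noise = rng.gumbel(scale=error_scale, size=(n_draws, u.size))
    winners = np.argmax(u + noise, axis=1)
    return np.bincount(winners, minlength=u.size) / n_draws

print(randomized_first_choice(utilities))  # three shares summing to 1
```

With Gumbel error the draws mimic a logit choice rule; a larger error scale pulls the shares toward equality, a smaller one toward deterministic first choice.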


Perpetrating fraud for financial gain is a well-known phenomenon, and the fast-growing adoption of smartphones and increasing internet penetration have accelerated the embrace of digital technology. Financial transactions have evolved over the years from paper currency to electronic media, led by credit cards and interbank electronic transactions. The consumer shift toward e-commerce has not deterred criminals; they instead treat it as an opportunity to make money through fraudulent methods, and they are rapidly improving their fraud techniques. A shortcoming of current supervised and unsupervised machine learning approaches to fraud discovery is their inability to learn and explore all possible representations of the information. The proposed system, VAE-based fraud detection, uses a variational autoencoder for predicting and detecting fraud. The model consists of three major components: an encoder, a decoder, and a fraud detector. It learns a latent variable probabilistic model by maximizing a lower bound on the average log-likelihood of the observed data, and the fraud detector uses the latent representations obtained from the variational autoencoder to classify transactions as fraudulent or legitimate. The model is applied to a real credit card fraud dataset. The experimental results show that the implemented model performs better than supervised logistic regression, unsupervised autoencoders, and a random forest ensemble model.
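A minimal sketch of this kind of architecture, assuming PyTorch; the layer sizes, the Gaussian (MSE) reconstruction term, and the joint training of the detector with the autoencoder are assumptions for illustration, not the paper's exact specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAEFraudDetector(nn.Module):
    """Sketch of a VAE with an attached fraud classifier:
    encoder -> latent z -> decoder (reconstruction), and
    latent z -> fraud detector (binary classification)."""

    def __init__(self, n_features=30, n_latent=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 64), nn.ReLU())
        self.fc_mu = nn.Linear(64, n_latent)       # posterior mean
        self.fc_logvar = nn.Linear(64, n_latent)   # posterior log-variance
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.ReLU(), nn.Linear(64, n_features))
        self.detector = nn.Linear(n_latent, 1)     # fraud / legitimate logit

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), self.detector(z).squeeze(-1), mu, logvar

def loss_fn(x, y, x_hat, logit, mu, logvar, beta=1.0):
    recon = F.mse_loss(x_hat, x)                                   # reconstruction
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # KL to N(0, I)
    clf = F.binary_cross_entropy_with_logits(logit, y)             # fraud detector
    return recon + beta * kl + clf

# Smoke test on random data (a stand-in for the credit card dataset).
model = VAEFraudDetector()
x = torch.randn(16, 30)
y = torch.randint(0, 2, (16,)).float()
x_hat, logit, mu, logvar = model(x)
print(loss_fn(x, y, x_hat, logit, mu, logvar))
```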


Methodology ◽  
2011 ◽  
Vol 7 (2) ◽  
pp. 63-67 ◽  
Author(s):  
Ali Ünlü

Schrepp (2005) points out and builds upon the connection between knowledge space theory (KST) and latent class analysis (LCA) to propose a method for constructing knowledge structures from data. Candidate knowledge structures are generated, treated as restricted latent class models, and fitted to the data; the Bayesian information criterion (BIC) is then used to choose among them. This article adds further detail on the relationship between KST and LCA. It gives a more comprehensive overview of the literature and of the probabilistic models at the interface of KST and LCA, and compares KST and LCA with regard to the parameter estimation and model testing methodologies applied in their fields. The article concludes with an overview of KST-related publications addressing the outlined connection and with remarks about possible future research arising from connecting KST to other latent variable modeling approaches.
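For reference, the BIC used to choose among the candidate restricted latent class models has the standard form (a generic statement, not a formula quoted from the article):

\[
\mathrm{BIC}(M) = -2 \ln \hat{L}_M + k_M \ln n,
\]

where \(\hat{L}_M\) is the maximized likelihood of candidate model \(M\), \(k_M\) its number of free parameters, and \(n\) the sample size; the candidate knowledge structure whose restricted latent class model attains the smallest BIC is retained.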


1978 ◽  
Vol 3 (4) ◽  
pp. 305-317 ◽  
Author(s):  
Wim J. van der Linden

Macready and Dayton (1977) introduced two probabilistic models for mastery assessment based on an idealized all-or-none conception of mastery. Although these models are complete in statistical respects, the question is whether they are a plausible rendering of what happens when an examinee responds to an item. First, a correction is proposed that takes account of the fact that a master who is not able to produce the right answer to an item may guess. The meaning of this correction and its consequences for estimating the model parameters are discussed. Second, Macready and Dayton’s latent class models are confronted with the three-parameter logistic model, extended with the conception of mastery as a region on a latent variable. It appears that, from a latent trait theoretic point of view, the Macready and Dayton models assume item characteristic curves with the unrealistic form of a step function with a single step. The implications of the all-or-none conception of mastery for the learning process are pointed out briefly. Finally, the interpretation of the forgetting parameter of the Macready and Dayton models is discussed and approached from a latent trait theoretic point of view.
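To make the contrast concrete (standard forms in generic notation, not equations quoted from the article): the three-parameter logistic model posits smooth item characteristic curves

\[
P_i(\theta) = c_i + (1 - c_i)\,\frac{1}{1 + e^{-a_i(\theta - b_i)}},
\]

with discrimination \(a_i\), difficulty \(b_i\), and guessing parameter \(c_i\), whereas an all-or-none latent class model implies the single-step curve \(P_i(\theta) = \gamma_i\) below the mastery threshold and \(P_i(\theta) = 1 - \beta_i\) above it, \(\gamma_i\) and \(\beta_i\) denoting guessing and forgetting parameters.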


2016 ◽  
Vol 37 (4) ◽  
pp. 239-249
Author(s):  
Xuezhu Ren ◽  
Tengfei Wang ◽  
Karl Schweizer ◽  
Jing Guo

Abstract. Although attention control accounts for a unique portion of the variance in working memory capacity (WMC), the way in which attention control contributes to WMC has not been thoroughly specified. The current work focused on fractionating attention control into distinct executive processes and examined to what extent key processes of attention control, including updating, shifting, and prepotent response inhibition, were related to WMC, and whether these relations differed. A total of 216 university students completed experimental tasks of attention control and two measures of WMC. Latent variable analyses were employed to separate and model each process and its effect on WMC. The results showed that both the accuracy of updating and the accuracy of shifting were substantially related to WMC, while the link from the accuracy of inhibition to WMC was not significant; on the other hand, only the speed of shifting had a moderate effect on WMC, while neither the speed of updating nor the speed of inhibition showed a significant effect on WMC. The results suggest that these key processes of attention control exhibit differential effects on individual differences in WMC. The approach of combining experimental manipulations with statistical modeling constitutes a promising way of investigating cognitive processes.
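A latent variable analysis of this kind can be sketched with the semopy package for Python; the factor and indicator names, data file, and model syntax below are a hypothetical illustration, not the authors' actual specification.

```python
import pandas as pd
from semopy import Model

# Hypothetical structural model: three attention-control factors
# predicting a working memory capacity (WMC) factor. The indicator
# names (upd1, ..., wmc2) and the data file are invented.
desc = """
updating   =~ upd1 + upd2 + upd3
shifting   =~ shf1 + shf2 + shf3
inhibition =~ inh1 + inh2 + inh3
wmc        =~ wmc1 + wmc2
wmc ~ updating + shifting + inhibition
"""

data = pd.read_csv("attention_wmc_scores.csv")  # one column per indicator
model = Model(desc)
model.fit(data)
print(model.inspect())  # loadings, structural paths, significance tests
```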


Methodology ◽  
2011 ◽  
Vol 7 (4) ◽  
pp. 157-164
Author(s):  
Karl Schweizer

Probability-based and measurement-related hypotheses for confirmatory factor analysis of repeated-measures data are investigated. Such hypotheses comprise precise assumptions concerning the relationships among the true components associated with the levels of the design or the items of the measure. Measurement-related hypotheses concentrate on the assumed processes, such as transformation and memory processes, and represent treatment-dependent differences in processing. In contrast, probability-based hypotheses provide the opportunity to consider probabilities as outcome predictions that summarize the effects of various influences; the prediction of performance guided by inexact cues serves as an example. In the empirical part of this paper, probability-based and measurement-related hypotheses are applied to working-memory data. Latent variables according to both types of hypotheses contribute to a good model fit. The best model fit is achieved for the model that includes latent variables representing serial cognitive processing and performance according to inexact cues, in combination with a latent variable for subsidiary processes.
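One way to encode such precise hypotheses in software is to fix the factor loadings to theory-derived values. A hypothetical sketch with semopy follows; all variable names and loading values are invented and merely stand in for the kinds of fixed patterns such hypotheses imply.

```python
import pandas as pd
from semopy import Model

# Hypothetical CFA for four repeated measurements y1..y4.
# 'measurement' carries fixed loadings standing in for a
# measurement-related hypothesis (here, a linear trend);
# 'probability' carries loadings fixed to hypothetical outcome
# probabilities, standing in for a probability-based hypothesis.
desc = """
measurement =~ 1*y1 + 2*y2 + 3*y3 + 4*y4
probability =~ 0.9*y1 + 0.7*y2 + 0.5*y3 + 0.3*y4
"""

data = pd.read_csv("repeated_measures.csv")  # hypothetical file
model = Model(desc)
model.fit(data)
print(model.inspect())  # compare fit across competing loading patterns
```

Because the loadings are fixed, only the latent and residual variances are estimated, so competing loading patterns can be compared purely through model fit.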


2004 ◽  
Vol 49 (2) ◽  
pp. 204-204
Author(s):  
Alexander von Eye
