Collapsed Gibbs Sampling
Recently Published Documents


TOTAL DOCUMENTS

17
(FIVE YEARS 7)

H-INDEX

4
(FIVE YEARS 2)

2021 ◽  
Author(s):  
Cheng Chen ◽  
Jesse Mullis ◽  
Beshoy Morkos

Abstract: Risk management is vital to a product's lifecycle. Current practice for reducing risk relies on domain experts or management tools to identify unexpected engineering changes, and such approaches are prone to human error and laborious operations. This study presents a framework that contributes to requirements management by implementing a generative probabilistic model, supervised latent Dirichlet allocation (LDA) with collapsed Gibbs sampling (CGS), to study the topic composition of three unlabeled and unstructured industrial requirements documents. As finding the preferred number of topics remains an open question, a case study estimates an appropriate number of topics to represent each requirements document based on both perplexity and coherence values. Using human evaluations and interpretable visualizations, the results demonstrate the different levels of design detail obtained by varying the number of topics. Further, a relevance measurement provides the flexibility to improve the quality of topics. Designers can increase design efficiency by understanding, organizing, and analyzing high-volume requirements documents in configuration management based on topics across different domains. With domain knowledge and purposeful interpretation of topics, designers can make informed decisions on product evolution and mitigate the risks of unexpected engineering changes.
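The abstract does not detail the selection procedure, so the following is only a minimal sketch of how perplexity and coherence can be swept over candidate topic counts, using gensim's standard LdaModel and CoherenceModel. The tokenized requirements list `docs` is a hypothetical placeholder, and the paper itself uses supervised LDA with CGS rather than gensim's variational LDA.

```python
# Minimal sketch: score candidate topic counts with perplexity and coherence.
# `docs` is a hypothetical placeholder corpus of tokenized requirement statements.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

docs = [["system", "shall", "log", "errors", "hourly"],
        ["interface", "shall", "display", "status", "errors"],
        ["system", "shall", "export", "status", "reports"],
        ["interface", "shall", "log", "export", "reports"]]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

for k in range(2, 11):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k,
                   passes=10, random_state=0)
    perplexity = 2 ** (-lda.log_perplexity(corpus))          # lower is better
    coherence = CoherenceModel(model=lda, texts=docs, dictionary=dictionary,
                               coherence="c_v").get_coherence()  # higher is better
    print(f"K={k:2d}  perplexity={perplexity:8.2f}  coherence={coherence:.3f}")
```

In practice one would look for the smallest K at which coherence peaks before perplexity improvements flatten out; c_v is only one common coherence measure and is an assumption here.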


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lixue Zou ◽  
Xiwen Liu ◽  
Wray Buntine ◽  
Yanli Liu

Purpose: Full text of a document is a rich source of information that can be used to provide meaningful topics. The purpose of this paper is to demonstrate how to use citation context (CC) in the full text to identify the cited topics and citing topics efficiently and effectively by employing automatic text analysis algorithms.
Design/methodology/approach: The authors present two novel topic models, Citation-Context-LDA (CC-LDA) and Citation-Context-Reference-LDA (CCRef-LDA). CC is leveraged to extract the citing text from the full text, which makes it possible to discover topics accurately. CC-LDA incorporates CC, citing text, and their latent relationship, while CCRef-LDA incorporates CC, citing text, their latent relationship, and reference information in CC. Collapsed Gibbs sampling is used to achieve an approximate estimation. The capacity of CC-LDA to simultaneously learn cited topics and citing topics together with their links is investigated. Moreover, a topic influence measure based on CC-LDA is proposed and applied to create links between the two-level topics. In addition, the capacity of CCRef-LDA to discover topic-influential references is also investigated.
Findings: The results indicate that CC-LDA and CCRef-LDA achieve improved or comparable performance in terms of both perplexity and symmetric Kullback–Leibler (sKL) divergence. Moreover, CC-LDA is effective in discovering the cited topics and citing topics with topic influence, and CCRef-LDA is able to find the cited topic-influential references.
Originality/value: The automatic method provides novel knowledge for cited topic and citing topic discovery. Topic influence learnt by the model can link two-level topics and create a semantic topic network. The method can also use topic specificity as a feature to rank references.
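As a small aside on the evaluation metric, the symmetric Kullback–Leibler divergence between two topic-word distributions can be computed as below. This is a generic sketch; the smoothing constant `eps` and the unaveraged form sKL = KL(p||q) + KL(q||p) are assumptions, not taken from the paper.

```python
import numpy as np

def symmetric_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence between two discrete
    distributions: sKL(p, q) = KL(p||q) + KL(q||p)."""
    p = np.asarray(p, dtype=float) + eps   # smooth to avoid log(0)
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

# Example: divergence between two topic-word distributions over a 4-word vocabulary.
topic_a = [0.5, 0.3, 0.1, 0.1]
topic_b = [0.2, 0.2, 0.3, 0.3]
print(symmetric_kl(topic_a, topic_b))
```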


2020 ◽  
Author(s):  
Kazuhiro Yamaguchi ◽  
Jonathan Templin

This paper proposes a novel collapsed Gibbs sampling algorithm that marginalizes model parameters and directly samples latent attribute mastery patterns in diagnostic classification models. This estimation method avoids boundary problems in the estimation of model item parameters by eliminating the need to estimate such parameters. A simulation study showed that the collapsed Gibbs sampling algorithm can accurately recover true attribute mastery status under various conditions. In a real-data analysis, the collapsed Gibbs sampling algorithm showed good classification agreement with results from a previous study.
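As a much-simplified, hypothetical illustration of the same idea (not the authors' diagnostic classification model), the sketch below runs collapsed Gibbs sampling for a Beta-Bernoulli latent class model: the class-by-item response probabilities are integrated out into Beta-Binomial counts, so only the latent class labels, standing in for attribute mastery patterns, are sampled.

```python
import numpy as np

def collapsed_gibbs_latent_class(X, n_classes, alpha=1.0, a=1.0, b=1.0,
                                 n_iters=200, seed=0):
    """Collapsed Gibbs sampling for a Beta-Bernoulli latent class model.

    X         : (n_persons, n_items) binary response matrix
    n_classes : number of latent classes (stand-in for attribute patterns)
    alpha     : symmetric Dirichlet prior on class membership
    a, b      : Beta prior on each class-by-item response probability
    Item parameters are integrated out analytically, so only the class
    labels z are resampled (the 'collapsed' part).
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    z = rng.integers(n_classes, size=n)                        # initial labels
    n_c = np.bincount(z, minlength=n_classes).astype(float)    # persons per class
    s_cj = np.zeros((n_classes, m))                            # correct counts per class/item
    for c in range(n_classes):
        s_cj[c] = X[z == c].sum(axis=0)

    for _ in range(n_iters):
        for i in range(n):
            c_old = z[i]
            n_c[c_old] -= 1
            s_cj[c_old] -= X[i]
            # p(z_i = c | rest): class prior term times Beta-Binomial predictive per item
            p1 = (s_cj + a) / (n_c[:, None] + a + b)           # P(correct | class), collapsed
            loglik = X[i] * np.log(p1) + (1 - X[i]) * np.log(1 - p1)
            logp = np.log(n_c + alpha) + loglik.sum(axis=1)
            probs = np.exp(logp - logp.max())
            probs /= probs.sum()
            c_new = rng.choice(n_classes, p=probs)
            z[i] = c_new
            n_c[c_new] += 1
            s_cj[c_new] += X[i]
    return z

# Example: 100 simulated respondents, 10 binary items, 4 latent classes.
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(100, 10))
print(collapsed_gibbs_latent_class(X, n_classes=4)[:10])
```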


We propose a topic model for unsupervised cluster matching, which is the task of finding matchings among clusters in different domains without correspondence information. For example, the proposed model finds correspondences between document clusters in English and German without alignment information such as dictionaries or parallel sentences/documents. The proposed model assumes that documents in all languages share a common latent topic structure, and that there is a potentially infinite number of topic proportion vectors in a latent topic space shared by all languages. Each document is generated using one of the topic proportion vectors and language-specific word distributions. By inferring the topic proportion vector used for each document, we can assign documents in different languages to common clusters, where each cluster is associated with a topic proportion vector. Documents assigned to the same cluster are considered matched. We develop an efficient inference method for the proposed model based on collapsed Gibbs sampling. The effectiveness of the proposed model is demonstrated with real datasets, including multilingual corpora of Wikipedia and product reviews.
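The cluster-matching model itself is not reproduced here; the sketch below shows only the standard collapsed Gibbs update for plain LDA that such inference builds on, with fixed symmetric hyperparameters `alpha` and `beta` assumed for illustration.

```python
import numpy as np

def lda_collapsed_gibbs(docs, n_topics, vocab_size, alpha=0.1, beta=0.01,
                        n_iters=200, seed=0):
    """Standard collapsed Gibbs sampling for LDA.

    docs : list of documents, each a list of word ids in [0, vocab_size)
    The topic-word and document-topic distributions are integrated out;
    only the per-token topic assignments z are sampled.
    """
    rng = np.random.default_rng(seed)
    n_dk = np.zeros((len(docs), n_topics))      # topic counts per document
    n_kw = np.zeros((n_topics, vocab_size))     # word counts per topic
    n_k = np.zeros(n_topics)                    # total tokens per topic
    z = []                                      # topic assignment per token
    for d, doc in enumerate(docs):
        z_d = rng.integers(n_topics, size=len(doc))
        z.append(z_d)
        for w, k in zip(doc, z_d):
            n_dk[d, k] += 1
            n_kw[k, w] += 1
            n_k[k] += 1

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k_old = z[d][i]
                n_dk[d, k_old] -= 1
                n_kw[k_old, w] -= 1
                n_k[k_old] -= 1
                # p(z = k | rest) proportional to (n_dk + alpha) * (n_kw + beta) / (n_k + V*beta)
                p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + vocab_size * beta)
                k_new = rng.choice(n_topics, p=p / p.sum())
                z[d][i] = k_new
                n_dk[d, k_new] += 1
                n_kw[k_new, w] += 1
                n_k[k_new] += 1
    return n_dk, n_kw

# Example: three tiny documents over a 6-word vocabulary, two topics.
docs = [[0, 1, 2, 0], [3, 4, 5, 3], [0, 2, 4, 5]]
n_dk, n_kw = lda_collapsed_gibbs(docs, n_topics=2, vocab_size=6)
print(n_dk)   # document-topic counts; normalise rows to get topic proportions
```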


Author(s):  
Shuo Yang ◽  
Kai Shu ◽  
Suhang Wang ◽  
Renjie Gu ◽  
Fan Wu ◽  
...  

Social media has become one of the main channels through which people access and consume news, due to the speed and low cost of news dissemination on it. However, these same properties also make social media a hotbed of fake news dissemination, bringing negative impacts on both individuals and society. Therefore, detecting fake news has become a crucial problem attracting tremendous research effort. Most existing methods of fake news detection are supervised, which requires an extensive amount of time and labor to build a reliably annotated dataset. In search of an alternative, in this paper we investigate whether fake news can be detected in an unsupervised manner. We treat the truths of news and users' credibility as latent random variables, and exploit users' engagements on social media to identify their opinions towards the authenticity of news. We leverage a Bayesian network model to capture the conditional dependencies among the truths of news, the users' opinions, and the users' credibility. To solve the inference problem, we propose an efficient collapsed Gibbs sampling approach to infer the truths of news and the users' credibility without any labelled data. Experimental results on two datasets show that the proposed method significantly outperforms the compared unsupervised methods.
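The conditional distributions are not spelled out in the abstract. A heavily reduced, hypothetical version of the key update (the probability that a single news item is true given users' binary verdicts, with each user's Beta-distributed credibility collapsed into match counts on the other items) might look like the following sketch; the prior values are assumptions.

```python
import numpy as np

def prob_true(verdicts, match_counts, total_counts, a=2.0, b=1.0, prior_true=0.5):
    """Collapsed conditional for one news item's truth.

    verdicts     : array of 0/1 verdicts ("fake"/"true") from the engaged users
    match_counts : how often each user's verdict matched the current truth
                   assignments of the *other* news items
    total_counts : how many other items each user engaged with
    a, b         : Beta prior on user credibility; prior_true : prior P(true)
    Each user's credibility is integrated out, leaving a Beta-Binomial
    predictive probability that the user reports the truth correctly.
    """
    p_correct = (match_counts + a) / (total_counts + a + b)
    like_true = np.where(verdicts == 1, p_correct, 1 - p_correct).prod()
    like_fake = np.where(verdicts == 0, p_correct, 1 - p_correct).prod()
    post_true = prior_true * like_true
    post_fake = (1 - prior_true) * like_fake
    return post_true / (post_true + post_fake)

# Example: three users voted [true, true, fake] on one article.
print(prob_true(np.array([1, 1, 0]),
                np.array([8, 5, 1]),       # matches on other articles
                np.array([10, 9, 6])))     # engagements on other articles
```

A full sampler would draw each item's truth from this conditional in turn, with the match counts updated as the truth assignments change.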


Author(s):  
Bambang Subeno ◽  
Retno Kusumaningrum ◽  
Farikhin Farikhin

Latent Dirichlet allocation (LDA) is a probabilistic model for grouping hidden topics in documents given a predefined number of topics. Choosing the number of topics K incorrectly limits the correlation between words and topics: too large or too small a value of K causes inaccurate topic grouping when the training model is formed. This study aims to determine the optimal number of corpus topics in the LDA method using maximum likelihood and Minimum Description Length (MDL) approaches. The experiments use Indonesian news articles with 25, 50, 90, and 600 documents; the corresponding word counts are 3898, 7760, 13005, and 4365. The results show that the maximum likelihood and MDL approaches yield the same optimal number of topics, and that this optimum is influenced by the alpha and beta parameters. In addition, computation time is affected by the number of words rather than the number of documents; the computation times for the four datasets are 2.9721, 6.49637, 13.2967, and 3.7152 seconds. The optimisation yields the number of LDA topics used for the classification model. The experiments show that the highest average accuracy is 61%, obtained with alpha 0.1 and beta 0.001.
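The exact MDL formulation is not given in the abstract; a common two-part description-length score for choosing K, taken here as an assumption rather than the paper's formula, penalizes the negative log-likelihood by half the parameter count times the log of the corpus size.

```python
import numpy as np

def mdl_score(log_likelihood, k, n_docs, vocab_size, n_tokens):
    """Two-part MDL score for an LDA model with K topics (lower is better).
    The parameter count assumes K*(V-1) topic-word and D*(K-1) document-topic
    free parameters; this is a common approximation, not the paper's formula."""
    n_params = k * (vocab_size - 1) + n_docs * (k - 1)
    return -log_likelihood + 0.5 * n_params * np.log(n_tokens)

# Example: compare candidate topic counts given their corpus log-likelihoods
# (the log-likelihood values below are illustrative placeholders).
for k, ll in [(5, -41200.0), (10, -39800.0), (20, -39100.0)]:
    print(k, mdl_score(ll, k, n_docs=90, vocab_size=3000, n_tokens=13005))
```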


Author(s):  
Xuan Bui ◽  
Tu Vu ◽  
Khoat Than

The problem of posterior inference for individual documents is particularly important in topic models, but it is often intractable in practice. Many existing methods for posterior inference, such as variational Bayes, collapsed variational Bayes, and collapsed Gibbs sampling, do not have any guarantee on either the quality or the rate of convergence. The online maximum a posteriori estimation (OPE) algorithm has more attractive properties than other inference approaches. In this paper, we introduce four algorithms that improve on OPE (namely OPE1, OPE2, OPE3, and OPE4) by combining two stochastic bounds. The new algorithms not only preserve the key advantages of OPE but can also sometimes perform significantly better. These algorithms were employed to develop new, effective methods for learning topic models from massive/streaming text collections. Empirical results show that our approaches are often more efficient than state-of-the-art methods. DOI: 10.32913/rd-ict.vol2.no15.687
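The four variants are not described in the abstract; the sketch below shows the original OPE scheme they build on, as commonly formulated, with the step size, the uniform choice between the likelihood and prior parts, and the toy `beta` matrix all being assumptions. The MAP objective for a document's topic proportions is split into a likelihood part and a prior part, one part is drawn at random each step, and a Frank-Wolfe step moves toward the simplex vertex with the largest gradient of the running stochastic approximation.

```python
import numpy as np

def ope(doc_word_counts, beta, alpha=0.01, n_iters=100, seed=0):
    """Sketch of online maximum a posteriori estimation (OPE) for one document.

    doc_word_counts : (V,) term frequencies of the document
    beta            : (K, V) topic-word distributions
    alpha           : Dirichlet hyperparameter (assumed < 1)
    Maximises f(theta) = sum_j d_j log(theta @ beta[:, j]) + (alpha - 1) sum_k log theta_k
    over the probability simplex.
    """
    rng = np.random.default_rng(seed)
    K = beta.shape[0]
    theta = np.full(K, 1.0 / K)
    picks = np.zeros(2)                       # how often each part has been picked
    for t in range(1, n_iters + 1):
        picks[rng.integers(2)] += 1           # pick likelihood or prior part
        grad_lik = beta @ (doc_word_counts / (theta @ beta))   # gradient of likelihood part
        grad_prior = (alpha - 1.0) / theta                     # gradient of prior part
        grad = (picks[0] * grad_lik + picks[1] * grad_prior) / t
        i = int(np.argmax(grad))              # best simplex vertex e_i
        e = np.zeros(K)
        e[i] = 1.0
        theta += (e - theta) / (t + 1)        # Frank-Wolfe style step, stays in the simplex
    return theta

# Example: a 3-topic, 5-word toy model and one short document.
beta = np.array([[.4, .3, .1, .1, .1],
                 [.1, .1, .4, .3, .1],
                 [.1, .1, .1, .3, .4]])
doc = np.array([3, 2, 0, 1, 0], dtype=float)
print(ope(doc, beta))
```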

