A Nested Chinese Restaurant Topic Model for Short Texts with Document Embeddings

2021 ◽  
Vol 11 (18) ◽  
pp. 8708
Author(s):  
Yue Niu ◽  
Hongjie Zhang ◽  
Jing Li

In recent years, short texts have become a prevalent form of text on the internet. Due to the short length of each text, conventional topic models suffer from the sparsity of word co-occurrence information on short texts. Researchers have proposed customized topic models for short texts that provide additional word co-occurrence information. However, these models cannot incorporate sufficient semantic word co-occurrence information and may introduce additional noise. To address these issues, we propose a self-aggregated topic model incorporating document embeddings. Aggregating short texts into long documents according to document embeddings provides sufficient word co-occurrence information and avoids incorporating non-semantic word co-occurrence information. However, the document embeddings of short texts contain a lot of noise resulting from the sparsity of word co-occurrence information, so we discard this noise by transforming the document embeddings into global and local semantic information. The global semantic information is the similarity probability distribution over the entire dataset, and the local semantic information is the distances to similar short texts. We then adopt a nested Chinese restaurant process to incorporate these two kinds of information. Finally, we compare our model to several state-of-the-art models on four real-world short-text corpora. The experimental results show that our model achieves better performance in terms of topic coherence and classification accuracy.
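The aggregation step described above can be pictured with a minimal sketch: short texts are grouped into pseudo-long documents by the cosine similarity of their document embeddings. This illustrates the general idea only; the function name and the simple center-based grouping are assumptions for illustration, not the paper's nested-CRP procedure.

```python
import numpy as np

def aggregate_short_texts(embeddings, n_groups):
    """Group short texts into pseudo-long documents by cosine similarity
    of their document embeddings (illustrative sketch only)."""
    # Normalize rows so that dot products equal cosine similarities.
    X = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    # Pick random texts as group centers (a stand-in for a proper prior).
    centers = X[np.random.choice(len(X), n_groups, replace=False)]
    # Assign each short text to its most similar center.
    return np.argmax(X @ centers.T, axis=1)
```

Texts assigned to the same group would then be concatenated into one long document before topic inference, which is what restores usable word co-occurrence statistics.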

Symmetry ◽  
2019 ◽  
Vol 11 (12) ◽  
pp. 1486
Author(s):  
Zhinan Gou ◽  
Zheng Huo ◽  
Yuanzhen Liu ◽  
Yi Yang

Supervised topic modeling has been successfully applied in the fields of document classification and tag recommendation in recent years. However, most existing models neglect the fact that topic terms have the ability to distinguish topics. In this paper, we propose a term frequency-inverse topic frequency (TF-ITF) method for constructing a supervised topic model, in which the weight of each topic term indicates its ability to distinguish topics. We conduct a series of experiments with both symmetric and asymmetric Dirichlet prior parameters. Experimental results demonstrate that the supervised topic model with TF-ITF outperforms several state-of-the-art supervised topic models.
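By analogy with TF-IDF, a TF-ITF weight can be sketched as a term's frequency within a topic times the log-inverse of the number of topics containing that term. The paper's exact formulation may differ; the function below is a hypothetical illustration of the idea.

```python
import math
from collections import Counter

def tf_itf(topic_term_counts):
    """Weight each term in each topic by term frequency times inverse
    topic frequency, by analogy with TF-IDF (illustrative sketch; the
    paper's exact formulation may differ)."""
    n_topics = len(topic_term_counts)
    # Count in how many topics each term appears.
    topic_freq = Counter()
    for counts in topic_term_counts:
        topic_freq.update(counts.keys())
    weights = []
    for counts in topic_term_counts:
        total = sum(counts.values())
        # A term appearing in every topic gets log(1) = 0, i.e. no
        # discriminative power -- the same behavior as IDF.
        weights.append({
            term: (c / total) * math.log(n_topics / topic_freq[term])
            for term, c in counts.items()
        })
    return weights
```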


2019 ◽  
Vol 26 (5) ◽  
pp. 531-549
Author(s):  
Chuan Wu ◽  
Evangelos Kanoulas ◽  
Maarten de Rijke

Abstract: Entities play an essential role in understanding textual documents, regardless of whether the documents are short, such as tweets, or long, such as news articles. In short textual documents, all entities mentioned are usually considered equally important because of the limited amount of information. In long textual documents, however, not all entities are equally important: some are salient and others are not. Traditional entity topic models (ETMs) focus on ways to incorporate entity information into topic models to better explain the generative process of documents. However, entities are usually treated equally, without considering whether they are salient or not. In this work, we propose a novel ETM, Salient Entity Topic Model, to take salient entities into consideration in the document generation process. In particular, we model salient entities as a source of topics used to generate words in documents, in addition to the topic distribution of documents used in traditional topic models. Qualitative and quantitative analysis is performed on the proposed model. Application to entity salience detection demonstrates the effectiveness of our model compared to the state-of-the-art topic model baselines.


2021 ◽  
Author(s):  
Yue Niu ◽  
Hongjie Zhang

With the growth of the internet, short texts such as tweets from Twitter, news titles from RSS feeds, or comments from Amazon have become very prevalent. Many tasks need to retrieve information hidden in the content of short texts, so ontology learning methods have been proposed for retrieving structured information. A topic hierarchy is a typical ontology that consists of concepts and the taxonomy relations between them. Current hierarchical topic models are not specially designed for short texts: they use word co-occurrence to construct concepts and general-special word relations to construct the taxonomy. But in short texts, word co-occurrence is sparse and general-special word relations are lacking. To overcome these two problems and provide an interpretable result, we designed a hierarchical topic model which aggregates short texts into long documents before constructing topics and relations. Because the long documents add semantic information, our model can avoid the sparsity of word co-occurrence. In experiments, we measured the quality of concepts with the topic coherence metric on four real-world short-text corpora. The results showed that our topic hierarchy is more interpretable than those of other methods.


2020 ◽  
pp. 016555152096869
Author(s):  
Saedeh Tahery ◽  
Saeed Farzi

With the rapid growth of the Internet, search engines play a vital role in meeting users' information needs. However, formulating information needs as simple queries remains a problem for ordinary users. Therefore, query auto-completion, one of the most important features of search engines, is leveraged to provide a ranked list of queries matching the prefix a user has entered. Although query auto-completion utilises useful information provided by search engine logs, time-aware, semantic and context-aware features are still important sources of extra knowledge. Specifically, in this study, a hybrid query auto-completion system called TIPS (Time-aware Personalised Semantic-based query auto-completion) is introduced to combine the well-known systems based on popularity and on a neural language model. Furthermore, the system is supplemented with time-aware features that blend both context and semantic information in a collaborative manner. Experimental studies on the standard AOL dataset are conducted to compare the proposed system with state-of-the-art methods, namely FactorCell, ConcatCell and Unadapted. The results illustrate the significant superiority of TIPS in terms of mean reciprocal rank (MRR), especially for short prefixes.
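The evaluation metric used here, mean reciprocal rank, has a simple definition worth spelling out: the average, over test prefixes, of the reciprocal rank of the first correct completion in the returned list (contributing 0 when the target is absent). A minimal sketch, with a hypothetical function name:

```python
def mean_reciprocal_rank(ranked_lists, targets):
    """MRR: average of 1/rank of the first correct item per query;
    a query whose target never appears contributes 0."""
    total = 0.0
    for ranked, target in zip(ranked_lists, targets):
        for rank, candidate in enumerate(ranked, start=1):
            if candidate == target:
                total += 1.0 / rank
                break
    return total / len(ranked_lists)
```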


Author(s):  
Minor Eduardo Quesada Grosso ◽  
Edgar Casasola Murillo ◽  
Jorge Antonio Leoni de León

Abstract: Mining and exploitation of data in social networks has been the focus of many efforts, but despite the resources and energy invested, much remains to be done given its complexity, which requires a multidisciplinary approach. Specifically, this research concerns the content of the texts published regularly, and at a very rapid pace, on microblogging sites (e.g., Twitter.com), which can be used to analyse global and local trends. These trends are marked by emerging topics that are distinguished from others by a sudden and accelerated rate of posts related to the same topic; in other words, by an increase in popularity over relatively short periods, a day or a few hours, for example (Wanner et al.). The problem, then, is twofold: first to extract the topics, then to identify which of those topics are trending. A recent solution, known as the Bursty Biterm Topic Model (BBTM), is an algorithm for identifying trending topics with a good level of performance on Twitter, but it requires a great amount of computer processing. Hence, this research aims to evaluate whether it is possible to reduce the amount of processing required while obtaining equally good results. This reduction is carried out by discriminating among the word co-occurrences (biterms) that BBTM uses to model trending topics. In contrast to our previous work, in this research we carry out a more complete and exhaustive set of experiments.
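The biterms that BBTM models can be illustrated with a short sketch: every unordered pair of distinct words co-occurring in the same short text. The discrimination step investigated here would then amount to filtering this list before modeling. The function name is hypothetical.

```python
from itertools import combinations

def extract_biterms(tokens):
    """All unordered pairs of distinct words (biterms) co-occurring in
    one short text -- the basic unit BBTM models instead of documents."""
    # Deduplicate tokens, then enumerate pairs; sorting each pair makes
    # the biterm orderless, e.g. ("b", "a") and ("a", "b") coincide.
    return [tuple(sorted(pair)) for pair in combinations(set(tokens), 2)]
```

Discriminating biterms (e.g., dropping pairs containing very rare or very common words) shrinks this list, which is where the processing reduction studied in the paper would come from.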


Author(s):  
Pankaj Gupta ◽  
Yatin Chaudhary ◽  
Florian Buettner ◽  
Hinrich Schütze

We address two challenges in topic models: (1) Context information around words helps in determining their actual meaning, e.g., “networks” used in the contexts artificial neural networks vs. biological neuron networks. Generative topic models infer topic-word distributions, taking little or no context into account. Here, we extend a neural autoregressive topic model to exploit the full context information around words in a document in a language modeling fashion. The proposed model is named iDocNADE. (2) Due to the small number of word occurrences (i.e., lack of context) in short texts and data sparsity in a corpus of few documents, the application of topic models is challenging on such texts. Therefore, we propose a simple and efficient way of incorporating external knowledge into neural autoregressive topic models: we use embeddings as a distributional prior. The proposed variants are named DocNADEe and iDocNADEe. We present novel neural autoregressive topic model variants that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability (topic coherence) and applicability (retrieval and classification) over 7 long-text and 8 short-text datasets from diverse domains.


Author(s):  
Ryohei Hisano

Topic models are frequently used in machine learning owing to their high interpretability and modular structure. However, extending a topic model to include a supervisory signal, to incorporate pre-trained word embedding vectors and to include a nonlinear output function is not an easy task because one has to resort to a highly intricate approximate inference procedure. The present paper shows that topic modeling with pre-trained word embedding vectors can be viewed as implementing a neighborhood aggregation algorithm where messages are passed through a network defined over words. From the network view of topic models, nodes correspond to words in a document and edges correspond to either a relationship describing co-occurring words in a document or a relationship describing the same word in the corpus. The network view allows us to extend the model to include supervisory signals, incorporate pre-trained word embedding vectors and include a nonlinear output function in a simple manner. In experiments, we show that our approach outperforms the state-of-the-art supervised Latent Dirichlet Allocation implementation in terms of held-out document classification tasks.


2018 ◽  
Vol 45 (4) ◽  
pp. 554-570 ◽  
Author(s):  
Jian Jin ◽  
Qian Geng ◽  
Haikun Mou ◽  
Chong Chen

Interdisciplinary studies are becoming increasingly popular, and research domains of many experts are becoming diverse. This phenomenon brings difficulty in recommending experts to review interdisciplinary submissions. In this study, an Author–Subject–Topic (AST) model is proposed with two versions. In the model, reviewers’ subject information is embedded to analyse topic distributions of submissions and reviewers’ publications. The major difference between the AST and Author–Topic models lies in the introduction of a ‘Subject’ layer, which supervises the generation of hierarchical topics and allows sharing of subjects among authors. To evaluate the performance of the AST model, papers in Information System and Management (a typical interdisciplinary domain) in a famous Chinese academic library are investigated. Comparative experiments are conducted, which show the effectiveness of the AST model in topic distribution analysis and reviewer recommendation for interdisciplinary studies.


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4595 ◽  
Author(s):  
Clara Gomez ◽  
Alejandra C. Hernandez ◽  
Ramon Barber

Exploration of unknown environments is a fundamental problem in autonomous robotics that deals with the complexity of autonomously traversing an unknown area while acquiring the most important information about the environment. In this work, a mobile robot exploration algorithm for indoor environments is proposed. It combines frontier-based concepts with behavior-based strategies in order to build a topological representation of the environment. Frontier-based approaches assume that, to gain the most information about an environment, the robot has to move to the regions on the boundary between open space and unexplored space. The novelty of this work lies in the semantic frontier classification and frontier selection according to a cost–utility function. In addition, a probabilistic loop closure algorithm is proposed to solve cyclic situations. The system outputs a topological map of the free areas of the environment for further navigation. Finally, simulated and real-world experiments have been carried out; their results, and the comparison to other state-of-the-art algorithms, show the feasibility of the proposed exploration algorithm and the improvement it offers with regard to execution time and travelled distance.
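The cost–utility selection can be sketched as scoring each frontier by a weighted utility minus a weighted travel cost. In the sketch below, frontier size stands in for expected information gain and Euclidean distance stands in for travel cost; the paper's exact function, weights, and data layout may differ, so all names here are assumptions.

```python
import math

def select_frontier(frontiers, robot_pos, w_utility=1.0, w_cost=1.0):
    """Pick the frontier maximizing a cost-utility score: weighted
    frontier size (proxy for information gain) minus weighted
    Euclidean travel distance (illustrative sketch)."""
    def score(f):
        dist = math.dist(robot_pos, f["centroid"])
        return w_utility * f["size"] - w_cost * dist
    return max(frontiers, key=score)
```

Tuning `w_utility` against `w_cost` trades exploration greediness for travel economy, which is the knob such cost–utility exploration strategies typically expose.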


Author(s):  
Tomer Raviv ◽  
Asaf Schwartz ◽  
Yair Be'ery

Tail-biting convolutional codes extend the classical zero-termination convolutional codes: both encoding schemes force the equality of start and end states, but under tail-biting every state is a valid termination. This paper proposes a machine-learning approach to improve the state-of-the-art decoding of tail-biting codes, focusing on the widely employed short-length regime, as in the LTE standard (which also includes a CRC code). First, we parameterize the circular Viterbi algorithm (CVA), a baseline decoder that exploits the circular nature of the underlying trellis. An ensemble then combines multiple such weighted decoders, with each decoder specializing in words from a specific region of the channel words' distribution; a region corresponds to a subset of termination states, and the ensemble covers the entire state space. A non-learnable gating satisfies two goals: it filters easily decoded words and mitigates the overhead of executing multiple weighted decoders. The CRC criterion is employed to choose only a subset of experts for decoding. Our method achieves an FER improvement of up to 0.75 dB over the CVA in the waterfall region for multiple code lengths, while adding negligible computational complexity compared to the CVA at high SNRs.
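The non-learnable CRC gating can be sketched as follows: run the CRC check on a cheap decoder's output and invoke the expert ensemble only when the check fails. The CRC routine below is standard binary polynomial division; the gating logic is an illustrative assumption and omits the paper's additional restriction of which experts run.

```python
def crc_remainder(bits, poly):
    """Binary CRC: remainder of dividing the message (with its CRC
    appended) by the generator polynomial; all-zero means it passes."""
    bits = list(bits)          # work on a copy
    n = len(poly)
    for i in range(len(bits) - n + 1):
        if bits[i]:            # XOR the generator in at each set bit
            for j in range(n):
                bits[i + j] ^= poly[j]
    return bits[-(n - 1):]     # the last deg(poly) bits are the remainder

def gate(candidate, poly, experts):
    """Non-learnable gating sketch: keep the cheap decoder's word if
    its CRC passes; otherwise fall back to the expert ensemble and
    prefer an expert output whose CRC passes."""
    if not any(crc_remainder(candidate, poly)):
        return candidate                       # easily decoded: no experts run
    decoded = [expert(candidate) for expert in experts]
    for word in decoded:
        if not any(crc_remainder(word, poly)):
            return word
    return decoded[0] if decoded else candidate
```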

