Comparative Analysis of Scientific Papers Collections via Topic Modeling and Co-authorship Networks

Author(s): Fedor Krasnov, Alexander Dimentov, Mikhail Shvartsman
2018, Vol 22 (3), pp. 109-134
Author(s): Jihyo Kim, Eon-Suk Ko
2020, Vol 14 (2), pp. 1-27
Author(s): Ting Hua, Chang-Tien Lu, Jaegul Choo, Chandan K. Reddy
2021, Vol 21 (5), pp. 79-88
Author(s): Chae Yeon Han, Woo Sik Kim, Dong Keun Yoon

This study aims to analyze differences between domestic and international disaster research trends. We first performed topic modeling on 20,477 papers published in three domestic and 12 international journals over the last 21 years (2000-2020) and then visualized the trends. Based on the extracted topics and keywords, we analyzed keyword networks using Gephi. Research in domestic journals mainly revolved around natural disasters such as earthquakes, fires, and flooding, whereas international journals spotlighted policy-oriented topics such as disaster governance and community resilience. Globally, building and civil engineering research has shrunk over the last five years (we refer to this as a cold topic). Over the same period, fire and flood research has appeared more frequently in domestic journals, while international journals have published more articles on community resilience, risk perception, and behavior (we refer to these as hot topics). The results of this research can suggest directions in which domestic disaster research should develop in the future.
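
The hot/cold-topic classification described above can be illustrated with a short, hypothetical sketch: fit LDA with scikit-learn on a toy corpus of dated abstracts, then compare each topic's average weight in the last five publication years against the earlier period. The toy documents, the two-topic setting, and the five-year cut-off are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch (not the authors' code) of the hot/cold-topic idea:
# fit LDA on dated abstracts, then compare each topic's average weight
# in the last five years against the preceding period.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    ("earthquake damage assessment of buildings", 2005),
    ("flood risk mapping for urban drainage", 2012),
    ("community resilience and disaster governance policy", 2018),
    ("risk perception and evacuation behavior during fire", 2020),
]
texts, years = zip(*docs)

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(texts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)                 # document-topic weights

recent = np.array(years) >= max(years) - 4   # last five publication years
for k in range(theta.shape[1]):
    before = theta[~recent, k].mean()
    after = theta[recent, k].mean()
    label = "hot" if after > before else "cold"
    print(f"topic {k}: {before:.2f} -> {after:.2f} ({label})")
```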


2021, Vol 10 (04), pp. 541-548
Author(s): Denis Luiz Marcello Owa

AI, 2021, Vol 2 (4), pp. 578-599
Author(s): Fuad Alattar, Khaled Shaalan

Comparing two sets of documents to identify new topics is useful in many applications, such as discovering trending topics in collections of scientific papers, detecting emerging topics in microblogs, and interpreting sentiment variations on Twitter. In this paper, the main topic-modeling-based approaches to this task are examined to identify their limitations and the enhancements they require. To overcome these limitations, we introduce two separate frameworks that discover emerging topics through a filtered latent Dirichlet allocation (filtered-LDA) model. The model acts as a filter that identifies old topics in a timestamped set of documents, removes all documents that focus on old topics, and keeps documents that discuss new topics. Filtered-LDA also reduces the chance of using keywords from old topics to represent emerging topics. The final stage of the filter uses multiple topic visualization formats to improve the human interpretability of the filtered topics, and it presents the most representative document for each topic.
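
The filtering step can be sketched roughly as follows, under our own assumptions rather than the paper's implementation: learn "old" topics from an earlier document set, drop new documents that an old topic already explains well, and model what remains to surface candidate emerging topics. The toy documents and the 0.6 dominance threshold are arbitrary choices for illustration.

```python
# Rough sketch of the old-topic filtering idea (not the paper's implementation).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

old_docs = ["stock market crash analysis", "market trading volatility report"]
new_docs = ["stock market volatility update",           # intended to resemble old topics
            "novel coronavirus outbreak in the city"]   # intended to carry a new topic

vec = CountVectorizer(stop_words="english").fit(old_docs + new_docs)
old_lda = LatentDirichletAllocation(n_components=2, random_state=0)
old_lda.fit(vec.transform(old_docs))

# Keep only new documents that no old topic dominates (threshold is arbitrary).
theta_new = old_lda.transform(vec.transform(new_docs))
kept = [d for d, row in zip(new_docs, theta_new) if row.max() < 0.6]

# A second LDA over the surviving documents surfaces candidate emerging topics.
if kept:
    new_lda = LatentDirichletAllocation(n_components=1, random_state=0)
    new_lda.fit(vec.transform(kept))
    terms = vec.get_feature_names_out()
    top = new_lda.components_[0].argsort()[::-1][:5]
    print("emerging-topic keywords:", [terms[i] for i in top])
```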


Author(s): Yiming Wang, Ximing Li, Jihong Ouyang

Neural topic modeling provides a flexible, efficient, and powerful way to extract topic representations from text documents. Unfortunately, most existing models cannot handle text data with network links, such as web pages with hyperlinks and scientific papers with citations. To handle this kind of data, we develop a novel neural topic model, the Layer-Assisted Neural Topic Model (LANTM), which can be interpreted from the perspective of variational auto-encoders. Our main motivation is to enhance the topic representation encoding by using not only the text contents but also the auxiliary network links. Specifically, LANTM encodes the texts and network links into topic representations with an augmented network containing graph convolutional modules, and decodes them by maximizing the likelihood of the generative process. Neural variational inference is adopted for efficient inference. Experimental results show that LANTM significantly outperforms existing models on topic quality, text classification, and link prediction.
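
The encode-with-links idea can be sketched in a few lines of PyTorch. This is a minimal illustration of the general pattern, not the authors' LANTM architecture: a VAE-style encoder over bag-of-words inputs, one graph-convolution step over the document link matrix, and a softmax decoder. All dimensions, the single GCN step, and the random toy data are assumptions.

```python
# Minimal sketch of a link-assisted neural topic model (not the authors' LANTM).
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinkAssistedNTM(nn.Module):
    def __init__(self, vocab_size, n_topics, hidden=64):
        super().__init__()
        self.bow_enc = nn.Linear(vocab_size, hidden)
        self.gcn = nn.Linear(hidden, hidden)             # one graph-convolution step
        self.mu = nn.Linear(hidden, n_topics)
        self.logvar = nn.Linear(hidden, n_topics)
        self.decoder = nn.Linear(n_topics, vocab_size)   # topic-word matrix

    def forward(self, bow, adj):
        h = F.relu(self.bow_enc(bow))
        h = F.relu(self.gcn(adj @ h))                    # mix neighbor features via links
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        theta = torch.softmax(z, dim=-1)                 # document-topic proportions
        recon = torch.log_softmax(self.decoder(theta), dim=-1)
        rec_loss = -(bow * recon).sum(1).mean()          # reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(1).mean()
        return theta, rec_loss + kl

# Toy usage: 5 documents, 20-word vocabulary, random symmetric link matrix.
bow = torch.rand(5, 20)
adj = torch.eye(5) + torch.bernoulli(torch.full((5, 5), 0.3))
adj = ((adj + adj.T) > 0).float()
adj = adj / adj.sum(1, keepdim=True)                     # row-normalize
model = LinkAssistedNTM(vocab_size=20, n_topics=4)
theta, loss = model(bow, adj)
loss.backward()
print(theta.shape, float(loss))
```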

