A Reply Graph-based Social Mining Method with Topic Modeling

2014 ◽ Vol 24 (6) ◽ pp. 640-645 ◽ Author(s): Sang Yeon Lee, Keon Myung Lee

2018 ◽ Vol 11 (1) ◽ pp. 18-27 ◽ Author(s): Micah D. Saxton

Topic modeling is a data mining method which can be used to understand and categorize large corpora of data; as such, it is a tool which theological librarians can use in their professional workflows and scholarly practices. In this article I provide a gentle introduction to topic modeling for those who have no prior knowledge of the topic. I begin with a conceptual overview of topic modeling which does not rely on the complicated mathematics behind the process. Then, I illustrate topic modeling by providing a narrative of building a topic model using the entirety of Theological Librarianship as my example corpus. This narrative ends with an analysis of the success of the model and suggestions for improvement. Finally, I recommend a few resources for those who would like to pursue topic modeling further.
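As a rough illustration of the kind of workflow the article narrates, the sketch below builds a small LDA topic model with the gensim library; the toy corpus, the preprocessing, and the number of topics are illustrative assumptions, not the settings used on the Theological Librarianship corpus.

```python
# Minimal sketch of building an LDA topic model with gensim.
# The toy corpus, stop-word handling, and num_topics value are
# illustrative assumptions, not the article's actual settings.
from gensim import corpora
from gensim.models import LdaModel

documents = [
    "library catalog metadata and digital collections",
    "theology scripture interpretation and doctrine",
    "digital library metadata standards and cataloging",
    "doctrine scripture commentary and theology",
]

# Tokenize and drop very common words (a stand-in for real preprocessing).
stop_words = {"and", "the", "of"}
texts = [[w for w in doc.lower().split() if w not in stop_words]
         for doc in documents]

# Map tokens to integer ids and build a bag-of-words corpus.
dictionary = corpora.Dictionary(texts)
bow_corpus = [dictionary.doc2bow(text) for text in texts]

# Fit a small LDA model; in practice the number of topics is tuned.
lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary,
               passes=10, random_state=42)

# Inspect the top words per topic to judge whether the topics are coherent.
for topic_id, words in lda.print_topics(num_words=4):
    print(topic_id, words)
```

Judging model quality then comes down to reading the top words per topic and asking whether each group hangs together, which is the kind of analysis the article's narrative ends with.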


Author(s): Maria A. Milkova

Information now accumulates so rapidly that the conventional notion of iterative search needs revision. In a world oversaturated with information, covering and analyzing a research problem comprehensively places high demands on search methods. An innovative approach to search should flexibly account for the large body of already accumulated knowledge and for a priori requirements on the results; the results, in turn, should immediately provide a roadmap of the area under study at whatever level of detail is needed. Search based on topic modeling, so-called topic search, meets these requirements: it streamlines how one works with information, raises the efficiency of knowledge production, and helps avoid cognitive biases in how information is perceived, which matters at both the micro and the macro level. To demonstrate topic search in practice, the article analyzes an import substitution program using patent data. The program includes plans for 22 industries and lists more than 1,500 products and technologies proposed for import substitution. Patent search based on topic modeling makes it possible to search directly by blocks of a priori information, namely the terms of the industrial import substitution plans, and to obtain a selection of relevant documents for each industry. This approach not only gives a comprehensive picture of the effectiveness of the program as a whole, but also shows in detail which groups of products and technologies have been patented.
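As a hedged sketch of how such a topic search might work, the example below fits an LDA model over a toy document set and ranks documents against a block of a priori query terms by comparing topic distributions; the corpus, the query terms, and all parameters are illustrative assumptions, not the program's actual patent data.

```python
# Sketch of "topic search": rank documents against a block of a priori
# query terms by comparing topic distributions rather than exact keywords.
# The corpus, query terms, and topic count are illustrative assumptions.
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

documents = [
    "gas turbine blade alloy manufacturing process",
    "pharmaceutical compound synthesis for antibiotics",
    "turbine rotor cooling system for power plants",
    "antibiotic resistance screening assay method",
]
texts = [doc.lower().split() for doc in documents]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(bow, num_topics=2, id2word=dictionary,
               passes=20, random_state=1)

def topic_vector(tokens):
    """Dense topic distribution for a bag of tokens."""
    dist = lda.get_document_topics(dictionary.doc2bow(tokens),
                                   minimum_probability=0.0)
    return np.array([p for _, p in dist])

# A block of a priori terms standing in for one industry plan.
query_terms = "turbine blade power plant".split()
q = topic_vector(query_terms)

# Rank documents by cosine similarity of topic distributions.
scores = []
for i, tokens in enumerate(texts):
    d = topic_vector(tokens)
    cos = float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))
    scores.append((cos, documents[i]))

for score, doc in sorted(scores, reverse=True):
    print(f"{score:.3f}  {doc}")
```

Running one such query per industry plan yields a ranked selection of documents for each industry, which is the shape of output the abstract describes.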


2020 ◽ Vol 16 (2) ◽ pp. 83-115 ◽ Author(s): Mira Kim, Hye Sun Hwang, Xu Li

2019 ◽ Vol 58 (6) ◽ pp. 197-207 ◽ Author(s): Juhae Baeck, Hyungil Kwon, Mihwa Choi, Yi-Hsiu Lin

Author(s): Priyanka R. Patil, Shital A. Patil

Similarity View is an application for visually comparing and exploring multiple text models over a collection of documents. Friendbook discovers users' lifestyles from user-centric sensor data, measures the similarity of lifestyles between users, and recommends friends to users whose lifestyles are highly similar; modeling a user's daily life as life documents, it extracts lifestyles with the Latent Dirichlet Allocation (LDA) algorithm. Manual techniques are not suitable for checking research papers, because the assigned reviewer may have insufficient knowledge of the research discipline or differing subjective views, which can lead to misinterpretation. There is therefore an urgent need for an effective and feasible way to check submitted research papers with the support of automated software, and text mining methods can address the problem of checking papers semantically. The proposed method finds the similarity of texts in a collection of documents using the LDA algorithm together with Latent Semantic Analysis (LSA): an LSA-with-synonyms algorithm looks up synonyms of terms, index by index, in the English WordNet dictionary, while an LSA-without-synonyms algorithm computes text similarity on the indexed terms alone. The accuracy of LSA with synonyms is higher because synonyms are considered during matching.
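The comparison the abstract describes could be sketched roughly as follows, using TF-IDF plus truncated SVD as the LSA step and NLTK's WordNet interface for synonym expansion; the documents, the SVD rank, and the expansion rule are illustrative assumptions, not the authors' implementation.

```python
# Sketch of document similarity via LSA, with and without WordNet-based
# synonym expansion. Documents, SVD rank, and the expansion rule are
# illustrative assumptions.
from itertools import chain

from nltk.corpus import wordnet  # requires: nltk.download("wordnet")
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

documents = [
    "the car drives fast on the road",
    "the automobile moves quickly along the street",
    "students read books in the library",
]

def expand_with_synonyms(text):
    """Append WordNet synonyms of each token to the document."""
    tokens = text.lower().split()
    extra = []
    for tok in tokens:
        lemmas = chain.from_iterable(s.lemma_names() for s in wordnet.synsets(tok))
        extra.extend(l.lower().replace("_", " ") for l in lemmas)
    return " ".join(tokens + extra)

def lsa_similarity(docs, n_components=2):
    """Pairwise cosine similarity in a low-rank LSA space."""
    lsa = make_pipeline(TfidfVectorizer(),
                        TruncatedSVD(n_components=n_components, random_state=0))
    vectors = lsa.fit_transform(docs)
    return cosine_similarity(vectors)

plain = lsa_similarity(documents)
expanded = lsa_similarity([expand_with_synonyms(d) for d in documents])

print("doc0 vs doc1 without synonyms:", round(plain[0, 1], 3))
print("doc0 vs doc1 with synonyms:   ", round(expanded[0, 1], 3))
```

The intuition matches the abstract's claim: expanding terms with synonyms lets near-paraphrases such as "car"/"automobile" land closer together in the reduced space than index-only matching would.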

