A Comparison on the Use of LSA and LDA in Psychology Analysis on “Courage” Definitions

2017 ◽  
Vol 11 (03) ◽  
pp. 373-389
Author(s):  
Sara Santilli ◽  
Laura Nota ◽  
Giovanni Pilato

In the present work, Latent Semantic Analysis was applied to textual data related to courage, in order to compare and contrast results and evaluate the opportunity of integrating different data sets. To better understand the definition of courage in the Italian context, 1199 participants were involved in the study and were asked to complete the prompt "Courage is…". The participants' definitions of courage were analyzed with Latent Semantic Analysis (LSA) and Latent Dirichlet Allocation (LDA) in order to study the fundamental concepts arising from the population. An analogous comparison with Twitter posts was also carried out to assess whether the public opinion emerging from social media provides a challenging and rich context for exploring computational models of natural language.
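As a rough illustration of the kind of pipeline such a study implies, the sketch below fits both LSA and LDA to a handful of short free-text definitions with scikit-learn; the toy corpus, component counts, and preprocessing are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch: run LSA and LDA side by side on short free-text answers
# (hypothetical data; the paper's preprocessing and parameters differ).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, LatentDirichletAllocation

answers = [
    "courage is facing your fears",
    "courage is acting despite being afraid",
    "courage is defending others at personal risk",
]

# LSA: TF-IDF weighting followed by truncated SVD.
tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(answers)
lsa = TruncatedSVD(n_components=2, random_state=0)
lsa_topics = lsa.fit_transform(X_tfidf)

# LDA: raw term counts, since LDA is a generative model over counts.
counts = CountVectorizer()
X_counts = counts.fit_transform(answers)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda_topics = lda.fit_transform(X_counts)

print(lsa_topics.shape, lda_topics.shape)  # (3, 2) (3, 2)
```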

Author(s):  
Priyanka R. Patil ◽  
Shital A. Patil

Similarity View is an application for visually comparing and exploring multiple models of text over a collection of documents. Friendbook discovers users' lifestyles from user-centric sensor data, measures the similarity of lifestyles between users, and recommends friends to users if their lifestyles are highly similar. Motivated by this, a user's daily life is modeled as life documents, from which lifestyles are extracted using the Latent Dirichlet Allocation algorithm. Manual techniques cannot be used for checking research papers, as the assigned reviewer may have insufficient knowledge of the research disciplines or hold differing subjective views, causing possible misinterpretations. There is an urgent need for an effective and feasible approach to checking submitted research papers with the support of automated software. Text mining methods can solve the problem of automatically checking research papers semantically. The proposed method finds the similarity of text across the collection of documents using the Latent Dirichlet Allocation (LDA) algorithm and Latent Semantic Analysis (LSA) in two variants: LSA with synonyms, which finds synonyms of indexed terms using the English WordNet dictionary, and LSA without synonyms, which measures text similarity on the index terms alone. The accuracy of LSA with synonyms is greater when synonyms are considered for matching.
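A minimal sketch of the synonym-expansion idea, assuming NLTK's WordNet interface and scikit-learn; the abstract does not specify the term-matching and weighting details, so this only illustrates expanding index terms with WordNet synonyms before an LSA similarity comparison.

```python
# Hypothetical sketch: expand each document with WordNet synonyms of its
# terms, then compare documents in an LSA space (not the paper's exact method).
from nltk.corpus import wordnet  # requires: nltk.download("wordnet")
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def expand_with_synonyms(text):
    words = text.split()
    expanded = list(words)
    for w in words:
        for syn in wordnet.synsets(w):
            expanded.extend(l.name().replace("_", " ") for l in syn.lemmas())
    return " ".join(expanded)

docs = [
    "topic models cluster documents",
    "grouping texts by latent themes",
    "neural networks classify images",
]
expanded_docs = [expand_with_synonyms(d) for d in docs]

X = TfidfVectorizer().fit_transform(expanded_docs)
Z = TruncatedSVD(n_components=2, random_state=0).fit_transform(X)
print(cosine_similarity(Z))  # pairwise similarity in the LSA space
```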


Natural Language Processing uses word embeddings to map words into vectors. The context vector is one such technique; it gives the importance of terms in the document corpus. Context vectors can be derived using various methods such as neural networks, latent semantic analysis, and knowledge-base methods. This paper proposes a novel system, an enhanced context vector machine called eCVM, which is able to determine context phrases and their importance. eCVM uses latent semantic analysis, the existing context vector machine, dependency parsing, named entities, topics from latent Dirichlet allocation, and various forms of words such as nouns, adjectives, and verbs for building the context. eCVM uses the context vector and the PageRank algorithm to find the importance of a term in a document, and is tested on the BBC news dataset. Results of eCVM are compared with the state of the art for context derivation. The proposed system shows improved performance over existing systems on standard evaluation parameters.
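The abstract does not detail how PageRank is applied, but a common pattern (as in TextRank) is to run it over a term co-occurrence graph; the sketch below illustrates that pattern with networkx and is an assumption, not eCVM's actual construction.

```python
# Hypothetical TextRank-style sketch: rank terms by PageRank over a
# co-occurrence graph (illustrative; not the eCVM construction itself).
import itertools
import networkx as nx

sentences = [
    ["government", "announces", "budget"],
    ["budget", "cuts", "spark", "debate"],
    ["government", "debate", "continues"],
]

# Link terms that co-occur within the same sentence.
graph = nx.Graph()
for tokens in sentences:
    graph.add_edges_from(itertools.combinations(set(tokens), 2))

scores = nx.pagerank(graph)
for term, score in sorted(scores.items(), key=lambda kv: -kv[1])[:3]:
    print(term, round(score, 3))
```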


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
John T. Hale ◽  
Luca Campanelli ◽  
Jixing Li ◽  
Shohini Bhattasali ◽  
Christophe Pallier ◽  
...  

Efforts to understand the brain bases of language face the Mapping Problem: At what level do linguistic computations and representations connect to human neurobiology? We review one approach to this problem that relies on rigorously defined computational models to specify the links between linguistic features and neural signals. Such tools can be used to estimate linguistic predictions, model linguistic features, and specify a sequence of processing steps that may be quantitatively fit to neural signals collected while participants use language. Progress has been helped by advances in machine learning, attention to linguistically interpretable models, and openly shared data sets that allow researchers to compare and contrast a variety of models. We describe one such data set in detail in the Supplementary Appendix.


The Covid-19 pandemic is the deadliest outbreak in our living memory. It is therefore the need of the hour to prepare the world with strategies to prevent and control the impact of such epidemics. In this paper, a novel semantic pattern detection approach in the Covid-19 literature using contextual clustering and intelligent topic modeling is presented. For contextual clustering, three levels of weights (term level, document level, and corpus level) are used with latent semantic analysis. For intelligent topic modeling, semantic collocations are selected using pointwise mutual information (PMI) and log-frequency biased mutual dependency (LBMD), and latent Dirichlet allocation is applied. Contextual clustering with latent semantic analysis yields semantic spaces with high correlation among terms at the corpus level. Through intelligent topic modeling, topics are improved, exhibiting lower perplexity and higher coherence. This research helps in finding the knowledge gap in the area of Covid-19 research and offers directions for future work.
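For reference, pointwise mutual information for a candidate collocation of terms x and y is the standard quantity below; the abstract does not give LBMD's exact form, so only PMI is shown.

```latex
\mathrm{PMI}(x, y) = \log \frac{p(x, y)}{p(x)\, p(y)}
```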


Author(s):  
Christopher John Quinn ◽  
Matthew James Quinn ◽  
Alan Olinsky ◽  
John Thomas Quinn

This chapter provides an overview of a number of important issues related to studying user interactions in an online social network. The approach of social network analysis is detailed, along with important basic concepts for network models. Different ways of indicating influence within a network are described via measures such as degree centrality, betweenness centrality, and closeness centrality. Network structure as represented by cliques and components, with measures of connectedness defined by clustering and reciprocity, is also included. Given the large volume of data associated with social networks, the significance of data storage and sampling is discussed. Since verbal communication is significant within networks, textual analysis is reviewed with respect to classification techniques such as sentiment analysis, and with respect to topic modeling, specifically latent semantic analysis, probabilistic latent semantic analysis, latent Dirichlet allocation, and alternatives. Information diffusion, another important area, is covered in detail.
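As an illustration of the centrality measures mentioned, the sketch below computes all three on a toy graph with networkx (the toy network is an assumption for demonstration):

```python
# Degree, betweenness, and closeness centrality on a small toy network.
import networkx as nx

g = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("b", "d"), ("d", "e")])

print(nx.degree_centrality(g))       # fraction of nodes each node touches
print(nx.betweenness_centrality(g))  # share of shortest paths through a node
print(nx.closeness_centrality(g))    # inverse average distance to all others
```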


Author(s):  
William J. Irwin ◽  
Saul D. Robinson ◽  
Stephen M. Belt

Objective Introduced is a visual data exploration technique for compiling, reducing, organizing, visually rendering, and filtering text-based narratives for detailed analysis. Background The analysis of large narrative data sets poses an increasingly difficult problem. Visual representation is considered an effective tool in many applications. The focus of this study was to determine whether a latent semantic analysis–based projection of narrative data into geographic information systems software provides a useful tool for reducing and organizing large volumes of narrative data for analysis. Method This approach utilizes latent semantic analysis to reduce narratives to high-dimensional vectors, truncates the vectors to a two-dimensional projection through isometric mapping, and then visually renders the result with geographic information systems software. The method is demonstrated on aviation self-reported safety narratives sourced from the Aviation Safety Reporting System. Results Thematic regions from the corpus are illustrated along with the first five topics identified. Conclusion Shown is the ability to assimilate a large number of narratives, identify contextual themes, recognize common events and outliers, and organize the resultant topics. Application Large narrative-based data sets present in aviation and other domains may be visualized to facilitate efficient analysis, enhance comprehension, and improve safety.
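A minimal sketch of the projection pipeline described, assuming scikit-learn in place of whatever tooling the authors used: narratives are reduced to LSA vectors, then mapped to two dimensions with isometric mapping (the GIS rendering step is omitted, and the narratives are invented stand-ins).

```python
# Sketch: narratives -> TF-IDF -> LSA vectors -> 2-D Isomap projection.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.manifold import Isomap

narratives = [
    "aircraft deviated from assigned altitude during climb",
    "runway incursion during taxi in low visibility",
    "altitude deviation after autopilot disconnect",
    "taxiway confusion led to wrong runway entry",
    "climb clearance misunderstood by flight crew",
    "visibility dropped below minimums on approach",
]

X = TfidfVectorizer().fit_transform(narratives)
lsa_vecs = TruncatedSVD(n_components=4, random_state=0).fit_transform(X)
xy = Isomap(n_neighbors=3, n_components=2).fit_transform(lsa_vecs)
print(xy)  # 2-D coordinates, ready for plotting or a GIS layer
```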


Entropy ◽  
2019 ◽  
Vol 21 (7) ◽  
pp. 660 ◽  
Author(s):  
Sergei Koltcov ◽  
Vera Ignatenko ◽  
Olessia Koltsova

Topic modeling is a popular approach for clustering text documents. However, current tools have a number of unsolved problems, such as instability and a lack of criteria for selecting the values of model parameters. In this work, we propose a method to partially solve the problems of optimizing model parameters while simultaneously accounting for semantic stability. Our method is inspired by concepts from statistical physics and is based on Sharma–Mittal entropy. We test our approach on two models, probabilistic Latent Semantic Analysis (pLSA) and Latent Dirichlet Allocation (LDA) with Gibbs sampling, and on two datasets in different languages. We compare our approach against a number of standard metrics, each of which is able to account for just one of the parameters of interest. We demonstrate that Sharma–Mittal entropy is a convenient tool for selecting both the number of topics and the values of hyper-parameters, simultaneously controlling for semantic stability, which none of the existing metrics can do. Furthermore, we show that concepts from statistical physics can contribute to theory construction for machine learning, a rapidly developing sphere that currently lacks a consistent theoretical ground.
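For reference, the two-parameter Sharma–Mittal entropy of a distribution p takes the standard form below, which recovers the Rényi, Tsallis, and Shannon entropies in its limits; the paper's specific parameterization for topic models may differ.

```latex
S_{q,r}(p) = \frac{1}{1-r}\left[\left(\sum_{i} p_i^{\,q}\right)^{\frac{1-r}{1-q}} - 1\right]
```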


2021 ◽  
Vol 31 (3) ◽  
Author(s):  
Cinzia Viroli ◽  
Laura Anderlucci

Mixtures of unigrams are one of the simplest and most efficient tools for clustering textual data, as they assume that documents related to the same topic have similar distributions of terms, naturally described by multinomials. When the classification task is particularly challenging, such as when the document-term matrix is high-dimensional and extremely sparse, a more composite representation can provide better insight into the grouping structure. In this work, we develop a deep version of mixtures of unigrams for the unsupervised classification of very short documents with a large number of terms, by allowing for models with additional, deeper latent layers; the proposal is derived in a Bayesian framework. The behavior of the deep mixtures of unigrams is empirically compared with that of other traditional and state-of-the-art methods, namely k-means with cosine distance, k-means with Euclidean distance on data transformed according to semantic analysis, partitioning around medoids, mixtures of Gaussians on semantic-based transformed data, hierarchical clustering according to Ward’s method with cosine dissimilarity, latent Dirichlet allocation, mixtures of unigrams estimated via the EM algorithm, spectral clustering, and affinity propagation clustering. Performance is evaluated in terms of both the correct classification rate and the Adjusted Rand Index. Simulation studies and real data analysis show that going deep in clustering such data substantially improves classification accuracy.
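For context, a mixture of unigrams models each document d, with word counts n_dw over vocabulary V, as drawn from one of K multinomial topics; this is the standard shallow formulation that the paper extends with further latent layers.

```latex
p(d) = \sum_{k=1}^{K} \pi_k \prod_{w \in V} \theta_{kw}^{\,n_{dw}}
```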


2021 ◽  
pp. 089448652110083
Author(s):  
Joshua J. Daspit ◽  
James J. Chrisman ◽  
Triss Ashton ◽  
Nicholas Evangelopoulos

While progress has been made in recent years to understand the differences among family firms, insights remain fragmented due, in part, to an incomplete understanding of heterogeneity and the scope of differences that exist among family firms. Given this, we offer a definition of and review the literature on family firm heterogeneity. A latent semantic analysis of 781 articles from 33 journals identified nine common themes of family firm heterogeneity. For each theme, we review scholarly progress made and highlight differences among family firms. Additionally, we offer directions for advancing the study of family firm heterogeneity.


Author(s):  
Samuel Kim ◽  
Panayiotis Georgiou ◽  
Shrikanth Narayanan

We propose the notion of latent acoustic topics to capture contextual information embedded within a collection of audio signals. The central idea is to learn a probability distribution over a set of latent topics for a given audio clip in an unsupervised manner, assuming that there exist latent acoustic topics and that each audio clip can be described in terms of them. In this regard, we use latent Dirichlet allocation (LDA) to implement acoustic topic models over elemental acoustic units, referred to as acoustic words, and perform text-like audio signal processing. Experiments on audio tag classification with the BBC sound effects library demonstrate the usefulness of the proposed latent audio context modeling schemes. In particular, the proposed method is shown to be superior to other latent structure analysis methods, such as latent semantic analysis and probabilistic latent semantic analysis. We also demonstrate that topic models can be used as complementary features to content-based features, offering about 9% relative improvement in audio classification when combined with the traditional Gaussian mixture model (GMM)–support vector machine (SVM) technique.
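A rough sketch of the acoustic-word idea under stated assumptions: frame-level features (here random stand-ins for MFCCs) are vector-quantized with k-means into an acoustic vocabulary, and each clip's word counts are fed to LDA. The original system's features, vocabulary size, and topic count are not specified here.

```python
# Hypothetical sketch: quantize frame features into "acoustic words",
# then fit LDA over per-clip word counts (stand-in data, not BBC audio).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)
clips = [rng.normal(size=(200, 13)) for _ in range(10)]  # 13-dim "MFCC" frames

# Learn an acoustic vocabulary of 32 words from all frames.
kmeans = KMeans(n_clusters=32, n_init=5, random_state=0)
kmeans.fit(np.vstack(clips))

# Bag-of-acoustic-words counts per clip.
counts = np.array([
    np.bincount(kmeans.predict(c), minlength=32) for c in clips
])

lda = LatentDirichletAllocation(n_components=4, random_state=0)
clip_topics = lda.fit_transform(counts)  # per-clip acoustic topic mixture
print(clip_topics.shape)  # (10, 4)
```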

