Latent Space Models
Recently Published Documents


TOTAL DOCUMENTS: 24 (five years: 11)

H-INDEX: 7 (five years: 2)

2021 ◽  
Vol 30 (1) ◽  
pp. 19-33
Author(s):  
Annis Shafika Amran ◽  
Sharifah Aida Sheikh Ibrahim ◽  
Nurul Hashimah Ahamed Hassain Malim ◽  
Nurfaten Hamzah ◽  
Putra Sumari ◽  
...  

Electroencephalogram (EEG) is a neurotechnology used to measure brain activity via brain impulses. Throughout the years, EEG has contributed tremendously to data-driven research models (e.g., Generalised Linear Models, Bayesian Generative Models, and Latent Space Models) in Neuroscience Technology and Neuroinformatics. Owing to its versatility, portability, cost feasibility, and non-invasiveness, it has yielded a variety of neuroscientific data that has led to advances in the medical, education, management, and even marketing fields. In past years, the extensive use of EEG has been inclined towards medical healthcare studies, such as disease detection and interventions in mental disorders, but it has not been fully explored for use in neuromarketing. Hence, this study describes the data acquisition techniques used in neuroscience studies based on electroencephalography and outlines how this technique has evolved in terms of its technology and databases, focusing on neuromarketing uses.


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0253873
Author(s):  
Hanxuan Yang ◽  
Wei Xiong ◽  
Xueliang Zhang ◽  
Kai Wang ◽  
Maozai Tian

Online social networks like Twitter and Facebook are among the most popular sites on the Internet. Most online social networks exhibit specific features, including reciprocity, transitivity, and degree heterogeneity. Such networks are so-called scale-free networks and have drawn much attention in research. The aim of this paper is to develop a novel methodology for directed network embedding within the latent space model (LSM) framework. It is known that the link probability between two individuals may increase as their features become more similar, a tendency referred to as homophily. To this end, penalized pair-specific attributes, acting as a distance measure, are introduced to provide more powerful interpretation and improve link prediction accuracy; the resulting models are named penalized homophily latent space models (PHLSM). The proposed models also accommodate the in-degree heterogeneity of directed scale-free networks by embedding popularity scales. We also introduce a LASSO-based PHLSM to produce an accurate and sparse model for high-dimensional covariates. We perform Bayesian inference using MCMC algorithms. The finite-sample performance of the proposed models is evaluated on three benchmark simulation datasets and two real data examples. Our methods are competitive and interpretable, and they outperform existing approaches for fitting directed networks.
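As a minimal illustration of the general latent space idea underlying models such as PHLSM (not the authors' exact specification), the sketch below computes a logistic link probability that decays with the distance between two nodes' latent positions, so similar nodes (homophily) are more likely to be linked. The function name and the `alpha`/`beta` parameters are illustrative assumptions.

```python
import numpy as np

def link_probability(z_i, z_j, alpha=0.0, beta=1.0):
    """Logistic link probability in a generic latent space model:
    P(i -> j) = sigmoid(alpha - beta * ||z_i - z_j||).
    Nodes with nearby latent positions get higher link probability,
    which is the homophily effect described in the abstract."""
    dist = np.linalg.norm(z_i - z_j)
    return 1.0 / (1.0 + np.exp(-(alpha - beta * dist)))
```

In a full model such as PHLSM, the distance term would be built from penalized pair-specific attributes and combined with node-specific popularity scales; here a plain Euclidean distance stands in for both.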


2020 ◽  
Vol 74 (3) ◽  
pp. 324-341
Author(s):  
Silvia D'Angelo ◽  
Marco Alfò ◽  
Thomas Brendan Murphy

2020 ◽  
Vol 34 (04) ◽  
pp. 5289-5297
Author(s):  
Luke J. O'Connor ◽  
Muriel Medard ◽  
Soheil Feizi

A latent space model for a family of random graphs assigns real-valued vectors to the nodes of a graph such that edge probabilities are determined by latent positions. Latent space models provide a natural statistical framework for graph visualization and clustering. A latent space model of particular interest is the Random Dot Product Graph (RDPG), which can be fit using an efficient spectral method; however, this method is based on a heuristic that can fail, even in simple cases. Here, we consider a closely related latent space model, the Logistic RDPG, which uses a logistic link function to map from latent positions to edge likelihoods. For this model, we show that asymptotically exact maximum likelihood inference of the latent position vectors can be achieved using an efficient spectral method. Our method involves computing the top eigenvectors of a normalized adjacency matrix and scaling the eigenvectors using a regression step; this novel regression scaling step is an essential part of the proposed method. In simulations, we show that our method is more accurate and more robust than common alternatives. We also demonstrate the effectiveness of our approach on standard real-world networks: the karate club and political blogs networks.
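The spectral recipe in the abstract (top eigenvectors of a normalized adjacency matrix, followed by a scaling step) can be sketched as follows. This is a generic spectral embedding, not the paper's method: in particular, the paper's regression-based eigenvector scaling is replaced here with the common square-root-of-eigenvalue scaling, and the function name is an assumption.

```python
import numpy as np

def spectral_embedding(A, d):
    """Embed the nodes of an undirected graph (adjacency matrix A)
    into d dimensions using the top-d eigenvectors of the
    degree-normalized adjacency matrix D^{-1/2} A D^{-1/2}."""
    deg = A.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1.0)))
    L = d_inv_sqrt @ A @ d_inv_sqrt          # normalized adjacency
    vals, vecs = np.linalg.eigh(L)           # ascending eigenvalues
    top = np.argsort(-vals)[:d]              # indices of top-d eigenpairs
    # Generic sqrt-eigenvalue scaling; the Logistic RDPG method instead
    # scales each eigenvector via a regression step.
    return vecs[:, top] * np.sqrt(np.abs(vals[top]))
```

The resulting rows are latent position estimates that can be fed directly into visualization or clustering, as the abstract suggests.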


2020 ◽  
Vol 34 (04) ◽  
pp. 4304-4311
Author(s):  
Mingi Ji ◽  
Weonyoung Joo ◽  
Kyungwoo Song ◽  
Yoon-Yeong Kim ◽  
Il-Chul Moon

Recent studies have shown that sequential recommendation is improved by attention mechanisms. Following this development, we propose Relation-Aware Kernelized Self-Attention (RKSA), which adopts the self-attention mechanism of the Transformer augmented with a probabilistic model. The original self-attention of the Transformer is deterministic and not relation-aware. Therefore, we introduce a latent space into the self-attention; the latent space models the recommendation context from relations as a multivariate skew-normal distribution with a kernelized covariance matrix built from co-occurrences, item characteristics, and user information. This work merges the Transformer's self-attention with sequential recommendation by adding a probabilistic model of the recommendation task's specifics. We evaluated RKSA on the benchmark datasets, and it shows significant improvements over recent baseline models. RKSA is also able to produce a latent space model that explains the reasons for a recommendation.
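To make "relation-aware self-attention" concrete, the sketch below shows standard scaled dot-product attention with an additive relation bias over item pairs. This is a deliberately simplified stand-in: RKSA's actual mechanism draws the relation term from a multivariate skew-normal with a kernelized covariance, which is not reproduced here, and all names are illustrative.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relation_aware_attention(Q, K, V, rel_bias):
    """Scaled dot-product self-attention with an additive pairwise
    relation bias (shape: seq_len x seq_len). In RKSA the analogous
    term is probabilistic and kernelized from co-occurrences, item
    characteristics, and user information; here it is a plain matrix."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + rel_bias
    return softmax(scores, axis=-1) @ V
```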


2020 ◽  
Vol 112 ◽  
pp. 103792 ◽  
Author(s):  
Fernando Linardi ◽  
Cees Diks ◽  
Marco van der Leij ◽  
Iuri Lazier

2019 ◽  
Vol 10 (1) ◽  
Author(s):  
Xinqiang Ding ◽  
Zhengting Zou ◽  
Charles L. Brooks III

Protein sequences contain rich information about protein evolution, fitness landscapes, and stability. Here we investigate how latent space models trained using variational auto-encoders can infer these properties from sequences. Using both simulated and real sequences, we show that the low-dimensional latent space representation of sequences, calculated using the encoder model, captures both evolutionary and ancestral relationships between sequences. Together with experimental fitness data and Gaussian process regression, the latent space representation also enables learning the protein fitness landscape in a continuous low-dimensional space. Moreover, the model is also useful in predicting protein mutational stability landscapes and in quantifying the importance of stability in shaping protein evolution. Overall, we illustrate that latent space models learned using variational auto-encoders provide a mechanism for exploring the rich data contained in protein sequences regarding evolution, fitness, and stability, and hence are well suited to help guide protein engineering efforts.
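The encoder half of such a variational auto-encoder can be sketched as follows: a one-hot-encoded protein sequence is mapped to the mean and log-variance of a low-dimensional Gaussian latent code, sampled via the reparameterization trick. This is a toy illustration with untrained linear weights, not the authors' architecture; a real model learns the weights jointly with a decoder by maximizing the ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_hot(seq, alphabet="ACDEFGHIKLMNPQRSTVWY"):
    """One-hot encode a protein sequence over the 20 amino acids
    and flatten to a single feature vector."""
    idx = {a: i for i, a in enumerate(alphabet)}
    x = np.zeros((len(seq), len(alphabet)))
    for pos, aa in enumerate(seq):
        x[pos, idx[aa]] = 1.0
    return x.ravel()

def encode(x, W_mu, W_logvar):
    """VAE encoder sketch: linear maps give the latent Gaussian's
    mean and log-variance, then the reparameterization trick
    (z = mu + sigma * eps) produces a differentiable sample."""
    mu = x @ W_mu
    logvar = x @ W_logvar
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

The low-dimensional `z` vectors are the latent space representations that, per the abstract, organize sequences by evolutionary relationship and support downstream fitness regression.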


2019 ◽  
Vol 34 (3) ◽  
pp. 428-453 ◽  
Author(s):  
Anna L. Smith ◽  
Dena M. Asta ◽  
Catherine A. Calder
