A latent variable model for two-dimensional canonical correlation analysis and the variational inference

2020, Vol 24 (12), pp. 8737-8749
Author(s):  
Mehran Safayani ◽  
Saeid Momenzadeh ◽  
Abdolreza Mirzaei ◽  
Masoomeh Sadat Razavi

Author(s):  
Kumud Arora ◽  
Poonam Garg

Face pose recognition is one of the challenging areas in computer vision. Pose variation changes the appearance information of the face. Maximizing intra-subject correlation widens inter-subject differences, which in turn helps achieve pose invariance. In this paper, the authors propose to maximize cross-pose correlation for cross-pose recognition by using the logically concatenated cross binary pattern (LC-CBP) descriptor together with two-dimensional canonical correlation analysis (2DCCA). The LC-CBP descriptor extracts local texture details of face images with low computational complexity, while 2DCCA explicitly maximizes the correlation between the two feature views to retain the most informative content. Considering features jointly via 2DCCA establishes a better correspondence between a discrete set of non-frontal poses and the frontal pose of the same subject. Experimental results demonstrate that the LC-CBP descriptor combined with intensity values under 2DCCA improves the correlation.
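For orientation, the sketch below shows a plain alternating-eigendecomposition formulation of 2DCCA in NumPy. It is an assumption-laden illustration, not the authors' implementation: the function name two_d_cca and its parameters are made up, and the LC-CBP feature extraction step is omitted (the inputs are simply stacks of paired 2-D descriptor maps).

```python
import numpy as np

def two_d_cca(X, Y, d=5, n_iter=10, reg=1e-6):
    """X, Y: paired stacks of 2-D feature maps, shape (n_samples, h, w)."""
    n, h, w = X.shape
    Xc = X - X.mean(axis=0)            # center both views
    Yc = Y - Y.mean(axis=0)
    Rx = np.eye(w)[:, :d]              # right transforms, identity-column init
    Ry = np.eye(w)[:, :d]
    for _ in range(n_iter):
        # Fix the right transforms; solve a standard CCA eigen-problem for the left ones
        Cxy = sum(x @ Rx @ (y @ Ry).T for x, y in zip(Xc, Yc)) / n
        Cxx = sum(x @ Rx @ (x @ Rx).T for x in Xc) / n + reg * np.eye(h)
        Cyy = sum(y @ Ry @ (y @ Ry).T for y in Yc) / n + reg * np.eye(h)
        vals, vecs = np.linalg.eig(np.linalg.inv(Cxx) @ Cxy @ np.linalg.inv(Cyy) @ Cxy.T)
        Lx = np.real(vecs[:, np.argsort(-np.real(vals))[:d]])
        Ly = np.real(np.linalg.inv(Cyy) @ Cxy.T @ Lx)
        Ly /= np.linalg.norm(Ly, axis=0)
        # Fix the left transforms; solve the symmetric problem for the right ones
        Dxy = sum((Lx.T @ x).T @ (Ly.T @ y) for x, y in zip(Xc, Yc)) / n
        Dxx = sum((Lx.T @ x).T @ (Lx.T @ x) for x in Xc) / n + reg * np.eye(w)
        Dyy = sum((Ly.T @ y).T @ (Ly.T @ y) for y in Yc) / n + reg * np.eye(w)
        vals, vecs = np.linalg.eig(np.linalg.inv(Dxx) @ Dxy @ np.linalg.inv(Dyy) @ Dxy.T)
        Rx = np.real(vecs[:, np.argsort(-np.real(vals))[:d]])
        Ry = np.real(np.linalg.inv(Dyy) @ Dxy.T @ Rx)
        Ry /= np.linalg.norm(Ry, axis=0)
    return Lx, Rx, Ly, Ry

# Usage: a non-frontal descriptor map x is projected as Lx.T @ x @ Rx and matched
# against frontal maps of enrolled subjects projected with Ly and Ry.
```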


Author(s):  
Aghiles Salah ◽  
Hady W. Lauw

Personalized recommendation has proven to be very promising in modeling the preference of users over items. However, most existing work in this context focuses primarily on modeling user-item interactions, which tend to be very sparse. We propose to further leverage the item-item relationships that may reflect various aspects of items that guide users' choices. Intuitively, items that occur within the same "context" (e.g., browsed in the same session, purchased in the same basket) are likely related in some latent aspect. Therefore, accounting for the item's context would complement the sparse user-item interactions by extending a user's preference to other items of similar aspects. To realize this intuition, we develop Collaborative Context Poisson Factorization (C2PF), a new Bayesian latent variable model that seamlessly integrates contextual relationships among items into a personalized recommendation approach. We further derive a scalable variational inference algorithm to fit C2PF to preference data. Empirical results on real-world datasets show evident performance improvements over strong factorization models.
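Purely as an illustration, the following NumPy sketch shows the plain Poisson-factorization backbone that C2PF extends; the contextual item-item coupling and the variational inference updates derived in the paper are omitted, and the variable names and Gamma hyperparameters are assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, n_factors = 100, 200, 10

# Gamma priors over non-negative user preferences and item attributes
# (shape/scale values here are arbitrary placeholders)
theta = rng.gamma(shape=0.3, scale=1.0, size=(n_users, n_factors))
beta = rng.gamma(shape=0.3, scale=1.0, size=(n_items, n_factors))

# Observed interaction counts are Poisson with rate <theta_u, beta_i>
counts = rng.poisson(theta @ beta.T)

# Recommendation: rank items for a user by the expected count under the model.
# In C2PF the item representation would additionally absorb signal from items
# that co-occur with it in the same context (session, basket, ...).
def recommend(u, k=5):
    return np.argsort(-(theta[u] @ beta.T))[:k]

print(recommend(0))
```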


2018, Vol 111, pp. 101-108
Author(s):  
Nandakishor Desai ◽  
Abd-Krim Seghouane ◽  
Marimuthu Palaniswami

2013, Vol 25 (4), pp. 979-1005
Author(s):  
Yusuke Fujiwara ◽  
Yoichi Miyawaki ◽  
Yukiyasu Kamitani

Neural encoding and decoding provide perspectives for understanding neural representations of sensory inputs. Recent functional magnetic resonance imaging (fMRI) studies have succeeded in building prediction models for encoding and decoding numerous stimuli by representing a complex stimulus as a combination of simple elements. While arbitrary visual images were reconstructed using a modular model that combined the outputs of decoder modules for multiscale local image bases (elements), the shapes of the image bases were heuristically determined. In this work, we propose a method to establish mappings between the stimulus and the brain by automatically extracting modules from measured data. We develop a model based on Bayesian canonical correlation analysis, in which each module is modeled by a latent variable that relates a set of pixels in a visual image to a set of voxels in an fMRI activity pattern. The estimated mapping from a latent variable to pixels can be regarded as an image basis. We show that the model estimates a modular representation with spatially localized multiscale image bases. Further, using the estimated mappings, we derive encoding and decoding models that produce accurate predictions for brain activity and stimulus images. Our approach thus provides a novel means of revealing neural representations of stimuli by automatically extracting modules, which can be used to generate effective prediction models for encoding and decoding.
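As a rough stand-in for the paper's Bayesian CCA (which it does not reproduce), the sketch below uses classical CCA from scikit-learn on synthetic pixel/voxel arrays to illustrate the encoding and decoding directions; all array names, sizes, and the synthetic data are made up for the example.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_trials, n_pixels, n_voxels, n_latents = 200, 100, 300, 10

# Synthetic stand-ins: a shared latent source drives both stimulus pixels and voxels
Z = rng.normal(size=(n_trials, n_latents))
pixels = Z @ rng.normal(size=(n_latents, n_pixels)) + 0.1 * rng.normal(size=(n_trials, n_pixels))
voxels = Z @ rng.normal(size=(n_latents, n_voxels)) + 0.1 * rng.normal(size=(n_trials, n_voxels))

# "Encoding": predict brain activity from a stimulus image
encoder = CCA(n_components=n_latents).fit(pixels, voxels)
pred_voxels = encoder.predict(pixels[:5])

# "Decoding": swap the roles and predict the stimulus from measured activity
decoder = CCA(n_components=n_latents).fit(voxels, pixels)
pred_pixels = decoder.predict(voxels[:5])

# Columns of decoder.y_loadings_ loosely play the role of the image bases that
# the Bayesian model of the paper extracts automatically and sparsely.
```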

