Cross Domain Image Transformation Using Effective Latent Space Association

Author(s):  
Naeem Ul Islam ◽  
Sukhan Lee


Author(s):  
Yinan Zhang ◽  
Yong Liu ◽  
Peng Han ◽  
Chunyan Miao ◽  
Lizhen Cui ◽  
...  

Cross-domain recommendation methods usually transfer knowledge across different domains implicitly, by sharing model parameters or learning parameter mappings in the latent space. Differing from previous studies, this paper focuses on learning an explicit mapping between a user's behaviors (i.e., interaction itemsets) in different domains during the same temporal period. We propose a novel deep cross-domain recommendation model, called Cycle Generation Networks (CGN). Specifically, CGN employs two generators to construct the dual-direction personalized itemset mapping between a user's behaviors in two different domains over time. The generators are learned by optimizing the distance between the generated itemset and the real interacted itemset, as well as the cycle-consistent loss defined based on the dual-direction generation procedure. We have performed extensive experiments on real datasets to demonstrate the effectiveness of the proposed model, compared with existing single-domain and cross-domain recommendation methods.
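A minimal sketch of the dual-direction objective described above, assuming multi-hot itemset vectors and simple MLP generators; the module names, dimensions, and the use of binary cross-entropy as the itemset distance are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn as nn

class ItemsetGenerator(nn.Module):
    """Maps a multi-hot itemset from one domain to the other (illustrative MLP)."""
    def __init__(self, src_items: int, tgt_items: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(src_items, hidden), nn.ReLU(),
            nn.Linear(hidden, tgt_items), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

# Two generators for the dual-direction mapping A -> B and B -> A.
g_ab = ItemsetGenerator(src_items=1000, tgt_items=800)
g_ba = ItemsetGenerator(src_items=800, tgt_items=1000)
bce = nn.BCELoss()

def cgn_loss(itemset_a, itemset_b, lam=1.0):
    """Generation loss (distance to the real interacted itemset)
    plus a cycle-consistent loss over the dual-direction procedure."""
    fake_b = g_ab(itemset_a)
    fake_a = g_ba(itemset_b)
    gen_loss = bce(fake_b, itemset_b) + bce(fake_a, itemset_a)
    cycle_loss = bce(g_ba(fake_b), itemset_a) + bce(g_ab(fake_a), itemset_b)
    return gen_loss + lam * cycle_loss
```

The cycle term requires that mapping a user's itemset to the other domain and back reproduces the original itemset, which is what ties the two generators together.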


Author(s):  
Suk Kyoung Choi ◽  
Steve DiPaola ◽  
Hannu Töyrylä

Recent developments in neural network image processing motivate the question of how these technologies might better serve visual artists. Research goals to date have largely focused either on pastiche interpretations of what is framed as artistic “style” or on divulging heretofore unimaginable dimensions of algorithmic “latent space,” but have failed to address the process an artist might actually pursue when engaged in the reflective act of developing an image from imagination and lived experience. The tools, in other words, are constituted as research demonstrations rather than as tools of creative expression. In this article, the authors explore the phenomenology of the creative environment afforded by artificially intelligent image transformation and generation, drawn from autoethnographic reviews of the authors’ individual approaches to artificial intelligence (AI) art. They offer a post-phenomenology of “neural media” such that visual artists may begin to work with AI technologies in ways that support naturalistic processes of thinking about and interacting with computationally mediated interactive creation.


IEEE Access ◽  
2022 ◽  
pp. 1-1
Author(s):  
Taisei Hirakawa ◽  
Keisuke Maeda ◽  
Takahiro Ogawa ◽  
Satoshi Asamizu ◽  
Miki Haseyama

2019 ◽  
Author(s):  
Sadegh Mohammadi ◽  
Bing O'Dowd ◽  
Christian Paulitz-Erdmann ◽  
Linus Goerlitz

Variational autoencoders have emerged as one of the most common approaches to automating molecular generation. We seek to learn a cross-domain latent space that simultaneously captures chemical and biological information. To do so, we introduce the Penalized Variational Autoencoder, which operates directly on SMILES, a linear string representation of molecules, with a weight penalty term in the decoder to address the imbalance in the character distribution of SMILES strings. We find that this greatly improves upon previous variational autoencoder approaches in the quality of the latent space and in its ability to generalize to new chemistry. Next, we organize the latent space according to chemical and biological properties by jointly training the Penalized Variational Autoencoder with linear units. Extensive experiments on a range of tasks, including reconstruction, validity, and transferability, demonstrate that the proposed methods substantially outperform previous SMILES- and graph-based methods, and introduce a new way to generate molecules from a set of desired properties without prior knowledge of a chemical structure.
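As a rough illustration of the decoder weight penalty, a common way to counter a skewed character distribution is to weight each character's reconstruction loss by its inverse corpus frequency. The helper below is a hedged sketch of that idea; the function names and the exact penalty form are assumptions, not the paper's definition:

```python
import torch
import torch.nn.functional as F
from collections import Counter

def char_weights(smiles_corpus, vocab):
    """Inverse-frequency weights to counter the skewed character
    distribution of SMILES strings (e.g. 'C' and 'c' dominate)."""
    counts = Counter(ch for s in smiles_corpus for ch in s)
    total = sum(counts.values())
    freqs = torch.tensor([counts.get(ch, 1) / total for ch in vocab])
    w = 1.0 / freqs
    return w / w.sum() * len(vocab)  # normalize so the mean weight is ~1

def penalized_recon_loss(logits, targets, weights):
    """Per-character weighted cross-entropy reconstruction loss.
    logits: (batch, seq_len, vocab); targets: (batch, seq_len) token ids."""
    return F.cross_entropy(
        logits.transpose(1, 2),  # -> (batch, vocab, seq_len)
        targets,
        weight=weights,
    )
```

Rare characters (ring-closure digits, charges, stereochemistry markers) then contribute more to the loss, so the decoder cannot minimize it by predicting only the dominant carbon tokens.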


2020 ◽  
Vol 34 (07) ◽  
pp. 11856-11864
Author(s):  
Quang-Hieu Pham ◽  
Mikaela Angelina Uy ◽  
Binh-Son Hua ◽  
Duc Thanh Nguyen ◽  
Gemma Roig ◽  
...  

In this work, we present a novel method to learn a local cross-domain descriptor for 2D image and 3D point cloud matching. Our proposed method is a dual auto-encoder neural network that maps 2D and 3D input into a shared latent space representation. We show that such local cross-domain descriptors in the shared embedding are more discriminative than those obtained from individual training in the 2D and 3D domains. To facilitate the training process, we built a new dataset by collecting ≈1.4 million 2D-3D correspondences with various lighting conditions and settings from publicly available RGB-D scenes. Our descriptor is evaluated in three main experiments: 2D-3D matching, cross-domain retrieval, and sparse-to-dense depth estimation. Experimental results confirm the robustness of our approach as well as its competitive performance, not only in solving cross-domain tasks but also in generalizing to single-domain 2D and 3D tasks. Our dataset and code are released publicly at https://hkust-vgd.github.io/lcd.
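The sketch below illustrates the dual auto-encoder idea at a high level: two encoders map a 2D patch and a 3D point set into the same unit-norm embedding, aligned by a triplet-style matching term. Per-branch reconstruction decoders are omitted for brevity, and all module names and loss choices here are illustrative assumptions rather than the released implementation (see the project page above for the actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImageEncoder(nn.Module):
    """2D patch (B, 3, H, W) -> d-dimensional descriptor (illustrative CNN)."""
    def __init__(self, d=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, d)

    def forward(self, x):
        z = self.conv(x).flatten(1)
        return F.normalize(self.fc(z), dim=1)  # unit-length descriptor

class PointEncoder(nn.Module):
    """3D point set (B, N, 3) -> d-dimensional descriptor (PointNet-style)."""
    def __init__(self, d=256):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, d))

    def forward(self, pts):
        z = self.mlp(pts).max(dim=1).values  # symmetric max-pool over points
        return F.normalize(z, dim=1)

def matching_loss(z_2d, z_3d, margin=0.5):
    """Triplet-style loss pulling corresponding 2D/3D descriptors together;
    negatives are the other pairs in the batch (shifted by one)."""
    pos = (z_2d - z_3d).pow(2).sum(1)
    neg = (z_2d - z_3d.roll(1, dims=0)).pow(2).sum(1)
    return F.relu(pos - neg + margin).mean()
```

Because both branches share one embedding space, a descriptor computed from an image patch can be compared directly (e.g. by nearest-neighbor search) against descriptors computed from point cloud fragments.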


Author(s):  
Lei Guo ◽  
Li Tang ◽  
Tong Chen ◽  
Lei Zhu ◽  
Quoc Viet Hung Nguyen ◽  
...  

Shared-account Cross-domain Sequential Recommendation (SCSR) is the task of recommending the next item based on a sequence of recorded user behaviors, where multiple users share a single account and their behaviors are available in multiple domains. Existing work on SCSR mainly relies on mining sequential patterns via RNN-based models, which are not expressive enough to capture the relationships among multiple entities. Moreover, all existing algorithms try to bridge two domains via knowledge transfer in the latent space, leaving the explicit cross-domain graph structure unexploited. In this work, we propose a novel graph-based solution, namely DA-GCN, to address the above challenges. Specifically, we first link the users and items in each domain into a graph. Then, we devise a domain-aware graph convolution network to learn user-specific node representations. To fully account for users' domain-specific preferences on items, two novel attention mechanisms are further developed to selectively guide the message-passing process. Extensive experiments on two real-world datasets demonstrate the superiority of our DA-GCN method.
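A hedged sketch of one attention-guided message-passing layer of the kind described above, operating on a user-item graph given as an edge list; the layer name, the attention form, and the residual update are illustrative assumptions, not DA-GCN's exact design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainAwareGCNLayer(nn.Module):
    """One message-passing layer over a user-item graph where an attention
    score selectively weights each neighbor's message before aggregation."""
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(dim, dim)       # transforms neighbor features
        self.att = nn.Linear(2 * dim, 1)     # scores each (node, message) pair

    def forward(self, h, edges):
        # h: (num_nodes, dim) node embeddings; edges: (num_edges, 2) long
        # tensor of (source, destination) index pairs.
        src, dst = edges[:, 0], edges[:, 1]
        m = self.msg(h[src])                              # message per edge
        score = self.att(torch.cat([h[dst], m], dim=1))   # attention logit
        # Softmax over each destination node's incoming edges, computed
        # with exp + scatter-sum (constant shift keeps exp stable).
        w = torch.exp(score - score.max())
        denom = torch.zeros(h.size(0), 1).index_add_(0, dst, w)
        w = w / (denom[dst] + 1e-9)
        out = torch.zeros_like(h).index_add_(0, dst, w * m)
        return F.relu(out + h)                            # residual update
```

Stacking two such layers with separate parameters per domain gives each user node a representation that aggregates its neighborhood in both domains while the attention weights down-weight items irrelevant to that account's preferences.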

