A COMPUTATIONAL APPROACH TO GENERATE DESIGN WITH SPECIFIC STYLE

2021, Vol 1, pp. 21-30
Author(s): Da Wang, Jiaqi Li, Zhen Ge, Ji Han

Abstract
Creativity is crucial in design. In recent years, a growing number of computational methods have been applied to improve design creativity. This paper explores an approach to generating creative design images with a specific feature or design style. A generative adversarial network (GAN) model is used in the approach to learn the specific design style. Target products are projected into the model's latent space to transfer their styles and generate images, so the generated images combine the features of the specific design style with the features of the target product. In the experiment, the approach uses the generated images to inspire human designers to produce creative designs in the corresponding styles. According to a preliminary verification with participants, the generated images bring novelty and surprise, which has a positive impact on human creativity.
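To illustrate the latent-space projection step the abstract describes, the following minimal PyTorch sketch optimizes a latent code so that a pretrained generator reproduces a target product image; the generator interface, the pixel-wise loss, and all hyperparameters are assumptions for illustration, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def project_to_latent(generator, target_img, latent_dim=512,
                          steps=500, lr=0.05, device="cpu"):
        """Optimize z so that generator(z) approximates target_img.

        The output then inherits the style the generator was trained
        on while keeping the target product's features.
        """
        generator.eval()
        z = torch.randn(1, latent_dim, device=device, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            fake = generator(z)                  # image in the learned style
            loss = F.mse_loss(fake, target_img)  # match the target product
            loss.backward()
            opt.step()
        with torch.no_grad():
            return z.detach(), generator(z)

In practice, a perceptual or feature-space loss is often substituted for the pixel-wise MSE to obtain more faithful projections.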

2020, Vol 34 (03), pp. 2661-2668
Author(s): Chuang Lin, Sicheng Zhao, Lei Meng, Tat-Seng Chua

Existing domain adaptation methods for visual sentiment classification are typically investigated under a single-source scenario, where the knowledge learned from a source domain with sufficient labeled data is transferred to a target domain of sparsely labeled or unlabeled data. In practice, however, data from a single source domain usually have limited volume and can hardly cover the characteristics of the target domain. In this paper, we propose a novel multi-source domain adaptation (MDA) method, termed Multi-source Sentiment Generative Adversarial Network (MSGAN), for visual sentiment classification. To handle data from multiple source domains, MSGAN learns to find a unified sentiment latent space where data from both the source and target domains share a similar distribution. This is achieved via cycle-consistent adversarial learning in an end-to-end manner. Extensive experiments on four benchmark datasets demonstrate that MSGAN significantly outperforms state-of-the-art MDA approaches for visual sentiment classification.
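As a simplified stand-in for the core idea, the sketch below adversarially aligns features from multiple source domains and a target domain in one latent space using a domain discriminator; MSGAN's full cycle-consistent pipeline is considerably more involved, and the network sizes and training loop here are assumptions.

    import torch
    import torch.nn as nn

    # shared encoder mapping every domain into a unified latent space
    encoder = nn.Sequential(nn.Linear(2048, 256), nn.ReLU(), nn.Linear(256, 64))
    # domain discriminator: predicts which domain a latent code came from
    domain_disc = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 3))

    ce = nn.CrossEntropyLoss()
    opt_d = torch.optim.Adam(domain_disc.parameters(), lr=1e-4)
    opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-4)

    def adversarial_step(batches):
        """batches: list of (features, domain_id), e.g. two sources + target."""
        feats = torch.cat([encoder(x) for x, _ in batches])
        labels = torch.cat([torch.full((x.size(0),), d, dtype=torch.long)
                            for x, d in batches])
        # discriminator learns to tell the domains apart
        loss_d = ce(domain_disc(feats.detach()), labels)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # encoder is updated to confuse the discriminator, so the
        # domain distributions become indistinguishable in latent space
        loss_e = -ce(domain_disc(feats), labels)
        opt_e.zero_grad(); loss_e.backward(); opt_e.step()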


Entropy, 2020, Vol 23 (1), pp. 11
Author(s): Carlos Tejeda-Ocampo, Armando López-Cuevas, Hugo Terashima-Marin

Deep interactive evolution (DeepIE) combines the capacity of interactive evolutionary computation (IEC) to capture a user’s preference with the domain-specific robustness of a trained generative adversarial network (GAN) generator, allowing the user to control the GAN output through evolutionary exploration of the latent space. However, the traditional GAN latent space presents feature entanglement, which limits the practicability of possible applications of DeepIE. In this paper, we implement DeepIE within a style-based generator from a StyleGAN model trained on the WikiArt dataset and propose StyleIE, a variation of DeepIE that takes advantage of the secondary disentangled latent space in the style-based generator. We performed two AB/BA crossover user tests that compared the performance of DeepIE against StyleIE for art generation. Self-rated evaluations of the performance were collected through a questionnaire. Findings from the tests suggest that StyleIE and DeepIE perform equally in tasks with open-ended goals with relaxed constraints, but StyleIE performs better in close-ended and more constrained tasks.
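The evolutionary exploration that DeepIE and StyleIE perform can be reduced to a recombination-and-mutation step over latent codes, sketched below; the operators and parameters are illustrative assumptions rather than StyleIE's exact implementation, and in StyleIE the codes live in the style-based generator's disentangled secondary latent space rather than the traditional one.

    import torch

    def evolve(selected_w, population=8, sigma=0.3):
        """One interactive-evolution step over user-selected latent codes.

        selected_w: tensor of shape (k, w_dim) holding the codes whose
        images the user preferred in the previous generation.
        """
        children = []
        for _ in range(population):
            i, j = torch.randint(len(selected_w), (2,))
            alpha = torch.rand(1)
            # crossover: random convex combination of two parents
            child = alpha * selected_w[i] + (1 - alpha) * selected_w[j]
            # Gaussian mutation keeps exploration going
            child = child + sigma * torch.randn_like(child)
            children.append(child)
        return torch.stack(children)

Feeding each child through the generator and letting the user select again closes the interactive loop.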


2021
Author(s): Sofia Valdez, Carolyn Seepersad, Sandilya Kambampati

Abstract
Rapid advances in additive manufacturing and topology optimization enable unprecedented levels of design freedom for realizing complex structures. The challenge is that this increasing design freedom is accompanied by increasing complexity, such that it can become difficult for either computational algorithms or human designers alone to search these expansive design spaces effectively. Our goal is to establish an interactive design framework that is both data-driven and designer-guided, so that human designers can work together with computational algorithms to search structural design spaces more effectively. The framework uses classical topology optimization techniques to build a library of designs for a class of problems. A conditional generative adversarial network (cGAN) is trained to establish a latent representation of the library and to support rapid exploration of candidate designs. The library of designs is clustered based on visual similarity; the user selects clusters with desirable features, and the underlying latent representation is manipulated to generate visually similar candidate designs with adjustable levels of diversity or similarity to the selected clusters. The framework enables designers to use their expertise and intuition to guide the algorithm towards promising solutions by screening designs quickly and eliminating clusters of designs that may be undesirable for reasons that are difficult to embed within the optimization itself but are recognizable and significant to a human designer (e.g., secondary functionality, aesthetics).
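A hedged sketch of the cluster-guided sampling step follows: candidate designs are drawn from a Gaussian fitted to the latent codes of a user-selected cluster, with a diversity knob scaling the spread. The cGAN call signature and the helper itself are hypothetical, included only to make the idea concrete.

    import torch

    def sample_near_cluster(generator, cond, cluster_latents, n=16, diversity=0.5):
        """Generate candidates resembling a selected cluster of designs.

        cluster_latents: latent codes of the cluster's designs, (m, z_dim)
        cond: conditioning vector (e.g., boundary conditions), (1, c_dim)
        diversity: scales how far samples stray from the cluster mean
        """
        mu = cluster_latents.mean(dim=0)
        std = cluster_latents.std(dim=0)
        z = mu + diversity * std * torch.randn(n, mu.size(0))
        cond_batch = cond.expand(n, -1)      # same condition for every sample
        return generator(z, cond_batch)      # candidate structural designs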


2020, Vol 34 (07), pp. 11515-11522
Author(s): Kaiyi Lin, Xing Xu, Lianli Gao, Zheng Wang, Heng Tao Shen

Zero-Shot Cross-Modal Retrieval (ZS-CMR) is an emerging research hotspot that aims to retrieve data of new classes across different modalities. It is challenging due to not only the heterogeneous distributions across modalities but also the inconsistent semantics across seen and unseen classes. A handful of recently proposed methods borrow the idea from zero-shot learning: they exploit word embeddings of class labels (i.e., class-embeddings) as a common semantic space and use a generative adversarial network (GAN) to capture the underlying multimodal data structure and strengthen the relations between input data and the semantic space, so as to generalize across seen and unseen classes. In this paper, we propose a novel method termed Learning Cross-Aligned Latent Embeddings (LCALE) as an alternative to these GAN-based methods for ZS-CMR. Instead of using class-embeddings as the semantic space, our method seeks a shared low-dimensional latent space of input multimodal features and class-embeddings via modality-specific variational autoencoders. Notably, we align the distributions learned from multimodal input features and from class-embeddings to construct latent embeddings that contain the essential cross-modal correlations associated with unseen classes. Effective cross-reconstruction and cross-alignment criteria are further developed to preserve class-discriminative information in the latent space, which improves retrieval efficiency and enables knowledge transfer to unseen classes. We evaluate our model on image-text retrieval using four benchmark datasets and on image-sketch retrieval using one large-scale dataset. The experimental results show that our method establishes new state-of-the-art performance for both tasks on all datasets.
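The sketch below illustrates, under stated assumptions, the cross-alignment and cross-reconstruction ideas: modality-specific VAE encoders output diagonal Gaussians, the image and class-embedding posteriors are pulled together with a Wasserstein-style distance, and one modality's decoder reconstructs from the other's latent. This is a simplified stand-in, not LCALE's full objective.

    import torch
    import torch.nn.functional as F

    def reparameterize(mu, logvar):
        """Sample z ~ N(mu, sigma^2) with the reparameterization trick."""
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def cross_alignment_loss(mu_img, logvar_img, mu_cls, logvar_cls):
        """2-Wasserstein-style distance between two diagonal Gaussians."""
        std_img = torch.exp(0.5 * logvar_img)
        std_cls = torch.exp(0.5 * logvar_cls)
        return ((mu_img - mu_cls) ** 2
                + (std_img - std_cls) ** 2).sum(dim=1).mean()

    def cross_reconstruction_loss(img_decoder, mu_cls, logvar_cls, x_img):
        """Decode the class-embedding latent back into image-feature space."""
        z_cls = reparameterize(mu_cls, logvar_cls)
        return F.mse_loss(img_decoder(z_cls), x_img)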


Electronics, 2020, Vol 9 (4), pp. 688
Author(s): Sung-Wook Park, Jun-Ho Huh, Jong-Chan Kim

In the field of deep learning, generative models attracted little attention until generative adversarial networks (GANs) appeared. In 2014, Ian Goodfellow proposed GANs, which use structures and objective functions different from those of existing generative models. GANs use two neural networks: a generator that creates realistic images, and a discriminator that distinguishes whether its input is real or synthetic. If training proceeds without problems, GANs can generate images whose authenticity even experts find difficult to judge. GANs are currently among the most actively researched subjects in computer vision, covering image style translation, synthesis, and generation; various models have been unveiled, and the issues raised are being addressed one by one. In image synthesis, BEGAN (Boundary Equilibrium Generative Adversarial Network), which outperforms previously announced GANs, learns the latent space of the image while balancing the generator and discriminator. Nonetheless, BEGAN suffers from mode collapse, wherein the generator produces only a few images or even a single one. Although BEGAN-CS (Boundary Equilibrium Generative Adversarial Network with Constrained Space), which improved the loss function, was later introduced, it did not solve mode collapse. The discriminator of BEGAN-CS is an autoencoder (AE), which cannot create a particularly useful or structured latent space, and its compression performance is also poor. In this paper, we consider this characteristic of the AE to be related to the occurrence of mode collapse, and we therefore use a variational autoencoder (VAE), which adds statistical techniques to the AE. In our experiments, the proposed model did not suffer mode collapse and converged to a better state than BEGAN-CS.
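To make the paper's central change concrete, here is a hedged sketch of a BEGAN-style discriminator rebuilt as a VAE, whose KL term regularizes the latent space that a plain AE leaves unstructured; layer sizes are assumptions, and BEGAN's equilibrium mechanism is omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAEDiscriminator(nn.Module):
        """Autoencoding discriminator with a variational bottleneck."""

        def __init__(self, in_dim=784, hidden=256, z_dim=64):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            self.mu = nn.Linear(hidden, z_dim)
            self.logvar = nn.Linear(hidden, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, hidden), nn.ReLU(),
                                     nn.Linear(hidden, in_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
            recon = self.dec(z)
            recon_err = F.l1_loss(recon, x)  # BEGAN scores by reconstruction
            # KL term: the statistical regularizer the plain AE lacks
            kl = -0.5 * torch.mean(
                torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1))
            return recon_err, kl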


2020, Vol 10 (23), pp. 8415
Author(s): Jeongmin Lee, Younkyoung Yoon, Junseok Kwon

We propose a novel generative adversarial network for class-conditional data augmentation (GANDA) to mitigate data imbalance problems in image classification tasks. The proposed GANDA generates minority-class data by exploiting majority-class information to enhance the classification accuracy of minority classes. For stable GAN training, we introduce a new denoising-autoencoder initialization with explicit class conditioning in the latent space, which enables the generation of definite samples. The generated samples are visually realistic and of high resolution. Experimental results demonstrate that the proposed GANDA considerably improves classification accuracy, especially on highly imbalanced versions of standard benchmark datasets (i.e., MNIST and CelebA). The generated samples can easily be used to train conventional classifiers and enhance their classification accuracy.
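A minimal sketch of the kind of class-conditioned denoising-autoencoder pretraining the abstract describes appears below; the layer sizes, noise level, and conditioning-by-concatenation are assumptions for illustration, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    n_classes, z_dim = 10, 64
    encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, z_dim))
    class_emb = nn.Embedding(n_classes, z_dim)   # explicit class conditioning
    decoder = nn.Sequential(nn.Linear(2 * z_dim, 256), nn.ReLU(),
                            nn.Linear(256, 784))
    opt = torch.optim.Adam([*encoder.parameters(), *class_emb.parameters(),
                            *decoder.parameters()], lr=1e-3)

    def dae_step(x, y, noise=0.3):
        """One denoising step: corrupt, encode, condition on class, rebuild."""
        x_noisy = x + noise * torch.randn_like(x)
        z = torch.cat([encoder(x_noisy), class_emb(y)], dim=1)
        loss = F.mse_loss(decoder(z), x)   # reconstruct the clean input
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

The pretrained weights can then initialize the GAN before adversarial training begins.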

