Design Automation by Integrating Generative Adversarial Networks and Topology Optimization

Author(s):  
Sangeun Oh ◽  
Yongsu Jung ◽  
Ikjin Lee ◽  
Namwoo Kang

Recent advances in deep learning enable machines to learn existing designs on their own and to create new ones. Generative adversarial networks (GANs) are widely used to generate new images and data through unsupervised learning. However, applying GANs directly to product design has certain limitations: it requires a large amount of data, produces output of uneven quality, and does not guarantee engineering performance. To address these problems, this paper proposes a design automation process that combines GANs and topology optimization. The proposed process has been applied to automobile wheel design and shows that aesthetically superior and technically meaningful designs can be generated automatically without human intervention.
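As a rough illustration of how such a pipeline could be wired together, the sketch below samples candidate wheel images from a toy DCGAN-style generator and hands each one to a topology-optimization step. The generator architecture and the `topology_optimize` routine are placeholders for illustration, not the authors' implementation.

```python
# Hypothetical GAN -> topology-optimization pipeline sketch (names such as
# WheelGenerator and topology_optimize are placeholders, not the paper's code).
import torch

class WheelGenerator(torch.nn.Module):
    """Toy DCGAN-style generator mapping a latent vector to a 64x64 design image."""
    def __init__(self, latent_dim=100):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.ConvTranspose2d(latent_dim, 128, 4, 1, 0), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(128, 64, 4, 2, 1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(64, 32, 4, 2, 1), torch.nn.ReLU(),
            torch.nn.ConvTranspose2d(32, 1, 4, 4, 0), torch.nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))

def topology_optimize(init_density, volume_fraction=0.4, iters=50):
    """Placeholder for a SIMP-style topology optimization step that would
    refine the GAN output into a structurally valid design."""
    ...

generator = WheelGenerator()
z = torch.randn(8, 100)                                # 8 random design candidates
candidates = generator(z)                              # aesthetic concepts from the GAN
designs = [topology_optimize(c.squeeze(0)) for c in candidates]
```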

Leonardo ◽  
2021 ◽  
pp. 1-11
Author(s):  
Emily L. Spratt

Abstract Although recent advances in artificial intelligence for generating images with deep learning techniques, especially generative adversarial networks (GANs), have offered radically new opportunities for creative applications, there has been little investigation into their use as a tool for exploring the senses beyond vision alone. In an artistic collaboration that brought together Chef Alain Passard, art historian and data scientist Emily Spratt, and computer programmer Thomas Fan, photographs of the three-star Michelin plates from the Parisian restaurant Arpège were used as a springboard to explore the art of culinary presentation in the manner of the Renaissance painter Giuseppe Arcimboldo.


Author(s):  
Amey Thakur

Abstract: Deep learning's breakthrough in the field of artificial intelligence has resulted in the creation of a slew of deep learning models. One of these is the Generative Adversarial Network (GAN), which has emerged only recently. The goal of a GAN is to use unsupervised learning to model the distribution of the data and generate more realistic results. GANs allow deep representations to be learned in the absence of substantial labelled training data. Computer vision, language and video processing, and image synthesis are just a few of the applications that might benefit from these representations. The purpose of this research is to make the reader conversant with the GAN framework and to provide background on Generative Adversarial Networks, including the structure of both the generator and the discriminator, as well as the various GAN variants and their respective architectures. Applications of GANs are also discussed with examples.
Keywords: Generative Adversarial Networks (GANs), Generator, Discriminator, Supervised and Unsupervised Learning, Discriminative and Generative Modelling, Backpropagation, Loss Functions, Machine Learning, Deep Learning, Neural Networks, Convolutional Neural Network (CNN), Deep Convolutional GAN (DCGAN), Conditional GAN (cGAN), Information Maximizing GAN (InfoGAN), Stacked GAN (StackGAN), Pix2Pix, Wasserstein GAN (WGAN), Progressive Growing GAN (ProGAN), BigGAN, StyleGAN, CycleGAN, Super-Resolution GAN (SRGAN), Image Synthesis, Image-to-Image Translation.
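To make the generator/discriminator structure concrete, here is a minimal adversarial training loop in PyTorch on a toy one-dimensional dataset; it is an illustrative sketch of the standard GAN recipe, not code from the paper.

```python
# Minimal GAN sketch: the generator maps noise to samples, the discriminator
# scores samples as real or fake, and the two are trained adversarially.
import torch
import torch.nn as nn

latent_dim = 16
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0          # toy "real" data: N(3, 0.5)
    fake = G(torch.randn(64, latent_dim))

    # Discriminator update: push real towards 1, fake towards 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to fool the discriminator (fake towards 1).
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```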


2021 ◽  
pp. 1-32
Author(s):  
Mohammad Mahdi Behzadi ◽  
Horea T. Ilies

Abstract Many machine learning methods have recently been developed to circumvent the high computational cost of gradient-based topology optimization. These methods typically require extensive and costly datasets for training, have difficulty generalizing to unseen boundary and loading conditions and to new domains, and do not take the topological constraints of the predictions into consideration, which produces predictions with inconsistent topologies. We present a deep learning method based on generative adversarial networks for generative design exploration. The proposed method combines the generative power of conditional GANs with the knowledge transfer capabilities of transfer learning methods to predict optimal topologies for unseen boundary conditions. We also show that the knowledge transfer capabilities embedded in the design of the proposed algorithm significantly reduce the size of the training dataset compared to traditional deep neural or adversarial networks. Moreover, we formulate a topological loss function based on the bottleneck distance obtained from the persistence diagram of the structures and demonstrate a significant improvement in the topological connectivity of the predicted structures. We use numerous examples to explore the efficiency and accuracy of the proposed approach for both seen and unseen boundary conditions in 2D.
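The sketch below illustrates the two ingredients in miniature: freezing a pretrained encoder while fine-tuning the decoder for unseen boundary conditions, and adding a topological penalty to the reconstruction loss. The adversarial term is omitted for brevity, and `topological_loss` is a placeholder for the bottleneck-distance computation (which would come from a persistent-homology library), so everything here is an assumption rather than the authors' code.

```python
# Transfer learning + topological penalty, heavily simplified and hedged.
import torch
import torch.nn as nn

class TopologyGenerator(nn.Module):
    """Toy conditional generator: boundary/loading-condition maps -> density field."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cond):
        return self.decoder(self.encoder(cond))

def topological_loss(pred, target):
    """Placeholder for the bottleneck distance between the persistence
    diagrams of predicted and reference structures."""
    return torch.tensor(0.0, requires_grad=True)

model = TopologyGenerator()
# Transfer learning: keep the pretrained encoder frozen, adapt the decoder only.
for p in model.encoder.parameters():
    p.requires_grad = False
opt = torch.optim.Adam(model.decoder.parameters(), lr=1e-4)

cond = torch.randn(4, 2, 64, 64)       # unseen boundary conditions (toy data)
target = torch.rand(4, 1, 64, 64)      # reference optimal topologies (toy data)

pred = model(cond)
loss = nn.functional.l1_loss(pred, target) + 0.1 * topological_loss(pred, target)
opt.zero_grad(); loss.backward(); opt.step()
```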


2020 ◽  
Vol 8 (6) ◽  
pp. 2037-2040

Any image we perceive on a screen is composed of three separate channels: R, G, and B. Together, these three channels give an image its colour. Photographs taken in earlier times, however, exist only in grayscale. Converting a grayscale image to colour traditionally requires a Photoshop professional and can take hours of manual work. As an alternative, we propose a fully automatic method that produces lively and realistic colourizations. Generative adversarial networks (GANs) are an unsupervised learning approach in machine learning that automatically discovers and learns the regularities or patterns in input data in order to generate output. In our case, a grayscale image is converted to colour with the help of GANs.
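A rough picture of how such a colourization GAN could be set up, assuming a pix2pix-style conditional formulation (an assumption on our part, not a detail taken from the abstract): the generator maps the single grayscale channel to three colour channels, and the discriminator scores grayscale/colour pairs.

```python
# Illustrative pix2pix-style colourization sketch with toy data.
import torch
import torch.nn as nn

G = nn.Sequential(                       # grayscale (1 ch) -> colour (3 ch)
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(                       # (grayscale + colour) pair -> real/fake score map
    nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)

gray = torch.randn(8, 1, 64, 64)         # toy batch of grayscale inputs
real_rgb = torch.randn(8, 3, 64, 64)     # corresponding colour ground truth

fake_rgb = G(gray)
adv = nn.functional.binary_cross_entropy_with_logits(
    D(torch.cat([gray, fake_rgb], dim=1)), torch.ones(8, 1, 16, 16))
rec = nn.functional.l1_loss(fake_rgb, real_rgb)   # keeps colours close to ground truth
g_loss = adv + 100.0 * rec               # pix2pix-style weighting of the two terms
```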


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and others. Alongside their useful applications, an alarming concern regarding physical infrastructure security, safety and privacy has arisen due to their potential use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and drone audio samples artificially generated using a state-of-the-art deep learning technique known as the Generative Adversarial Network (GAN). Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, for drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings demonstrate the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
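As a hedged sketch of what an acoustic detection pipeline of this kind typically looks like (the exact architectures and features in the paper are not reproduced here), audio clips can be converted to mel-spectrograms and classified with a small CNN:

```python
# Toy acoustic drone detector: waveform -> mel-spectrogram -> CNN classifier.
import torch
import torch.nn as nn
import torchaudio

mel = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                     # 2 classes: drone vs. background
)

waveform = torch.randn(1, 16000)          # one second of (toy) audio at 16 kHz
spec = mel(waveform).unsqueeze(0)         # -> (batch=1, channel=1, n_mels, frames)
logits = classifier(torch.log1p(spec))    # log-compressed spectrogram as CNN input
```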


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Karim Armanious ◽  
Tobias Hepp ◽  
Thomas Küstner ◽  
Helmut Dittmann ◽  
Konstantin Nikolaou ◽  
...  

2021 ◽  
Author(s):  
Van Bettauer ◽  
Anna CBP Costa ◽  
Raha Parvizi Omran ◽  
Samira Massahi ◽  
Eftyhios Kirbizakis ◽  
...  

We present deep learning-based approaches for exploring the complex array of morphologies exhibited by the opportunistic human pathogen C. albicans. Our system, entitled Candescence, automatically detects C. albicans cells in differential interference contrast (DIC) microscopy images and labels each detected cell with one of nine vegetative, mating-competent or filamentous morphologies. The software is based on a fully convolutional one-stage object detector and exploits a novel cumulative curriculum-based learning strategy that stratifies our images by difficulty, from simple vegetative forms to more complex filamentous architectures. Candescence achieves very good performance on this difficult learning set, which has substantial intermixing between the predicted classes. To capture the essence of each C. albicans morphology, we develop models using generative adversarial networks and identify subcomponents of the latent space that control technical variables, developmental trajectories or morphological switches. We envision Candescence as a community meeting point for quantitative explorations of C. albicans morphology.
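The cumulative curriculum idea can be sketched as follows: training proceeds in stages, and each stage keeps all of the easier images from previous stages while adding a harder stratum. The stage names, file names, and the `train_one_epoch` placeholder are assumptions for illustration, not the Candescence code.

```python
# Cumulative curriculum-based training sketch (all names are placeholders).
from typing import List

def train_one_epoch(model, images: List[str]):
    """Placeholder for one training epoch of the one-stage object detector."""
    ...

# Images stratified by morphological difficulty, easiest first.
easy_vegetative_images = ["yeast_001.png", "yeast_002.png"]     # simple vegetative forms
mating_competent_images = ["mating_001.png"]                    # intermediate difficulty
filamentous_images = ["hyphae_001.png"]                         # hardest architectures

curriculum_stages = [easy_vegetative_images, mating_competent_images, filamentous_images]

model = object()   # stand-in for an FCOS-style fully convolutional one-stage detector
training_pool: List[str] = []
for stage in curriculum_stages:
    training_pool.extend(stage)          # cumulative: keep earlier, easier images
    for epoch in range(10):
        train_one_epoch(model, training_pool)
```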


Author(s):  
Priyanka Nandal

This work presents a simple method for motion transfer: given a source video of a subject (person) performing some movement, that motion is transferred to an amateur target subject performing different movements. Pose is used as an intermediate representation to perform this translation. To transfer the motion of the source subject to the target subject, the pose is extracted from the source subject, and the target subject is then generated by applying the learned pose-to-appearance mapping. For this translation, the video is treated as the set of images formed by all of its frames. Generative adversarial networks (GANs), an evolving field of deep learning, are used to transfer the motion from the source subject to the target subject.
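A minimal per-frame sketch of this pipeline, assuming a pose estimator and a pose-to-appearance generator already trained on the target subject; both `extract_pose` and `PoseToAppearanceGenerator` are hypothetical placeholders rather than the author's code:

```python
# Per-frame motion transfer: source frame -> pose -> generated target frame.
import torch
import torch.nn as nn

class PoseToAppearanceGenerator(nn.Module):
    """Toy generator: pose keypoint heatmaps -> image of the target subject."""
    def __init__(self, n_keypoints=18):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_keypoints, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_maps):
        return self.net(pose_maps)

def extract_pose(frame):
    """Placeholder for a pose estimator (e.g., an OpenPose-style network)
    returning 18 keypoint heatmaps for the source frame."""
    return torch.randn(1, 18, frame.shape[-2], frame.shape[-1])

generator = PoseToAppearanceGenerator()          # assumed trained on the target subject
source_video = [torch.randn(1, 3, 128, 128) for _ in range(4)]   # toy source frames

# Transfer: pose from each source frame drives the appearance of the target subject.
target_video = [generator(extract_pose(frame)) for frame in source_video]
```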

