Convergence and Optimality Analysis of Low-Dimensional Generative Adversarial Networks using Error Function Integrals

IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Graham W. Pulford ◽  
Kirill Kondrashov
2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Hui Liu ◽  
Tinglong Tang ◽  
Jake Luo ◽  
Meng Zhao ◽  
Baole Zheng ◽  
...  

Purpose
This study aims to address the challenge of training a detection model for a robot to detect abnormal samples in an industrial environment, where abnormal patterns are very rare.

Design/methodology/approach
The authors propose a new model with double encoder–decoder (DED) generative adversarial networks to detect anomalies when the model is trained without any abnormal patterns. The DED approach maps high-dimensional input images to a low-dimensional space, from which the latent variables are obtained. Minimizing the change in the latent variables during training helps the model learn the data distribution. Anomaly detection is achieved by calculating the distance between the two low-dimensional vectors obtained from the two encoders.

Findings
The proposed method achieves better accuracy and F1 score than traditional anomaly detection models.

Originality/value
A new architecture with a DED pipeline is designed to capture the distribution of images during training so that anomalous samples are accurately identified. A new weight function is introduced to control the proportion of losses in the encoding-reconstruction and adversarial phases to achieve better results. The proposed anomaly detection model achieves superior performance against prior state-of-the-art approaches.
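The anomaly-scoring step described above can be sketched as follows. The linear encoders, decoder, and dimensions here are hypothetical stand-ins for the trained DED networks; the score is simply the distance between the two low-dimensional latent vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear stand-ins for the trained networks; in the DED
# model these are deep convolutional encoders and a decoder.
E1 = rng.standard_normal((8, 64))   # first encoder: image -> latent z1
D = rng.standard_normal((64, 8))    # decoder: latent -> reconstructed image
E2 = rng.standard_normal((8, 64))   # second encoder: reconstruction -> latent z2

def anomaly_score(x):
    """Score = distance between the two low-dimensional latent vectors."""
    z1 = E1 @ x       # encode the input image
    x_hat = D @ z1    # reconstruct it
    z2 = E2 @ x_hat   # re-encode the reconstruction
    return float(np.linalg.norm(z1 - z2))

score = anomaly_score(rng.standard_normal(64))
```

Since training only sees normal samples, the reconstruction (and hence z2) should stay close to z1 for normal inputs, so larger scores flag anomalies.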


2019 ◽  
Vol 141 (11) ◽  
Author(s):  
Wei Chen ◽  
Mark Fuge

Abstract Real-world designs usually consist of parts with interpart dependencies, i.e., the geometry of one part depends on one or more other parts. We can represent such dependency in a part dependency graph. This paper presents a method for synthesizing these types of hierarchical designs using generative models learned from examples. It decomposes the problem of synthesizing the whole design into synthesizing each part separately while keeping the interpart dependencies satisfied. Specifically, the method constructs multiple generative models whose interaction is based on the part dependency graph. We then use the trained generative models to synthesize or explore each part design separately via a low-dimensional latent representation, conditioned on the corresponding parent part(s). We verify our model on multiple design examples with different interpart dependencies. We evaluate our model by analyzing the constraint satisfaction performance, the synthesis quality, the latent space quality, and the effects of part dependency depth and branching factor. This paper's techniques for capturing dependencies among parts lay the foundation for extending learned generative models to more realistic engineering systems where such relationships are widespread.
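A minimal sketch of synthesis over a part dependency graph, assuming a hypothetical three-part design and placeholder per-part generators (in the actual method each part has a trained conditional generative model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical part dependency graph: part -> list of parent parts,
# listed in topological order (parents before children).
dependencies = {"fuselage": [], "wing": ["fuselage"], "flap": ["wing"]}

LATENT = 4

def synthesize_part(parent_latents):
    """Stand-in for a trained conditional generator: maps this part's
    latent code plus its parents' latent codes to a part geometry."""
    z = rng.standard_normal(LATENT)
    cond = np.concatenate([z] + parent_latents) if parent_latents else z
    design = np.tanh(cond.sum()) * np.ones(3)  # placeholder geometry
    return z, design

def synthesize_design(dependencies):
    latents, designs = {}, {}
    for part, parents in dependencies.items():
        parent_z = [latents[p] for p in parents]  # condition on parent(s)
        latents[part], designs[part] = synthesize_part(parent_z)
    return designs

designs = synthesize_design(dependencies)
```

Because each child generator only sees its parents' latent codes, the whole design is synthesized part by part while the interpart dependencies propagate down the graph.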


2021 ◽  
Vol 6 (1) ◽  
pp. 1-5
Author(s):  
Adam Balint ◽  
Graham Taylor

Recent advances in Generative Adversarial Networks (GANs) have shown great progress on a large variety of tasks. A common technique used to yield greater diversity of samples is conditioning on class labels. Conditioning on high-dimensional structured or unstructured information has also been shown to improve generation results, e.g., image-to-image translation. The conditioning information is typically provided in the form of human annotations, which can be expensive and difficult to obtain in cases where domain experts are needed. In this paper, we present an alternative: conditioning on low-dimensional structured information that can be automatically extracted from the input without the need for human annotators. Specifically, we propose the Palette-conditioned Generative Adversarial Network (Pal-GAN), an architecture-agnostic model that conditions on both a colour palette and a segmentation mask for high-quality image synthesis. We show improvements in conditional consistency, intersection-over-union, and Fréchet inception distance scores. Additionally, we show that sampling colour palettes significantly changes the style of the generated images.
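One way to obtain such low-dimensional conditioning automatically is to cluster an image's pixels into a small colour palette. The k-means extractor below is a sketch of this idea, not necessarily the paper's exact procedure:

```python
import numpy as np

def extract_palette(image, k=4, iters=10, seed=0):
    """Extract a k-colour palette from an RGB image by k-means clustering
    of its pixels -- conditioning information obtained automatically,
    with no human annotation."""
    pixels = image.reshape(-1, 3).astype(float)
    rng = np.random.default_rng(seed)
    # Initialize centres from distinct pixel colours.
    uniq = np.unique(pixels, axis=0)
    idx = rng.choice(len(uniq), min(k, len(uniq)), replace=False)
    centers = uniq[idx].copy()
    for _ in range(iters):
        # Assign each pixel to its nearest palette colour.
        d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each palette colour to the mean of its pixels.
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers  # (k, 3) palette, ready to condition the generator on

# Toy 8x8 RGB image: red top half, blue bottom half.
img = np.zeros((8, 8, 3), dtype=np.uint8)
img[:4] = [255, 0, 0]
img[4:] = [0, 0, 255]
palette = extract_palette(img, k=2)
```

The resulting palette is a handful of RGB triples, i.e., exactly the kind of low-dimensional structured signal the model can condition on.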


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4662
Author(s):  
Bowen Zheng ◽  
Jianping Zhang ◽  
Guiling Sun ◽  
Xiangnan Ren

This study primarily investigates image sensing at low sampling rates with convolutional neural networks (CNNs) for specific applications. To improve image acquisition efficiency in energy-limited systems, this study, inspired by compressed sensing, proposes a fully learnable model for task-driven image-compressed sensing (FLCS). The FLCS, based on Deep Convolutional Generative Adversarial Networks (DCGAN) and Variational Auto-encoders (VAE), divides the image-compressed sensing model into three learnable parts: the Sampler, the Solver, and the Rebuilder. Specifically, a measurement matrix suitable for a type of image is obtained by training the Sampler. The Solver calculates the image's low-dimensional representation from the measurements. The Rebuilder learns a mapping from the low-dimensional latent space to the image space. All three parts can be trained jointly or individually for a range of application scenarios. The pre-trained FLCS reconstructs images in few iterations for task-driven compressed sensing. Experimental results indicate that, compared with existing approaches, the proposed method significantly improves the quality of reconstructed images while decreasing running time. This study is of great significance for the application of image-compressed sensing at low sampling rates.
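The three-part pipeline can be sketched end to end as below. The linear maps and dimensions are illustrative stand-ins (in FLCS, all three parts are learned deep networks), but the data flow matches the description: Sampler compresses, Solver produces the latent representation, Rebuilder maps back to image space:

```python
import numpy as np

rng = np.random.default_rng(2)

N, M, K = 64, 16, 8   # image dim, measurements, latent dim (M/N = 25% sampling rate)

# Sampler: a learned measurement matrix (random here, as a stand-in).
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
# Solver: maps measurements to a low-dimensional latent representation.
S = rng.standard_normal((K, M))
# Rebuilder: maps the latent space back to image space.
R = rng.standard_normal((N, K))

def flcs_reconstruct(x):
    y = Phi @ x          # Sampler: compressed measurements (M << N)
    z = np.tanh(S @ y)   # Solver: low-dimensional representation
    return R @ z         # Rebuilder: reconstructed image

x = rng.standard_normal(N)
x_hat = flcs_reconstruct(x)
```

Because the measurement matrix itself is trainable, the Sampler can adapt to a specific image type instead of using a fixed random matrix as in classical compressed sensing.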


Author(s):  
Dongxiao He ◽  
Lu Zhai ◽  
Zhigang Li ◽  
Di Jin ◽  
Liang Yang ◽  
...  

Network embedding, which learns a low-dimensional representation of the nodes in a network, has been used in many network analysis tasks. Several network embedding methods, including some based on generative adversarial networks (GANs), a promising deep learning technique, have been proposed recently. Existing GAN-based methods typically use a GAN to learn a Gaussian distribution as a prior for network embedding. However, this strategy makes it difficult to distinguish the node representation from the Gaussian distribution. Moreover, it does not make full use of the essential advantage of GANs (which is to adversarially learn the representation mechanism rather than the representation itself), leading to compromised performance. To address this problem, we propose to apply the adversarial idea to the representation mechanism, i.e., the encoding mechanism, under the framework of an autoencoder. Specifically, we use the mutual information between node attributes and embedding as a reasonable alternative to this encoding mechanism (which is much easier to track). Additionally, we introduce another GAN-based mapping mechanism as a competitor into the adversarial learning system. A range of empirical results demonstrates the effectiveness of the proposed approach.
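Mutual information between attributes and embeddings is commonly tracked with a neural estimator. A sketch of the Donsker–Varadhan lower bound that such estimators optimize, with a fixed toy critic in place of a trained network (the paper's actual estimator and critic may differ):

```python
import numpy as np

rng = np.random.default_rng(3)

def dv_mi_lower_bound(T, x, z, z_shuffled):
    """Donsker-Varadhan lower bound on I(X; Z) with a critic T(x, z):
    E_p(x,z)[T] - log E_p(x)p(z)[exp(T)]."""
    joint = T(x, z)          # critic on samples from the joint
    marg = T(x, z_shuffled)  # critic on samples from the product of marginals
    return joint.mean() - np.log(np.exp(marg).mean())

# Toy bounded critic (stand-in for a trained network): tanh of inner product.
T = lambda a, b: np.tanh((a * b).sum(axis=1))

# Correlated (attribute, embedding) pairs: z = x + small noise.
x = rng.standard_normal((1000, 4))
z = x + 0.1 * rng.standard_normal((1000, 4))
z_shuf = z[rng.permutation(len(z))]  # shuffling breaks the pairing

mi_est = dv_mi_lower_bound(T, x, z, z_shuf)
```

When attributes and embeddings are strongly dependent, the joint term dominates and the bound is clearly positive; for independent pairs it hovers near zero.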


2020 ◽  
Vol 8 (6) ◽  
pp. 3492-3495

Mobile photography has been brought to a significantly new level in the last several years. The quality of images taken by the compact lenses of a smartphone has appreciably increased. Even some low-end phones on the market can now take exceedingly good photos given suitable lighting, thanks to advances in the numerous software methods for processing images post-capture. However, despite these tools, these cameras still fall behind the aesthetic capabilities of their DSLR counterparts. In the quest to achieve high-quality images through a smartphone camera, various image semantics are inadvertently ignored, leading to a less artistic image quality than a professional camera. Although numerous techniques for manual as well as computerized image enhancement exist, they are generally focused only on brightness, contrast, and other such global parameters of the image; they do not improve the content or texture of the image, nor do they take the various semantics of the image into account. Moreover, they are usually based on a predetermined set of rules that never considers the specifics of the actual device capturing the image — the smartphone camera. For our enhancement, we use a deep learning technique to transform lower-quality images from a smartphone camera into DSLR-quality images. To enhance image sharpness, we use an error function that combines three losses: the content, texture, and color loss of the given image. Training on the large-scale DSLR Photo Enhancement Dataset, we optimize the loss function using generative adversarial networks. The end results, produced after testing on a number of smartphone images, yield enhanced images comparable to DSLR images, with an average SSIM score of approximately 0.95.
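The combined error function can be sketched as a weighted sum of the three losses. The individual loss definitions and weights below are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def content_loss(feat_enh, feat_dslr):
    """MSE between deep-feature maps of the two images (e.g. CNN activations)."""
    return float(np.mean((feat_enh - feat_dslr) ** 2))

def color_loss(img_enh, img_dslr):
    """MSE between blurred images, so colour is compared independently of texture."""
    blur = lambda im: (im + np.roll(im, 1, 0) + np.roll(im, -1, 0)
                       + np.roll(im, 1, 1) + np.roll(im, -1, 1)) / 5.0
    return float(np.mean((blur(img_enh) - blur(img_dslr)) ** 2))

def texture_loss(d_enh):
    """Adversarial term: d_enh in (0, 1] is the discriminator's belief that
    the enhanced image is a real DSLR photo; lower loss = better fooling."""
    return float(-np.log(d_enh + 1e-8))

def total_loss(feat_enh, feat_dslr, img_enh, img_dslr, d_enh,
               w_content=1.0, w_texture=0.4, w_color=0.1):
    # Weights are hypothetical; in practice they are tuned on validation data.
    return (w_content * content_loss(feat_enh, feat_dslr)
            + w_texture * texture_loss(d_enh)
            + w_color * color_loss(img_enh, img_dslr))

img = np.zeros((8, 8))
feat = np.zeros((4, 4))
loss = total_loss(feat, feat, img, img, 1.0)  # identical images, fooled critic
```

Blurring before the colour comparison is a common trick: it discards high-frequency texture so the colour term does not fight the adversarial texture term.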

