Synthetic aircraft RS image modelling based on improved conditional GAN joint embedding network

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Junyu Chen ◽  
Haiwei Li ◽  
Liyao Song ◽  
Geng Zhang ◽  
Bingliang Hu ◽  
...  

Abstract: Developing an efficient, high-quality remote sensing (RS) technology for volume-efficient modelling of different aircraft RS images is challenging. Generative models serve as a natural and convenient simulation method. Because aircraft types are fine classes under a coarse class, feature entanglement may occur when modelling multiple aircraft classes. Our solution to this issue is a novel first-generation realistic aircraft type simulation system (ATSS-1) based on RS images. It realises fine modelling of seven aircraft types in a real scene by establishing an adaptive weighted conditional attention generative adversarial network and a joint geospatial embedding (GE) network. An adaptive weighted conditional batch normalisation attention block resolves subclass entanglement by reassigning the intra-class characteristic responses. Subsequently, an asymmetric residual self-attention module was developed that establishes asymmetric relationships between remote regions to mine finer potential spatial representations. The mapping relationship between the input RS scene and the latent space of the generated samples was explored by constructing the GE network, which uses the selected prior distribution z as an intermediate representation. A public RS dataset (OPT-Aircraft_V1.0) and two public datasets (MNIST and Fashion-MNIST) were used for simulation model testing. The results demonstrated the effectiveness of ATSS-1, promoting further development of realistic automatic RS simulation.
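The abstract does not spell out the adaptive weighted conditional batch normalisation attention block, but its core ingredient, class-conditional batch normalisation, is standard in conditional GANs. Below is a minimal PyTorch sketch of that ingredient only (all names are ours; the adaptive weighting and attention built on top of it are omitted):

```python
import torch
import torch.nn as nn

class ConditionalBatchNorm2d(nn.Module):
    """Class-conditional batch norm: per-class gain and bias are learned
    embeddings, so each subclass (e.g. aircraft type) gets its own
    feature-wise response after normalisation."""
    def __init__(self, num_features, num_classes):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.embed = nn.Embedding(num_classes, num_features * 2)
        # initialise gains to 1 and biases to 0
        self.embed.weight.data[:, :num_features].fill_(1.0)
        self.embed.weight.data[:, num_features:].zero_()

    def forward(self, x, y):
        out = self.bn(x)                       # normalise without affine
        gamma, beta = self.embed(y).chunk(2, dim=1)
        return gamma[:, :, None, None] * out + beta[:, :, None, None]
```

Reassigning intra-class responses, as the abstract describes, would then amount to modulating these per-class gains with learned attention weights.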

2021 ◽  
Vol 11 (4) ◽  
pp. 1380
Author(s):  
Yingbo Zhou ◽  
Pengcheng Zhao ◽  
Weiqin Tong ◽  
Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve the quality and diversity of synthesized images, we propose a simple yet effective approach, the Contrastive Distance Learning GAN (CDL-GAN). Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) terms to a principled framework to improve GAN performance. CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors, strengthening image feature learning for the generator. ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We model the framework by employing a Siamese Network as a module within GANs, without any modification to the backbone. Both qualitative and quantitative experiments on three public datasets demonstrate the effectiveness of our method.
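The abstract describes CoCD only verbally; the ratio it maximizes resembles mode-seeking regularisation, so a hedged sketch of such a generator-side term might look as follows (PyTorch; the function name and the exact distance choices are our assumptions, not the paper's definition):

```python
import torch

def consistent_contrastive_distance(G, z1, z2, eps=1e-8):
    """Sketch of a CoCD-style term: push the generator to map distinct
    noise vectors to visibly distinct images by maximizing the ratio of
    image-space distance to noise-space distance."""
    d_img = torch.mean(torch.abs(G(z1) - G(z2)), dim=[1, 2, 3])  # per-sample L1
    d_z = torch.mean(torch.abs(z1 - z2), dim=1)
    # the generator minimizes its loss, so negate the ratio
    return -(d_img / (d_z + eps)).mean()
```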


Author(s):  
Amey Thakur ◽  
Hasan Rizvi ◽  
Mega Satish

In the present study, we propose a new framework for estimating generative models via an adversarial process, extending an existing GAN framework into a white-box, controllable image cartoonization system that can generate high-quality cartooned images and videos from real-world photos and videos. The learning objectives of our system are based on three distinct representations: surface representation, structure representation, and texture representation. The surface representation captures the smooth surface of the images. The structure representation relates to the sparse colour blocks and compresses generic content. The texture representation retains the texture, curves, and fine features of cartoon images. The Generative Adversarial Network (GAN) framework decomposes images into these representations and learns from them to generate cartoon images. This decomposition makes the framework more controllable and flexible, allowing users to adjust the output as required. The approach surpasses previous systems in preserving the clarity, colours, textures, and shapes of images while still exhibiting the characteristics of cartoon images.
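The abstract does not state how the smooth surface representation is extracted; edge-preserving smoothing such as the guided filter (He et al.) is a common choice in white-box cartoonization work. A minimal single-channel NumPy/SciPy sketch, assuming images scaled to [0, 1]:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=5, eps=2e-1):
    """Edge-preserving smoothing: one plausible way to obtain a smooth
    'surface representation' of an image. I: guidance image, p: input,
    both 2-D float arrays in [0, 1]; r: box-filter size."""
    mean_I = uniform_filter(I, r)
    mean_p = uniform_filter(p, r)
    cov_Ip = uniform_filter(I * p, r) - mean_I * mean_p
    var_I = uniform_filter(I * I, r) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)          # local linear coefficients
    b = mean_p - a * mean_I
    return uniform_filter(a, r) * I + uniform_filter(b, r)
```

For colour images the filter would be applied per channel (or with a full covariance formulation); larger eps yields flatter, more cartoon-like surfaces.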


2021 ◽  
Vol 263 (5) ◽  
pp. 1527-1538
Author(s):  
Xenofon Karakonstantis ◽  
Efren Fernandez Grande

The characterization of Room Impulse Responses (RIRs) over an extended region of a room by means of measurements requires dense spatial sampling with many microphones, which can become intractable and time consuming in practice. Well-established reconstruction methods such as plane wave regression show that the sound field in a room can be reconstructed from sparsely distributed measurements. However, these reconstructions usually rely on assuming physical sparsity (i.e. that few waves compose the sound field) or some other trait of the measured sound field, making the models less generalizable and problem specific. In this paper we introduce a method to reconstruct a sound field in an enclosure with a Generative Adversarial Network (GAN), which synthesizes new variants of the data distributions it is trained upon. The goal of the proposed GAN model is to estimate the underlying distribution of plane waves in any source-free region, mapping these distributions from a stochastic latent representation. The GAN is trained on a large number of synthesized sound fields represented by random wave fields, and then tested on both simulated and real data sets of lightly damped and reverberant rooms.
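As an illustration of the training data described above, a random wave field can be synthesized as a superposition of plane waves with random directions, amplitudes, and phases. A 2-D NumPy sketch (the paper's exact sampling scheme is not given in the abstract):

```python
import numpy as np

def random_plane_wave_field(xy, k, n_waves=200, seed=None):
    """Synthesize a sound field p(r) = sum_n A_n exp(-j k_n . r) as a
    superposition of plane waves with random propagation angles and
    complex amplitudes. xy: (N, 2) receiver positions [m], k: wavenumber
    [rad/m]. Returns complex pressure at the N positions."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n_waves)             # directions
    amp = rng.normal(size=n_waves) + 1j * rng.normal(size=n_waves)
    kx, ky = k * np.cos(theta), k * np.sin(theta)
    phase = np.exp(-1j * (np.outer(xy[:, 0], kx) + np.outer(xy[:, 1], ky)))
    return phase @ amp
```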


Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1810
Author(s):  
Dat Tien Nguyen ◽  
Tuyen Danh Pham ◽  
Ganbayar Batchuluun ◽  
Kyoung Jun Noh ◽  
Kang Ryoung Park

Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method remains vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and can possibly enhance their performance. Our study is the first attempt to generate PA face images for PAD systems based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD system training.
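The method builds on CycleGAN, whose defining ingredient is the cycle-consistency loss; in this setting it couples the real-face and presentation-attack domains. A hedged PyTorch sketch (the generator names and the weight lam are our placeholders, not the paper's notation):

```python
import torch

def cycle_consistency_loss(G_rp, G_pr, real, attack, lam=10.0):
    """Core CycleGAN constraint (Zhu et al.): translating real -> attack
    -> real (and attack -> real -> attack) should reproduce the input.
    G_rp maps real faces to presentation-attack style; G_pr maps back."""
    loss_r = torch.mean(torch.abs(G_pr(G_rp(real)) - real))      # L1 cycle
    loss_p = torch.mean(torch.abs(G_rp(G_pr(attack)) - attack))
    return lam * (loss_r + loss_p)
```

This term is added to the usual adversarial losses of the two domain discriminators during training.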


2019 ◽  
Vol 19 (2) ◽  
pp. 390-411 ◽  
Author(s):  
David Benjamin Verstraete ◽  
Enrique López Droguett ◽  
Viviana Meruane ◽  
Mohammad Modarres ◽  
Andrés Ferrada

With the availability of cheaper multisensor suites, one has access to massive, multidimensional datasets that can and should be used for fault diagnosis. However, from a time, resource, engineering, and computational perspective, it is often cost-prohibitive to label all the data streaming into a database in the context of big machinery data, that is, massive multidimensional data. Therefore, this article proposes both a fully unsupervised and a semi-supervised deep-learning-enabled, generative adversarial network-based methodology for fault diagnostics. Two public datasets of vibration data from rolling element bearings are used to evaluate the performance of the proposed methodology, and the results indicate that it is a promising approach for both unsupervised and semi-supervised fault diagnostics.
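The abstract does not detail the semi-supervised variant; a common construction for semi-supervised GANs (Salimans et al., 2016) gives the discriminator K real fault classes plus an implicit fake class, which may be close in spirit. A PyTorch sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def semi_supervised_d_loss(logits_lab, labels, logits_unl, logits_fake):
    """Discriminator loss with K fault classes + implicit fake class.
    Labelled vibration windows get a class loss; unlabelled ones only
    need to look 'real'; generated ones should look 'fake'."""
    loss_lab = F.cross_entropy(logits_lab, labels)       # supervised term
    # logsumexp over the K real-class logits acts as a realness score
    z_unl = torch.logsumexp(logits_unl, dim=1)
    z_fake = torch.logsumexp(logits_fake, dim=1)
    loss_unl = -torch.mean(z_unl - F.softplus(z_unl))    # push real up
    loss_fake = torch.mean(F.softplus(z_fake))           # push fake down
    return loss_lab + loss_unl + loss_fake
```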


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0253868
Author(s):  
Luca Rossi ◽  
Andrea Ajmar ◽  
Marina Paolanti ◽  
Roberto Pierdicca

Vehicle trajectory prediction is a topic of growing interest in recent years, with applications in several domains ranging from autonomous driving to traffic congestion prediction and urban planning. Predicting trajectories from Floating Car Data (FCD) is a complex task that comes with several challenges, namely Vehicle-to-Infrastructure (V2I) interaction, Vehicle-to-Vehicle (V2V) interaction, multimodality, and generalizability. These challenges have not been fully explored by state-of-the-art works; multimodality and generalizability, in particular, have been neglected the most, and this work attempts to fill that gap by proposing and defining new datasets, metrics, and methods to help understand and predict vehicle trajectories. We propose and compare Deep Learning models based on Long Short-Term Memory and Generative Adversarial Network architectures; in particular, our GAN-3 model can be used to generate multiple predictions in multimodal scenarios. These approaches are evaluated with our newly proposed error metrics, N-ADE and N-FDE, which normalize some biases in the standard Average Displacement Error (ADE) and Final Displacement Error (FDE) metrics. Experiments have been conducted using newly collected datasets in four large Italian cities (Rome, Milan, Naples, and Turin), considering different trajectory lengths to analyze error growth over a larger number of time-steps. The results show that, although LSTM-based models are superior in unimodal scenarios, generative models perform best where the effects of multimodality are stronger. Space-time and geographical analyses are performed to demonstrate the suitability of the proposed methodology for real cases and management services.
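For reference, the standard ADE and FDE that N-ADE and N-FDE build on are straightforward to compute; the abstract does not define the normalizers, so only the unnormalized metrics are sketched here (NumPy):

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and Final Displacement Error for one trajectory.
    pred, gt: (T, 2) arrays of predicted / ground-truth positions.
    ADE averages the per-step Euclidean error; FDE takes the error
    at the final time-step only."""
    disp = np.linalg.norm(pred - gt, axis=1)   # per-step error
    return disp.mean(), disp[-1]
```

The paper's N-ADE and N-FDE divide out length- and speed-related biases so that errors are comparable across trajectories of different horizons.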


2021 ◽  
Vol 8 ◽  
Author(s):  
Rodrigo F. Cádiz ◽  
Agustín Macaya ◽  
Manuel Cartagena ◽  
Denis Parra

Deep learning, one of the fastest-growing branches of artificial intelligence, has become one of the most relevant research and development areas of recent years, especially since 2012, when a neural network surpassed the most advanced image classification techniques of the time. This spectacular development has not been alien to the world of the arts, as recent advances in generative networks have made possible the artificial creation of high-quality content such as images, movies, or music. We believe that these novel generative models pose a great challenge to our current understanding of computational creativity. If a robot can now create music that an expert cannot distinguish from music composed by a human, or create novel musical entities that were not known at training time, or exhibit conceptual leaps, does it mean that the machine is creative? We believe that the emergence of these generative models clearly signals that much more research needs to be done in this area. We would like to contribute to this debate with two case studies of our own: TimbreNet, a variational auto-encoder network trained to generate audio-based musical chords, and StyleGAN Pianorolls, a generative adversarial network capable of creating short musical excerpts despite having been trained on images rather than musical data. We discuss and assess these generative models in terms of their creativity, show that they are in practice capable of learning musical concepts that are not obvious from the training data, and hypothesize that, based on our current understanding of creativity in robots and machines, these deep models can in fact be considered creative.


2021 ◽  
Vol 38 (5) ◽  
pp. 1309-1317
Author(s):  
Jie Zhao ◽  
Qianjin Feng

Retinal vessel segmentation plays a significant role in the diagnosis and treatment of ophthalmological diseases. Recent studies have proven that deep learning can effectively segment the retinal vessel structure. However, existing methods have difficulty segmenting thin vessels, especially when the original image contains lesions. Based on the generative adversarial network (GAN), this paper proposes a deep network with residual and attention modules (Deep Att-ResGAN). The network consists of four identical subnetworks; the output of each subnetwork is passed to the next as contextual features that guide the segmentation. Firstly, the problems of the original images, namely low contrast, uneven illumination, and data insufficiency, were addressed through image enhancement and preprocessing. Next, an improved U-Net that stacks residual and attention modules was adopted as the generator. These modules optimize the weights of the generator and enhance the generalizability of the network. Further, the segmentation was refined iteratively by the discriminator, which contributes to the performance of vessel segmentation. Finally, comparative experiments were carried out on two public datasets: Digital Retinal Images for Vessel Extraction (DRIVE) and Structured Analysis of the Retina (STARE). The experimental results show that Deep Att-ResGAN outperformed comparable models such as U-Net and GAN on most metrics, achieving an accuracy of 0.9565 and an F1 of 0.829 on DRIVE, and an accuracy of 0.9690 and an F1 of 0.841 on STARE.
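The abstract stacks residual and attention modules in the U-Net generator without specifying their form; one plausible combination is a residual block gated by squeeze-and-excitation-style channel attention. A PyTorch sketch (the module design is our assumption, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ResidualAttentionBlock(nn.Module):
    """Residual block whose output is reweighted per channel by a small
    attention branch before being added back to the input."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.attn = nn.Sequential(              # channel attention gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.body(x)
        return x + h * self.attn(h)             # gated residual connection
```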


2020 ◽  
Vol 34 (07) ◽  
pp. 10745-10753
Author(s):  
Marzieh Edraki ◽  
Nazanin Rahnavard ◽  
Mubarak Shah

Convolutional neural networks (CNNs) have become a key asset in most fields of AI. Despite their successful performance, CNNs suffer from a major drawback: they fail to capture the hierarchy of spatial relations among the different parts of an entity. As a remedy, the idea of capsules was proposed by Hinton. In this paper, we propose the SubSpace Capsule Network (SCN), which exploits the idea of capsule networks to model possible variations in the appearance or implicitly defined properties of an entity through a group of capsule subspaces, instead of simply grouping neurons to create capsules. A capsule is created by projecting an input feature vector from a lower layer onto the capsule subspace using a learnable transformation. This transformation finds the degree of alignment of the input with the properties modeled by the capsule subspace. We show that SCN is a general capsule network that can successfully be applied to both discriminative and generative models without incurring computational overhead compared to CNNs at test time. The effectiveness of SCN is evaluated through a comprehensive set of experiments on supervised image classification, semi-supervised image classification, and high-resolution image generation tasks using the generative adversarial network (GAN) framework. SCN significantly improves the performance of the baseline models in all three tasks.
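The capsule construction described above can be made concrete: with a learnable basis W spanning a capsule subspace, the capsule's activation is the norm of the orthogonal projection of the input onto that subspace. A minimal PyTorch sketch (initialisation and shapes are our choices):

```python
import torch
import torch.nn as nn

class CapsuleSubspace(nn.Module):
    """A capsule as projection onto a learnable subspace: the activation
    measures how well the input aligns with the properties the subspace
    models. W: (in_dim, subspace_dim) basis matrix."""
    def __init__(self, in_dim, subspace_dim):
        super().__init__()
        self.W = nn.Parameter(torch.randn(in_dim, subspace_dim) * 0.1)

    def forward(self, x):                       # x: (batch, in_dim)
        # orthogonal projector P = W (W^T W)^{-1} W^T (P is symmetric)
        gram_inv = torch.inverse(self.W.T @ self.W)
        P = self.W @ gram_inv @ self.W.T
        v = x @ P                               # projection onto subspace
        return v.norm(dim=1)                    # degree of alignment
```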


2020 ◽  
Vol 143 (3) ◽  
Author(s):  
Wei Chen ◽  
Faez Ahmed

Abstract: Deep generative models have proven to be a useful tool for automatic design synthesis and design space exploration. When applied in engineering design, existing generative models face three challenges: (1) generated designs lack diversity and do not cover all areas of the design space, (2) it is difficult to explicitly improve the overall performance or quality of generated designs, and (3) existing models generally do not generate novel designs outside the domain of the training data. In this article, we address these challenges simultaneously by proposing a new determinantal point process-based loss function for probabilistic modeling of diversity and quality. With this new loss function, we develop a variant of the generative adversarial network, named "performance augmented diverse generative adversarial network" (PaDGAN), which can generate novel high-quality designs with good coverage of the design space. Using three synthetic examples and one real-world airfoil design example, we demonstrate that PaDGAN can generate diverse and high-quality designs. Compared to a vanilla generative adversarial network, it generates samples with, on average, a 28% higher mean quality score and larger diversity, without the mode collapse issue. Unlike typical generative models that usually generate new designs by interpolating within the boundary of the training data, we show that PaDGAN expands the design space boundary beyond the training data towards high-quality regions. The proposed method is broadly applicable to many tasks, including design space exploration, design optimization, and creative solution recommendation.
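The determinantal point process loss can be sketched concretely: build a quality-weighted similarity kernel over a batch of generated designs and reward large determinants, which favour batches that are simultaneously diverse and high quality. A hedged PyTorch version (the RBF kernel choice and scaling are our assumptions, following the DPP formulation the abstract names):

```python
import torch

def dpp_diversity_quality_loss(samples, quality, sigma=1.0, eps=1e-6):
    """PaDGAN-style DPP term: L(i, j) = k(x_i, x_j) * q_i * q_j.
    Maximizing log det(L) rewards diversity (near-orthogonal kernel
    rows) and quality (large q). samples: (B, D), quality: (B,)."""
    d2 = torch.cdist(samples, samples).pow(2)
    K = torch.exp(-d2 / (2 * sigma ** 2))                 # RBF similarity
    L = K * torch.outer(quality, quality)                 # quality weighting
    L = L + eps * torch.eye(len(samples), device=samples.device)  # jitter
    return -torch.logdet(L)        # minimized jointly with the GAN loss
```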

