Automatic Large-Scale 3D Building Shape Refinement Using Conditional Generative Adversarial Networks

Author(s):  
Ksenia Bittner ◽  
Marco Körner


2020 ◽  
Author(s):  
Congmei Jiang ◽  
Yongfang Mao ◽  
Yi Chai ◽  
Mingbiao Yu

With the increasing penetration of renewable resources such as wind and solar, the operation and planning of power systems, especially in terms of large-scale integration, are faced with great risks due to the inherent stochasticity of natural resources. Although this uncertainty can be anticipated, the timing, magnitude, and duration of fluctuations cannot be predicted accurately. In addition, the outputs of renewable power sources are correlated in space and time, and this brings further challenges for predicting the characteristics of their future behavior. To address these issues, this paper describes an unsupervised method for renewable scenario forecasts that considers spatiotemporal correlations based on generative adversarial networks (GANs), which have been shown to generate high-quality samples. We first utilized an improved GAN to learn unknown data distributions and model the dynamic processes of renewable resources. We then generated a large number of forecasted scenarios using stochastic constrained optimization. For validation, we used power-generation data from the National Renewable Energy Laboratory wind and solar integration datasets. The experimental results validated the effectiveness of our proposed method and indicated that it has significant potential in renewable scenario analysis.
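To make the scenario-generation step concrete, here is a minimal sketch of drawing forecast scenarios from a trained GAN generator by optimizing latent vectors against an observed history window. The architecture, sizes, and all names below are our own illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch (not the authors' code): latent vectors are optimized so
# that generated wind-power trajectories match observed history up to the
# forecast time; the trajectory suffixes then serve as forecast scenarios.
import torch

latent_dim, horizon, history = 32, 24, 12  # hours; illustrative sizes

class Generator(torch.nn.Module):
    """Maps a latent vector to a (history + horizon)-step power trajectory."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(latent_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, history + horizon), torch.nn.Sigmoid())
    def forward(self, z):
        return self.net(z)

def forecast_scenarios(gen, observed, n_scenarios=100, steps=200):
    """Find latents whose generated prefix matches `observed`, then keep
    the suffix of each trajectory as one forecast scenario."""
    z = torch.randn(n_scenarios, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        traj = gen(z)
        # Penalize mismatch on the observed window (soft constraint).
        loss = ((traj[:, :history] - observed) ** 2).mean()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return gen(z)[:, history:]  # (n_scenarios, horizon)

gen = Generator()  # in practice: load adversarially trained weights
scenarios = forecast_scenarios(gen, torch.rand(history))
print(scenarios.shape)  # torch.Size([100, 24])
```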


2020 ◽  
Vol 128 (10-11) ◽  
pp. 2665-2683 ◽  
Author(s):  
Grigorios G. Chrysos ◽  
Jean Kossaifi ◽  
Stefanos Zafeiriou

Abstract Conditional image generation lies at the heart of computer vision, and conditional generative adversarial networks (cGANs) have recently become the method of choice for this task, owing to their superior performance. The focus so far has largely been on performance improvement, with little effort devoted to making cGANs more robust to noise. However, the regression (of the generator) might lead to arbitrarily large errors in the output, which makes cGANs unreliable for real-world applications. In this work, we introduce a novel conditional GAN model, called RoCGAN, which leverages structure in the target space of the model to address this issue. Specifically, we augment the generator with an unsupervised pathway, which promotes the outputs of the generator to span the target manifold, even in the presence of intense noise. We prove that RoCGAN shares similar theoretical properties with GANs and establish the merits of our model on both synthetic and real data. We perform a thorough experimental validation on large-scale datasets of natural scenes and faces and observe that our model outperforms existing cGAN architectures by a large margin. We also empirically demonstrate the performance of our approach in the face of two types of noise (adversarial and Bernoulli).
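The two-pathway generator idea can be sketched in a few lines. Below is a minimal illustration, under our own assumptions, of a decoder shared between a conditional regression pathway and an unsupervised autoencoder pathway on target images; module names and sizes are ours, not the paper's.

```python
# A minimal sketch of the RoCGAN idea: sharing the decoder with an
# autoencoder on target images pushes generated outputs toward the target
# manifold. All layer shapes here are illustrative.
import torch
import torch.nn as nn

def encoder():  # 1-channel 32x32 image -> 64-d code
    return nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU())

def decoder():  # 64-d code -> 1-channel 32x32 image
    return nn.Sequential(nn.Linear(64, 32 * 32), nn.Sigmoid(),
                         nn.Unflatten(1, (1, 32, 32)))

class RoCGANGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc_reg = encoder()   # conditional (regression) pathway encoder
        self.enc_ae = encoder()    # unsupervised (autoencoder) pathway encoder
        self.dec = decoder()       # decoder shared by both pathways

    def forward(self, source, target):
        fake = self.dec(self.enc_reg(source))   # cGAN output
        recon = self.dec(self.enc_ae(target))   # AE reconstruction of target
        return fake, recon

gen = RoCGANGenerator()
src, tgt = torch.rand(4, 1, 32, 32), torch.rand(4, 1, 32, 32)
fake, recon = gen(src, tgt)
# Training would combine the adversarial loss on `fake` with a
# reconstruction loss on `recon` (plus a latent alignment term).
ae_loss = nn.functional.l1_loss(recon, tgt)
print(fake.shape, float(ae_loss))
```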


2019 ◽  
Vol 214 ◽  
pp. 06025
Author(s):  
Jean-Roch Vlimant ◽  
Felice Pantaleo ◽  
Maurizio Pierini ◽  
Vladimir Loncar ◽  
Sofia Vallecorsa ◽  
...  

In recent years, several studies have demonstrated the benefit of using deep learning to solve typical tasks related to high energy physics data taking and analysis. In particular, generative adversarial networks are a good candidate to supplement the simulation of the detector response in a collider environment. Training of neural network models has been made tractable by improved optimization methods and the advent of GP-GPUs well adapted to the highly parallelizable task of training neural nets. Despite these advancements, training large models over large datasets can take days to weeks, and finding the best model architecture and settings can take many expensive trials. To get the best out of this new technology, it is important to scale up the available network-training resources and, consequently, to provide tools for optimal large-scale distributed training. In this context, we describe the development of a new training workflow that scales on multi-node/multi-GPU architectures, with an eye to deployment on high-performance computing machines. We describe the integration of hyperparameter optimization with a distributed training framework using the Message Passing Interface (MPI), for models defined in Keras [12] or PyTorch [13]. We present results on the speedup of training generative adversarial networks on a dataset composed of the energy depositions from electrons, photons, and charged and neutral hadrons in a fine-grained digital calorimeter.
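For readers unfamiliar with MPI-based data parallelism, here is a minimal sketch of the core pattern such a workflow builds on (our own illustration, not the authors' framework): each rank computes gradients on its data shard, and gradients are averaged with an allreduce before each optimizer step.

```python
# Minimal data-parallel training loop with MPI gradient averaging.
# Requires mpi4py and an MPI launcher, e.g. `mpirun -n 4 python train.py`.
import numpy as np
import torch
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

model = torch.nn.Linear(16, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.01)

# Each rank sees its own shard of the data (here: synthetic, seeded by rank).
torch.manual_seed(rank)
x, y = torch.randn(256, 16), torch.randn(256, 1)

for step in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    # Average gradients across all ranks before the optimizer step.
    for p in model.parameters():
        grad = p.grad.numpy()
        avg = np.empty_like(grad)
        comm.Allreduce(grad, avg, op=MPI.SUM)
        p.grad.copy_(torch.from_numpy(avg / size))
    opt.step()
    if rank == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```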


2020 ◽  
Vol 496 (1) ◽  
pp. L54-L58 ◽  
Author(s):  
Kana Moriwaki ◽  
Nina Filippova ◽  
Masato Shirasaki ◽  
Naoki Yoshida

ABSTRACT Line intensity mapping (LIM) is an emerging observational method to study the large-scale structure of the Universe and its evolution. LIM does not resolve individual sources but probes the fluctuations of integrated line emissions. A serious limitation of LIM is that contributions of different emission lines from sources at different redshifts are all confused at an observed wavelength. We propose a deep learning application to solve this problem. We use conditional generative adversarial networks to extract designated information from LIM. We consider a simple case with two populations of emission-line galaxies: Hα-emitting galaxies at z = 1.3 are confused with [O III] emitters at z = 2.0 in a single observed waveband at 1.5 μm. Our networks, trained with 30 000 mock observation maps, are able to extract the total intensity and the spatial distribution of Hα-emitting galaxies at z = 1.3. The intensity peaks are successfully located with 74 per cent precision. The precision increases to 91 per cent when we combine five networks. The mean intensity and the power spectrum are reconstructed with an accuracy of ∼10 per cent. The extracted galaxy distributions at a wider range of redshift can be used for studies on cosmology and on galaxy formation and evolution.
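The jump from 74 to 91 per cent precision comes from combining five networks. One plausible way to realize such a combination (our own illustration; the paper's exact scheme may differ) is to average the predicted intensity maps and keep only peak pixels on which a majority of networks agree.

```python
# Sketch of ensembling several networks' extracted intensity maps.
import numpy as np

def combine_predictions(maps):
    """Average intensity maps from N networks; a pixel is a peak candidate
    only where a majority of networks place top-percentile intensity."""
    stack = np.stack(maps)                     # (N, H, W)
    mean_map = stack.mean(axis=0)
    thresh = np.percentile(stack, 99, axis=(1, 2), keepdims=True)
    votes = (stack > thresh).sum(axis=0)       # per-pixel agreement count
    agreed = votes >= (len(maps) // 2 + 1)     # majority vote
    return mean_map, agreed

rng = np.random.default_rng(0)
predictions = [rng.random((64, 64)) for _ in range(5)]  # stand-in outputs
mean_map, peak_mask = combine_predictions(predictions)
print(mean_map.shape, int(peak_mask.sum()))
```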


Author(s):  
K. Bittner ◽  
P. d’Angelo ◽  
M. Körner ◽  
P. Reinartz

Abstract. Three-dimensional building reconstruction from remote sensing imagery is one of the most difficult and important 3D modeling problems for complex urban environments. The main data sources providing the digital representation of the Earth's surface and the related natural, cultural, and man-made objects of urban areas in remote sensing are digital surface models (DSMs). DSMs can be obtained either by light detection and ranging (LIDAR), by SAR interferometry, or from stereo images. Our approach relies on automatic global 3D building shape refinement from stereo DSMs using deep learning techniques. This refinement is necessary because DSMs extracted from image-matching point clouds suffer from occlusions, outliers, and noise. Though most previous works have shown promising results for building modeling, this topic remains an open research area. We present a new methodology which not only generates images with continuous values representing the elevation models but, at the same time, enhances the 3D object shapes, buildings in our case. Specifically, we train a conditional generative adversarial network (cGAN) to generate accurate LIDAR-like DSM height images from noisy stereo DSM input. The obtained results demonstrate the strong potential of creating large-area remote sensing depth images where the buildings exhibit better-quality shapes and roof forms.
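A minimal sketch of this DSM-to-DSM translation, assuming a pix2pix-style setup: a generator maps a noisy stereo DSM height image to a refined, LIDAR-like one, trained with an adversarial loss plus an L1 term. The architecture below is our own illustration, not the authors'.

```python
# Sketch of a cGAN generator for DSM refinement; output stays linear
# because heights are continuous values, not normalized pixel intensities.
import torch
import torch.nn as nn

class DSMGenerator(nn.Module):
    """Encoder-decoder producing continuous height values (no squashing)."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2))
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))  # linear output

    def forward(self, dsm):
        return self.dec(self.enc(dsm))

gen = DSMGenerator()
noisy_dsm = torch.rand(2, 1, 128, 128) * 50.0   # toy heights in metres
refined = gen(noisy_dsm)
lidar_dsm = torch.rand(2, 1, 128, 128) * 50.0   # stand-in ground truth
# L1 term of the cGAN objective; the adversarial term would come from a
# patch discriminator on (input, output) pairs, omitted here for brevity.
l1 = nn.functional.l1_loss(refined, lidar_dsm)
print(refined.shape, float(l1))
```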


2020 ◽  
Vol 10 (14) ◽  
pp. 4913
Author(s):  
Tin Kramberger ◽  
Božidar Potočnik

Currently, no publicly available dataset is adequate for training Generative Adversarial Networks (GANs) on car images: all available car datasets differ in noise, pose, and zoom levels. The objective of this work was therefore to create an improved car image dataset better suited for GAN training. To improve GAN performance, we coupled the LSUN and Stanford car datasets. The merged dataset was then pruned to adjust zoom levels and reduce image noise. This process resulted in fewer, but higher-quality, images available for training. The pruned dataset was evaluated by training StyleGAN with its original settings. Pruning the combined LSUN and Stanford datasets resulted in 2,067,710 images of cars with less noise and more consistent zoom levels. Training StyleGAN on the LSUN-Stanford car dataset proved superior to training with just the LSUN dataset by 3.7% using the Fréchet Inception Distance (FID) as a metric. The results indicate that the proposed LSUN-Stanford car dataset is more consistent and better suited for training GAN neural networks than other currently available large car datasets.
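The zoom-level pruning can be pictured with a simple coverage filter. The sketch below is our own illustration of the idea, not the authors' pipeline; the bounding boxes are assumed to come from an object detector, and the coverage thresholds are made up.

```python
# Keep only images whose car bounding box covers a sensible fraction of the
# frame, so zoom levels stay consistent across the merged dataset.
def keep_image(img_w, img_h, box, min_cover=0.3, max_cover=0.9):
    """Return True if the car occupies between 30% and 90% of the image."""
    x0, y0, x1, y1 = box
    cover = ((x1 - x0) * (y1 - y0)) / (img_w * img_h)
    return min_cover <= cover <= max_cover

# Example: (width, height, detected car box) triples; values illustrative.
samples = [
    (640, 480, (100, 120, 540, 400)),   # mid zoom -> kept
    (640, 480, (300, 300, 340, 330)),   # car too small -> pruned
    (640, 480, (0, 0, 640, 480)),       # extreme close-up -> pruned
]
pruned = [s for s in samples if keep_image(s[0], s[1], s[2])]
print(f"kept {len(pruned)} of {len(samples)} images")
```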


2019 ◽  
Vol 214 ◽  
pp. 09005 ◽  
Author(s):  
Steven Farrell ◽  
Wahid Bhimji ◽  
Thorsten Kurth ◽  
Mustafa Mustafa ◽  
Deborah Bard ◽  
...  

Initial studies have suggested that generative adversarial networks (GANs) have promise as fast simulations within HEP. These studies, while promising, have been insufficiently precise and, like GANs in general, suffer from stability issues. We apply GANs to generate full particle physics events (not individual physics objects), explore conditioning of generated events on physics theory parameters, and evaluate the precision and generalization of the produced datasets. We apply this to SUSY mass parameter interpolation and pileup generation. We also discuss recent developments in convergence and in representations that match the structure of the detector better than images. In addition, we describe ongoing work making use of large-scale distributed resources on the Cori supercomputer at NERSC, and developments to control distributed training via interactive Jupyter notebook sessions. This will allow tackling high-resolution detector data, as well as model selection and hyperparameter tuning, in a productive yet scalable deep learning environment.
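Conditioning a generator on theory parameters is what makes mass-point interpolation possible. Below is a minimal sketch, under our own assumptions, of the pattern: noise is concatenated with theory parameters, so events can be requested at parameter values between those seen in training. Shapes, names, and mass values are illustrative.

```python
# Sketch of a GAN generator conditioned on physics theory parameters.
import torch
import torch.nn as nn

latent_dim, n_params, event_dim = 64, 2, 128  # e.g. 2 SUSY mass parameters

class ConditionalEventGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_params, 256), nn.ReLU(),
            nn.Linear(256, event_dim))  # flattened event record

    def forward(self, z, theta):
        # Concatenating noise with theory parameters yields
        # parameter-conditioned events.
        return self.net(torch.cat([z, theta], dim=1))

gen = ConditionalEventGenerator()  # in practice: adversarially trained
z = torch.randn(8, latent_dim)
# Interpolate between two mass points to probe generalization.
theta = torch.tensor([[500.0, 100.0]]).repeat(8, 1) \
      + torch.linspace(0, 1, 8).unsqueeze(1) * torch.tensor([[200.0, 50.0]])
events = gen(z, theta)
print(events.shape)  # torch.Size([8, 128])
```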


2021 ◽  
Vol 13 (6) ◽  
pp. 1104
Author(s):  
Yuanfu Gong ◽  
Puyun Liao ◽  
Xiaodong Zhang ◽  
Lifei Zhang ◽  
Guanzhou Chen ◽  
...  

Generative adversarial networks (GANs) have previously been widely applied to super-resolution reconstruction (SRR) methods, which turn low-resolution (LR) images into high-resolution (HR) ones. However, because these methods recover high-frequency information using patterns learned from other images, they tend to produce artifacts when processing unfamiliar images. Optical satellite remote sensing images contain far more complicated scenes than natural images. Therefore, applying the previous networks to remote sensing images, especially mid-resolution ones, leads to unstable convergence and thus unpleasant artifacts. In this paper, we propose Enlighten-GAN for SRR tasks on large-size optical mid-resolution remote sensing images. Specifically, we design enlighten blocks to induce the network to converge to a reliable point, and introduce a Self-Supervised Hierarchical Perceptual Loss that attains performance improvements surpassing other loss functions. Furthermore, because of memory limits, large-scale images need to be cropped into patches and passed through the network separately. To merge the reconstructed patches into a whole without seam lines, we employ an internal inconsistency loss and a cropping-and-clipping strategy. Experimental results confirm that Enlighten-GAN outperforms the state-of-the-art methods in terms of the gradient similarity metric (GSM) on mid-resolution Sentinel-2 remote sensing images.
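One plausible reading of the cropping-and-clipping strategy (our illustration, not the authors' exact code) is sketched below: patches are extracted with overlap, each output's borders are clipped away, and only the clean centers are stitched back, so patch seams never reach the merged image.

```python
# Sketch of overlap-crop / border-clip patch merging for tiled inference.
import numpy as np

def merge_patches(image, net, patch=64, clip=8):
    """Run `net` on overlapping patches; keep only each output's center.
    (Image borders would need padding in practice; omitted here.)"""
    h, w = image.shape
    out = np.zeros_like(image)
    step = patch - 2 * clip
    for i in range(0, h - patch + 1, step):
        for j in range(0, w - patch + 1, step):
            pred = net(image[i:i + patch, j:j + patch])
            # Discard `clip` border pixels where patch artifacts live.
            out[i + clip:i + patch - clip,
                j + clip:j + patch - clip] = pred[clip:-clip, clip:-clip]
    return out

identity_net = lambda p: p  # stand-in for the super-resolution network
img = np.random.rand(256, 256).astype(np.float32)
merged = merge_patches(img, identity_net)
print(merged.shape)
```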

