Fine-grained Image Inpainting with Scale-Enhanced Generative Adversarial Network

Author(s):  
Weirong Liu ◽  
Chengrui Liu ◽  
Jie Cao ◽  
Chenwen Ren ◽  
Yulin Wei ◽  
Honglin Guo
Author(s):  
Wenqi Zhao ◽  
Satoshi Oyama ◽  
Masahito Kurihara

Counterfactual explanations help users understand the behavior of machine learning models by showing how the inputs would have to change to produce a different output. For an image classification task, a counterfactual visual explanation answers: "for an example that belongs to class A, what changes do we need to make to the input so that the output is more inclined to class B?" Our research modifies the attribute description text of class A on the basis of the attributes of class B and generates counterfactual images from the modified text. The model's predictions on these counterfactual images reveal which attributes have the greatest effect when the model distinguishes classes A and B. We applied our method to a fine-grained image classification dataset and used a generative adversarial network to generate natural counterfactual visual explanations. To evaluate these explanations, we used them to assist crowdsourcing workers in an image classification task and found that, within a specific range, they improved classification accuracy.
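
As an illustration of the attribute-swapping idea, the following minimal Python sketch probes which single attribute substitution moves a classifier's prediction toward class B; the generate and predict_proba callables and the attribute dictionaries are placeholders for whatever text-to-image generator and classifier are in use, not the authors' implementation.

    # Hedged sketch: find the attribute whose swap most increases the
    # probability of class B, given placeholder generator/classifier callables.
    def attribute_effects(attrs_a, attrs_b, generate, predict_proba, class_b):
        """For each attribute, replace class A's value with class B's,
        generate a counterfactual image, and record the probability gain
        for class B relative to the unmodified description."""
        baseline_img = generate(" ".join(attrs_a.values()))
        baseline_p = predict_proba(baseline_img)[class_b]

        effects = {}
        for name, value_b in attrs_b.items():
            modified = dict(attrs_a)          # copy class A's description
            modified[name] = value_b          # swap in class B's attribute value
            image = generate(" ".join(modified.values()))
            effects[name] = predict_proba(image)[class_b] - baseline_p

        # Attributes with the largest gain are those the model relies on most
        # when separating class A from class B.
        return dict(sorted(effects.items(), key=lambda kv: -kv[1]))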


2020 ◽  
Vol 2020 ◽  
pp. 1-10
Author(s):  
Linyan Li ◽  
Yu Sun ◽  
Fuyuan Hu ◽  
Tao Zhou ◽  
Xuefeng Xi ◽  
...  

In this paper, we propose an Attentional Concatenation Generative Adversarial Network (ACGAN) for generating 1024 × 1024 high-resolution images. First, we propose a multilevel cascade structure for text-to-image synthesis. During training, we gradually add new layers and feed the outputs and word vectors of the previous layer into the next layer, generating high-resolution images with photo-realistic details. Second, the deep attentional multimodal similarity model is introduced into the network: we match word vectors with images in a common semantic space to compute a fine-grained matching loss for training the generator, so that the network attends to word-level semantic detail. Finally, a diversity measure is added to the discriminator, which gives the generator more diverse gradient directions and improves the diversity of the generated samples. Experimental results show that the inception scores of the proposed model on the CUB and Oxford-102 datasets reach 4.48 and 4.16, improvements of 2.75% and 6.42% over the Attentional Generative Adversarial Network (AttenGAN). The ACGAN model performs better at text-to-image generation, and its generated images are closer to real images.
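
The word-level matching idea can be sketched as a simplified, DAMSM-style loss in PyTorch; the gamma smoothing factors and the assumption that word embeddings and image region features are already projected to a common dimension D are illustrative choices, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def word_region_matching_loss(words, regions, gamma1=5.0, gamma2=5.0, gamma3=10.0):
        # words:   (B, T, D) word embeddings of the captions
        # regions: (B, R, D) region features of the generated images
        B = words.size(0)
        w = F.normalize(words, dim=-1)
        r = F.normalize(regions, dim=-1)

        scores = torch.zeros(B, B, device=words.device)
        for i in range(B):                       # caption i
            for j in range(B):                   # image j
                sim = w[i] @ r[j].t()            # (T, R) word-region similarities
                attn = F.softmax(gamma1 * sim, dim=-1)
                context = attn @ r[j]            # (T, D) attended region context
                rel = F.cosine_similarity(w[i], context, dim=-1)  # per-word relevance
                scores[i, j] = torch.logsumexp(gamma2 * rel, dim=0) / gamma2

        # Contrastive loss: each caption should match its own image and vice versa.
        labels = torch.arange(B, device=words.device)
        return (F.cross_entropy(gamma3 * scores, labels)
                + F.cross_entropy(gamma3 * scores.t(), labels))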


2020 ◽  
Author(s):  
Mingwu Jin ◽  
Yang Pan ◽  
Shunrong Zhang ◽  
Yue Deng

Because of the limited coverage of receiver stations, current measurements of Total Electron Content (TEC) by ground-based GNSS receivers are incomplete, with large data gaps. Producing complete TEC maps for space science research is time consuming and requires the collaboration of five International GNSS Service (IGS) Ionosphere Associate Analysis Centers (IAACs), which apply different data processing and gap-filling algorithms and consolidate their results into the final IGS completed TEC maps. In this work, we developed a Deep Convolutional Generative Adversarial Network (DCGAN) and Poisson blending model (DCGAN-PB) to learn the IGS completion process and complete TEC maps automatically. Using 10-fold cross-validation on 20 years of IGS TEC data, DCGAN-PB achieves an average root mean squared error (RMSE) of about 4 absolute TEC units (TECu) for high solar activity years and around 2 TECu for low solar activity years, roughly a 50% reduction in RMSE for recovered TEC values compared to two conventional single-image inpainting methods. The developed DCGAN-PB model can serve as an efficient tool for automatic completion of TEC maps.
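
For context, the kind of conventional single-image inpainting baseline that DCGAN-PB is compared against, along with the RMSE metric in TECu, might look like the following sketch; the 2-D TEC array with NaNs marking data gaps, the reference map tec_truth, and the scaling constant are assumptions for illustration.

    import numpy as np
    import cv2

    def baseline_inpaint_rmse(tec_map, tec_truth, tec_max=100.0):
        # Fill data gaps (NaNs) in a single TEC map with OpenCV inpainting,
        # then compute RMSE (in TECu) over the recovered pixels only.
        gap_mask = np.isnan(tec_map).astype(np.uint8)      # 1 where data is missing

        # OpenCV inpainting expects 8-bit input, so scale TECu to [0, 255].
        scaled = np.clip(np.nan_to_num(tec_map) / tec_max * 255.0, 0, 255).astype(np.uint8)
        filled = cv2.inpaint(scaled, gap_mask, 3, cv2.INPAINT_TELEA)
        recovered = filled.astype(np.float32) / 255.0 * tec_max   # back to TECu

        gaps = gap_mask.astype(bool)
        return float(np.sqrt(np.mean((recovered[gaps] - tec_truth[gaps]) ** 2)))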


2020 ◽  
Vol 38 (6) ◽  
pp. 2558-2578
Author(s):  
Honggeun Jo ◽  
Javier E Santos ◽  
Michael J Pyrcz

Rule-based reservoir modeling methods integrate geological depositional process concepts to generate reservoir models that capture realistic geologic features, improving subsurface predictions and uncertainty models that support development decision making. However, robust and direct conditioning of these models to subsurface data, such as well logs, core descriptions, and seismic inversions and interpretations, remains an obstacle to their broad application as a standard subsurface modeling technology. We implement a machine learning-based method for fast and flexible data conditioning of rule-based models. This study builds on a rule-based modeling method for deep-water lobe reservoirs. The model has three geological inputs: (1) the depositional element geometry, (2) the compositional exponent for the element stacking pattern, and (3) the distribution of petrophysical properties with hierarchical trends conformable to the surfaces. A deep learning-based workflow is proposed for robust and non-iterative data conditioning. First, a generative adversarial network learns salient geometric features from an ensemble of training rule-based models. Then, a new rule-based model is generated and a mask is applied to remove the model near local data along the well trajectories. Last, semantic image inpainting restores the masked region with the optimum generative adversarial network realization that is consistent with both the local data and the surrounding model. For the deep-water lobe example, the generative adversarial network learns the primary geological spatial features and generates reservoir realizations that reproduce hierarchical trends as well as the surface geometries and stacking patterns. Moreover, the trained generative adversarial network explores the latent reservoir manifold and identifies an ensemble of models to represent an uncertainty model. Semantic image inpainting determines the optimum replacement for the near-data mask that is consistent with the local data and the rest of the model. This work results in subsurface models that accurately reproduce reservoir heterogeneity, continuity, and the spatial distribution of petrophysical parameters while honoring local well data constraints.
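
The semantic image inpainting step can be sketched as a search over the generative adversarial network's latent space, in the spirit of contextual-plus-prior-loss inpainting; the generator/discriminator interfaces, loss weight, and optimizer settings below are assumptions for illustration, not the authors' exact implementation.

    import torch

    def semantic_inpaint(generator, discriminator, masked_model, keep_mask,
                         z_dim=100, steps=500, lam=0.1, lr=0.05):
        # masked_model: (1, C, H, W) rule-based model with the near-well region zeroed out
        # keep_mask:    (1, 1, H, W) ones where the original model is kept, zeros in the mask
        z = torch.randn(1, z_dim, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)

        for _ in range(steps):
            opt.zero_grad()
            fake = generator(z)
            # Contextual loss: agree with the retained model (and well data) outside the mask.
            contextual = torch.mean(torch.abs(keep_mask * (fake - masked_model)))
            # Prior loss: stay on the learned manifold of geologically plausible models.
            prior = -torch.mean(discriminator(fake))
            (contextual + lam * prior).backward()
            opt.step()

        with torch.no_grad():
            fake = generator(z)
        # Composite: keep the original model where it exists, fill the mask from the GAN.
        return keep_mask * masked_model + (1 - keep_mask) * fake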


Author(s):  
Zhao Qiu ◽  
Lin Yuan ◽  
Lihao Liu ◽  
Zheng Yuan ◽  
Tao Chen ◽  
...  

Image completion models fill in the missing regions of a damaged image using the image itself or information from an image library, so that the repaired image looks natural and is difficult to distinguish from an undamaged one. The difficulty of image completion lies in producing semantically reasonable content with clear, realistic texture. In this paper, a Wasserstein generative adversarial network with dilated convolution and deformable convolution (DDC-WGAN) is proposed for image completion. A deformable offset is added on top of dilated convolution, which enlarges the receptive field and provides a more stable representation of geometric deformation. Experiments show that the proposed DDC-WGAN performs better at image generation and completion than the traditional generative adversarial completion network.
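
A convolution block combining dilation with deformable offsets, roughly in the spirit of DDC-WGAN, could be sketched with torchvision's DeformConv2d as follows; the kernel size, dilation rate, and activation are illustrative assumptions rather than the paper's exact architecture.

    import torch
    import torch.nn as nn
    from torchvision.ops import DeformConv2d

    class DilatedDeformBlock(nn.Module):
        # A dilated convolution whose sampling grid is further shifted by learned
        # deformable offsets, enlarging the receptive field while adapting to
        # geometric deformation.
        def __init__(self, in_ch, out_ch, dilation=2):
            super().__init__()
            k = 3
            pad = dilation  # keeps the spatial size for a 3x3 kernel
            # A small regular conv predicts 2*k*k (x, y) offsets per output location.
            self.offset_conv = nn.Conv2d(in_ch, 2 * k * k, k, padding=pad, dilation=dilation)
            self.deform_conv = DeformConv2d(in_ch, out_ch, k, padding=pad, dilation=dilation)
            self.act = nn.LeakyReLU(0.2)

        def forward(self, x):
            offset = self.offset_conv(x)
            return self.act(self.deform_conv(x, offset))

    # Usage: y = DilatedDeformBlock(64, 64)(torch.randn(1, 64, 32, 32))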

