Generative Image Inpainting with Dilated Deformable Convolution

Author(s):  
Zhao Qiu ◽  
Lin Yuan ◽  
Lihao Liu ◽  
Zheng Yuan ◽  
Tao Chen ◽  
...  

Image generation and completion models fill in the missing region of a damaged image using information from the image itself or from an image library, so that the repaired image looks natural and is difficult to distinguish from an undamaged one. The difficulty of image completion lies in producing semantically reasonable content with clear, realistic texture. In this paper, a Wasserstein generative adversarial network with dilated convolution and deformable convolution (DDC-WGAN) is proposed for image completion. A deformable offset is added on top of dilated convolution, which enlarges the receptive field and provides a more stable representation of geometric deformation. Experiments show that the proposed DDC-WGAN outperforms traditional generative adversarial completion networks in image generation and completion.
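A minimal sketch of a dilated deformable convolution block of the kind the abstract describes, assuming a PyTorch/torchvision implementation; the layer names and hyper-parameters below are illustrative, not the authors' exact configuration.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DilatedDeformConvBlock(nn.Module):
    """Dilated convolution whose sampling grid is additionally shifted by learned offsets."""
    def __init__(self, in_ch, out_ch, kernel_size=3, dilation=2):
        super().__init__()
        padding = dilation * (kernel_size - 1) // 2  # keep the spatial size unchanged
        # A plain conv predicts an (dx, dy) offset for every kernel sampling point.
        self.offset_conv = nn.Conv2d(
            in_ch, 2 * kernel_size * kernel_size,
            kernel_size=kernel_size, padding=padding, dilation=dilation)
        # The deformable conv samples at dilated grid positions shifted by those offsets,
        # enlarging the receptive field while adapting to geometric deformation.
        self.deform_conv = DeformConv2d(
            in_ch, out_ch, kernel_size=kernel_size,
            padding=padding, dilation=dilation)

    def forward(self, x):
        offsets = self.offset_conv(x)
        return self.deform_conv(x, offsets)

# Example: a 64-channel feature map passes through without changing spatial size.
feat = torch.randn(1, 64, 256, 256)
out = DilatedDeformConvBlock(64, 64)(feat)  # -> [1, 64, 256, 256]
```

Such a block could replace the standard dilated convolutions in a completion network's generator; the WGAN training objective itself is unchanged.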

2021 ◽  
Vol 11 (4) ◽  
pp. 1380
Author(s):  
Yingbo Zhou ◽  
Pengcheng Zhao ◽  
Weiqin Tong ◽  
Yongxin Zhu

While Generative Adversarial Networks (GANs) have shown promising performance in image generation, they suffer from numerous issues such as mode collapse and training instability. To stabilize GAN training and improve image synthesis quality and diversity, we propose a simple yet effective approach called Contrastive Distance Learning GAN (CDL-GAN). Specifically, we add Consistent Contrastive Distance (CoCD) and Characteristic Contrastive Distance (ChCD) into a principled framework to improve GAN performance. CoCD explicitly maximizes the ratio of the distance between generated images to the increment between noise vectors, strengthening image feature learning for the generator. ChCD measures the sampling distance of the encoded images in Euler space to boost feature representations for the discriminator. We implement the framework by embedding a Siamese network as a module into GANs without any modification to the backbone. Both qualitative and quantitative experiments conducted on three public datasets demonstrate the effectiveness of our method.
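A minimal sketch of a CoCD-style regularizer as described above, assuming PyTorch; the exact distance metric and loss weighting used in the paper may differ.

```python
import torch

def cocd_regularizer(generator, z1, z2, eps=1e-8):
    """Encourage distinct noise vectors to map to distinct images by maximizing
    the ratio ||G(z1) - G(z2)|| / ||z1 - z2||. Returned with a negative sign so
    it can simply be added to the generator loss being minimized."""
    img_dist = torch.mean(torch.abs(generator(z1) - generator(z2)), dim=[1, 2, 3])
    noise_dist = torch.mean(torch.abs(z1 - z2), dim=1)
    ratio = img_dist / (noise_dist + eps)
    return -ratio.mean()
```

In training, this term would be weighted and added to the usual adversarial generator loss, with `z1` and `z2` sampled independently from the prior.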


2021 ◽  
Author(s):  
Jialu Huang ◽  
Ying Huang ◽  
Yan-ting Lin ◽  
Zi-yang Liu ◽  
Yang Lin ◽  
...  

Sensors ◽  
2020 ◽  
Vol 20 (7) ◽  
pp. 1810
Author(s):  
Dat Tien Nguyen ◽  
Tuyen Danh Pham ◽  
Ganbayar Batchuluun ◽  
Kyoung Jun Noh ◽  
Kang Ryoung Park

Although face-based biometric recognition systems have been widely used in many applications, this type of recognition method is still vulnerable to presentation attacks, which use fake samples to deceive the recognition system. To overcome this problem, presentation attack detection (PAD) methods for face recognition systems (face-PAD), which aim to classify real and presentation attack face images before performing a recognition task, have been developed. However, the performance of PAD systems is limited and biased due to the lack of presentation attack images for training them. In this paper, we propose a method for artificially generating presentation attack face images by learning the characteristics of real and presentation attack images from a few captured images. As a result, our proposed method helps save time in collecting presentation attack samples for training PAD systems and can potentially enhance their performance. Our study is the first attempt to generate PA face images for a PAD system based on the CycleGAN network, a deep-learning-based framework for image generation. In addition, we propose a new measurement method to evaluate the quality of generated PA images based on a face-PAD system. Through experiments with two public datasets (CASIA and Replay-mobile), we show that the generated face images capture the characteristics of presentation attack images, making them usable as captured presentation attack samples for PAD-system training.
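A minimal sketch of the CycleGAN-style generator objective for real-to-attack face translation, assuming PyTorch; the generator/discriminator architectures, loss weights, and function names here are illustrative, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def cyclegan_generator_loss(G_r2a, G_a2r, D_attack, real_faces, lambda_cyc=10.0):
    """G_r2a maps real faces to presentation-attack style; G_a2r maps back.
    The adversarial term pushes translated images toward the attack domain,
    and the cycle-consistency term forces the translation to preserve content."""
    fake_attack = G_r2a(real_faces)
    pred = D_attack(fake_attack)
    adv = F.mse_loss(pred, torch.ones_like(pred))   # LSGAN-style adversarial term
    reconstructed = G_a2r(fake_attack)
    cyc = F.l1_loss(reconstructed, real_faces)      # cycle-consistency term
    return adv + lambda_cyc * cyc
```

The symmetric attack-to-real direction would be trained with the analogous loss, and the generated attack images could then augment the PAD training set.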

