2.5D Pose Guided Human Image Generation

2021
Author(s): Kang Yuan, Sheng Li

Author(s): Aliaksandr Siarohin, Enver Sangineto, Stephane Lathuiliere, Nicu Sebe

2021
Author(s): Yusuke Horiuchi, Edgar Simo-Serra, Satoshi Iizuka, Hiroshi Ishikawa

2021, pp. 1-11
Author(s): Haoran Wu, Fazhi He, Yansong Duan, Xiaohu Yan

Pose transfer, which synthesizes a new image of a target person in a novel pose, is valuable in several applications, and generative adversarial network (GAN) based pose transfer offers a new route to person re-identification (re-ID). Perceptual metrics such as Detection Score (DS) and Inception Score (IS), which correlate well with human ratings, are typically applied only after generation to assess visual quality, so existing GAN-based methods do not benefit from them directly during training. In this paper, a perceptual-metrics-guided GAN (PIGGAN) framework is proposed to intrinsically optimize the generation process for the pose transfer task. Specifically, a novel and general Evaluator model that is well matched to the GAN is designed, and a new Sort Loss (SL) is constructed on top of it to optimize perceptual quality. Moreover, PIGGAN is highly flexible and extensible, and can incorporate both differentiable and non-differentiable indexes to optimize the pose transfer process. Extensive experiments show that PIGGAN generates photo-realistic results and quantitatively outperforms state-of-the-art (SOTA) methods.
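Since the abstract names the Evaluator and Sort Loss without defining them, the following is a minimal PyTorch sketch of one plausible reading: the Evaluator scores an image's perceptual quality, and Sort Loss is a hinge-style ranking term that penalizes generated images the Evaluator ranks below real ones. The Evaluator architecture, the sort_loss form, and the margin value are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class Evaluator(nn.Module):
    """Tiny stand-in network mapping an RGB image to a quality score
    (hypothetical; the paper's Evaluator architecture is not given here)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, 1),
        )

    def forward(self, x):
        return self.net(x)  # (batch, 1) quality scores

def sort_loss(evaluator, fake, real, margin=0.1):
    """Hinge-style ranking loss: penalize generated images whose
    Evaluator score falls more than `margin` below the real images'."""
    s_fake = evaluator(fake)
    s_real = evaluator(real)
    return torch.relu(s_real - s_fake + margin).mean()

# Usage: the ranking term would be added to the usual adversarial
# generator loss; random tensors stand in for generated/real images.
evaluator = Evaluator()
fake = torch.rand(4, 3, 64, 64)
real = torch.rand(4, 3, 64, 64)
loss = sort_loss(evaluator, fake, real)
loss.backward()

A ranking formulation like this has the practical advantage that the generator only needs relative quality orderings, which is one way a non-differentiable index (e.g. a detection score) could be distilled into a differentiable training signal.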


Author(s): Aliaksandr Siarohin, Stephane Lathuiliere, Enver Sangineto, Nicu Sebe

Author(s): Dong Liang, Rui Wang, Xiaowei Tian, Cong Zou

Human image generation is a very challenging task because it is affected by many factors. Many existing methods generate human images conditioned on a given pose, but the generated backgrounds are often blurred. In this paper, we propose a novel Partition-Controlled GAN that generates human images according to both a target pose and a target background. First, human poses are extracted from the given images, and the foreground and background are partitioned for further use. Second, appearance features, pose features, and background features are extracted and fused to generate the desired images. Experiments on the Market-1501 and DeepFashion datasets show that our model not only generates realistic human images but also produces the pose and background we specify. Extensive experiments on the COCO and LIP datasets indicate the potential of our method.
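The abstract describes a three-branch design: encode appearance, pose, and background separately, then fuse the features and decode an image. Below is a minimal PyTorch sketch of that fuse-and-decode idea; the layer sizes, the 18-channel keypoint-heatmap pose encoding, and all class and parameter names are illustrative assumptions rather than the paper's actual architecture.

import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU())

class PartitionControlledGenerator(nn.Module):
    """Hypothetical three-branch generator: separate encoders for the
    foreground appearance, the target pose, and the background partition,
    fused by channel concatenation before a shared decoder."""
    def __init__(self, feat=32, n_keypoints=18):
        super().__init__()
        self.appearance_enc = conv_block(3, feat)       # foreground (person) crop
        self.pose_enc = conv_block(n_keypoints, feat)   # pose keypoint heatmaps
        self.background_enc = conv_block(3, feat)       # background partition
        self.decoder = nn.Sequential(
            conv_block(3 * feat, feat),
            nn.Conv2d(feat, 3, 3, padding=1),
            nn.Tanh(),                                  # RGB output in [-1, 1]
        )

    def forward(self, foreground, pose_heatmaps, background):
        fused = torch.cat(
            [self.appearance_enc(foreground),
             self.pose_enc(pose_heatmaps),
             self.background_enc(background)],
            dim=1,
        )
        return self.decoder(fused)

# Usage: all three conditioning inputs share the target image resolution.
gen = PartitionControlledGenerator()
img = gen(torch.rand(1, 3, 128, 128),
          torch.rand(1, 18, 128, 128),
          torch.rand(1, 3, 128, 128))
print(img.shape)  # torch.Size([1, 3, 128, 128])

Feeding the background through its own encoder, rather than inpainting around the person, is what lets a partition-based design keep the background sharp while the pose branch controls the figure.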

