blendshape model
Recently Published Documents


TOTAL DOCUMENTS: 5 (last five years: 2)

H-INDEX: 2 (last five years: 0)

2020 ◽ Vol 128 (10-11) ◽ pp. 2629-2650
Author(s): Evangelos Ververas ◽ Stefanos Zafeiriou

Abstract Image-to-image (i2i) translation is the dense regression problem of learning how to transform an input image into an output image using aligned image pairs. Remarkable progress has been made in i2i translation with the advent of deep convolutional neural networks, particularly through the learning paradigm of generative adversarial networks (GANs). In the absence of paired images, i2i translation is tackled with one or multiple domain transformations (e.g., CycleGAN, StarGAN). In this paper, we study the problem of image-to-image translation under a set of continuous parameters that correspond to a model describing a physical process. In particular, we propose SliderGAN, which transforms an input face image into a new one according to the continuous values of a statistical blendshape model of facial motion. We show that it is possible to edit a facial image according to expression and speech blendshapes, using sliders that control the continuous values of the blendshape model. This provides much more flexibility in various tasks, including but not limited to face editing, expression transfer and face neutralisation, compared to models based on discrete expressions or action units.
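The "statistical blendshape model" the abstract refers to is, at its core, a linear model: a neutral face mesh plus a weighted sum of per-expression displacement vectors, with each slider controlling one weight. The following is a minimal sketch of that idea only, using hypothetical toy vertex data (not the authors' model or code):

```python
import numpy as np

# Hypothetical toy face: 4 vertices in 3D, all at the origin for the neutral pose.
neutral = np.zeros((4, 3))

# Two made-up blendshape targets, expressed as displacements (deltas) from neutral.
smile_delta = np.zeros((4, 3));    smile_delta[0]    = [0.0,  1.0, 0.0]  # mouth corner up
jaw_open_delta = np.zeros((4, 3)); jaw_open_delta[1] = [0.0, -2.0, 0.0]  # jaw down

# Stack deltas into a (n_shapes, n_vertices, 3) tensor.
deltas = np.stack([smile_delta, jaw_open_delta])

def apply_blendshapes(neutral, deltas, weights):
    """Linear blendshape model: neutral + sum_i w_i * delta_i."""
    weights = np.asarray(weights, dtype=float)
    # Contract the weight vector against the first axis of the delta tensor.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Continuous slider values blend the expressions smoothly.
face = apply_blendshapes(neutral, deltas, [0.5, 0.25])
print(face[0])  # vertex 0 displaced by half a smile:      [0.  0.5 0. ]
print(face[1])  # vertex 1 displaced by a quarter jaw-open: [0.  -0.5 0. ]
```

SliderGAN's contribution, per the abstract, is learning to produce the corresponding *image* edit from these continuous coefficients, rather than deforming a mesh directly as this sketch does.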


Author(s): Ziqi Tu ◽ Dongdong Weng ◽ Dewen Cheng ◽ Yihua Bao ◽ Bin Liang ◽ ...

2012 ◽ Vol 23 (3-4) ◽ pp. 235-243
Author(s): Wan-Chun Ma ◽ Yi-Hua Wang ◽ Graham Fyffe ◽ Bing-Yu Chen ◽ Paul Debevec

Author(s): Wan-Chun Ma ◽ Yi-Hua Wang ◽ Graham Fyffe ◽ Jernej Barbič ◽ Bing-Yu Chen ◽ ...
