BiLSTM regression model for face sketch synthesis using sequential patterns
Author(s): Abduljalil Radman ◽ Shahrel Azmin Suandi

Author(s): Hongbo Bi ◽ Ziqi Liu ◽ Lina Yang ◽ Kang Wang ◽ Ning Li

2021 ◽ Vol 438 ◽ pp. 107-121 ◽ Author(s): Weiguo Wan ◽ Yong Yang ◽ Hyo Jong Lee

2018 ◽ Vol 28 (9) ◽ pp. 2154-2163 ◽ Author(s): Nannan Wang ◽ Xinbo Gao ◽ Leiyu Sun ◽ Jie Li

Author(s): Mingjin Zhang ◽ Nannan Wang ◽ Xinbo Gao ◽ Yunsong Li
Synthesizing face sketches that preserve both common and identity-specific information from photos has recently attracted considerable attention in digital entertainment. However, existing approaches either impose a strict similarity assumption between face sketches and photos, which loses some identity-specific information, or learn a direct mapping from face photos to sketches with a simple neural network, which loses some common information. In this paper, we propose a novel face sketch synthesis method based on Markov random neural fields, which comprises two structures. In the first structure, a neural network learns the non-linear photo-sketch relationship and captures the identity-specific information of the test photo, such as glasses, hairpins and hairstyles. In the second structure, the nearest neighbors of the test photo patch and of the sketch pixel synthesized in the first structure are selected from the training data, which preserves the common information of the average face (Miss or Mr Average). Experimental results on the Chinese University of Hong Kong face sketch database show that the proposed framework preserves the common structure and captures characteristic features. Compared with state-of-the-art methods, our method achieves better results in both quantitative and qualitative evaluations.
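To make the two-structure idea concrete, the sketch below illustrates one way the abstract's pipeline could be organized: structure 1 regresses a sketch pixel from a photo patch with a small neural network (identity-specific detail), and structure 2 retrieves nearest training neighbors using both the photo patch and the structure-1 estimate (common information). This is a minimal illustration, not the authors' implementation; the patch size, network shape, fusion weight, and names such as `synthesize_pixel` are assumptions, and the paper's actual fusion is performed within a Markov random field model rather than a simple average.

```python
# Minimal, illustrative sketch of the two-structure idea (assumed details, not the paper's code).
import numpy as np

PATCH = 7  # assumed square patch size


def init_mlp(in_dim, hidden=32, seed=0):
    """One-hidden-layer regressor mapping a photo patch to a sketch pixel (structure 1)."""
    rng = np.random.default_rng(seed)
    return {
        "W1": rng.normal(0, 0.1, (hidden, in_dim)), "b1": np.zeros(hidden),
        "W2": rng.normal(0, 0.1, (1, hidden)), "b2": np.zeros(1),
    }


def mlp_forward(params, x):
    """Non-linear photo-to-sketch regression for one flattened patch."""
    h = np.tanh(params["W1"] @ x + params["b1"])
    return (params["W2"] @ h + params["b2"]).item()


def synthesize_pixel(params, photo_patch, train_photo_patches, train_sketch_pixels, k=5):
    """Fuse identity-specific (structure 1) and common (structure 2) cues for one pixel."""
    x = photo_patch.ravel()
    # Structure 1: neural regression captures identity-specific detail (e.g. glasses, hairstyle).
    s1 = mlp_forward(params, x)
    # Structure 2: nearest training neighbors of the (photo patch, structure-1 pixel) pair.
    d_photo = np.linalg.norm(train_photo_patches - x, axis=1)
    d_sketch = np.abs(train_sketch_pixels - s1)
    idx = np.argsort(d_photo + d_sketch)[:k]
    s2 = train_sketch_pixels[idx].mean()  # common structure drawn from the training set
    # Simple averaged fusion here; the paper fuses the two structures within an MRF model.
    return 0.5 * (s1 + s2)
```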
