Fast Facial Image Super-Resolution via Local Linear Transformations for Resource-Limited Applications

2011 ◽  
Vol 21 (10) ◽  
pp. 1363-1377 ◽  
Author(s):  
Hua Huang ◽  
Ning Wu
Author(s):  
Hyunduk KIM ◽  
Sang-Heon LEE ◽  
Myoung-Kyu SOHN ◽  
Dong-Ju KIM ◽  
Byungmin KIM

2018 ◽  
Vol 49 (4) ◽  
pp. 1324-1338 ◽  
Author(s):  
Shyam Singh Rajput ◽  
Vijay Kumar Bohat ◽  
K. V. Arya

Author(s):  
Payman Moallem ◽  
Sayed Mohammad Mostafavi Isfahani ◽  
Javad Haddadnia

2021 ◽  
Vol 21 (3) ◽  
pp. 1-15
Author(s):  
Guangwei Gao ◽  
Dong Zhu ◽  
Huimin Lu ◽  
Yi Yu ◽  
Heyou Chang ◽  
...  

Representation-learning schemes have become highly effective for facial image super-resolution because of their efficiency. The key problem in facial image super-resolution is to reveal the latent relationship between low-resolution (LR) and corresponding high-resolution (HR) training patch pairs. To simultaneously exploit the contextual information around the target position and the manifold structure of the original HR space, in this work we design a robust context-patch facial image super-resolution scheme, kernel locality-constrained coupled-layer regression (KLC2LR), to obtain the desired HR version from an acquired LR image. KLC2LR uses contextual surrounding patches to represent the target patch and adds an HR-layer constraint to compensate for detail information. It further recovers high-frequency information by searching for nearest neighbors in the HR sample space, and it uses a kernel function to map features from the original low-dimensional space into a high-dimensional one in order to capture potential nonlinear characteristics. Comparative experiments in both noisy and noiseless cases verify that the proposed method outperforms many existing facial image super-resolution methods.
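The patch-reconstruction idea the abstract describes can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' implementation: it shows kernel-weighted, locality-constrained reconstruction of one HR patch from nearest-neighbor LR/HR exemplar pairs, and omits the contextual-patch concatenation and the coupled HR-layer constraint of the full KLC2LR scheme. All function names and parameters (`rbf_kernel`, `reconstruct_hr_patch`, `gamma`, `tau`) are assumptions for the sketch.

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    # RBF kernel as a stand-in for the abstract's kernel mapping: measures
    # patch similarity as if the patches lived in a high-dimensional space.
    return np.exp(-gamma * np.sum((a - b) ** 2, axis=-1))

def reconstruct_hr_patch(lr_patch, lr_dict, hr_dict, k=5, tau=0.1):
    """Locality-constrained weighted reconstruction of one HR patch.

    lr_patch : (d,) query LR patch, flattened
    lr_dict  : (N, d) LR training patches
    hr_dict  : (N, D) corresponding HR training patches
    """
    # Kernel similarity of the query to every LR training patch.
    sims = rbf_kernel(lr_dict, lr_patch)
    # Keep only the k most similar exemplars (nearest neighbors).
    idx = np.argsort(-sims)[:k]
    # Locality constraint: weights decay with dissimilarity, then normalize.
    w = np.exp(-(1.0 - sims[idx]) / tau)
    w /= w.sum()
    # Apply the same weights to the coupled HR exemplars.
    return w @ hr_dict[idx]
```

Because LR and HR dictionaries are coupled, the weights learned on the LR side transfer directly to the HR side; the HR-layer constraint in the paper additionally refines these weights against the HR exemplars themselves.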


Author(s):  
Yanbo Wang ◽  
Shaohui Lin ◽  
Yanyun Qu ◽  
Haiyan Wu ◽  
Zhizhong Zhang ◽  
...  

Convolutional neural networks (CNNs) are highly successful at super-resolution (SR), but they often require sophisticated architectures whose heavy memory cost and computational overhead significantly restrict practical deployment on resource-limited devices. In this paper, we propose a novel contrastive self-distillation (CSD) framework to simultaneously compress and accelerate various off-the-shelf SR models. In particular, a channel-splitting super-resolution network is first constructed from a target teacher network as a compact student network. We then propose a novel contrastive loss to improve the quality of the SR images, and the PSNR/SSIM, via explicit knowledge transfer. Extensive experiments demonstrate that the proposed CSD scheme effectively compresses and accelerates several standard SR models such as EDSR, RCAN, and CARN. Code is available at https://github.com/Booooooooooo/CSD.
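The two ingredients named above, channel splitting and a contrastive distillation loss, can be sketched as follows. This is a heavily simplified, hypothetical illustration, not the CSD implementation: it uses pixel-space L1 distances where the paper operates in a learned feature space, and the function names and `ratio` parameter are assumptions for the sketch.

```python
import numpy as np

def split_channels(weight, ratio=0.5):
    # Channel splitting: the compact student keeps the first `ratio` fraction
    # of the teacher's output and input channels of a conv weight tensor
    # (out_ch, in_ch, kh, kw), so student and teacher share those weights.
    out_c, in_c = weight.shape[:2]
    return weight[: max(1, int(out_c * ratio)), : max(1, int(in_c * ratio))]

def contrastive_distill_loss(student_sr, teacher_sr, lr_upsampled, eps=1e-8):
    # Contrastive idea: pull the student output toward the teacher's SR
    # output (positive) while pushing it away from the blurry upsampled
    # LR input (negative). Ratio form: smaller is better.
    pos = np.mean(np.abs(student_sr - teacher_sr))
    neg = np.mean(np.abs(student_sr - lr_upsampled))
    return pos / (neg + eps)
```

Minimizing the ratio rewards the student both for matching the teacher and for staying far from the degraded input, which is what distinguishes this from plain distillation on the positive term alone.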


Author(s):  
Zhenfeng Fan ◽  
Xiyuan Hu ◽  
Chen Chen ◽  
Xiaolian Wang ◽  
Silong Peng
