SVDD-Based Face Reconstruction in Degraded Images

2008 ◽  
pp. 323-337
Author(s):  
Sang-Woong Lee ◽  
Seong-Whan Lee

2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is important in practice because real images are usually degraded by compression, noise, blurring, and other distortions. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have proposed deep convolutional neural networks that combine an image restoration network with a classification network to classify degraded images. This paper proposes an alternative approach in which the classification network takes two inputs: the degraded image and an additional degradation parameter. An estimation network for the degradation parameter is also incorporated for the case where the parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
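The two-input idea above can be sketched minimally: one simple option (an assumption for illustration, not the paper's actual architecture) is to tile the scalar degradation parameter into an extra image channel before it reaches the classifier. The helper name `append_degradation_channel` and the use of NumPy are hypothetical.

```python
import numpy as np

def append_degradation_channel(image, sigma):
    """Append a constant channel holding the degradation parameter.

    image: (H, W, C) float array; sigma: scalar degradation level
    (e.g. a noise standard deviation). Returns an (H, W, C + 1) array.
    Hypothetical sketch -- the paper's network may instead feed the
    parameter through its own branch.
    """
    h, w, _ = image.shape
    param_channel = np.full((h, w, 1), sigma, dtype=image.dtype)
    return np.concatenate([image, param_channel], axis=-1)

img = np.zeros((4, 4, 3), dtype=np.float32)
x = append_degradation_channel(img, 0.1)
print(x.shape)  # (4, 4, 4)
```

A classifier trained on such stacked inputs can, in principle, condition its decision on the degradation level, which is the contrast the abstract draws against training on degraded images alone.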


Author(s):  
Hung Phuoc Truong ◽  
Thanh Phuong Nguyen ◽  
Yong-Guk Kim

We present a novel framework for efficient and robust facial feature representation based on the Local Binary Pattern (LBP), called the Weighted Statistical Binary Pattern, in which the descriptors exploit a straight-line topology along different directions. The input image is first decomposed into mean and variance moments. A new variance moment, which contains distinctive facial features, is prepared by extracting the k-th root. Sign and Magnitude components are then constructed from the mean moment along four different directions, and a weighting scheme based on the new variance moment is applied to each component. Finally, the weighted histograms of the Sign and Magnitude components are concatenated to build a novel histogram of Complementary LBP along different directions. A comprehensive evaluation on six public face datasets suggests that the proposed framework outperforms state-of-the-art methods, achieving accuracies of 98.51% on ORL, 98.72% on YALE, 98.83% on Caltech, 99.52% on AR, 94.78% on FERET, and 99.07% on KDEF. The influence of color spaces and the issue of degraded images are also analyzed with our descriptors. These results, together with their theoretical underpinning, confirm that the descriptors are robust against noise, illumination variation, diverse facial expressions, and head poses.
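As a rough illustration of the Sign and Magnitude decomposition the abstract builds on (from the Complementary LBP family), the sketch below computes both codes for the centre pixel of a 3×3 patch with NumPy. The directional straight-line topologies and the variance-based weighting are omitted; the function name and the mean-magnitude threshold are assumptions for illustration.

```python
import numpy as np

def clbp_sign_magnitude(patch):
    """Sign and Magnitude LBP codes for the centre pixel of a 3x3 patch.

    Centre-neighbour differences are split into a sign part (difference
    non-negative?) and a magnitude part (|difference| above the local
    mean magnitude?), each packed into an 8-bit code. Minimal sketch,
    not the full Weighted Statistical Binary Pattern descriptor.
    """
    c = patch[1, 1]
    # 8 neighbours, clockwise from the top-left corner
    neigh = np.array([patch[0, 0], patch[0, 1], patch[0, 2],
                      patch[1, 2], patch[2, 2], patch[2, 1],
                      patch[2, 0], patch[1, 0]], dtype=float)
    diff = neigh - c
    sign_bits = (diff >= 0).astype(int)
    mag = np.abs(diff)
    mag_bits = (mag >= mag.mean()).astype(int)  # assumed threshold: local mean
    weights = 1 << np.arange(8)                 # binary weights 1, 2, 4, ..., 128
    return int(sign_bits @ weights), int(mag_bits @ weights)

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [2, 3, 8]])
s_code, m_code = clbp_sign_magnitude(patch)
print(s_code, m_code)  # 26 102
```

In the full framework, histograms of such Sign and Magnitude codes would be weighted by the variance moment and concatenated across directions to form the final descriptor.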


2021 ◽  
Vol 70 ◽  
pp. 1-10
Author(s):  
Bharath Subramani ◽  
Ashish Kumar Bhandari ◽  
Magudeeswaran Veluchamy

Author(s):  
Hengmin Zhang ◽  
Wenli Du ◽  
Zhongmei Li ◽  
Xiaoqian Liu ◽  
Jian Long ◽  
...  

2015 ◽  
Vol 149 ◽  
pp. 1535-1543 ◽  
Author(s):  
Jian Zhang ◽  
Dapeng Tao ◽  
Xiangjuan Bian ◽  
Xiaosi Zhan

2008 ◽  
Author(s):  
Dongxing Li ◽  
Xueyi Zhang ◽  
Dong Xu ◽  
Yan Zhao
