Face occlusion removal for face recognition using the related face by structural similarity index measure and principal component analysis

2021 ◽  
pp. 1-16
Author(s):  
G. Rajeswari ◽  
P. Ithaya Rani

Facial occlusions such as sunglasses, masks, and caps severely complicate the reconstruction of the occluded regions of a face image. This paper proposes a novel hybrid machine learning approach for occlusion removal based on the Structural Similarity Index Measure (SSIM) and Principal Component Analysis (PCA), called SSIM_PCA. The proposed system comprises two stages. In the first stage, a Face Similar Matrix (FSM), guided by SSIM, is generated to provide the information needed to recover the lost regions of the face image; the FSM yields Related Face (RF) images similar to the probe image. In the second stage, these RF images serve as related information: PCA builds eigenspaces from them, and the occluded face region is reconstructed by exploiting the relationship between the occluded region and the RF images, which contain the data relevant to recovering the occluded area. Experimental results on three standard datasets, viz. CAS-PEAL-R1, IMFDB, and FEI, show that the proposed method performs well under illumination changes and occlusion of facial images.
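
As a rough illustration of this two-stage idea (not the authors' implementation), the sketch below ranks gallery faces by SSIM computed over the non-occluded pixels to obtain Related Face images, then builds a PCA eigenspace from them and fills in only the occluded region. The occlusion mask input, all parameter values, and the use of scikit-image and scikit-learn are assumptions.

```python
# Minimal sketch of an SSIM-guided RF search followed by PCA reconstruction.
# Assumes aligned grayscale faces of equal size with values in [0, 1] and a
# boolean occlusion mask (True where the probe is occluded).
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.decomposition import PCA

def related_faces(probe, gallery, mask, k=10):
    """Rank gallery faces by SSIM computed on the visible (non-occluded) pixels."""
    visible = ~mask
    scores = [ssim(probe * visible, g * visible, data_range=1.0) for g in gallery]
    best = np.argsort(scores)[::-1][:k]
    return [gallery[i] for i in best]

def reconstruct_occluded(probe, gallery, mask, k=10, n_components=8):
    rf = related_faces(probe, gallery, mask, k)            # Related Face (RF) images
    X = np.stack([f.ravel() for f in rf])                  # eigenspace training data
    pca = PCA(n_components=min(n_components, len(rf))).fit(X)
    filled = probe.copy()
    filled[mask] = pca.mean_.reshape(probe.shape)[mask]    # crude fill before projection
    recon = pca.inverse_transform(pca.transform(filled.ravel()[None, :]))
    out = probe.copy()
    out[mask] = recon.reshape(probe.shape)[mask]           # replace only occluded pixels
    return out
```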

2018 ◽  
Vol 15 (3) ◽  
pp. 172988141878311 ◽  
Author(s):  
Nannan Wang ◽  
Wenxuan Shi ◽  
Ci’en Fan ◽  
Lian Zou

Image deblurring is a challenging problem in image processing that aims to reconstruct an original high-quality image from a blurred measurement caused by factors such as imperfect focusing of the imaging system or the varying scene depths common in everyday photographs. Recently, sparse representation, whose basic idea is to code an image patch as a linear combination of a few atoms chosen from an over-complete dictionary, has shown promising results in image deblurring. Building on this and on another useful property, nonlocal self-similarity, researchers have developed nonlocal sparse regularization models that unify local sparsity and nonlocal self-similarity in a variational framework for image deblurring. In such models, the similarity measure used to search for similar image patches is indispensable and strongly influences deblurring performance. Although the traditional Euclidean distance is the usual choice of similarity metric, it can yield inferior performance because it fails to capture the intrinsic structure of image patches. Consequently, in this article, based on the structural similarity index and principal component analysis, we propose nonlocal sparse regularization-based image deblurring with two novel similarity criteria, the structural similarity distance and the principal component analysis-subspace Euclidean distance, to improve deblurring accuracy. The structural similarity index is commonly used to assess perceptual image quality, and principal component analysis is widely used in pattern recognition and dimensionality reduction. In comprehensive experiments, nonlocal sparse regularization-based deblurring with the proposed similarity criteria achieves higher peak signal-to-noise ratios and better consistency with subjective visual perception than state-of-the-art deblurring algorithms.
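
For concreteness, a minimal sketch of the two similarity criteria named above is given below; it is not the authors' code, and the patch size, component count, and use of scikit-image's SSIM are assumptions.

```python
# Hedged sketch: SSIM distance and PCA-subspace Euclidean distance for ranking
# nonlocal similar patches. Patches are assumed to be 8x8 floats in [0, 1].
import numpy as np
from skimage.metrics import structural_similarity as ssim
from sklearn.decomposition import PCA

def ssim_distance(p, q):
    """Structural similarity distance; smaller means structurally more similar."""
    return 1.0 - ssim(p, q, data_range=1.0, win_size=7)

def pca_subspace_distances(patches, n_components=8):
    """Pairwise Euclidean distances in a low-dimensional PCA subspace of the
    vectorized patches; the k smallest entries per row select similar patches."""
    X = patches.reshape(len(patches), -1)
    Z = PCA(n_components=n_components).fit_transform(X)
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)
```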


Author(s):  
Maryam Abedini ◽  
Horriyeh Haddad ◽  
Marzieh Faridi Masouleh ◽  
Asadollah Shahbahrami

This study proposes an image denoising algorithm based on sparse representation and Principal Component Analysis (PCA). The proposed algorithm proceeds as follows. First, the noisy image is divided into overlapped [Formula: see text] blocks. Second, the discrete cosine transform is applied as a dictionary for the sparse representation of the vectors created from the overlapped blocks; the sparse vectors are computed with the orthogonal matching pursuit algorithm. Then, the dictionary is updated by means of PCA to achieve the sparsest representation of the vectors. Because the signal energy, unlike the noise energy, concentrates on a small set of coefficients after the transform into the PCA domain, signal and noise can be well separated. The proposed algorithm was implemented in MATLAB and its performance was evaluated on standard grayscale images under different standard deviations of white Gaussian noise using the peak signal-to-noise ratio, the structural similarity index, and visual inspection. The experimental results demonstrate that the proposed denoising algorithm achieves significant improvement over the dual-tree complex discrete wavelet transform and K-singular value decomposition denoising methods, and obtains results competitive with block-matching and 3D filtering, the current state of the art in image denoising.
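
A rough sketch of these main steps is shown below; it assumes 8×8 blocks, an orthonormal 2-D DCT dictionary rather than an over-complete one, and off-the-shelf OMP and PCA from SciPy/scikit-learn, so it approximates the pipeline and is not the authors' MATLAB code.

```python
# Sketch: DCT dictionary, OMP sparse coding, and a PCA dictionary update.
# Blocks are assumed to be vectorized 8x8 patches with their mean removed.
import numpy as np
from scipy.fft import dct
from sklearn.linear_model import OrthogonalMatchingPursuit
from sklearn.decomposition import PCA

def dct_dictionary(b=8):
    """Orthonormal 2-D DCT synthesis dictionary; columns are b*b basis images."""
    D1 = dct(np.eye(b), norm='ortho', axis=0)      # 1-D DCT-II basis
    return np.kron(D1, D1).T                       # separable 2-D basis, (b*b, b*b)

def sparse_code(blocks, D, n_nonzero=6):
    """Code each block as a sparse combination of a few dictionary atoms."""
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
    return np.stack([omp.fit(D, y).coef_ for y in blocks])

def pca_dictionary(blocks, n_atoms=32):
    """PCA-updated dictionary: leading principal directions of the blocks."""
    return PCA(n_components=n_atoms).fit(blocks).components_.T   # (b*b, n_atoms)
```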


2005 ◽  
Vol 13 (3) ◽  
pp. 459-479 ◽  
Author(s):  
Graham Pike ◽  
Nicola Brace ◽  
Jim Turner ◽  
Sally Kynan

Knowledge concerning the cognition involved in perceiving and remembering faces has informed the design of at least two generations of facial compositing technology. These systems allow a witness to work with a computer (and a police operator) in order to construct an image of a perpetrator. Research conducted with systems currently in use has suggested that basing the construction process on the witness recalling and verbally describing the face can be problematic. To overcome these problems and make better use of witness cognition, the latest systems use a combination of Principal Component Analysis (PCA) facial synthesis and an array-based interface. The present paper describes a preliminary study conducted to determine whether the use of an array-based interface really does make appropriate use of witness cognition and what issues need to be considered in the design of emerging compositing technology.


Author(s):  
Hayder Ansaf ◽  
Hayder Najm ◽  
Jasim Mohammed Atiyah ◽  
Oday A. Hassen

Smile detection is closely tied to face detection, and numerous implementations aim to achieve a high degree of accuracy. Face smile detection is widely used in facial forensics so that future predictions about a face can be made. In chaos theory, the main strategy is to analyze a single change in depth and then predict the actual face from that analysis. In addition, Principal Component Analysis (PCA) is integrated to make the predictions more accurate. This work proposes to analyze the parallel integration of PCA and chaos theory so that face smile and fake-face identification become possible. The proposed work is evaluated with assorted parameters, and the results show that the deep learning-based integration of chaos and PCA is important and performs well across multiple parameters and different datasets.


2019 ◽  
Vol 3 (2) ◽  
pp. 80-84 ◽  
Author(s):  
Mustafa H. Mohammed Alhabib ◽  
Mustafa Zuhaer Nayef Al-Dabagh ◽  
Firas H. AL-Mukhtar ◽  
Hussein Ibrahim Hussein

Facial analysis has become a process of considerable importance because of its consequences for safety and security, both for individuals and for society as a whole, especially in personal identification. This paper applies facial identification to a facial image dataset by examining partial facial images before assigning a set of distinctive characteristics to them. The desired features are extracted from the input image by means of the wavelet transform. Principal component analysis is used for feature selection, which identifies several aspects of the input image; these features are fed to two stages of classification, using a support vector machine and a K-nearest neighbor classifier, to classify the face. The images used to test the strength of the suggested method are taken from the well-known Yale database. Test results show that the system is able to identify images and assign the correct face and name.
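
The sketch below illustrates this pipeline (wavelet features, PCA-based feature selection, then SVM and K-nearest neighbor classification); the wavelet family, decomposition level, component count, and the way the two classifier stages are combined are assumptions rather than values taken from the paper.

```python
# Illustrative two-stage classification sketch on grayscale face images.
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def wavelet_features(img, wavelet='db2', level=2):
    """Flattened approximation sub-band of a 2-D wavelet decomposition."""
    return pywt.wavedec2(img, wavelet, level=level)[0].ravel()

def train_two_stage(train_imgs, train_labels, n_components=40):
    X = np.stack([wavelet_features(im) for im in train_imgs])
    pca = PCA(n_components=min(n_components, len(X) - 1)).fit(X)   # feature selection
    Z = pca.transform(X)
    svm = SVC(kernel='linear').fit(Z, train_labels)                # first stage
    knn = KNeighborsClassifier(n_neighbors=3).fit(Z, train_labels) # second stage
    return pca, svm, knn

def predict(img, pca, svm, knn):
    z = pca.transform(wavelet_features(img)[None, :])
    # Accept the SVM label when KNN agrees; otherwise fall back to the KNN label.
    s, k = svm.predict(z)[0], knn.predict(z)[0]
    return s if s == k else k
```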


2021 ◽  
Vol 7 (2) ◽  
pp. 75
Author(s):  
Halim Bayuaji Sumarna ◽  
Ema Utami ◽  
Anggit Dwi Hartanto

Image enhancement is a procedure used to process an image so as to correct or improve its quality so that it can subsequently be analyzed for a specific purpose. Many image enhancement algorithms can be applied to an image; one usable method is the structural similarity index measure (SSIM) algorithm, which serves as a measuring tool for assessing image quality. It works by comparing the structural features of images, and image quality is described in terms of structural similarity. Besides assessing the quality of an image, SSIM can also be used to analyze differences between images, so that anomalies can be identified from the comparison of two images based on their structural data. This systematic literature review analyzes and focuses on the SSIM algorithm for detecting anomalies between two images that appear similar to the human visual system. The results of the systematic review show that using the SSIM algorithm to assess image quality correlates strongly with the human visual system (HVS), while its accuracy in image anomaly detection varies because it is affected by the light intensity and camera position used when capturing the dataset images.

Keywords: SSIM, anomaly, image, detection
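
As a small sketch of how SSIM can serve both roles described here, the snippet below (assuming two same-sized grayscale arrays scaled to [0, 1] and scikit-image's implementation) returns the global quality score together with a thresholded map of locally dissimilar, i.e. potentially anomalous, regions; the threshold value is an arbitrary assumption.

```python
# SSIM as a quality score and as a local anomaly detector between two images.
import numpy as np
from skimage.metrics import structural_similarity as ssim

def ssim_anomaly_map(img_a, img_b, threshold=0.7):
    score, ssim_map = ssim(img_a, img_b, data_range=1.0, full=True)
    anomalies = ssim_map < threshold     # low local SSIM flags structural change
    return score, anomalies
```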


2020 ◽  
Vol 9 (4) ◽  
pp. 1461-1467
Author(s):  
Indrarini Dyah Irawati ◽  
Sugondo Hadiyoso ◽  
Yuli Sun Hariyani

In this study, we propose compressive sampling for MRI reconstruction based on sparse representation using a multi-level wavelet transform, comparing the performance of wavelet decomposition Levels 1 through 4. A Gaussian random process is used to generate the measurement matrix, and the image is recovered with a sparse reconstruction algorithm. The experimental results show that multi-level wavelets can achieve a higher compression ratio but require a longer processing time. Evaluating the MRI reconstructions with the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM) shows that both values decrease as the wavelet decomposition level increases.
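
Because the abstract does not name the reconstruction algorithm, the toy sketch below uses iterative soft thresholding (ISTA) on multi-level wavelet coefficients as a stand-in for sparse recovery, together with a Gaussian measurement matrix and PSNR/SSIM scoring; the image size, wavelet family, decomposition level, and all parameter values are assumptions.

```python
# Toy compressive-sampling sketch: Gaussian measurements of a wavelet-sparse
# image, ISTA recovery, and PSNR/SSIM evaluation. Assumes a square image whose
# side is a power of two (e.g. 64x64) with values in [0, 1].
import numpy as np
import pywt
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def measure(image, m, seed=0):
    x = image.ravel()
    phi = np.random.default_rng(seed).normal(size=(m, x.size)) / np.sqrt(m)
    return phi @ x, phi                                   # measurements and matrix

def ista_recover(y, phi, shape, wavelet='haar', level=3, lam=0.01, iters=200):
    """Iterative soft thresholding on the wavelet coefficients."""
    x = np.zeros(int(np.prod(shape)))
    step = 1.0 / np.linalg.norm(phi, 2) ** 2              # Lipschitz step size
    for _ in range(iters):
        x = x + step * phi.T @ (y - phi @ x)              # gradient step on data term
        coeffs = pywt.wavedec2(x.reshape(shape), wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        arr = np.sign(arr) * np.maximum(np.abs(arr) - lam, 0.0)   # soft threshold
        coeffs = pywt.array_to_coeffs(arr, slices, output_format='wavedec2')
        x = pywt.waverec2(coeffs, wavelet).ravel()
    return x.reshape(shape)

def score(reference, reconstruction):
    """PSNR and SSIM of the reconstruction against the reference image."""
    return (peak_signal_noise_ratio(reference, reconstruction, data_range=1.0),
            structural_similarity(reference, reconstruction, data_range=1.0))
```

Varying `level` in this sketch gives a simple way to probe the decomposition-level trade-off described above.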

