The Sabancı University Dynamic Face Database (SU DFace)

2021 ◽  
Vol 21 (9) ◽  
pp. 2964
Author(s):  
Mahnoor Nadeem ◽  
Nihan Alp ◽  
Yagmur Damla Sentürk
2010 ◽  
Author(s):  
Jessie J. Peissig ◽  
Gabriela I. Vicente ◽  
Maria Bouz ◽  
Anissa Lujan

2004 ◽  
Author(s):  
Meredith Minear ◽  
Denise C. Park

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Seyed Muhammad Hossein Mousavi ◽  
S. Younes Mirinezhad

This study presents a new color-depth face database gathered from Iranian subjects of different genders and age ranges. Suitable databases make it possible to validate and assess available methods in different research fields. This database has applications in fields such as face recognition, age estimation, facial expression recognition, and facial micro-expression recognition. Color images usually consist of three channels, namely red, green, and blue, and image databases are therefore mostly large in size and resolution. In the last decade, another image type has emerged, the "depth image". Depth images encode the range, or distance, between objects and the sensor, and the way range data is acquired depends on the depth-sensor technology. The Kinect version 2 sensor can acquire color and depth data simultaneously. Facial expression recognition is an important field in image processing, with uses ranging from animation to psychology. Currently, only a few color-depth (RGB-D) facial micro-expression recognition databases exist, yet adding depth data to color data increases the accuracy of the final recognition. Because of this shortage of color-depth facial expression databases, and weaknesses in the available ones, a new RGB-D face database covering Middle-Eastern face types is presented in this paper. In the validation section, the database is compared with several well-known benchmark face databases. For evaluation, Histogram of Oriented Gradients (HOG) features are extracted, and classification algorithms such as the Support Vector Machine, the Multi-Layer Neural Network, and a deep learning method, the Convolutional Neural Network (CNN), are employed. The results are promising.
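As a rough illustration of the evaluation pipeline described above (HOG features fed to a classifier such as an SVM), the following sketch uses scikit-image and scikit-learn; the dataset loader, image size, and class count are placeholders, not details taken from the paper.

```python
# Minimal sketch of the described evaluation: HOG features + SVM classifier.
# The data below is random placeholder data; replace with the RGB-D database loader.
import numpy as np
from skimage.feature import hog
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def extract_hog(images):
    """Compute a HOG descriptor for each grayscale face image."""
    return np.array([
        hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
        for img in images
    ])

# Placeholder faces and expression labels (6 hypothetical classes).
images = [np.random.rand(128, 128) for _ in range(40)]
labels = np.random.randint(0, 6, size=40)

X = extract_hog(images)
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

A multi-layer neural network or a CNN could be substituted for the SVM in the same pipeline, as the abstract indicates.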


Author(s):  
Milind E Rane ◽  
Umesh S Bhadade

The paper proposes a t-norm-based matching-score fusion approach for a multimodal heterogeneous biometric recognition system. A two-trait multimodal recognition system is developed using the palmprint and face biometric traits. First, the palmprint and face images are pre-processed and features are extracted; a matching score is then calculated for each trait using the correlation coefficient, and the matching scores are combined using t-norm-based score-level fusion. Face databases such as Face94, Face95, Face96, FERET, and FRGC, and the IITD palmprint database, are used for training and testing the algorithm. The experimental results show that the proposed algorithm provides a Genuine Acceptance Rate (GAR) of 99.7% at a False Acceptance Rate (FAR) of 0.1% and a GAR of 99.2% at an FAR of 0.01%, significantly improving the accuracy of the biometric recognition system. Compared with existing works, the proposed algorithm provides 0.53% more accuracy at an FAR of 0.1% and 2.77% more accuracy at an FAR of 0.01%.
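The fusion step can be sketched as follows. The correlation-coefficient matcher and the Hamacher product t-norm are illustrative assumptions; the abstract states that a t-norm is used but does not specify which one.

```python
# Hedged sketch of t-norm score-level fusion for two normalized matching
# scores (face and palmprint). The Hamacher product t-norm is one common
# choice, not necessarily the exact t-norm used in the paper.
import numpy as np

def correlation_score(feat_a, feat_b):
    """Matching score as the Pearson correlation coefficient, mapped to [0, 1]."""
    r = np.corrcoef(feat_a, feat_b)[0, 1]
    return (r + 1.0) / 2.0

def hamacher_tnorm(a, b, eps=1e-12):
    """Hamacher product t-norm: T(a, b) = ab / (a + b - ab)."""
    denom = a + b - a * b
    return 0.0 if denom < eps else (a * b) / denom

# Example: fuse the face and palmprint scores for one probe-gallery pair
# (random feature vectors stand in for the real extracted features).
face_score = correlation_score(np.random.rand(256), np.random.rand(256))
palm_score = correlation_score(np.random.rand(256), np.random.rand(256))
fused = hamacher_tnorm(face_score, palm_score)
print(f"face={face_score:.3f} palm={palm_score:.3f} fused={fused:.3f}")
# The accept/reject decision is then made by thresholding the fused score;
# the threshold sets the GAR/FAR trade-off.
```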


Author(s):  
AYAN SEAL ◽  
DEBOTOSH BHATTACHARJEE ◽  
MITA NASIPURI ◽  
CONSUELO GONZALO-MARTIN

This paper presents a robust approach for recognizing thermal face images based on decision-level fusion of 34 different region classifiers. The region classifiers concentrate on local variations and use singular value decomposition (SVD) for feature extraction. The decisions of the region classifiers are fused using a majority-voting technique. The algorithm is tolerant to false exclusion of thermal information caused by inconsistent distributions of temperature statistics, which generally make identification difficult. The algorithm is extensively evaluated on the UGC-JU thermal face database and the Terravic facial infrared database, with recognition performance of 95.83% and 100%, respectively. A comparative study with existing works in the literature is also provided.
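The sketch below illustrates the general idea, one classifier per image region with SVD singular values as features and majority voting over the region decisions; the 4×4 grid, the nearest-neighbor classifier, and the data are placeholders rather than the paper's actual configuration.

```python
# Illustrative decision-level fusion: per-region classifiers trained on the
# singular values of each region, combined by majority voting.
import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

def region_features(img, grid=(4, 4)):
    """Split the image into grid cells; use each cell's singular values as its feature vector."""
    h, w = img.shape
    gh, gw = h // grid[0], w // grid[1]
    feats = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = img[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
            feats.append(np.linalg.svd(cell, compute_uv=False))
    return feats  # one feature vector per region

# Placeholder training data: thermal face crops and subject labels.
train_imgs = [np.random.rand(64, 64) for _ in range(20)]
train_labels = np.repeat(np.arange(5), 4)

n_regions = 16
region_clfs = []
for r in range(n_regions):
    X = np.array([region_features(img)[r] for img in train_imgs])
    region_clfs.append(KNeighborsClassifier(n_neighbors=1).fit(X, train_labels))

def predict(img):
    """Majority vote over the per-region classifier decisions."""
    votes = [clf.predict(region_features(img)[r].reshape(1, -1))[0]
             for r, clf in enumerate(region_clfs)]
    return Counter(votes).most_common(1)[0][0]

print("predicted subject:", predict(train_imgs[0]))
```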


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4237 ◽  
Author(s):  
Yu-Xin Yang ◽  
Chang Wen ◽  
Kai Xie ◽  
Fang-Qing Wen ◽  
Guan-Qun Sheng ◽  
...  

Face recognition in complex environments is vulnerable to illumination change, object rotation, occlusion, and similar factors, which lead to imprecise target positions; to address this, a face recognition algorithm with multi-feature fusion is proposed. This study presents a new robust face-matching method named SR-CNN, combining the rotation-invariant texture feature (RITF) vector, the scale-invariant feature transform (SIFT) vector, and a convolutional neural network (CNN). Furthermore, a graphics processing unit (GPU) is used to parallelize the model for optimal computational performance. The Labeled Faces in the Wild (LFW) database and a self-collected face database were selected for the experiments. On the LFW database, the true positive rate is improved by 10.97–13.24% and the acceleration ratio (the ratio between central processing unit (CPU) operation time and GPU time) is 5–6. On the self-collected database, the true positive rate increased by 12.65–15.31%, and the acceleration ratio improved by a factor of 6–7.
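The sketch below shows the general multi-feature fusion idea, concatenating a handcrafted descriptor with a deep feature vector before matching. It uses OpenCV SIFT and a stub embedding function in place of the RITF descriptor and the SR-CNN network, which are not reproduced here.

```python
# Rough sketch of multi-feature fusion for face matching: a handcrafted
# descriptor (OpenCV SIFT, averaged over keypoints) concatenated with a deep
# feature vector, compared by cosine similarity. cnn_embedding() is a stand-in
# for any pretrained face CNN, not the SR-CNN from the paper.
import numpy as np
import cv2

def sift_descriptor(gray):
    """Average SIFT descriptor of a grayscale face image (128-D)."""
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    return desc.mean(axis=0) if desc is not None else np.zeros(128)

def cnn_embedding(gray):
    """Placeholder for a pretrained CNN embedding (512-D); deterministic noise here."""
    rng = np.random.default_rng(int(gray.sum()) % (2 ** 32))
    return rng.standard_normal(512)

def fused_feature(gray):
    """L2-normalize each feature type and concatenate them."""
    parts = [sift_descriptor(gray), cnn_embedding(gray)]
    parts = [p / (np.linalg.norm(p) + 1e-12) for p in parts]
    return np.concatenate(parts)

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Example: compare two face crops (uint8 grayscale arrays).
img1 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
img2 = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
print("similarity:", cosine_similarity(fused_feature(img1), fused_feature(img2)))
```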


2017 ◽  
Vol 12 (3) ◽  
pp. 252-260 ◽  
Author(s):  
Chayanut Petpairote ◽  
Suthep Madarasmi ◽  
Kosin Chamnongthai
