Person Authentication Based on the Difference of Deep Features Extracted from the Ocular and Face Regions
This paper presents a new method for person authentication that relies on the fusion of two biometric authentication methods based, respectively, on ocular deep features and facial deep features. In our work, the deep features are extracted from the regions of interest by using a very deep CNN (Convolutional Neural Network). Another interesting aspect of our work is that, instead of directly using the deep features as input for the authentication methods, we use the difference between the probe and gallery deep features; our method therefore adopts a pairwise strategy. Support Vector Machine classifiers are trained separately for each approach, and the fusion of the ocular-based and facial-based methods is carried out at the score level. The proposed method was assessed on a facial database acquired under an uncontrolled environment and achieved good results. Moreover, the proposed fusion strategy outperformed each of the individual methods.
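The pairwise strategy and score-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: random vectors stand in for the CNN deep features, the kernel choice and the averaging fusion rule are assumptions, and all names are hypothetical.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_pairs(dim, n_subjects=20):
    # Hypothetical stand-in for CNN deep features: each subject gets a
    # characteristic offset plus per-sample noise.
    noise = rng.normal(size=(n_subjects, 2, dim))
    offset = rng.normal(scale=3.0, size=(n_subjects, 1, dim))
    feats = noise + offset
    X, y = [], []
    for i in range(n_subjects):
        # Genuine pair: probe and gallery from the same subject (label 1).
        X.append(feats[i, 0] - feats[i, 1]); y.append(1)
        # Impostor pair: probe and gallery from different subjects (label 0).
        j = (i + 1) % n_subjects
        X.append(feats[i, 0] - feats[j, 0]); y.append(0)
    return np.array(X), np.array(y)

# One SVM per modality, trained on difference vectors (pairwise strategy).
X_ocular, y = make_pairs(dim=64)
X_face, _ = make_pairs(dim=128)
svm_ocular = SVC(kernel="rbf").fit(X_ocular, y)
svm_face = SVC(kernel="rbf").fit(X_face, y)

# Score-level fusion: combine the two classifiers' decision scores
# (a simple average is assumed here for illustration).
fused = 0.5 * (svm_ocular.decision_function(X_ocular)
               + svm_face.decision_function(X_face))
pred = (fused > 0).astype(int)
acc = (pred == y).mean()
```

Because genuine difference vectors cluster near the origin while impostor differences do not, a nonlinear kernel is used in this sketch; the fused score simply averages the two per-modality SVM decision values before thresholding.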