Transferable deep convolutional neural network features for fingervein presentation attack detection

Author(s):  
R. Raghavendra ◽  
Sushma Venkatesh ◽  
Kiran B. Raja ◽  
Christoph Busch
Author(s):  
Anjith George ◽  
Zohreh Mostaani ◽  
David Geissenbuhler ◽  
Olegs Nikisins ◽  
Andre Anjos ◽  
...  

Entropy ◽  
2020 ◽  
Vol 22 (11) ◽  
pp. 1296
Author(s):  
M. Araceli Sánchez-Sánchez ◽  
Cristina Conde ◽  
Beatriz Gómez-Ayllón ◽  
David Ortega-DelCampo ◽  
Aristeidis Tsitiridis ◽  
...  

Automated border control systems are the first critical infrastructure point when crossing a country's border. Unauthorized passengers crossing border lines pose a high security risk to any country. This paper presents a multispectral analysis of presentation attack detection for facial biometrics using features learned by a convolutional neural network. Three sensors were used to design and build a new database composed of visible (VIS), near-infrared (NIR), and thermal images. Most studies are based on laboratory conditions or ideal, controlled environments. In a real scenario, however, a subject's state is substantially altered by diverse physiological conditions, such as stress, temperature changes, sweating, and increased blood pressure. For this reason, the added value of this study is that the database was acquired in situ. The attacks considered were printed, masked, and displayed images. In addition, five classifiers were used to detect the presentation attacks. Thermal sensors provided better performance than the other sensors, and the best results were obtained when all sensors were used together, regardless of whether classifier-level or feature-level fusion was applied. Finally, classifiers such as KNN and SVM showed high performance at low computational cost.
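The pipeline described above (per-sensor CNN features, feature-level fusion by concatenation, then a lightweight classifier such as SVM or KNN) can be sketched as follows. This is an illustrative toy, not the authors' pipeline: the synthetic feature generator, feature dimensions, and classifier settings are all assumptions standing in for real VIS/NIR/thermal CNN embeddings.

```python
# Hypothetical sketch of feature-level fusion for multispectral
# presentation attack detection. Feature dimensions, the synthetic
# data, and classifier settings are illustrative assumptions, not
# the authors' exact pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def fake_cnn_features(n, dim, attack_shift):
    """Stand-in for CNN features from one sensor (VIS, NIR, or thermal):
    n bona-fide samples followed by n attack samples."""
    bona_fide = rng.normal(0.0, 1.0, (n, dim))
    attack = rng.normal(attack_shift, 1.0, (n, dim))
    return np.vstack([bona_fide, attack])

n = 100
vis = fake_cnn_features(n, 32, 1.5)
nir = fake_cnn_features(n, 32, 1.5)
thermal = fake_cnn_features(n, 32, 2.5)  # thermal assumed most discriminative
labels = np.array([0] * n + [1] * n)     # 0 = bona fide, 1 = attack

# Feature-level fusion: concatenate the per-sensor feature vectors.
fused = np.hstack([vis, nir, thermal])

X_tr, X_te, y_tr, y_te = train_test_split(
    fused, labels, test_size=0.3, random_state=0, stratify=labels)

# The abstract's low-cost classifiers: linear SVM and KNN.
for clf in (SVC(kernel="linear"), KNeighborsClassifier(n_neighbors=5)):
    clf.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, clf.predict(X_te))
    print(type(clf).__name__, round(acc, 2))
```

Classifier-level fusion would instead train one classifier per sensor and combine their scores or votes; the concatenation above is the feature-level variant.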


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetrical beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial light intensity on the beam angle was obtained. Using this collection, transfer learning of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can use this new CNN-based model to determine the class of a spotlight and the corresponding type of lens with its technical parameters.

