Modeling Fingerprint Presentation Attack Detection Through Transient Liveness Factor-A Person Specific Approach

2021 ◽  
Vol 38 (2) ◽  
pp. 299-307
Author(s):  
Akhilesh Verma ◽  
Vijay Kumar Gupta ◽  
Savita Goel ◽  
Akbar ◽  
Arun Kumar Yadav ◽  
...  

A self-learning, secure, and independent open-set solution needs to be explored to characterise the liveness of a fingerprint presentation. A spoof presentation classified as live (a Type-I error) is a major problem in high-security establishments, and such errors typically arise because only a small number of spoof samples are available for training. We propose to overcome this challenge by using live samples only. We put forward an adaptive fingerprint presentation attack detection (FPAD) scheme based on the interpretation of live samples. It requires an initial set of high-quality live fingerprint samples from the person concerned, extracts six image quality metrics from each live sample as transient attributes, and records them as a 'Transient Liveness Factor' (TLF). The scheme is validated with a fusion rule applied over three outlier detection algorithms: one-class support vector machine (SVM), isolation forest, and local outlier factor. The proposed open-set method achieved 100% accuracy in spoof detection. Further, the study discusses open issues in person-specific spoof detection for cloud-based solutions.
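A minimal sketch of the person-specific outlier-detection stage described above, assuming the six image quality metrics have already been extracted into one feature vector per live sample; the majority-vote fusion rule, hyperparameters, and placeholder data are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

# Hypothetical input: one row per enrolled live sample, six image-quality
# metrics per row (standing in for the paper's TLF features).
live_tlf = np.random.rand(30, 6)     # placeholder for real live samples
probe_tlf = np.random.rand(5, 6)     # placeholder for presentation probes

# Train one detector per algorithm on live samples only (open-set setting).
detectors = [
    OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(live_tlf),
    IsolationForest(contamination=0.05, random_state=0).fit(live_tlf),
    LocalOutlierFactor(n_neighbors=10, novelty=True).fit(live_tlf),
]

# Illustrative fusion rule: a probe is accepted as live only if a majority
# of the detectors predict it as an inlier (+1).
votes = np.stack([d.predict(probe_tlf) for d in detectors])  # (3, n_probes)
is_live = (votes == 1).sum(axis=0) >= 2
print(is_live)
```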

Author(s):  
Akhilesh Verma ◽  
Vijay Kumar Gupta ◽  
Savita Goel

Background: Fingerprint presentation attack detection (FPAD) proposals have emerged in a variety of forms. Close-set approaches use pattern classification techniques tuned to a specific context and goal. Open-set approaches work in a wider context and are comparatively robust to new fabrication materials and independent of sensor type. In both cases results were promising but not very generalizable, because unseen conditions did not fit the methods used. Two key challenges of FPAD systems, sensor interoperability and robustness to new fabrication materials, have therefore not been fully addressed to date. Objective: To address these challenges, a liveness detection model is proposed that uses live samples only, combining a transient liveness factor with a one-class CNN. Methods: In our architecture, liveness is predicted by a fusion rule, the score-level fusion of two decisions. First, 'n' high-quality live samples are screened for quality. We have observed that fingerprint liveness information is transitory in nature; variation across live samples is natural, so each live sample carries its own 'transient liveness' (TL) information. We use a no-reference (NR) image quality measure (IQM) as the transient value for each live sample, and a consensus agreement over these transient values is used to flag adversarial input. In parallel, live samples at the server are trained, with augmented inputs, on a one-class classifier to predict outliers. Score-level fusion of the consensus agreement and the appropriately characterized negative cases (outliers) then predicts liveness. Results: Our approach uses only 30 high-quality live samples, out of the 90 images available per person, to reduce learning time. We used the Time Series images from the LivDet 2015 competition, which provides 90 live images and 45 spoof images (made from Bodydouble, Ecoflex, and Playdoh) per person. The fusion rule yields 100% accuracy in recognising live samples as live. Conclusion: We have presented an architecture for a liveness server that extracts and updates the transient liveness factor. This work is a step towards a generalized and reproducible process, with a view to the universal scheme that is needed today. The proposed TLF approach rests on a solid presumption: it should address dataset heterogeneity because it incorporates a wider scope and context. Similar results with other datasets are under validation. Implementation seems difficult at present but has several advantages when carried out as part of a transformative process.
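A rough sketch of the score-level fusion described in the Methods, assuming the NR-IQM transient values are already available as one scalar per sample. The consensus statistic (distance from the median of enrolled transient values), the one-class SVM standing in for the one-class CNN, and the fusion weights are illustrative assumptions rather than the authors' exact design.

```python
import numpy as np
from sklearn.svm import OneClassSVM

# Enrolled transient values: one NR-IQM score per high-quality live sample.
enrolled_tl = np.random.rand(30, 1)          # placeholder values

# Decision 1: consensus agreement -- how far a probe's transient value lies
# from the consensus (median) of the enrolled live samples.
center = np.median(enrolled_tl)
spread = np.median(np.abs(enrolled_tl - center)) + 1e-9

def consensus_score(tl_value):
    # Higher score = closer to the live consensus.
    return 1.0 / (1.0 + abs(tl_value - center) / spread)

# Decision 2: one-class classifier trained on the enrolled live samples
# (a stand-in here for the one-class CNN mentioned in the abstract).
occ = OneClassSVM(nu=0.05, gamma="scale").fit(enrolled_tl)

def fused_liveness(tl_value, w=0.5, threshold=0.5):
    s1 = consensus_score(tl_value)
    s2 = 1.0 if occ.predict([[tl_value]])[0] == 1 else 0.0
    return w * s1 + (1 - w) * s2 >= threshold   # score-level fusion

print(fused_liveness(float(enrolled_tl.mean())))
```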


2021 ◽  
Author(s):  
David S. Watson ◽  
Marvin N. Wright

We propose the conditional predictive impact (CPI), a consistent and unbiased estimator of the association between one or several features and a given outcome, conditional on a reduced feature set. Building on the knockoff framework of Candès et al. (J R Stat Soc Ser B 80:551–577, 2018), we develop a novel testing procedure that works in conjunction with any valid knockoff sampler, supervised learning algorithm, and loss function. The CPI can be efficiently computed for high-dimensional data without any sparsity constraints. We demonstrate convergence criteria for the CPI and develop statistical inference procedures for evaluating its magnitude, significance, and precision. These tests aid in feature and model selection, extending traditional frequentist and Bayesian techniques to general supervised learning tasks. The CPI may also be applied in causal discovery to identify underlying multivariate graph structures. We test our method using various algorithms, including linear regression, neural networks, random forests, and support vector machines. Empirical results show that the CPI compares favorably to alternative variable importance measures and other nonparametric tests of conditional independence on a diverse array of real and synthetic datasets. Simulations confirm that our inference procedures successfully control Type I error with competitive power in a range of settings. Our method has been implemented in an R package, cpi, which can be downloaded from https://github.com/dswatson/cpi.
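An informal Python sketch of the core CPI computation that the abstract describes at a high level (the authors' implementation is the R package cpi). The marginal resampling used below is a crude placeholder for a valid knockoff sampler, and the learner, loss, and synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Illustrative data: y depends on the first two of five features.
X = rng.normal(size=(500, 5))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

def cpi_for_feature(j):
    # Per-observation squared-error loss on the untouched test data.
    loss_orig = (y_te - model.predict(X_te)) ** 2

    # Placeholder "knockoff": resample feature j from its marginal. A real
    # analysis would use a valid knockoff sampler that preserves the joint
    # dependence among features.
    X_ko = X_te.copy()
    X_ko[:, j] = rng.permutation(X_te[:, j])
    loss_ko = (y_te - model.predict(X_ko)) ** 2

    # CPI = mean loss increase when feature j is replaced; one-sided paired test.
    delta = loss_ko - loss_orig
    t, p = stats.ttest_1samp(delta, 0.0, alternative="greater")
    return delta.mean(), p

for j in range(X.shape[1]):
    cpi, p = cpi_for_feature(j)
    print(f"feature {j}: CPI={cpi:.3f}, p={p:.3g}")
```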


F1000Research ◽  
2019 ◽  
Vol 8 ◽  
pp. 1508
Author(s):  
Gian Marco Duma ◽  
Giovanni Mento ◽  
Luca Semenzato ◽  
Patrizio Tressoldi

Background: In this study, we investigated the neural correlates of anticipatory activity for randomly presented faces and sounds of both high and low arousal level by recording EEG with a high-spatial-resolution system. Methods: We preregistered three hypotheses: 1) a difference in Contingent Negative Variation (CNV) amplitude between auditory and face stimuli; 2) a greater CNV amplitude for high-arousal than for low-arousal stimuli, for both auditory and face stimuli, in the temporal window from 0 to 1000 ms before stimulus presentation; 3) in the same time window, sensory-specific activation at the brain source level in the temporal lobe and auditory cortex before the presentation of an auditory stimulus, and activation of occipital areas, dedicated to the processing of visual stimuli, before the presentation of faces. Results: Using the preregistered, hypothesis-driven approach, we found no statistically significant differences in the CNV, owing to an overly conservative correction for multiple comparisons to control the Type I error. By contrast, using a data-driven approach based on a machine learning algorithm (Support Vector Machine), we found a significantly larger amplitude in the occipital cluster of electrodes before the presentation of faces than of sounds, along with a larger amplitude in the right auditory cortex before the presentation of sounds than of faces. Furthermore, we found greater CNV activity in the late prestimulus interval for high- vs. low-arousal sound stimuli in the left centro-posterior scalp regions. Conclusions: These findings, although preliminary, seem to support the hypothesis that neurophysiological anticipatory activity for random events is specifically driven by either the sensory characteristics or the arousal level of the upcoming stimuli.
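The data-driven analysis relies on a Support Vector Machine applied to prestimulus EEG amplitudes. Below is a minimal, self-contained sketch of that kind of decoding pipeline, with random placeholder data and an arbitrary electrode-by-time layout standing in for the recorded EEG; it is not the authors' analysis code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# Hypothetical prestimulus data: trials x (electrodes * time bins) of mean
# amplitude in the 0-1000 ms window before stimulus onset.
n_trials, n_features = 200, 64 * 10
X = np.random.randn(n_trials, n_features)      # placeholder EEG features
y = np.random.randint(0, 2, n_trials)          # 0 = sound, 1 = face

# Linear SVM with standardization, evaluated by cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```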


2021 ◽  
Author(s):  
Marc J Lanovaz ◽  
Rachel Primiani

Researchers and practitioners often use single-case designs (SCDs), or n-of-1 trials, to develop and validate novel treatments. Standards and guidelines have been published to provide guidance on how to implement SCDs, but many of their recommendations are not derived from the research literature. For example, one of these recommendations suggests that researchers and practitioners should wait for baseline stability before introducing the independent variable. However, this recommendation is not strongly supported by empirical evidence. To address this issue, we used a Monte Carlo simulation to generate a total of 480,000 AB graphs with fixed, response-guided, and random baseline lengths. Our analyses then compared the Type I error rate and power produced by two methods of analysis: the conservative dual-criteria method (a structured visual aid) and a support vector classifier (a model derived from machine learning). The conservative dual-criteria method produced more power when using response-guided decision-making (i.e., waiting for stability), with negligible effects on the Type I error rate. In contrast, waiting for stability did not reduce decision-making errors with the support vector classifier. Our findings question the necessity of waiting for baseline stability when using SCDs with machine learning, but the study must be replicated with other designs to support our results.
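For concreteness, here is a rough Python implementation of the conservative dual-criteria method as it is usually described (baseline mean and trend lines shifted by 0.25 baseline SDs, with a binomial criterion on the treatment-phase points). The alpha = .05 binomial test replaces the published look-up table, and the example data are invented; this is a sketch of the visual aid compared in the study, not the authors' simulation code.

```python
import numpy as np
from scipy import stats

def conservative_dual_criteria(baseline, treatment, increase_expected=True):
    """Apply the conservative dual-criteria (CDC) aid to a single AB graph."""
    baseline = np.asarray(baseline, dtype=float)
    treatment = np.asarray(treatment, dtype=float)
    shift = 0.25 * baseline.std(ddof=1)
    sign = 1.0 if increase_expected else -1.0

    # Mean line and OLS trend line fitted on the baseline, shifted by 0.25 SD.
    x_base = np.arange(len(baseline))
    slope, intercept, *_ = stats.linregress(x_base, baseline)
    x_treat = np.arange(len(baseline), len(baseline) + len(treatment))
    mean_line = baseline.mean() + sign * shift
    trend_line = intercept + slope * x_treat + sign * shift

    # Count treatment points beyond BOTH criterion lines.
    if increase_expected:
        hits = np.sum((treatment > mean_line) & (treatment > trend_line))
    else:
        hits = np.sum((treatment < mean_line) & (treatment < trend_line))

    # Binomial criterion: more hits than expected by chance (p = .5).
    p = stats.binomtest(int(hits), len(treatment), 0.5,
                        alternative="greater").pvalue
    return int(hits), bool(p < 0.05)

baseline = [3, 4, 3, 5, 4, 3, 4, 5]
treatment = [6, 7, 6, 8, 7, 9, 8, 7]
print(conservative_dual_criteria(baseline, treatment))
```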


2012 ◽  
pp. 1108-1127 ◽  
Author(s):  
Gang Wang ◽  
Jin-xing Hao ◽  
Jian Ma ◽  
Li-hua Huang

Credit scoring is an important finance activity. Both statistical techniques and Artificial Intelligence (AI) techniques have been explored for this topic, but different techniques have different advantages and disadvantages on different datasets. Recent studies draw no consistent conclusion that one technique is superior to the others, while they suggest that combining multiple classifiers, i.e., ensemble learning, may perform better. In this study, we conduct an empirical evaluation of three popular ensemble methods, i.e., bagging, boosting, and stacking, based on four base learners, i.e., Logistic Regression Analysis (LRA), Decision Tree (DT), Artificial Neural Network (ANN), and Support Vector Machine (SVM). The experiments use a credit dataset containing the financial records of 239 Chinese companies, collected by the Industrial and Commercial Bank of China. Results reveal that ensemble learning can substantially improve on the individual base learners. In our experiments, stacking achieves the best performance on all six performance indicators, i.e., Type I error, Type II error, average accuracy, precision, recall, and F-value.
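As a concrete illustration of the stacking setup described above, the sketch below combines the four base learners named in the abstract under a logistic-regression meta-learner, using scikit-learn and a synthetic dataset in place of the non-public ICBC credit data; the meta-learner choice and hyperparameters are assumptions, not the study's configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the credit dataset (239 firms, good/bad label).
X, y = make_classification(n_samples=239, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Four base learners named in the abstract: LRA, DT, ANN, SVM.
base_learners = [
    ("lra", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("ann", make_pipeline(StandardScaler(),
                          MLPClassifier(max_iter=2000, random_state=0))),
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
]

# Stacking: a logistic-regression meta-learner combines the base predictions.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000),
                           cv=5)
stack.fit(X_tr, y_tr)
print(f"test accuracy: {stack.score(X_te, y_te):.3f}")
```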

