equal error rate
Recently Published Documents


TOTAL DOCUMENTS

71
(FIVE YEARS 41)

H-INDEX

5
(FIVE YEARS 3)

2022 ◽  
Vol 16 (1) ◽  
pp. 1-62
Author(s):  
Nampoina Andriamilanto ◽  
Tristan Allard ◽  
Gaëtan Le Guelvouit ◽  
Alexandre Garel

Modern browsers give access to several attributes that can be collected to form a browser fingerprint. Although browser fingerprints have primarily been studied as a web tracking tool, they can contribute to improving the current state of web security by augmenting web authentication mechanisms. In this article, we investigate the adequacy of browser fingerprints for web authentication. We make the link between the digital fingerprints that distinguish browsers and the biological fingerprints that distinguish humans, to evaluate browser fingerprints according to properties inspired by biometric authentication factors. These properties include their distinctiveness, their stability through time, their collection time, their size, and the accuracy of a simple verification mechanism. We assess these properties on a large-scale dataset of 4,145,408 fingerprints composed of 216 attributes and collected from 1,989,365 browsers. We show that, by time-partitioning our dataset, more than 81.3% of our fingerprints are shared by a single browser. Although browser fingerprints are known to evolve, an average of 91% of the attributes of our fingerprints stay identical between two observations, even when separated by nearly six months. Regarding performance, our fingerprints weigh about a dozen kilobytes and take a few seconds to collect. Finally, using a simple verification mechanism, we show that it achieves an equal error rate of 0.61%. We enrich our results with an analysis of the correlation between the attributes and their contribution to the evaluated properties. We conclude that our browser fingerprints carry the promise of strengthening web authentication mechanisms.
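The equal error rate reported here, and throughout the abstracts on this page, is the operating point at which the false acceptance rate (FAR) equals the false rejection rate (FRR). A minimal, illustrative sketch of estimating it from lists of genuine and impostor match scores; the function name and the toy scores are assumptions, not the authors' code:

```python
def equal_error_rate(genuine, impostor):
    """Approximate the EER by sweeping a decision threshold over all scores.

    Higher scores are assumed to mean "more likely a genuine match".
    Returns (FAR + FRR) / 2 at the threshold where they are closest.
    """
    thresholds = sorted(set(genuine) | set(impostor))
    best_gap, best_eer = float("inf"), None
    for t in thresholds:
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

With perfectly separated score distributions the EER is 0; overlap between genuine and impostor scores pushes it up.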


Author(s):  
Amitabh Thapliyal ◽  
Om Prakash Verma ◽  
Amioy Kumar

The usage of mobile phones has increased multifold in recent decades, mostly because of their utility in many aspects of daily life, such as communication, entertainment, and financial transactions. Feature phones are keyboard-based or lower-end versions of touch-based mobile phones, targeted mainly at efficient calling and messaging. In comparison to smartphones, feature phones have no provision for a biometric system for user access. The literature shows very few attempts at designing a biometric system suitable for low-cost feature phones. A biometric system utilizes features and attributes based on the physiological or behavioral properties of the individual. In this research, we explore the usefulness of keystroke dynamics for feature phones, which offers an efficient and versatile biometric framework. We suggest an approach that incorporates the user's typing patterns to enhance the security of the feature phone. We applied k-nearest neighbors (k-NN) with fuzzy logic and achieved an equal error rate (EER) of 1.88%. The experiments were performed with 25 users on a Samsung On7 Pro C3590. In comparison, our proposed technique is competitive with almost all other techniques available in the literature.


2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Vani Rajasekar ◽  
Bratislav Predić ◽  
Muzafer Saracevic ◽  
Mohamed Elhoseny ◽  
Darjan Karabasevic ◽  
...  

Abstract: Biometric security is a major emerging concern in the field of data security. In recent years, research initiatives in the field of biometrics have grown at an exponential rate. A multimodal biometric technique with enhanced accuracy and recognition rate for smart cities is still a challenging issue. This paper proposes an enhanced multimodal biometric technique for a smart city that is based on score-level fusion. Specifically, the proposed approach addresses the existing challenges by providing a multimodal fusion technique with an optimized fuzzy genetic algorithm, yielding enhanced performance. Experiments with different biometric environments reveal significant improvements over existing strategies. The result analysis shows that the proposed approach performs better in terms of the false acceptance rate, false rejection rate, equal error rate, precision, recall, and accuracy. The proposed scheme achieves a high accuracy rate of 99.88% and a low equal error rate of 0.18%. The vital part of this approach is the inclusion of a fuzzy strategy with soft computing techniques, known as an optimized fuzzy genetic algorithm.
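Score-level fusion combines the matcher outputs of several modalities into one decision score, usually after normalizing each modality's score range. A minimal weighted-sum sketch; the fixed weight here is an illustrative stand-in for the weights the paper tunes with its optimized fuzzy genetic algorithm:

```python
def min_max_normalize(scores):
    """Map one modality's scores onto [0, 1] so modalities are comparable."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def fuse_scores(scores_a, scores_b, w=0.6):
    """Weighted-sum score-level fusion of two biometric modalities.

    w is a fixed illustrative weight; in the paper the fusion parameters
    are optimized by a fuzzy genetic algorithm rather than hand-picked.
    """
    na, nb = min_max_normalize(scores_a), min_max_normalize(scores_b)
    return [w * a + (1 - w) * b for a, b in zip(na, nb)]
```

Thresholding the fused score then gives the FAR/FRR curve from which the reported EER of 0.18% would be read off.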


2021 ◽  
Author(s):  
David C. Yonekura ◽  
Elloá B. Guedes

Handwritten signature authentication systems are important in many real-world scenarios to avoid fraud. Thanks to deep learning, state-of-the-art solutions have been proposed for this problem using Convolutional Neural Networks, but other models in this machine learning subarea remain to be further explored. In this perspective, the present article introduces a Conditional Deep Convolutional Generative Adversarial Network (cDCGAN) approach whose experimental results on a realistic dataset with skilled forgeries show an Equal Error Rate (EER) of 18.53% and a balanced accuracy of 87.91%. These results validate a writer-dependent cDCGAN-based solution to the signature authentication problem in a real-world scenario where no forgeries are available or required at training time.


2021 ◽  
Vol 15 (4) ◽  
pp. 98-117
Author(s):  
Priya C. V. ◽  
K. S. Angel Viji

In a password-based authentication technique, whenever the typed password and username match the system database, the secure login page allows the client to access it. Beyond the password match, the proposed method checks the similarity between the typing rhythm of the entered password and the rhythm of the password samples in the client's database. In this paper, a novel algorithmic procedure is presented to authenticate the legal client based on empirical threshold values obtained from the timing information of the client's keystroke dynamics. The experimental outcomes demonstrate a notable decrease in both the false rejection rate and the false acceptance rate. The equal error rate and authentication accuracy are also assessed to show the superiority and robustness of the method. Therefore, the proposed keystroke dynamics-based authentication method can be valuable in strengthening system protection, as a complementary or substitute form of client validation and as a useful resource for identifying illegal intrusion.
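An empirical-threshold check of the kind described here can be sketched as a per-feature tolerance test: each timing feature of the login attempt must fall within a tolerance band learned from the client's enrollment samples. Names and values below are illustrative assumptions, not the paper's parameters:

```python
def authenticate_rhythm(timings, profile_mean, profile_tol):
    """Accept only if every timing feature lies within its empirical tolerance.

    timings      -- dwell/flight times measured while typing the password
    profile_mean -- per-feature mean from the client's enrollment samples
    profile_tol  -- per-feature empirical threshold (tolerance band)

    Widening the tolerances lowers the FRR but raises the FAR, and vice
    versa; the EER is the balance point between the two.
    """
    return all(abs(t - m) <= tol
               for t, m, tol in zip(timings, profile_mean, profile_tol))
```

The password match remains the first gate; this rhythm check is the second.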


2021 ◽  
Author(s):  
Poonam Poonia ◽  
Pawan K. Ajmera

Abstract: Biometric systems have proven to be among the most reliable and robust methods for human identification. The integration of biometrics into everyday life provokes the necessity to design secure authentication systems. The use of palm-prints for user access and authentication has increased in the last decade. To provide the essential security and privacy benefits, convolutional neural networks (CNNs) are employed in this work. The combined CNN and feature-transform structure is employed for mapping palm-prints to random base-n codes. Further, the secure hash algorithm SHA-3 is used to generate secure palm-print templates. The proficiency of the proposed approach has been tested on the PolyU, CASIA, and IIT-Delhi palm-print datasets. The best recognition performance, an Equal Error Rate (EER) of 0.62% and a Genuine Acceptance Rate (GAR) of 99.05%, was achieved on the PolyU database.
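The template-protection step can be illustrated with Python's standard library, which exposes SHA-3 via `hashlib`. This sketch assumes, as the abstract implies, that the CNN + feature-transform stage maps a genuine palm-print to a stable code, so verification reduces to an exact digest match; the salt and function names are illustrative:

```python
import hashlib

def secure_template(feature_code: bytes, salt: bytes) -> str:
    """Hash a quantized palm-print code so the raw template is never stored."""
    return hashlib.sha3_256(salt + feature_code).hexdigest()

def verify_template(candidate_code: bytes, salt: bytes, stored: str) -> bool:
    # Exact match is required: the mapping stage is assumed to produce the
    # same base-n code for every genuine presentation of the same palm.
    return secure_template(candidate_code, salt) == stored
```

Storing only the digest means a database leak reveals neither the palm-print nor the intermediate feature code.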


2021 ◽  
Vol 17 (2) ◽  
pp. 1-22
Author(s):  
Chaohao Li ◽  
Xiaoyu Ji ◽  
Bin Wang ◽  
Kai Wang ◽  
Wenyuan Xu

Indoor proximity verification has become an increasingly useful primitive for scenarios where access is granted to previously unknown users when they enter a given area (e.g., a hotel room). Existing solutions either rely on homogeneous sensing modalities shared by the two parties or require additional human interactions. In this article, we propose a context-based indoor proximity verification scheme, called SenCS, to enable real-time autonomous access for mobile devices, utilizing the available heterogeneous sensors at the user side and at the room side. The intuition is that only when the user is within a room can sensors from both sides observe the same events in the room. Yet such a solution is challenging, because the events may not provide enough entropy within the required time, and the heterogeneous sensing modalities may not always agree on the sensed events. To overcome these challenges, we exploit the time intervals between successive human actions to create heterogeneous contextual fingerprints (HCFs) at the millisecond level. By comparing the contextual similarity between the HCFs from the room and user sides, SenCS accomplishes indoor proximity verification. Through a proof-of-concept implementation and evaluations with 30 participants, SenCS achieves an accuracy of 99.77% and an equal error rate (EER) of 0.23% across various hardware configurations.
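The core idea, comparing inter-event intervals rather than raw sensor readings, sidesteps the heterogeneity of the two sides' sensors. A simplified sketch of interval extraction and similarity scoring; the tolerance and function names are assumptions, not SenCS's actual parameters:

```python
def intervals_ms(timestamps_ms):
    """Inter-event intervals from a sorted list of event timestamps (ms)."""
    return [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]

def hcf_similarity(room_intervals, user_intervals, tol_ms=50):
    """Fraction of corresponding intervals that agree within tol_ms.

    Both sides observe the same human actions, so genuine co-located
    fingerprints yield matching intervals even from different sensor types.
    """
    pairs = list(zip(room_intervals, user_intervals))
    if not pairs:
        return 0.0
    matches = sum(abs(a - b) <= tol_ms for a, b in pairs)
    return matches / len(pairs)
```

Thresholding this similarity score is what produces the accept/reject decision behind the reported 0.23% EER.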


Author(s):  
Alex Marino Gonçalves De Almeida ◽  
Claudineia Helena Recco ◽  
Rodrigo Capobianco Guido

State-of-the-art models for speech synthesis and voice conversion can generate synthetic speech perceptually indistinguishable from human speech, and speaker verification is crucial to prevent breaches. The feature that best distinguishes genuine speech from spoofing attacks remains an open research question. We used the ASVSpoof2017 baseline, a Transfer Learning (TL) set, and the Symlet and Daubechies Discrete Wavelet Packet Transform (DWPT) for this investigation. To qualitatively assess the features, we used Paraconsistent Feature Engineering (PFE). Our experiments indicated that, for more robust classifiers, the best choice would be the AlexNet method, while for classification under the Equal Error Rate metric, the best suggestion would be the Daubechies filter of support 21. Finally, our findings indicate the Symlet filter of support 17 as the most promising feature, which is evidence that PFE is a useful tool and contributes to feature selection.


2021 ◽  
Author(s):  
Qinghua Zhong ◽  
Ruining Dai ◽  
Han Zhang ◽  
YongSheng Zhu ◽  
Guofu Zhou

Abstract: Text-independent speaker recognition is widely used in identity recognition. To improve feature recognition ability, a method of text-independent speaker recognition based on a deep residual network model is proposed in this paper. Firstly, 64-dimensional log filter bank features were extracted from the original audio. Secondly, a deep residual network was used to process the log filter bank features. The deep residual network was composed of a residual network and a Convolutional Attention Statistics Pooling (CASP) layer. The CASP layer aggregates the frame-level features from the residual network into utterance-level features. Lastly, an Adaptive Curriculum Learning Loss (ACLL) classifier was used to optimize the abstract features output by the deep residual network, completing the text-independent speaker recognition. The proposed method was applied to the large VoxCeleb2 dataset for extensive text-independent speaker recognition experiments, and the average equal error rate (EER) reached 1.76% on the VoxCeleb1 test dataset, 1.91% on the VoxCeleb1-E test dataset, and 3.24% on the VoxCeleb1-H test dataset. Compared with related speaker recognition methods, the EER was improved by 1.11% on the VoxCeleb1 test dataset, 1.04% on the VoxCeleb1-E test dataset, and 1.69% on the VoxCeleb1-H test dataset.
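The aggregation step that a pooling layer performs can be illustrated with plain statistics pooling: per-dimension means and standard deviations turn a variable-length sequence of frame-level features into one fixed-length utterance-level vector. This sketch omits the convolutional attention that distinguishes the paper's CASP layer and is only meant to show the frame-to-utterance aggregation:

```python
import math

def statistics_pooling(frames):
    """Aggregate frame-level features (T frames x D dims) into a 2*D vector.

    The utterance-level vector is the concatenation of per-dimension means
    and per-dimension standard deviations across all T frames.
    """
    T, D = len(frames), len(frames[0])
    means = [sum(f[d] for f in frames) / T for d in range(D)]
    stds = [math.sqrt(sum((f[d] - means[d]) ** 2 for f in frames) / T)
            for d in range(D)]
    return means + stds
```

Because the output length depends only on D, utterances of any duration map to embeddings of the same size, which is what the downstream classifier requires.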


Electronics ◽  
2020 ◽  
Vol 9 (12) ◽  
pp. 2201
Author(s):  
Ara Bae ◽  
Wooil Kim

One of the most recent speaker recognition methods that demonstrates outstanding performance in noisy environments involves extracting the speaker embedding using an attention mechanism instead of average or statistics pooling. In the attention method, speaker recognition performance is improved by employing multiple heads rather than a single head. In this paper, we propose advanced methods to extract a new embedding by compensating for the disadvantages of the single-head and multi-head attention methods. The combination method comprising single-head and split-based multi-head attention shows a 5.39% Equal Error Rate (EER). When the single-head and projection-based multi-head attention methods are combined, the speaker recognition performance improves by 4.45%, which is the best performance in this work. Our experimental results demonstrate that the attention mechanism reflects the speaker's properties more effectively than average or statistics pooling, and the speaker verification system could be further improved by employing combinations of different attention techniques.
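The difference from plain averaging is that attentive pooling learns per-frame weights, so informative frames dominate the embedding. A single-head sketch with softmax-normalized scores; in practice the scores come from a learned layer, whereas here they are passed in directly for illustration:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of raw attention scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_pooling(frames, scores):
    """Weight frame-level features by attention scores instead of averaging.

    frames -- T x D frame-level features
    scores -- T raw attention scores (one per frame, from a learned layer)
    Returns the D-dimensional weighted-sum utterance embedding.
    """
    w = softmax(scores)
    D = len(frames[0])
    return [sum(w[t] * frames[t][d] for t in range(len(frames)))
            for d in range(D)]
```

With uniform scores this reduces exactly to average pooling; multi-head variants run several such weightings in parallel and concatenate the results.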

