ChestLive

Author(s):  
Yanjiao Chen ◽  
Meng Xue ◽  
Jian Zhang ◽  
Qianyun Guan ◽  
Zhiyuan Wang ◽  
...  

Voice-based authentication is prevalent on smart devices to verify the legitimacy of users, but it is vulnerable to replay attacks. In this paper, we propose to leverage the distinctive chest motions during speaking to establish a secure multi-factor authentication system, named ChestLive. Compared with other biometric-based authentication systems, ChestLive does not require users to remember any complicated information (e.g., hand gestures, doodles), and its working distance is much longer (30 cm). We use acoustic sensing to monitor chest motions with the built-in speaker and microphone of a smartphone. To obtain fine-grained chest motion signals during speaking for reliable user authentication, we derive the Channel Energy (CE) of acoustic signals to capture chest movement, and then remove static and non-static interference from the aggregated CE signals. Representative features are extracted from the correlation between the voice signal and the corresponding chest motion signal. Unlike learning-based image or speech recognition models with millions of available training samples, our system must deal with a limited number of samples from legitimate users during enrollment. To address this problem, we resort to meta-learning, which initializes a general model with good generalization properties that can be quickly fine-tuned to identify a new user. We implement ChestLive as an application and evaluate its performance in the wild with 61 volunteers using their smartphones. Experimental results show that ChestLive achieves an authentication accuracy of 98.31% and a false accept rate of less than 2% against replay and impersonation attacks. We also validate that ChestLive is robust to various factors, including training set size, distance, angle, posture, phone model, and environmental noise.
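The Channel Energy derivation above can be illustrated with a minimal sketch, assuming CE is simply the per-frame energy of the received acoustic signal and that static interference is the time-averaged component; the paper's actual derivation and interference-removal steps are more involved:

```python
import numpy as np

def channel_energy(signal, frame_len=256, hop=128):
    """Per-frame energy of the received acoustic signal (one plausible
    reading of the Channel Energy idea; the exact derivation may differ)."""
    frames = [signal[i:i + frame_len]
              for i in range(0, len(signal) - frame_len + 1, hop)]
    return np.array([np.sum(f ** 2) for f in frames])

def remove_static_interference(ce):
    """Subtract the time-averaged component, leaving motion-induced variation."""
    return ce - ce.mean()

# Toy example: a slow 2 Hz "chest motion" amplitude-modulating a carrier echo.
t = np.linspace(0, 1, 4096)
echo = (1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)) * np.sin(2 * np.pi * 200 * t)
ce = channel_energy(echo)
motion = remove_static_interference(ce)
```

On this toy signal, the slow amplitude modulation standing in for chest motion survives in `motion` after the static component is removed.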

Sensors ◽  
2021 ◽  
Vol 21 (12) ◽  
pp. 4212
Author(s):  
Priscila Morais Argôlo Bonfim Estrela ◽  
Robson de Oliveira Albuquerque ◽  
Dino Macedo Amaral ◽  
William Ferreira Giozza ◽  
Rafael Timóteo de Sousa Júnior

As smart devices have become commonly used to access internet banking applications, these devices constitute appealing targets for fraudsters. Impersonation attacks are an essential concern for internet banking providers. Therefore, user authentication countermeasures based on biometrics, whether physiological or behavioral, have been developed, including those based on touch dynamics biometrics. These measures take into account the unique behavior of a person when interacting with touchscreen devices, thus hindering identification fraud because it is hard to impersonate natural user behaviors. Behavioral biometric measures also balance security and usability because they can be collected through a measurement process that is transparent to the user. This paper proposes an improvement to Biotouch, a supervised machine-learning-based framework for continuous user authentication. The contributions of the proposal comprise the utilization of multiple scopes to create more resilient reasoning models and their respective datasets for the improved Biotouch framework. Another highlighted contribution is the testing of these models to evaluate the imposter False Acceptance Rate (FAR). The proposal also improves the flow of data and computation within the improved framework. An evaluation of the proposed multiple-scope model yields results between 90.68% and 97.05% for the harmonic mean of recall and precision (F1 score). The percentages of unduly authenticated imposters and errors of legitimate user rejection (Equal Error Rate (EER)) lie between 1.88% and 9.85% for static verification, login, user dynamics, and post-login. These results indicate the feasibility of the proposed continuous multiple-scope authentication framework as an effective layer of security for banking applications, eventually operating jointly with conventional measures such as password-based authentication.
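The FAR and EER metrics reported above can be computed from similarity scores with a short sketch; the score values below are made up for illustration:

```python
import numpy as np

def far_frr(genuine, imposter, threshold):
    """FAR: fraction of imposter scores accepted (at or above threshold).
    FRR: fraction of genuine scores rejected (below threshold)."""
    far = np.mean(np.asarray(imposter) >= threshold)
    frr = np.mean(np.asarray(genuine) < threshold)
    return far, frr

def equal_error_rate(genuine, imposter):
    """Sweep candidate thresholds; the EER is where FAR and FRR are closest."""
    thresholds = np.sort(np.concatenate([genuine, imposter]))
    rates = [far_frr(genuine, imposter, t) for t in thresholds]
    far, frr = min(rates, key=lambda r: abs(r[0] - r[1]))
    return (far + frr) / 2

genuine = np.array([0.9, 0.8, 0.85, 0.7, 0.95])   # legitimate-user scores
imposter = np.array([0.2, 0.4, 0.5, 0.3, 0.75])   # imposter scores
eer = equal_error_rate(genuine, imposter)
```

At the threshold 0.75 one imposter is accepted and one genuine user rejected, so FAR = FRR = 20% and the EER is 0.2.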


Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Jagmohan Chauhan ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
...  

With the rapid growth of wearable computing and increasing demand for mobile authentication scenarios, voiceprint-based authentication has become one of the prevalent technologies and has already shown tremendous potential to the public. However, it is vulnerable to voice spoofing attacks (e.g., replay attacks and synthetic voice attacks). To address this threat, we propose a new biometric authentication approach, named EarPrint, which aims to extend voiceprint and build a hidden and secure user authentication scheme on earphones. EarPrint builds on speaking-induced body sound transmission from the throat to the ear canal, i.e., different users have different body sound conduction patterns on both sides of the ears. As the first exploratory study, extensive experiments on 23 subjects show that EarPrint is robust against ambient noise and body motion. EarPrint achieves an Equal Error Rate (EER) of 3.64% with 75 seconds of enrollment data. We also evaluate the resilience of EarPrint against replay attacks. A major contribution of EarPrint is that it leverages two-level uniqueness, including the body sound conduction from the throat to the ear canal and the body asymmetry between the left and right ears, taking advantage of earphones' paired form factor. Compared with other mobile and wearable biometric modalities, EarPrint is a low-cost, accurate, and secure authentication solution for earphone users.
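EarPrint's second uniqueness level, the asymmetry between the left and right ear channels, can be caricatured with a simple spectral feature; the band-energy log-ratio below is a hypothetical stand-in for the paper's actual features:

```python
import numpy as np

def band_energies(x, n_bands=4):
    """Split the power spectrum into contiguous bands and sum energy per band."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    return np.array([b.sum() for b in np.array_split(spec, n_bands)])

def asymmetry_feature(left, right, eps=1e-12):
    """Log ratio of left/right band energies: a toy proxy for the
    body-asymmetry cue (not the features used in the paper)."""
    return np.log((band_energies(left) + eps) / (band_energies(right) + eps))

# Synthetic channels: the right ear receives an attenuated copy plus noise.
rng = np.random.default_rng(0)
left = rng.standard_normal(1024)
right = 0.5 * left + 0.1 * rng.standard_normal(1024)
feat = asymmetry_feature(left, right)
```

Because the right channel is attenuated, every band's log-ratio is positive here; for a real user, the pattern of ratios across bands would carry the identity information.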


2017 ◽  
Vol 28 (1) ◽  
pp. e1998 ◽  
Author(s):  
Artur Souza ◽  
Ítalo Cunha ◽  
Leonardo B Oliveira

2021 ◽  
Vol 2050 (1) ◽  
pp. 012006
Author(s):  
Xili Dai ◽  
Chunmei Ma ◽  
Jingwei Sun ◽  
Tao Zhang ◽  
Haigang Gong ◽  
...  

Training deep neural networks from only a few examples has been an interesting topic that motivated few-shot learning. In this paper, we study the fine-grained image classification problem in a challenging few-shot learning setting and propose the Self-Amplificated Network (SAN), a method based on meta-learning to tackle this problem. The SAN model consists of three parts: the Encoder, Amplification, and Similarity Modules. The Encoder Module encodes a fine-grained image input into a feature vector. The Amplification Module amplifies subtle differences between fine-grained images based on a self-attention mechanism composed of multi-head attention. The Similarity Module measures how similar the query image and the support set are in order to determine the classification result. In-depth experiments on three benchmark datasets show that our network achieves superior performance over competing baselines.
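The multi-head self-attention at the heart of the Amplification Module can be sketched in plain NumPy; the dimensions and random weight matrices below are illustrative, not the paper's architecture:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, wq, wk, wv, n_heads):
    """Scaled dot-product self-attention over a set of feature vectors,
    split across heads: the building block the Amplification Module uses."""
    q, k, v = x @ wq, x @ wk, x @ wv
    outs = []
    for qh, kh, vh in zip(np.split(q, n_heads, axis=-1),
                          np.split(k, n_heads, axis=-1),
                          np.split(v, n_heads, axis=-1)):
        scores = qh @ kh.T / np.sqrt(qh.shape[-1])  # pairwise similarities
        outs.append(softmax(scores) @ vh)           # weighted combination
    return np.concatenate(outs, axis=-1)

rng = np.random.default_rng(1)
x = rng.standard_normal((5, 8))                     # 5 feature vectors, dim 8
wq, wk, wv = (rng.standard_normal((8, 8)) for _ in range(3))
out = multi_head_self_attention(x, wq, wk, wv, n_heads=2)
```

Each output vector is a mixture of all input vectors, which is what lets attention emphasize the subtle inter-image differences the abstract describes.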


Author(s):  
Harkeerat Bedi ◽  
Li Yang ◽  
Joseph M. Kizza

Fair exchange between a pair of parties can be defined as the fundamental concept of trade where neither party involved in the exchange has an unfair advantage over the other once the transaction completes. Fair exchange protocols are a group of protocols that provide means for accomplishing such fair exchanges. In this chapter, we analyze one such protocol, which offers means for fair contract signing, where two parties exchange their commitments over a pre-negotiated contract. We show that this protocol is not entirely fair and illustrate how one party can cheat by obtaining the other's commitment without providing its own. We also analyze a revised version of this protocol, which offers better fairness by handling many of these weaknesses. Both protocols, however, fail to handle replay attacks, where an intruder replays messages sent earlier from one party to the other. Our proposed protocol improves upon them by addressing the weaknesses that lead to such replay attacks. We implement a complete working system that provides fair contract signing along with user authentication and efficient password management, achieved by using a fingerprint-based authentication system, and confidentiality, data integrity, and non-repudiation, accomplished through the implementation of cryptographic algorithms based on elliptic curves.
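The replay-attack countermeasure described above, rejecting messages an intruder re-sends, can be sketched with a nonce-plus-MAC check. This HMAC-based toy is only illustrative; the chapter's system relies on elliptic-curve cryptography and fingerprint-based authentication rather than a shared symmetric key:

```python
import hashlib
import hmac
import secrets

class Verifier:
    """Accept each authenticated message once; a repeated nonce marks a replay."""

    def __init__(self, key):
        self.key = key
        self.seen = set()   # nonces already consumed

    def check(self, message, nonce, tag):
        expected = hmac.new(self.key, message + nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            return "invalid"         # tampered or wrongly keyed
        if nonce in self.seen:
            return "replay"          # intruder re-sent an old message
        self.seen.add(nonce)
        return "accepted"

key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)
msg = b"commitment over pre-negotiated contract"
tag = hmac.new(key, msg + nonce, hashlib.sha256).digest()

v = Verifier(key)
first = v.check(msg, nonce, tag)    # fresh message
second = v.check(msg, nonce, tag)   # attacker replays the same message
```

Binding a fresh nonce into each authenticated message is the standard way to make a previously captured message worthless to a replaying intruder.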


2020 ◽  
Vol 34 (07) ◽  
pp. 11507-11514
Author(s):  
Jianxin Lin ◽  
Yijun Wang ◽  
Zhibo Chen ◽  
Tianyu He

Unsupervised domain translation has recently achieved impressive performance with Generative Adversarial Networks (GANs) and sufficient (unpaired) training data. However, existing domain translation frameworks are built in a disposable way: the learning experience is discarded, and the obtained model cannot be adapted to a newly arriving domain. In this work, we approach unsupervised domain translation from a meta-learning perspective. We propose a model called Meta-Translation GAN (MT-GAN) to find good initializations of translation models. In the meta-training procedure, MT-GAN is explicitly trained with a primary translation task and a synthesized dual translation task. A cycle-consistency meta-optimization objective is designed to ensure generalization ability. We demonstrate the effectiveness of our model on ten diverse two-domain translation tasks and multiple face identity translation tasks. We show that our proposed approach significantly outperforms existing domain translation methods when each domain contains no more than ten training samples.
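The cycle-consistency term that MT-GAN's meta-optimization objective builds on can be written down directly; here invertible linear maps stand in for the two generators, so the loss is near zero by construction:

```python
import numpy as np

def cycle_consistency_loss(x, g_ab, g_ba):
    """Mean L1 reconstruction error after translating A -> B -> A.
    g_ab and g_ba stand in for the two GAN generators."""
    return np.mean(np.abs(g_ba(g_ab(x)) - x))

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 2))      # toy "generator": a linear map
A_inv = np.linalg.inv(A)             # its exact inverse
x = rng.standard_normal((10, 2))     # ten samples from domain A

loss = cycle_consistency_loss(x, lambda z: z @ A, lambda z: z @ A_inv)
```

With imperfect generators the loss is positive, and minimizing it pushes the pair of translators toward mutual consistency, which is the property the meta-objective enforces across tasks.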


2020 ◽  
Vol 309 ◽  
pp. 02003
Author(s):  
Gabriela Mogos

Biometric identification is an up-and-coming authentication method. The growing complexity of, and overlap between, smart devices, usability patterns, and security risks make a strong case for more secure and safer user authentication. This paper offers a broad literature review of iris recognition and biometric cryptography to better understand current practices, propose possible future enhancements, and anticipate future usability and security developments.


2021 ◽  
Vol 16 ◽  
pp. 482-494
Author(s):  
Liqian Liang ◽  
Congyan Lang ◽  
Yidong Li ◽  
Songhe Feng ◽  
Jian Zhao
