Security weakness of dynamic watermarking-based detection for generalised replay attacks

Author(s):  
Changda Zhang ◽  
Dajun Du ◽  
Qing Sun ◽  
Xue Li ◽  
Aleksandar Rakić ◽  
...  
Keyword(s):

Author(s):
Hongyi Pu ◽  
Liang He ◽  
Chengcheng Zhao ◽  
David K. Y. Yau ◽  
Peng Cheng ◽  
...  

Author(s):  
Yang Gao ◽  
Yincheng Jin ◽  
Jagmohan Chauhan ◽  
Seokmin Choi ◽  
Jiyang Li ◽  
...  

With the rapid growth of wearable computing and the increasing demand for mobile authentication, voiceprint-based authentication has become one of the prevalent technologies and has already shown tremendous potential to the public. However, it is vulnerable to voice spoofing attacks (e.g., replay attacks and synthetic voice attacks). To address this threat, we propose a new biometric authentication approach, named EarPrint, which extends voiceprint authentication to build a hidden and secure user authentication scheme on earphones. EarPrint builds on the speaking-induced body sound transmission from the throat to the ear canal, i.e., different users have different body sound conduction patterns on the two sides of their ears. As the first exploratory study, extensive experiments on 23 subjects show that EarPrint is robust against ambient noise and body motion. EarPrint achieves an Equal Error Rate (EER) of 3.64% with 75 seconds of enrollment data. We also evaluate the resilience of EarPrint against replay attacks. A major contribution of EarPrint is that it leverages two-level uniqueness, combining the body sound conduction from the throat to the ear canal with the body asymmetry between the left and right ears, taking advantage of earphones' paired form factor. Compared with other mobile and wearable biometric modalities, EarPrint is a low-cost, accurate, and secure authentication solution for earphone users.
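The reported Equal Error Rate is the operating point at which the false acceptance rate (FAR, impostor attempts wrongly accepted) equals the false rejection rate (FRR, genuine attempts wrongly rejected). The sketch below is illustrative only, not the authors' evaluation code: the score distributions are hypothetical, and it simply shows how an empirical EER can be estimated from genuine and impostor similarity scores in Python.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Sweep every observed score as a candidate decision threshold and
    return the EER: the point where FAR and FRR are closest to equal."""
    thresholds = np.sort(np.concatenate([genuine_scores, impostor_scores]))
    best_gap, eer = np.inf, 1.0
    for t in thresholds:
        far = np.mean(impostor_scores >= t)  # impostors accepted at threshold t
        frr = np.mean(genuine_scores < t)    # genuine users rejected at threshold t
        gap = abs(far - frr)
        if gap < best_gap:
            best_gap, eer = gap, (far + frr) / 2
    return eer

# Hypothetical similarity scores, for illustration only
rng = np.random.default_rng(0)
genuine = rng.normal(0.8, 0.1, 500)   # same-user comparison scores
impostor = rng.normal(0.5, 0.1, 500)  # cross-user comparison scores
print(f"EER: {equal_error_rate(genuine, impostor):.2%}")
```

Under this standard empirical estimate, a 3.64% EER means that at the crossing threshold roughly 3.64% of impostor attempts are accepted and roughly 3.64% of genuine attempts are rejected.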

