BIOMETRIC HUMAN AUTHENTICATION SYSTEM THROUGH SPEECH USING DEEP NEURAL NETWORKS (DNN)

THE BULLETIN, 2020, Vol 5 (387), pp. 6-15
Author(s): O. Mamyrbayev, A. Akhmediyarova, A. Kydyrbekova, N. O. Mekebayev, ...

Biometrics offers more security and convenience than traditional methods of identification, and deep neural networks (DNNs) have recently become the basis of more reliable and efficient authentication schemes. In this work, we compare two modern training approaches: i-vector systems based on Gaussian mixture models (denoted GMM i-vector) and i-vector systems based on deep neural networks (denoted DNN i-vector). The results show that the DNN i-vector system outperforms the GMM i-vector system across utterance durations (from full length down to 5 s). Recent studies have shown DNN-derived features to be the most effective for text-independent speaker verification. In this paper, a new scheme is proposed that makes it simple and effective to use DNNs for text-prompted verification. Experiments show that the proposed scheme reduces the equal error rate (EER) by 24.32% compared with the state-of-the-art method, and its robustness is evaluated on noisy data as well as data collected in real-world conditions. In addition, using a DNN instead of a GMM for universal background modeling is shown to reduce EER by 15.7%.
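The EER figures reported above are threshold-based error rates. As a point of reference only (not the authors' code), the sketch below shows one common way to estimate the equal error rate from genuine and impostor verification scores; the score distributions are synthetic and purely illustrative.

```python
import numpy as np

def equal_error_rate(genuine_scores, impostor_scores):
    """Estimate EER by sweeping a decision threshold over all observed scores."""
    genuine = np.asarray(genuine_scores, dtype=float)
    impostor = np.asarray(impostor_scores, dtype=float)
    thresholds = np.unique(np.concatenate([genuine, impostor]))

    best_gap, eer = np.inf, 1.0
    for thr in thresholds:
        far = np.mean(impostor >= thr)   # false acceptance rate at this threshold
        frr = np.mean(genuine < thr)     # false rejection rate at this threshold
        gap = abs(far - frr)
        if gap < best_gap:               # keep the threshold where FAR and FRR meet
            best_gap, eer = gap, (far + frr) / 2.0
    return eer

# Toy usage: synthetic cosine-similarity scores for genuine and impostor trials.
rng = np.random.default_rng(0)
genuine = rng.normal(0.7, 0.1, 1000)
impostor = rng.normal(0.3, 0.1, 1000)
print(f"EER ≈ {equal_error_rate(genuine, impostor):.3%}")
```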

2021, Vol 2021, pp. 1-10
Author(s): Hongwei Luo, Yijie Shen, Feng Lin, Guoai Xu

Speaker verification systems have gained great popularity in recent years, especially with the development of deep neural networks and the Internet of Things. However, the security of speaker verification systems based on deep neural networks has not been well investigated. In this paper, we propose an attack that spoofs a state-of-the-art speaker verification system based on the generalized end-to-end (GE2E) loss function so that illegitimate users are misclassified as the authentic user. Specifically, we design a novel loss function to train a generator that produces effective adversarial examples with slight perturbation, and then spoof the system with these adversarial examples. The success rate of our attack reaches 82% when cosine similarity is used as the scoring back end of the deep-learning-based speaker verification system. Beyond that, our experiments report a signal-to-noise ratio of 76 dB, which indicates that our attack is less perceptible than previous work. In summary, the results show that our attack not only spoofs a state-of-the-art neural-network-based speaker verification system but, more importantly, can also evade detection by human hearing and machine discrimination.
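The paper trains a dedicated generator with a novel loss; the minimal sketch below instead uses plain gradient ascent to illustrate the underlying idea of pushing an utterance's embedding toward a target speaker under a cosine-similarity verifier, and of measuring the perturbation's SNR in dB. The `ToySpeakerEncoder`, step count, and penalty weight are placeholder assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySpeakerEncoder(nn.Module):
    """Stand-in for a GE2E-trained speaker encoder (illustrative only)."""
    def __init__(self, n_samples=16000, emb_dim=256):
        super().__init__()
        self.proj = nn.Linear(n_samples, emb_dim)

    def forward(self, waves):                      # waves: (batch, n_samples)
        return F.normalize(self.proj(waves), dim=-1)

def craft_adversarial_audio(encoder, source_wave, target_emb,
                            steps=200, lr=1e-3, eps=2e-3, penalty=10.0):
    """Iteratively perturb `source_wave` so its embedding moves toward
    `target_emb` under an L-infinity bound `eps` (gradient-ascent sketch,
    not the paper's generator-based attack)."""
    delta = torch.zeros_like(source_wave, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv_emb = encoder((source_wave + delta).unsqueeze(0)).squeeze(0)
        # Maximize cosine similarity to the target speaker, keep delta small.
        loss = (-F.cosine_similarity(adv_emb, target_emb, dim=0)
                + penalty * delta.pow(2).mean())
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                # imperceptibility constraint
    adv_wave = (source_wave + delta).detach()
    noise_power = delta.detach().pow(2).mean().clamp_min(1e-12)
    snr_db = 10 * torch.log10(source_wave.pow(2).mean() / noise_power)
    return adv_wave, snr_db.item()

# Toy usage with random audio and a random target-speaker embedding.
enc = ToySpeakerEncoder()
src = torch.randn(16000) * 0.1                     # 1 s of fake 16 kHz audio
tgt = F.normalize(torch.randn(256), dim=0)         # embedding of the victim speaker
adv, snr = craft_adversarial_audio(enc, src, tgt)
print(f"SNR of perturbation: {snr:.1f} dB")
```

A higher SNR means the perturbation carries less energy relative to the original speech, which is how the abstract's 76 dB figure supports the imperceptibility claim.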

