A Structured Approach towards Robust Database Collection for Speaker Recognition

2017 ◽  
Vol 9 (3) ◽  
pp. 53 ◽  
Author(s):  
Pardeep Sangwan ◽  
Saurabh Bhardwaj

Speaker recognition systems are classified according to their database, feature extraction techniques, and classification methods. Analysis shows a pressing need to address every dimension of forensic speaker recognition, from the initial database collection phase through to recognition. The present work provides a structured approach to developing a robust speech database collection for an efficient speaker recognition system. The databases required by biometric and forensic systems are entirely different: databases for biometric systems are readily available, while databases for forensic speaker recognition are scarce. The paper also surveys several databases available for speaker recognition systems.

Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-9
Author(s):  
Jiang Lin ◽  
Yi Yumei ◽  
Zhang Maosheng ◽  
Chen Defeng ◽  
Wang Chao ◽  
...  

In speaker recognition systems, feature extraction is a challenging task under noisy environmental conditions. To improve feature robustness, we propose a multiscale chaotic feature for speaker recognition. We use a multiresolution analysis technique to capture finer information about different speakers in the frequency domain. We then extract the chaotic characteristics of the speech based on a nonlinear dynamic model, which helps to improve the discriminative power of the features. Finally, we use a GMM-UBM model to build the speaker recognition system. Our experimental results verify its good performance: under clean-speech and noisy-speech conditions, the ERR of our method is reduced by 13.94% and 26.5%, respectively, compared with the state-of-the-art method.
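The GMM-UBM back-end mentioned above scores a test utterance by the average log-likelihood ratio between a speaker-specific Gaussian mixture and the universal background model. A minimal numpy sketch of that scoring step, using toy diagonal-covariance parameters rather than the paper's trained models:

```python
import numpy as np

def diag_gmm_loglik(X, weights, means, variances):
    # Per-frame log-likelihood under a diagonal-covariance GMM.
    # X: (N, D); weights: (K,); means, variances: (K, D)
    diff = X[:, None, :] - means[None, :, :]                        # (N, K, D)
    log_comp = -0.5 * (np.log(2 * np.pi * variances)[None]
                       + diff ** 2 / variances[None]).sum(axis=-1)  # (N, K)
    log_comp += np.log(weights)[None]
    return np.logaddexp.reduce(log_comp, axis=1)                    # (N,)

def llr_score(X, speaker, ubm):
    # Average log-likelihood ratio: higher means "more like this speaker".
    return diag_gmm_loglik(X, *speaker).mean() - diag_gmm_loglik(X, *ubm).mean()

rng = np.random.default_rng(0)
# Toy models over 3-dim features: UBM centred at 0, speaker model at 1.
ubm = (np.array([0.5, 0.5]), np.zeros((2, 3)), np.ones((2, 3)))
speaker = (np.array([0.5, 0.5]), np.ones((2, 3)), np.ones((2, 3)))
X = rng.normal(1.0, 1.0, size=(200, 3))   # frames drawn near the speaker model
print(llr_score(X, speaker, ubm))          # positive: frames match the speaker
```

In a full system the speaker model is obtained by MAP adaptation of the UBM to the speaker's enrollment data; here both models are fixed by hand to keep the scoring step isolated.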


2021 ◽  
Vol 10 (1) ◽  
pp. 374-382
Author(s):  
Ayoub Bouziane ◽  
Jamal Kharroubi ◽  
Arsalane Zarghili

A common limitation of previous comparative studies on speaker-feature extraction techniques is that the comparison is done independently of the speaker modeling technique used and its parameters. The aim of the present paper is twofold. First, it reviews the most significant advancements in feature extraction techniques used for automatic speaker recognition. Second, it evaluates and compares the currently dominant ones using an objective comparison methodology that overcomes the various limitations and drawbacks of previous comparative studies. The results of the experiments carried out underline the importance of the proposed comparison methodology.


2019 ◽  
Vol 8 (2) ◽  
pp. 6429-6432

Speaker recognition is the task of identifying a speaker from various features of his or her speech. It combines several mathematical operations, of which training and testing form the major part. Feature extraction is essential for speaker recognition, and much research addresses feature extraction techniques such as MFCC and IMFCC. While these techniques can extract features, accurate speaker recognition requires exact and precise features in order to increase the recognition success rate. For any speaker recognition system, feature extraction is the primary and most important step, so the precision of the final result depends on the accuracy of the feature extraction technique. In this paper we propose a modified feature extraction system.
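The MFCC baseline referenced above follows a standard pipeline: pre-emphasis, framing and windowing, power spectrum, mel filterbank, log compression, and a DCT. A compact numpy sketch of that pipeline; the parameter values are common illustrative defaults, not those of the proposed modified system:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_filters=26, n_ceps=13):
    signal = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])  # pre-emphasis
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(n_fft)                        # windowed frames
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft         # power spectrum
    fbank = np.log(power @ mel_filterbank(n_filters, n_fft, sr).T + 1e-10)
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients.
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_filters))
    return fbank @ dct.T

sig = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz tone
feats = mfcc(sig)
print(feats.shape)  # one 13-coefficient vector per frame
```

Variants such as IMFCC differ mainly in how the filterbank is spaced across the spectrum, leaving the rest of this pipeline unchanged.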


Author(s):  
V. Jagan Naveen ◽  
K. Krishna Kishore ◽  
P. Rajesh Kumar

In the modern world, human recognition systems play an important role in improving security by reducing the chances of evasion. The human ear can be used for person identification: in an empirical study of the human ear, 10,000 images were examined to establish the uniqueness of the ear. An ear-based system is one of the few biometric systems that provides characteristics that remain stable with age. In this paper, ear images are taken from the Mathematical Analysis of Images (AMI) ear database, and ear pattern recognition is analyzed using the expectation–maximization (EM) algorithm and the k-means algorithm. Patterns of ears affected by different types of noise are recognized using the Principal Component Analysis (PCA) algorithm.
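The PCA step described above projects flattened images onto their principal axes and matches a probe to the nearest gallery image in that subspace. A numpy sketch of the idea using synthetic stand-ins for the images (the AMI database itself is not bundled here, and the dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for flattened ear images: 10 subjects, 4 images each, 64-dim vectors.
prototypes = rng.normal(size=(10, 64))
X = np.repeat(prototypes, 4, axis=0) + 0.05 * rng.normal(size=(40, 64))
labels = np.repeat(np.arange(10), 4)

mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)  # principal axes
W = Vt[:10]                                              # keep 10 components

def project(v):
    return W @ (v - mean)

gallery = (X - mean) @ W.T                               # all images, projected

probe = prototypes[3] + 0.05 * rng.normal(size=64)       # noisy image of subject 3
pred = labels[np.argmin(np.linalg.norm(gallery - project(probe), axis=1))]
print(pred)                                              # nearest subject in PCA space
```

In the eigen-image formulation the same projection serves as dimensionality reduction before k-means or EM clustering of the ear patterns.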


2020 ◽  
Vol 0 (0) ◽  
Author(s):  
Mohammadreza Azimi ◽  
Seyed Ahmad Rasoulinejad ◽  
Andrzej Pacut

In this paper, we attempt to answer whether the iris recognition task becomes more difficult under the influence of diabetes, and whether the effects of diabetes and an individual's age are uncorrelated. We hypothesized that the health condition of volunteers plays an important role in the performance of an iris recognition system. To confirm the obtained results, we report the distribution of usable iris area in each subgroup for a more comprehensive analysis of diabetes effects. No study has yet investigated for which age group (young or old) the effect of diabetes on biometric results is more acute. For this purpose, we created a new database containing 1,906 samples from 509 eyes. We applied the weighted adaptive Hough ellipsopolar transform technique and the contrast-adjusted Hough transform for segmentation of the iris texture, along with three different encoding algorithms. To test the hypothesis related to the physiological aging effect, Welch's t-test and the Kolmogorov–Smirnov test were used to study the age dependency of the influence of diabetes mellitus on the reliability of our chosen iris recognition system. Our results give some general hints about the effect of age on the performance of biometric systems for people with diabetes.
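Welch's t-test, used above, compares two sample means without assuming equal variances; its degrees of freedom come from the Welch–Satterthwaite approximation. A minimal numpy implementation; the score samples below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

def welch_t(a, b):
    # Welch's t statistic and Welch–Satterthwaite degrees of freedom
    # for two samples with (possibly) unequal variances.
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

rng = np.random.default_rng(0)
# Hypothetical comparison scores for two age subgroups (illustrative only).
younger = rng.normal(0.30, 0.05, size=60)
older = rng.normal(0.33, 0.08, size=60)
t, df = welch_t(younger, older)
print(t, df)
```

The resulting t is then referred to a Student's t distribution with df degrees of freedom to obtain a p-value; df always falls between min(n1, n2) − 1 and n1 + n2 − 2.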


Author(s):  
Pratik K. Kurzekar ◽  
Ratnadeep R. Deshmukh ◽  
Vishal B. Waghmare ◽  
Pukhraj P. Shrishrimal

2021 ◽  
Vol 11 (21) ◽  
pp. 10079
Author(s):  
Muhammad Firoz Mridha ◽  
Abu Quwsar Ohi ◽  
Muhammad Mostafa Monowar ◽  
Md. Abdul Hamid ◽  
Md. Rashedul Islam ◽  
...  

Speaker recognition deals with recognizing speakers by their speech. Most speaker recognition systems are built in two stages: the first extracts low-dimensional correlation embeddings from speech, and the second performs the classification task. The robustness of a speaker recognition system depends mainly on the process of extracting the speech embeddings, which are typically pre-trained on a large-scale dataset. Because the embedding systems are pre-trained, the performance of speaker recognition models depends greatly on the domain adaptation policy and may degrade if they are trained with inadequate data. This paper introduces a speaker recognition strategy for unlabeled data that generates clusterable embedding vectors from small fixed-size speech frames. The unsupervised training strategy rests on the assumption that a small speech segment contains a single speaker. Based on this assumption, pairwise constraints are constructed with noise-augmentation policies and used to train an AutoEmbedder architecture that generates speaker embeddings. Without relying on a domain adaptation policy, the process produces clusterable speaker embeddings in an unsupervised manner, termed unsupervised vectors (u-vectors). The evaluation is conducted on two popular English-language speaker recognition datasets, TIMIT and LibriSpeech. A Bengali dataset is also included to illustrate the diversity of domain shifts for speaker recognition systems. Finally, we conclude that the proposed approach achieves satisfactory performance using pairwise architectures.
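The pairwise constraint described above regresses embedding distances toward 0 for must-link pairs (two augmentations of the same short segment) and toward a margin for cannot-link pairs. A numpy sketch of such an AutoEmbedder-style objective; the margin value and toy embeddings are illustrative assumptions, not values from the paper:

```python
import numpy as np

def pairwise_target_loss(emb_a, emb_b, same, alpha=10.0):
    # Distances are regressed toward 0 for must-link (same-speaker) pairs and
    # toward the margin alpha for cannot-link pairs; capping distances at alpha
    # means well-separated negatives incur no loss.
    d = np.minimum(np.linalg.norm(emb_a - emb_b, axis=1), alpha)
    target = np.where(same, 0.0, alpha)
    return float(np.mean((d - target) ** 2))

rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 8))
augmented = anchor + 0.01 * rng.normal(size=(4, 8))  # noise-augmented same segments

# Correctly labelled must-link pairs give a near-zero loss; the same close
# pairs mislabelled as cannot-link are pushed toward the margin instead.
must_link = pairwise_target_loss(anchor, augmented, same=np.ones(4, dtype=bool))
mislabelled = pairwise_target_loss(anchor, augmented, same=np.zeros(4, dtype=bool))
print(must_link, mislabelled)
```

In training, this loss would be backpropagated through the embedding network so that frames from one speaker collapse to a tight cluster while different speakers sit at least the margin apart.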

