cepstral features
Recently Published Documents

TOTAL DOCUMENTS: 133 (FIVE YEARS: 27)
H-INDEX: 10 (FIVE YEARS: 1)

Author(s):  
Murugaiya Ramashini ◽  
P. Emeroylariffion Abas ◽  
Kusuma Mohanchandra ◽  
Liyanage C. De Silva

Birds are excellent environmental indicators and may indicate the sustainability of an ecosystem; they may be used to provide provisioning, regulating, and supporting services. Therefore, birdlife conservation-related research always takes centre stage. Due to the airborne nature of birds and the density of tropical forests, identifying birds through audio may be a better solution than visual identification. The goal of this study is to find the most appropriate cepstral features for classifying bird sounds accurately. Fifteen (15) endemic Bornean bird sounds have been selected and segmented using an automated energy-based algorithm. Three (3) types of cepstral features are extracted: linear prediction cepstral coefficients (LPCC), mel frequency cepstral coefficients (MFCC), and gammatone frequency cepstral coefficients (GTCC); each is used separately for classification with a support vector machine (SVM). Comparison of their prediction results demonstrates that the model utilising GTCC features, with 93.3% accuracy, outperforms the models utilising MFCC and LPCC features, showing the robustness of GTCC for bird sound classification. The result is significant for the advancement of bird sound classification research, which has many applications such as eco-tourism and wildlife management.
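The cepstral features compared in the abstract above all follow the same recipe: frame the signal, take a filter-bank energy spectrum (mel filters for MFCC, gammatone filters for GTCC), then decorrelate the log energies with a DCT. A minimal numpy/scipy sketch of the MFCC variant is shown below; the frame length, hop size, and filter counts are illustrative defaults, not parameters taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def mel(f):
    # Hz -> mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def inv_mel(m):
    # mel -> Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale
    pts = inv_mel(np.linspace(mel(0.0), mel(sr / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(signal, sr, n_fft=512, hop=256, n_filters=26, n_ceps=13):
    # Frame, window, and take the power spectrum
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    frames = frames * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Filter-bank energies -> log -> DCT gives the cepstral coefficients
    energies = power @ mel_filterbank(n_filters, n_fft, sr).T
    return dct(np.log(energies + 1e-10), type=2, axis=1, norm="ortho")[:, :n_ceps]

# Example: MFCCs of a 1 s synthetic tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t)
feats = mfcc(sig, sr)
print(feats.shape)  # (61, 13): 61 frames, 13 coefficients each
```

GTCC replaces the triangular mel filters with gammatone filters modelling the cochlea; the framing, log, and DCT stages are unchanged. The resulting per-frame vectors are what would be fed to the SVM classifier.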


2021 ◽  
Author(s):  
Y. Bhanusree ◽  
T. Vishnu Vardhan Reddy ◽  
S. Karthik Rao

2021 ◽  
Vol 117 ◽  
pp. 107999
Author(s):  
Tusar Kanti Dash ◽  
Soumya Mishra ◽  
Ganapati Panda ◽  
Suresh Chandra Satapathy

2021 ◽  
Vol 7 ◽  
pp. e650
Author(s):  
Mohammad Ali Humayun ◽  
Hayati Yassin ◽  
Pg Emeroylariffion Abas

The success of supervised learning techniques for automatic speech processing does not always extend to problems with limited annotated speech. Unsupervised representation learning aims to utilise unlabelled data to learn a transformation that makes speech easily distinguishable for classification tasks; deep auto-encoder variants have been the most successful at finding such representations. This paper proposes a novel mechanism to incorporate the geometric position of speech samples within the global structure of an unlabelled feature set. Regression to the geometric position is added as an additional constraint for the representation-learning auto-encoder. The representation learnt by the proposed model has been evaluated on a supervised classification task for limited-vocabulary keyword spotting, where it outperforms commonly used cepstral features by about 9% in classification accuracy, despite using a limited number of labels during supervision. Furthermore, a small keyword dataset has been collected for Kadazan, an indigenous, low-resourced Southeast Asian language. Analysis on the Kadazan dataset also confirms the superiority of the proposed representation under limited annotation. The results are significant as they confirm that the proposed method can learn unsupervised speech representations effectively for classification tasks with scarce labelled data.
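The core idea in the abstract above is an auto-encoder whose loss combines reconstruction error with a regression penalty on each sample's geometric position. The numpy sketch below shows only the forward pass of such a combined objective with untrained linear layers; the dimensions, the source of the positions, and the weighting factor `lam` are all assumptions for illustration, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: 100 "speech feature" vectors of dim 20, encoded to dim 5.
X = rng.normal(size=(100, 20))
# Hypothetical geometric positions: 2-D coordinates of each sample within
# the global structure of the unlabelled set (e.g. from an embedding).
pos = rng.normal(size=(100, 2))

# Untrained encoder, decoder, and regression-head weights.
W_enc = rng.normal(size=(20, 5)) * 0.1
W_dec = rng.normal(size=(5, 20)) * 0.1
W_reg = rng.normal(size=(5, 2)) * 0.1

Z = np.tanh(X @ W_enc)   # latent representation
X_hat = Z @ W_dec        # reconstruction from the latent code
pos_hat = Z @ W_reg      # predicted geometric position

recon_loss = np.mean((X - X_hat) ** 2)
pos_loss = np.mean((pos - pos_hat) ** 2)
lam = 0.5                # assumed weighting of the positional constraint
total_loss = recon_loss + lam * pos_loss
print(total_loss > 0.0)  # True
```

Training would minimise `total_loss` by gradient descent over all three weight matrices, so the latent code `Z` must both reconstruct the input and encode where the sample sits in the unlabelled set's geometry.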


Author(s):  
SHRUTI ARORA ◽  
SUSHMA JAIN ◽  
INDERVEER CHANA

A great increase in the number of cardiovascular cases has become a cause of serious concern for medical experts all over the world. Early prediction of heart health can help specialists make effective decisions and achieve valuable risk stratification for patients. Heart sound signals reveal the condition of a patient's heart. Motivated by the success of cepstral features in speech signal classification, the authors use three different feature sets, viz. mel-frequency cepstral coefficients (MFCCs), gammatone frequency cepstral coefficients (GFCCs), and the mel-spectrogram, to classify phonocardiograms into normal and abnormal. Existing research has explored only MFCCs and mel-based feature sets extensively for classifying the phonocardiogram. In this work, however, the authors use a fusion of GFCCs with MFCCs and the mel-spectrogram, achieving a better accuracy score of 0.96, with sensitivity and specificity scores of 0.91 and 0.98, respectively. The proposed model has been validated on the publicly available benchmark dataset PhysioNet 2016.
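The fusion described above can be done by simple early fusion: concatenating the three feature matrices along the coefficient axis before classification. A minimal numpy sketch follows; the frame count and coefficient counts are assumed shapes for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-recording feature matrices (frames x coefficients).
mfcc = rng.random((200, 13))      # mel-frequency cepstral coefficients
gfcc = rng.random((200, 13))      # gammatone frequency cepstral coefficients
mel_spec = rng.random((200, 40))  # log mel-spectrogram bands

# Early fusion: concatenate along the feature axis, then summarise each
# coefficient over time so every recording yields one fixed-length vector.
fused = np.concatenate([mfcc, gfcc, mel_spec], axis=1)  # (200, 66)
feature_vector = np.concatenate([fused.mean(axis=0), fused.std(axis=0)])
print(feature_vector.shape)  # (132,)
```

Each recording then maps to one 132-dimensional vector, which any standard classifier can consume for the normal/abnormal decision.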


Sensors ◽  
2021 ◽  
Vol 21 (5) ◽  
pp. 1888
Author(s):  
Juraj Kacur ◽  
Boris Puterka ◽  
Jarmila Pavlovicova ◽  
Milos Oravec

Many speech emotion recognition systems have been designed using different features and classification methods. Still, there is a lack of knowledge and reasoning regarding the underlying speech characteristics and processing, i.e., how basic characteristics, methods, and settings affect the accuracy, and to what extent. This study extends the physical perspective on speech emotion recognition by analyzing basic speech characteristics and modeling methods, e.g., time characteristics (segmentation, window types, and the lengths and overlaps of classification regions), frequency ranges, frequency scales, processing of whole-speech (spectrogram), vocal tract (filter banks, linear prediction coefficient (LPC) modeling), and excitation (inverse LPC filtering) signals, magnitude and phase manipulations, cepstral features, etc. In the evaluation phase, a state-of-the-art classification method and rigorous statistical tests were applied, namely N-fold cross-validation, the paired t-test, and rank and Pearson correlations. The results revealed several settings in the 75% accuracy range (seven emotions). The most successful methods were based on vocal tract features using psychoacoustic filter banks covering the 0–8 kHz frequency range. Spectrograms carrying vocal tract and excitation information also score well. It was found that even basic processing, such as pre-emphasis, segmentation, and magnitude modifications, can dramatically affect the results. Most findings are robust, exhibiting strong correlations across the tested databases.
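One of the basic processing steps the abstract above singles out is pre-emphasis, a first-order high-pass filter applied before framing. The sketch below shows the standard form y[n] = x[n] - a·x[n-1]; the coefficient 0.97 is a common default, not a value reported in the paper.

```python
import numpy as np

def pre_emphasis(signal, alpha=0.97):
    # y[n] = x[n] - alpha * x[n-1]: attenuates low frequencies,
    # flattening the spectral tilt of voiced speech before framing.
    return np.append(signal[0], signal[1:] - alpha * signal[:-1])

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 200 * t)         # low-frequency tone
y = pre_emphasis(x)

# Adjacent samples of a low-frequency tone are highly correlated, so the
# filtered signal carries far less energy than the original.
print(np.sum(y ** 2) < np.sum(x ** 2))  # True
```

Because the filter reshapes the spectrum that every downstream feature (filter banks, LPC, cepstra) is computed from, including or omitting it can shift classification accuracy noticeably, which is consistent with the study's finding.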


Author(s):  
Marco Alves ◽  
Gabriel Silva ◽  
Bruno C. Bispo ◽  
María E. Dajer ◽  
Pedro M. Rodrigues
