Automatic Content-based Classification of Speech Audio using Multiple Instance Learning

Audio content understanding is an active research problem in the area of speech analytics. This paper introduces a novel approach to content-based news audio classification using Multiple Instance Learning (MIL). Content-based analysis provides useful information for audio classification as well as segmentation, and a key step in this direction is a classifier that can predict the category of an input audio sample. Two types of features are used for audio content detection, namely Perceptual Linear Prediction (PLP) coefficients and Mel-Frequency Cepstral Coefficients (MFCC). Two MIL techniques, mi-Graph and mi-SVM, are used for classification. The results obtained using these methods are evaluated using different performance metrics. The experimental results show that MIL demonstrates excellent audio classification capability.
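
A minimal Python sketch of this kind of pipeline, assuming librosa and scikit-learn: each clip becomes a bag of per-frame MFCC instances, and the mi-SVM heuristic (iteratively re-imputing instance labels inside positive bags) trains the classifier. File handling, kernel, and parameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import librosa
from sklearn.svm import SVC

def audio_to_bag(path, sr=16000, n_mfcc=13):
    """Treat each audio clip as a bag whose instances are per-frame MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.T  # shape: (n_frames, n_mfcc)

def mi_svm(bags, bag_labels, n_iters=10):
    """bags: list of (n_i, d) arrays; bag_labels: +1 / -1 per bag."""
    # Initialise every instance with its bag's label.
    X = np.vstack(bags)
    y = np.concatenate([np.full(len(b), lab) for b, lab in zip(bags, bag_labels)])
    clf = SVC(kernel="rbf", C=1.0)
    for _ in range(n_iters):
        clf.fit(X, y)
        scores = clf.decision_function(X)
        start = 0
        for b, lab in zip(bags, bag_labels):
            s = scores[start:start + len(b)]
            if lab > 0:
                # Re-impute instance labels inside positive bags...
                y[start:start + len(b)] = np.where(s > 0, 1, -1)
                # ...but force at least one positive instance per positive bag.
                if not (y[start:start + len(b)] == 1).any():
                    y[start + np.argmax(s)] = 1
            start += len(b)
    return clf

def predict_bag(clf, bag):
    # A bag is positive if its most positive instance scores positive.
    return 1 if clf.decision_function(bag).max() > 0 else -1
```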

2018 ◽  
Vol 29 (1) ◽  
pp. 327-344 ◽  
Author(s):  
Mohit Dua ◽  
Rajesh Kumar Aggarwal ◽  
Mantosh Biswas

Abstract The classical approach to building an automatic speech recognition (ASR) system uses different feature extraction methods at the front end and various parameter classification techniques at the back end. The Mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) techniques have been the conventional feature extraction approaches for many years, and the hidden Markov model (HMM) has been the most obvious choice for feature classification. However, the performance of MFCC-HMM and PLP-HMM based ASR systems degrades in real-time environments. The proposed work discusses the implementation of a discriminatively trained Hindi ASR system using noise-robust integrated features and a refined HMM model. It sequentially combines MFCC with PLP and MFCC with gammatone-frequency cepstral coefficients (GFCC) to obtain MF-PLP and MF-GFCC integrated feature vectors, respectively. The HMM parameters are refined using a genetic algorithm (GA) and particle swarm optimization (PSO). Discriminative training of the acoustic model using maximum mutual information (MMI) and minimum phone error (MPE) is performed to enhance the accuracy of the proposed system. The results show that discriminative training using MPE with the MF-GFCC integrated feature vector and PSO-based HMM parameter refinement gives significantly better results than the other implemented techniques.
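
A hedged sketch of the feature-integration step described above: MFCC frames come from librosa, while the second stream (PLP or GFCC) is a placeholder helper, since neither feature ships with librosa. Only the frame-wise fusion itself is illustrated.

```python
import numpy as np
import librosa

def mfcc_frames(y, sr, n_mfcc=13):
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T  # (T, 13)

def integrate(stream_a, stream_b):
    """Frame-wise concatenation, truncated to the shorter stream."""
    t = min(len(stream_a), len(stream_b))
    return np.hstack([stream_a[:t], stream_b[:t]])  # (T, d_a + d_b)

# Usage with a hypothetical gfcc() extractor returning (T, d) frames:
#   mf_gfcc = integrate(mfcc_frames(y, sr), gfcc(y, sr))
```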


2020 ◽  
Vol 17 (1) ◽  
pp. 303-307
Author(s):  
S. Lalitha ◽  
Deepa Gupta

Mel Frequency Cepstral Coefficients (MFCCs) and Perceptual Linear Prediction Coefficients (PLPCs) are widely used nonlinear vocal parameters in the majority of speaker identification, speaker and speech recognition techniques, as well as in the field of emotion recognition. Since the 1980s, significant effort has been devoted to the development of these features. Considerations such as the use of appropriate frequency estimation approaches, the design of suitable filter banks, and the selection of preferred features play a vital part in the robustness of models employing these features. This article presents an overview of MFCC and PLPC features for different speech applications. Insights such as accuracy, background environment, type of data, and feature size are examined and summarized with the corresponding key references. In addition, the advantages and shortcomings of these features are discussed. This background work is intended as a first step toward the enhancement of MFCC and PLPC in terms of novelty, higher accuracy, and lower complexity.
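
To make the surveyed pipeline concrete, here is a from-scratch numpy/scipy sketch of the standard MFCC chain (pre-emphasis, framing, windowing, power spectrum, mel filter bank, log compression, DCT). Frame sizes and filter counts are common defaults rather than values prescribed by the article, and the signal is assumed to be at least one frame long.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, frame_len=0.025, hop=0.01,
         n_filters=26, n_ceps=13):
    # Pre-emphasis boosts the high frequencies attenuated in voiced speech.
    sig = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    flen, fhop = int(frame_len * sr), int(hop * sr)
    n_frames = 1 + (len(sig) - flen) // fhop
    idx = np.arange(flen)[None, :] + fhop * np.arange(n_frames)[:, None]
    frames = sig[idx] * np.hamming(flen)
    # Power spectrum of each windowed frame.
    pow_spec = (np.abs(np.fft.rfft(frames, n_fft)) ** 2) / n_fft
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    feat = np.log(pow_spec @ fbank.T + 1e-10)
    # The DCT decorrelates log filter-bank energies; keep low-order terms.
    return dct(feat, type=2, axis=1, norm="ortho")[:, :n_ceps]
```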


2014 ◽  
Vol 571-572 ◽  
pp. 205-208
Author(s):  
Guan Yu Li ◽  
Hong Zhi Yu ◽  
Yong Hong Li ◽  
Ning Ma

Speech feature extraction is discussed. The Mel-frequency cepstral coefficient (MFCC) and perceptual linear prediction (PLP) methods are analyzed. These two types of features are extracted in a Lhasa large-vocabulary continuous speech recognition system, and the recognition results are compared.


Author(s):  
Gurpreet Kaur ◽  
Mohit Srivastava ◽  
Amod Kumar

Huge growth has been observed in the speech and speaker recognition field as many artificial intelligence algorithms have been applied. Speech is used to convey messages via the language being spoken, emotions, gender, and speaker identity. Many real applications in healthcare are based upon speech and speaker recognition, e.g. a voice-controlled wheelchair. In this paper, we use a genetic algorithm (GA) for combined speaker and speech recognition, relying on optimized Mel Frequency Cepstral Coefficient (MFCC) speech features, with classification performed using a Deep Neural Network (DNN). In the first phase, feature extraction using MFCC is executed and feature optimization is performed using the GA. In the second phase, training is conducted using the DNN. Evaluation and validation of the proposed model are carried out in a real environment, and efficiency is calculated on the basis of parameters such as accuracy, precision, recall, sensitivity, and specificity. The paper also evaluates feature extraction methods such as linear predictive coding coefficients (LPCC), perceptual linear prediction (PLP), Mel frequency cepstral coefficients (MFCC), and relative spectra filtering (RASTA), all used for combined speaker and speech recognition systems. A comparison of different methods based on existing techniques, for both clean and noisy environments, is made as well.
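
A minimal sketch of GA-driven feature optimization in the spirit of the first phase, assuming scikit-learn: chromosomes are binary masks over MFCC feature dimensions, and fitness is the validation accuracy of a small MLP standing in for the paper's DNN. Population size, rates, and generations are illustrative choices.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Validation accuracy of a small classifier on the masked features."""
    if mask.sum() == 0:
        return 0.0
    Xtr, Xval, ytr, yval = train_test_split(
        X[:, mask.astype(bool)], y, test_size=0.3, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300)
    clf.fit(Xtr, ytr)
    return clf.score(Xval, yval)

def ga_select(X, y, pop_size=12, n_gens=15, p_mut=0.05):
    d = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, d))
    for _ in range(n_gens):
        scores = np.array([fitness(m, X, y) for m in pop])
        # Truncation selection: keep the top half of the population.
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, d)            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(d) < p_mut        # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(m, X, y) for m in pop])]
    return best.astype(bool)  # boolean mask over feature columns
```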


2020 ◽  
Vol 12 (5) ◽  
pp. 1-8
Author(s):  
Nahyan Al Mahmud ◽  
Shahfida Amjad Munni

The performance of various acoustic feature extraction methods is compared in this work using a Long Short-Term Memory (LSTM) neural network in a Bangla speech recognition system. The acoustic features are a series of vectors that represent the speech signal; they can be classified into either words or sub-word units such as phonemes. In this work, linear predictive coding (LPC) is first used as the acoustic vector extraction technique, chosen for its widespread popularity. Other vector extraction techniques, Mel frequency cepstral coefficients (MFCC) and perceptual linear prediction (PLP), are then applied; these two methods closely resemble the human auditory system. The feature vectors are trained using the LSTM neural network, and the obtained models of different phonemes are compared using statistical tools, namely the Bhattacharyya distance and the Mahalanobis distance, to investigate the nature of those acoustic features.
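
A short numpy sketch of the model-comparison step: each phoneme's feature frames are summarized as a Gaussian, and the closed-form Bhattacharyya distance (plus a per-frame Mahalanobis distance) compares the resulting models. Which frames feed it, whether LPC, MFCC, or PLP, is up to the caller.

```python
import numpy as np

def gaussian_fit(frames):
    """frames: (T, d) feature vectors for one phoneme."""
    return frames.mean(axis=0), np.cov(frames, rowvar=False)

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Closed-form Bhattacharyya distance between two Gaussians."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    _, ld = np.linalg.slogdet(cov)
    _, ld1 = np.linalg.slogdet(cov1)
    _, ld2 = np.linalg.slogdet(cov2)
    term2 = 0.5 * (ld - 0.5 * (ld1 + ld2))
    return term1 + term2

def mahalanobis(x, mu, cov):
    """Mahalanobis distance of a single frame x from a phoneme model."""
    diff = x - mu
    return np.sqrt(diff @ np.linalg.solve(cov, diff))
```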


Audio classification and recognition is a challenging task incorporated into many recent applications and devices. Audio data acts as a crucial input for emotion detection after post-surgical issues, classification of various voice sequences, classification of random voice data, surveillance, and speaker detection. Most audio data is contaminated with environmental or instrumental noise, so extracting unique features from the audio data is very important for determining the speaker effectively. Such an idea is evaluated here. The research focuses on the classification of TV broadcast audio, in which the type of audio is separated into classes through a novel approach. The design evaluates five categories of audio data, namely advertisement, news, songs, cartoon, and sports, from data collected using a TV tuner card. The proposed design uses Python as the development environment. The audio samples are converted to images using spectrograms, and transfer learning is then applied to the pretrained models ResNet50 and InceptionV3 to extract deep features and classify the audio data. InceptionV3 is compared with ResNet50 to obtain greater classification accuracy. The pretrained models were trained on the ImageNet data set and are used here to quickly train the audio classification model on the training set with high accuracy. The proposed model produces an accuracy of 94% for InceptionV3, which is greater than the 93% given by ResNet50.
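
A hedged Keras sketch of the described transfer-learning pipeline: mel-spectrogram "images" feed an ImageNet-pretrained InceptionV3 backbone with a new five-way softmax head. Input size, preprocessing, and optimizer settings are common defaults, not necessarily the paper's exact configuration.

```python
import librosa
import tensorflow as tf

def spectrogram_image(path, size=(299, 299)):
    """Render a clip as a 3-channel spectrogram tensor for the CNN."""
    y, sr = librosa.load(path, sr=22050)
    S = librosa.power_to_db(librosa.feature.melspectrogram(y=y, sr=sr))
    S = (S - S.min()) / (S.max() - S.min() + 1e-9)      # scale to [0, 1]
    img = tf.image.resize(S[..., None], size)           # (299, 299, 1)
    img = tf.repeat(img, 3, axis=-1)                    # fake RGB channels
    return tf.keras.applications.inception_v3.preprocess_input(img * 255.0)

base = tf.keras.applications.InceptionV3(include_top=False,
                                         weights="imagenet",
                                         input_shape=(299, 299, 3))
base.trainable = False  # freeze the pretrained backbone; train only the head
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(5, activation="softmax"),  # five audio classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=10)  # assumed dataset tensors
```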


2019 ◽  
Vol 8 (02) ◽  
pp. 24469-24472
Author(s):  
Thiruvengatanadhan R

Automatic audio classification is very useful in audio indexing, content-based audio retrieval, and online audio distribution. This paper deals with the speech/music classification problem, starting from a set of features extracted directly from audio data. The accuracy of the classification relies on the strength of the features and the classification scheme. In this work, Perceptual Linear Prediction (PLP) features are extracted from the input signal. After feature extraction, classification is carried out using a Support Vector Machine (SVM) model. The proposed feature extraction and classification models result in better accuracy in speech/music classification.
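
A compact scikit-learn sketch of the classification stage: each clip is pooled into a fixed-length vector (mean and standard deviation of its per-frame features) and fed to an SVM. The PLP frame matrices are assumed to come from an external extractor, and the kernel and C value are illustrative.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def clip_vector(frames):
    """Pool a (T, d) frame matrix into one fixed-length clip descriptor."""
    return np.concatenate([frames.mean(axis=0), frames.std(axis=0)])

def train_speech_music(clips, labels):
    """clips: list of (T, d) PLP frame matrices; labels: 0=speech, 1=music."""
    X = np.stack([clip_vector(c) for c in clips])
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
    model.fit(X, labels)
    return model
```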

