Speech Recognition for Task Domains with Sparse Matched Training Data

2020 ◽  
Vol 10 (18) ◽  
pp. 6155
Author(s):  
Byung Ok Kang ◽  
Hyeong Bae Jeon ◽  
Jeon Gue Park

We propose two approaches to handle speech recognition for task domains with sparse matched training data. The first is an active learning method that selects training data for the target domain from a general domain that already has a significant amount of labeled speech data, using attribute-disentangled latent variables. For the active learning process, we designed an integrated system consisting of a variational autoencoder, whose encoder infers latent variables with disentangled attributes from the input speech, and a classifier that selects training data whose attributes match the target domain. The second approach combines data augmentation, to generate matched target-domain speech data, with transfer learning based on teacher/student learning. To evaluate the proposed methods, we experimented with various task domains with sparse matched training data. The experimental results show that the proposed approach has qualitative characteristics suited to the desired purpose: it outperforms random selection and is comparable to using an equal amount of additional target-domain data.
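A minimal sketch of the attribute-based selection step is given below: a VAE encoder infers attribute latents for general-domain utterances, and a small classifier scores how well each latent matches the target domain. The module names, layer sizes, and latent dimension are illustrative assumptions, not the authors' implementation.

```python
# Sketch of attribute-based data selection with a VAE encoder (assumptions:
# names, sizes, and the matcher architecture are illustrative, not the paper's).
import torch
import torch.nn as nn

class AttributeEncoder(nn.Module):
    """Infers (approximately) disentangled attribute latents from speech features."""
    def __init__(self, feat_dim=40, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, latent_dim)      # latent mean
        self.logvar = nn.Linear(128, latent_dim)  # latent log-variance

    def forward(self, x):
        h = self.net(x)
        return self.mu(h), self.logvar(h)

class DomainMatcher(nn.Module):
    """Scores how well a latent's attributes match the target domain."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.clf = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                 nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, z):
        return self.clf(z).squeeze(-1)

def select_matched_utterances(encoder, matcher, general_feats, k=1000):
    """Pick the k general-domain utterances whose attribute latents
    the classifier rates as most target-domain-like."""
    with torch.no_grad():
        mu, _ = encoder(general_feats)   # use the latent mean at selection time
        scores = matcher(mu)
    return torch.topk(scores, k).indices
```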

2021 ◽  
Vol 11 (6) ◽  
pp. 2866
Author(s):  
Damheo Lee ◽  
Donghyun Kim ◽  
Seung Yun ◽  
Sanghun Kim

In this paper, we propose a new method for code-switching (CS) automatic speech recognition (ASR) in Korean. First, the phonetic variation of English words as pronounced by Korean speakers must be considered, so we sought a unified pronunciation model based on phonetic knowledge and deep learning. Second, we extracted CS sentences semantically similar to the target domain and then applied language model (LM) adaptation to correct the bias toward Korean caused by the imbalanced training data. In this experiment, the training data were AI Hub (1033 h) for Korean and Librispeech (960 h) for English. Compared to the baseline, the proposed method improved the error reduction rate (ERR) by up to 11.6% with phonetic variant modeling and by 17.3% when semantically similar sentences were applied to the LM adaptation. Considering only English words, the word correction rate improved by up to 24.2% over the baseline. The proposed method thus appears to be very effective for CS speech recognition.
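The sentence-selection step can be sketched as below, approximating semantic similarity with TF-IDF cosine similarity against a target-domain centroid; the abstract does not specify the actual similarity measure, so this is an illustrative stand-in.

```python
# Sketch: rank candidate code-switching sentences by similarity to the target
# domain. TF-IDF cosine similarity is an assumed stand-in for the paper's
# (unspecified) semantic similarity measure.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_similar_sentences(cs_sentences, target_sentences, top_k=3):
    """Return the top_k CS sentences closest to the target-domain centroid."""
    vec = TfidfVectorizer().fit(cs_sentences + target_sentences)
    cs_mat = vec.transform(cs_sentences)
    centroid = vec.transform(target_sentences).mean(axis=0)  # (1, vocab)
    sims = cosine_similarity(cs_mat, np.asarray(centroid)).ravel()
    return [cs_sentences[i] for i in np.argsort(sims)[::-1][:top_k]]
```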


2021 ◽  
Author(s):  
Justin Larocque-Villiers ◽  
Patrick Dumond

Abstract Through the intelligent classification of bearing faults, predictive maintenance makes it possible to optimize service scheduling, inventory, maintenance, and safety. However, real-world rotating machinery undergoes a variety of operating conditions, fault conditions, and noise. Due to these factors, a fault detection algorithm is often required to perform accurately even on data outside its trained domain. Although open-source datasets offer an incredible opportunity to advance the performance of predictive maintenance technology and methods, more research is required to develop algorithms capable of generalized intelligent fault detection across domains and discrepancies. In this study, current benchmarks on source–target domain discrepancy challenges are reviewed using the Case Western Reserve University (CWRU) and the Paderborn University (PbU) datasets. A convolutional neural network (CNN) architecture and data augmentation technique more suitable for generalization tasks are proposed and tested against existing benchmarks on the PbU dataset by training on artificial faults and testing on real faults. The proposed method improves fault classification by 13.35%, with less than half the standard deviation of the compared benchmark. Transfer learning is then used to leverage the larger PbU dataset in order to make predictions on the CWRU dataset under a challenging source–target domain discrepancy in which there is minimal training data to adequately represent unseen bearing faults. The transfer learning-based CNN is found to be capable of generalizing across two open-source datasets, resulting in an improvement in accuracy from 53.1% to 68.3%.
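A minimal sketch of this kind of pipeline appears below: a 1-D CNN over vibration signals, a simple scaling-plus-noise augmentation, and a transfer step that freezes the convolutional features before fine-tuning on the target dataset. The architecture and augmentation parameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a 1-D CNN for vibration-signal fault classification with a simple
# augmentation and a feature-freezing transfer step (e.g., PbU -> CWRU).
# Layer sizes and augmentation ranges are assumptions, not the paper's values.
import torch
import torch.nn as nn

class FaultCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, signal_len)
        return self.classifier(self.features(x).squeeze(-1))

def augment(x, noise_std=0.05):
    """Random amplitude scaling plus additive Gaussian noise."""
    scale = torch.empty(x.size(0), 1, 1).uniform_(0.8, 1.2)
    return x * scale + noise_std * torch.randn_like(x)

def prepare_for_transfer(model, n_target_classes):
    """Freeze the feature extractor; re-initialize the classifier head
    for the target dataset's fault classes."""
    for p in model.features.parameters():
        p.requires_grad = False
    model.classifier = nn.Linear(32, n_target_classes)
    return model
```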


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Qingshan She ◽  
Kang Chen ◽  
Zhizeng Luo ◽  
Thinh Nguyen ◽  
Thomas Potter ◽  
...  

Recent technological advances have enabled researchers to collect large amounts of electroencephalography (EEG) signals in labeled and unlabeled datasets. However, it is expensive and time-consuming to collect labeled EEG data for use in brain-computer interface (BCI) systems. In this paper, a novel active learning method is proposed to minimize the amount of labeled, subject-specific EEG data required for effective classifier training, by combining measures of uncertainty and representativeness within an extreme learning machine (ELM). Following this approach, an ELM classifier was first used to select a relatively large batch of unlabeled examples, whose uncertainty was measured through the best-versus-second-best (BvSB) strategy. The diversity of each sample was then measured between the limited labeled training data and the previously selected unlabeled samples, and similarity was measured among the previously selected samples. Finally, a tradeoff parameter was introduced to control the balance between informative and representative samples, and these samples were then used to construct a powerful ELM classifier. Extensive experiments were conducted using benchmark and multiclass motor imagery EEG datasets to evaluate the efficacy of the proposed method. Experimental results show that the performance of the new algorithm exceeds or matches that of several state-of-the-art active learning algorithms. The proposed method thereby improves classifier performance and reduces the need for training samples in BCI applications.
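The selection criterion can be sketched as follows: the BvSB margin supplies the uncertainty term, and a distance-based novelty term stands in for the paper's diversity/similarity measures, combined through a tradeoff parameter beta. The representativeness measure used here is an assumption, not the authors' exact formulation.

```python
# Sketch of BvSB uncertainty combined with a representativeness term via a
# tradeoff parameter. The distance-based representativeness is an assumed
# stand-in for the paper's diversity/similarity measures.
import numpy as np

def bvsb_uncertainty(probs):
    """Best-versus-second-best margin; a smaller margin means more uncertain."""
    part = np.sort(probs, axis=1)
    return part[:, -1] - part[:, -2]

def select_samples(probs, feats, labeled_feats, k=10, beta=0.5):
    """Score = beta * informativeness + (1 - beta) * representativeness."""
    informative = 1.0 - bvsb_uncertainty(probs)   # high when the margin is small
    # representativeness stand-in: distance to the nearest labeled sample
    d = np.linalg.norm(feats[:, None, :] - labeled_feats[None, :, :], axis=2)
    nearest = d.min(axis=1)
    representative = nearest / (nearest.max() + 1e-12)
    score = beta * informative + (1.0 - beta) * representative
    return np.argsort(score)[::-1][:k]            # indices of the top-k samples
```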


Author(s):  
Wening Mustikarini ◽  
Risanuri Hidayat ◽  
Agus Bejo

Abstract — Automatic Speech Recognition (ASR) is a technology that uses machines to process and recognize human speech. One way to increase the recognition rate is to use a model of the language to be recognized. In this paper, a speech recognition application is introduced to recognize the words "atas" (up), "bawah" (down), "kanan" (right), and "kiri" (left). This research used 400 samples of speech data: 75 samples per word for training and 25 samples per word for testing. The speech recognition system was designed using 13 Mel Frequency Cepstral Coefficients (MFCC) as features and a Support Vector Machine (SVM) as the classifier. The system was tested with linear and RBF kernels, various cost values, and three sample sizes (n = 25, 50, 75). The best average accuracy was obtained from the SVM with a linear kernel, a cost value of 100, and a training set of 75 samples per class. During the training phase, the system achieved an F1-score (the trade-off between precision and recall) of 80% for the word "atas", 86% for the word "bawah", 81% for the word "kanan", and 100% for the word "kiri". Using 25 new samples per class in the testing phase, the F1-score was 76% for the "atas" class, 54% for the "bawah" class, 44% for the "kanan" class, and 100% for the "kiri" class.
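A minimal sketch of this pipeline is shown below, assuming librosa for feature extraction and time-averaged MFCCs as the fixed-length feature vector; the pooling step is an assumption, since the abstract does not say how frame-level coefficients are aggregated.

```python
# Sketch of the MFCC + SVM pipeline: 13 MFCCs per frame, averaged over time
# (assumed pooling), classified by a linear SVM with C=100 as reported above.
import librosa
import numpy as np
from sklearn.svm import SVC

def mfcc_features(wav_path, n_mfcc=13):
    """Extract 13 MFCCs and average them over time into one feature vector."""
    y, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (13, n_frames)
    return mfcc.mean(axis=1)                                # (13,)

def train_classifier(wav_paths, labels):
    """Fit a linear SVM on the pooled MFCC features."""
    X = np.stack([mfcc_features(p) for p in wav_paths])
    clf = SVC(kernel="linear", C=100)   # best-performing setting reported above
    return clf.fit(X, labels)
```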


2014 ◽  
Vol 490-491 ◽  
pp. 1347-1355
Author(s):  
Xiang Lilan Zhang ◽  
Ji Ping Sun ◽  
Xu Hui Huang ◽  
Zhi Gang Luo

Lightweight speaker-dependent (SD) automatic speech recognition (ASR) is a promising solution to the problems of personal privacy disclosure and the difficulty of obtaining training material for many seldom-used English words and (often non-English) names. The dynamic time warping (DTW) algorithm is the state-of-the-art algorithm for small-footprint SD ASR applications, which have limited storage space and small vocabularies. In our previous work, we successfully developed two fast and accurate DTW variations for clean speech data. However, speech recognition in adverse conditions is still a big challenge. In order to improve recognition accuracy in noisy and bad recording conditions, such as excessively high or low recording volume, we introduce a novel weighted DTW method. This method defines a feature index for each time frame of the training data and then applies it in the core DTW process to tune the final alignment score. In extensive experiments on one representative SD dataset of three speakers' recordings, our method achieves better accuracy than DTW, with a 0.5% relative reduction in error rate (RRER) on clean speech data and a 7.5% RRER on noisy and badly recorded speech data. To the best of our knowledge, our weighted DTW is the first weighted DTW method specifically designed for speech data in noisy and bad recording conditions.
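A minimal sketch of a frame-weighted DTW is given below; the per-frame weight used here (normalized frame energy) is an illustrative assumption, since the abstract does not define the feature index.

```python
# Sketch of frame-weighted DTW: each training-template frame carries a weight
# that scales its local cost. The energy-based weight is an assumed stand-in
# for the paper's per-frame "feature index".
import numpy as np

def frame_weights(template):
    """Hypothetical per-frame weight: normalized frame energy."""
    e = np.linalg.norm(template, axis=1)
    return e / (e.max() + 1e-12)

def weighted_dtw(template, query):
    """DTW where the local distance of template frame i is scaled by w[i]."""
    w = frame_weights(template)
    n, m = len(template), len(query)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = w[i - 1] * np.linalg.norm(template[i - 1] - query[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]  # lower score = better alignment
```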

