A study on model-based error rate estimation for automatic speech recognition

2003 ◽  
Vol 11 (6) ◽  
pp. 581-589 ◽  
Author(s):  
Chao-Shih Huang ◽  
Hsiao-Chuan Wang ◽  
Chin-Hui Lee


2021 ◽
Vol 3 (1) ◽  
pp. 68-83
Author(s):  
Wiqas Ghai ◽  
Navdeep Singh

Punjabi is a tonal language of the Indo-Aryan family with speakers all around the world. It has gained acceptance in media and communication and therefore deserves a place in the growing field of automatic speech recognition, which has already been explored successfully for a number of other Indian and foreign languages. Some work has been done on isolated-word speech recognition for Punjabi, but only with whole-word acoustic models; a phone-based approach has yet to be applied to Punjabi speech recognition. This paper describes an automatic speech recognizer that handles both isolated-word and connected-word speech using a triphone-based acoustic model on the HTK 3.4.1 speech engine, and compares its performance with a whole-word acoustic model ASR system. Word recognition accuracy on isolated-word speech was 92.05% for the whole-word model system and 97.14% for the triphone model system, while word recognition accuracy on connected-word speech was 87.75% for the whole-word model system and 91.62% for the triphone model system.
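The key difference between the two systems is the acoustic unit: whole-word models treat each vocabulary word as one indivisible unit, while triphone models represent each phone in its left and right context. As a rough illustration (not the authors' code), the sketch below expands a phone sequence into HTK-style triphone labels of the form left-phone+right; the example phone sequence is hypothetical.

```python
def to_triphones(phones):
    """Expand a phone sequence into HTK-style triphone labels.

    Interior phones become 'left-centre+right'; the first and last
    phones keep only the context that exists (biphone fallback),
    mirroring HTK's l-p+r notation.
    """
    labels = []
    for i, p in enumerate(phones):
        left = phones[i - 1] if i > 0 else None
        right = phones[i + 1] if i < len(phones) - 1 else None
        if left and right:
            labels.append(f"{left}-{p}+{right}")
        elif right:
            labels.append(f"{p}+{right}")
        elif left:
            labels.append(f"{left}-{p}")
        else:
            labels.append(p)
    return labels

# Hypothetical phone sequence for a short Punjabi word:
print(to_triphones(["p", "a", "n", "j"]))
# ['p+a', 'p-a+n', 'a-n+j', 'n-j']
```

Sharing triphone states across words is what lets a phone-based system generalize beyond the exact words seen in training, unlike a whole-word model.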


Author(s):  
Lakshika Kavmini ◽  
Thilini Dinushika ◽  
Uthayasanker Thayasivam ◽  
Sanath Jayasena

Recent advancements in conversational Artificial Intelligence (AI) are rapidly being integrated into every realm of human life. Conversational agents that can learn, understand human languages, and mimic the human thinking process have already revolutionized human lifestyles. Understanding a speaker's intention from natural speech is a significant step in conversational AI. A major challenge that hinders this process is the lack of language resources. In this research, we address this issue and develop a domain-specific speech command classification system for Sinhala, one of the low-resourced languages. An effective speech command classification system can be utilized in several value-added applications such as speech dialog systems. Our system integrates Automatic Speech Recognition (ASR) and Natural Language Understanding (NLU). The ASR engine, implemented with a Gaussian Mixture Model-Hidden Markov Model (GMM-HMM), converts a Sinhala speech command into a corresponding text representation. The text classifier, implemented as an ensemble of several classifiers, predicts the intent of the speaker from that text output. In this paper, we discuss and evaluate various algorithms and techniques that can be used to optimize the performance of both the ASR engine and the text classifier. We also present our novel Sinhala speech data corpus of 4.15 h, based on the banking domain. As the final outcome, our system reports a Sinhala speech command classification accuracy of 91.03%, outperforming the state-of-the-art speech-to-intent mapping systems developed for the Sinhala language. The individual evaluation of the ASR system reports a 9.91% Word Error Rate and a 19.95% Sentence Error Rate, suggesting the applicability of advanced speech recognition techniques despite the limited language resources. Finally, our findings deliver useful insights for further research on speech command classification in low-resourced contexts.
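The abstract does not spell out the ensemble configuration, so the following is only a minimal sketch of the general pattern it describes: several text classifiers over shared TF-IDF features combined by majority vote, here with scikit-learn. The intent labels and training commands are invented placeholders (and in English rather than Sinhala).

```python
from sklearn.ensemble import VotingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented placeholder data: ASR transcripts paired with intent labels.
texts = ["check my account balance", "transfer money to savings",
         "what is my balance", "send funds to another account"]
intents = ["balance_inquiry", "transfer", "balance_inquiry", "transfer"]

# Majority ("hard") vote over three classifiers on shared TF-IDF features.
ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    VotingClassifier([
        ("lr", LogisticRegression(max_iter=1000)),
        ("nb", MultinomialNB()),
        ("svm", LinearSVC()),
    ], voting="hard"),
)
ensemble.fit(texts, intents)
print(ensemble.predict(["move money to my savings account"]))
```

An ensemble like this is attractive in a low-resource setting because the individual classifiers make partly independent errors on noisy ASR output, and the vote smooths them out.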


Electronics ◽  
2021 ◽  
Vol 10 (7) ◽  
pp. 807
Author(s):  
Jiho Jeong ◽  
S. I. M. M. Raton Mondol ◽  
Yeon Wook Kim ◽  
Sangmin Lee

An automatic speech recognition (ASR) model usually requires a large amount of training data to provide better results than ASR models trained on small amounts of data. It is difficult to apply ASR models to non-standard speech, such as that of cochlear implant (CI) patients, owing to privacy concerns and difficulty of access. In this paper, an effective fine-tuning and augmentation method for ASR is proposed. Experiments compare the character error rate (CER) of ASR models trained with the basic and the proposed methods. The proposed method achieved a CER of 36.03% on the CI patients' speech test dataset using only 2 h and 30 min of training data, a 62% improvement over the basic method.
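The abstract does not detail the augmentation scheme, so the snippet below is only a generic illustration of two waveform-level augmentations commonly used to stretch small speech datasets: additive Gaussian noise at a fixed signal-to-noise ratio and a random time shift, implemented with NumPy. The SNR level and shift range are arbitrary choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(wave, snr_db=20.0):
    """Mix in Gaussian noise at a given signal-to-noise ratio (dB)."""
    signal_power = np.mean(wave ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=wave.shape)
    return wave + noise

def time_shift(wave, max_frac=0.1):
    """Circularly shift the waveform by up to max_frac of its length."""
    limit = int(len(wave) * max_frac)
    return np.roll(wave, rng.integers(-limit, limit + 1))

# Each source utterance yields several perturbed training copies,
# multiplying the effective size of a small dataset.
utterance = rng.standard_normal(16000)  # stand-in for 1 s of 16 kHz audio
augmented = [time_shift(add_noise(utterance)) for _ in range(4)]
```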


2020 ◽  
Vol 2 (2) ◽  
pp. 7-13
Author(s):  
Andi Nasri

With the continuing development of speech recognition technology, various software aimed at helping deaf people communicate with others has been developed. Such systems translate spoken speech into sign language or, conversely, translate sign language into speech. They have been developed for various languages, including English, Arabic, Spanish, Mexican Spanish, and Indonesian. For Indonesian specifically, researchers have begun attempting to build such systems, but the systems built so far are limited by the Automatic Speech Recognition (ASR) used, which has a restricted vocabulary. This research aims to develop a system that translates spoken Indonesian into the Indonesian Sign Language System (SIBI) with a larger corpus, using continuous speech recognition to improve system accuracy. Testing shows the system achieves an average accuracy of 90.50% and a Word Error Rate (WER) of 9.50%. This accuracy is higher than that of the second study (48.75%) and the first study (66.67%). The system can also recognize continuously spoken words, i.e., sentence utterances. Performance tests show the system takes 0.83 seconds for speech-to-text and 8.25 seconds for speech-to-sign.
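The abstract does not describe the system's architecture, so this is only a hypothetical sketch of the last stage of such a speech-to-sign pipeline: the ASR transcript is tokenized and each word is looked up in a table of SIBI sign-clip files, with per-letter fingerspelling as a fallback for out-of-vocabulary words. The lookup table and all file paths are invented.

```python
# Hypothetical lookup table: Indonesian word -> SIBI sign clip file.
SIGN_CLIPS = {
    "saya": "sibi/saya.mp4",   # "I"
    "makan": "sibi/makan.mp4", # "eat"
    "nasi": "sibi/nasi.mp4",   # "rice"
}

def text_to_sign_clips(asr_text):
    """Map recognized text to a playlist of sign clips.

    Words missing from the dictionary fall back to per-letter
    fingerspelling clips (also invented paths).
    """
    playlist = []
    for word in asr_text.lower().split():
        if word in SIGN_CLIPS:
            playlist.append(SIGN_CLIPS[word])
        else:
            playlist.extend(f"sibi/letters/{ch}.mp4"
                            for ch in word if ch.isalpha())
    return playlist

print(text_to_sign_clips("Saya makan nasi"))
# ['sibi/saya.mp4', 'sibi/makan.mp4', 'sibi/nasi.mp4']
```

The gap between the reported 0.83 s for speech-to-text and 8.25 s for speech-to-sign suggests that rendering or retrieving the sign output, not recognition, dominates the latency.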


Author(s):  
Nguyen Thi My Thanh ◽  
Phan Xuan Dung ◽  
Nguyen Ngoc Hay ◽  
Le Ngoc Bich ◽  
Dao Xuan Quy

This paper presents an evaluation of Vietnamese Automatic Speech Recognition (VASP) systems on news broadcasts from leading Vietnamese companies, namely Vais (Vietnam AI System), Viettel, Zalo, and Fpt, and from the world-leading company Google. To evaluate the speech recognition systems, we use the Word Error Rate (WER) metric, computed on the text obtained from the Vais VASP, Viettel VASP, Zalo VASP, Fpt VASP, and Google VASP systems. Here, we feed news audio files to the APIs of these systems to produce the corresponding recognized text. The WER comparison across Vais, Viettel, Zalo, Fpt, and Google shows that the Vietnamese speech recognition systems for news broadcasts from Viettel, Zalo, Fpt, and Google all perform well, with Vais giving the best results.
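As a minimal sketch of this kind of benchmark (not the authors' code), the snippet below scores several systems' transcripts against a shared reference with the open-source jiwer package and ranks them by WER. The system names and transcripts are invented placeholders; real input would come from each provider's API.

```python
import jiwer  # pip install jiwer

# Invented placeholder transcripts for one reference utterance.
reference = "the evening news bulletin begins at seven"
hypotheses = {
    "system_a": "the evening news bulletin begins at seven",
    "system_b": "the evening news bulletin begin at seven",
    "system_c": "evening news bulletin begins seven",
}

# Lower WER is better; 0.0 means a perfect transcript.
for name, hyp in sorted(hypotheses.items(),
                        key=lambda kv: jiwer.wer(reference, kv[1])):
    print(f"{name}: WER = {jiwer.wer(reference, hyp):.3f}")
```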

