Selection of acoustic modeling unit for Tibetan speech recognition based on deep learning

2021 ◽  
Vol 336 ◽  
pp. 06014
Author(s):  
Baojia Gong ◽  
Rangzhuoma Cai ◽  
Zhijie Cai ◽  
Yuntao Ding ◽  
Maozhaxi Peng

The selection of the modeling unit is the primary problem in acoustic modeling for speech recognition, and different acoustic modeling units directly affect the overall performance of a recognition system. By studying and analyzing the deficiencies of existing acoustic modeling units for Tibetan speech recognition, this paper designs a Tibetan character segmentation and labeling model and its algorithm flow to address the problem of selecting the acoustic modeling unit. Experimental verification shows that the model and algorithm perform character segmentation and labeling well, with the accuracy of Tibetan character segmentation and labeling reaching 99.98%.
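Label-level accuracy of the kind the abstract reports can be computed as a simple match rate between predicted and reference tags. A minimal sketch, assuming a hypothetical BIES-style segmentation tagging scheme and invented tag sequences (the paper's actual label set and data are not given here):

```python
# Hypothetical gold and predicted segmentation tags for one Tibetan
# character sequence (BIES-style tags are an illustrative assumption,
# not the paper's actual labeling scheme).
gold = ["B", "I", "E", "S", "B", "E", "S", "S"]
pred = ["B", "I", "E", "S", "B", "E", "S", "B"]

# Accuracy = fraction of positions where the predicted tag matches gold.
correct = sum(g == p for g, p in zip(gold, pred))
accuracy = correct / len(gold)
print(f"accuracy = {accuracy:.4f}")  # 7 of 8 tags match -> 0.8750
```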

IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 163829-163843
Author(s):  
Chongchong Yu ◽  
Meng Kang ◽  
Yunbing Chen ◽  
Jiajia Wu ◽  
Xia Zhao

2021 ◽  
Vol 11 (22) ◽  
pp. 10542
Author(s):  
Tanu Sharma ◽  
Kamaldeep Kaur

With the advancements in processing units and the easy availability of cloud-based GPU servers, many deep learning-based methods have been proposed in the Aspect Level Sentiment Classification (ALSC) literature. With this growing number of proposed methods, it has become difficult to ascertain the performance difference of one method over another. To this end, our study provides a statistical comparison of 35 recent deep learning methods with respect to three performance metrics: Accuracy, Macro F1 score, and Time. The methods are evaluated on eight benchmark datasets. The statistical comparison is based on the Friedman, Nemenyi, and Wilcoxon tests. As per the results of these tests, the top-ranking methods could not significantly outperform several other methods in terms of Accuracy and Macro F1 score, and performed poorly on the time metric. However, the time taken by a method is crucial to analyzing its overall performance. Thus, this study aids the selection of a deep learning method that maximizes Accuracy and Macro F1 score while taking minimal time. Our study also establishes a framework for validating the performance of new and alternative methods in ALSC that can be helpful for researchers and practitioners working in this area.
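The Friedman test and a pairwise follow-up can be run directly with SciPy. A minimal sketch with invented per-dataset accuracy scores (the paper's real results are not reproduced here); since the Nemenyi post-hoc test is not in SciPy, a Wilcoxon signed-rank comparison stands in for the pairwise step:

```python
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical accuracy scores (%) for 3 methods on 8 benchmark datasets.
# These values are illustrative, not taken from the paper.
method_a = [78.1, 81.4, 76.9, 80.2, 79.5, 82.0, 77.3, 80.8]
method_b = [77.5, 80.9, 76.2, 79.8, 79.0, 81.1, 76.8, 80.1]
method_c = [74.0, 77.2, 73.5, 76.1, 75.8, 77.9, 73.9, 76.5]

# Friedman test: do the methods differ overall, ranking within each dataset?
stat, p = friedmanchisquare(method_a, method_b, method_c)
print(f"Friedman chi-square = {stat:.3f}, p = {p:.4f}")

# Pairwise follow-up (Wilcoxon signed-rank) between the top two methods.
w_stat, w_p = wilcoxon(method_a, method_b)
print(f"Wilcoxon statistic = {w_stat:.1f}, p = {w_p:.4f}")
```

With a consistent ranking across all eight datasets, as here, the Friedman statistic is at its maximum for three methods and both tests reject at the 5% level.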


1987 ◽  
Vol 18 (3) ◽  
pp. 250-266 ◽  
Author(s):  
R. Jane Lieberman ◽  
Ann Marie C. Heffron ◽  
Stephanie J. West ◽  
Edward C. Hutchinson ◽  
Thomas W. Swem

Four recently developed adolescent language tests, the Fullerton Language Test for Adolescents (FLTA), the Test of Adolescent Language (TOAL), the Clinical Evaluation of Language Functions (CELF), and the Screening Test of Adolescent Language (STAL), were compared to determine: (a) whether they measured the same language skills (content) in the same way (procedures); and (b) whether students performed similarly on each of the tests. First, the respective manuals were reviewed to compare the selection of subtest content areas and subtest procedures. Then, each of the tests was administered according to standardized procedures to 30 unselected sixth-grade students. Despite apparent differences in test content and procedures, there was no significant difference in students' performance on three of the four tests, and correlations among test performances were moderate to high. A comparison of the pass/fail rates for overall performance on the tests, however, revealed a significant discrepancy between the proportion of students identified as needing further evaluation on the STAL (20%) and the proportions diagnosed as language impaired on the three diagnostic tests (60-73%). Clinical implications are discussed.
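A discrepancy in pass/fail rates like the one reported can be checked with a chi-square test on a 2x2 contingency table. A sketch using the reported rates (20% flagged by the STAL vs. the 60% lower bound on the diagnostic tests) applied to the sample of 30 students; the exact per-test counts are assumptions for illustration:

```python
from scipy.stats import chi2_contingency

n = 30  # sixth-grade students tested in the study
stal_flagged = round(0.20 * n)  # 6 flagged by the STAL screener
diag_flagged = round(0.60 * n)  # 18 flagged by a diagnostic test (lower bound)

# 2x2 table: rows = test, columns = (flagged, not flagged).
table = [[stal_flagged, n - stal_flagged],
         [diag_flagged, n - diag_flagged]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.4f}")
```

Even at the 60% lower bound, the difference in identification rates is significant at the 1% level, consistent with the discrepancy the abstract describes.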


Author(s):  
C.-H. Lee ◽  
E. Giachin ◽  
L. R. Rabiner ◽  
R. Pieraccini ◽  
A. E. Rosenberg
