Speech Quality of Computer‐Simulated Voice‐Switched Amplifiers

1973 ◽ Vol 53 (1) ◽ p. 322
Author(s): Herman R. Silbiger, Richard E. Cullingford, Linda Pierce
2002 ◽ Vol 45 (4) ◽ pp. 689-699
Author(s): Donald G. Jamieson, Vijay Parsa, Moneca C. Price, James Till

We investigated how standard speech coders, currently used in modern communication systems, affect the quality of speech produced by persons with common speech and voice disorders. Three standardized speech coders (GSM 6.10 RPE-LTP, FS1016 CELP, and FS1015 LPC) and two speech coders based on subband processing were evaluated. Coder effects were assessed by measuring the quality of speech samples both before and after processing by each coder. Speech quality was rated by 10 listeners with normal hearing on 28 scales representing pitch and loudness changes, speech rate, laryngeal and resonatory dysfunction, and coder-induced distortions. Results showed that (a) nine scale items were rated consistently and reliably by the listeners; (b) all coders degraded speech quality on these nine scales, with the GSM and CELP coders providing the better-quality speech; and (c) interactions between coders and individual voices occurred on several voice-quality scales.
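As a rough illustration of this before/after design, the sketch below computes the mean rating shift each coder introduces on the nine reliable scale items. This is a minimal sketch, not the study's analysis code; all data, array shapes, and coder labels are hypothetical placeholders.

```python
# Minimal sketch (hypothetical data, not the study's materials) of quantifying
# coder-induced degradation from listener ratings: 10 listeners x 9 scale items,
# ratings on a 1-5 quality scale, compared before and after coding.
import numpy as np

rng = np.random.default_rng(0)
coders = ["GSM 6.10 RPE-LTP", "FS1016 CELP", "FS1015 LPC"]

# Placeholder baseline ratings of the unprocessed speech samples.
unprocessed = rng.uniform(3.5, 4.5, size=(10, 9))

# Placeholder post-coding ratings: each coder lowers quality by a random amount.
ratings = {c: unprocessed - rng.uniform(0.2, 1.2, size=(10, 9)) for c in coders}

for coder in coders:
    # Mean degradation per scale item, averaged over listeners.
    degradation = (unprocessed - ratings[coder]).mean(axis=0)
    print(f"{coder}: mean rating shift per scale = {np.round(degradation, 2)}")
```

A coder-by-voice interaction, as reported in (c), would show up in such data as degradation profiles that differ across individual talkers rather than being uniform.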


1989 ◽ Vol 25 (19) ◽ p. 1275
Author(s): J.I. Lee, C.K. Un

2020 ◽ Vol 6 (1)
Author(s): Babak Naderi, Rafael Zequeira Jiménez, Matthias Hirth, Sebastian Möller, Florian Metzger, ...

Subjective speech quality assessment has traditionally been carried out in laboratory environments under controlled conditions. With the advent of crowdsourcing platforms, tasks that require human intelligence can be resolved by crowd workers over the Internet. Crowdsourcing thus offers a new paradigm for speech quality assessment, promising higher ecological validity of the quality judgments at the expense of potentially lower reliability. This paper compares laboratory-based and crowdsourcing-based speech quality assessment in terms of comparability of results and efficiency. For this purpose, three pairs of listening-only tests were carried out on three different crowdsourcing platforms, following ITU-T Recommendation P.808. In each test, listeners judged the overall quality of speech samples using the Absolute Category Rating procedure. We compare the results of the crowdsourcing approach with those of standard laboratory tests performed according to ITU-T Recommendation P.800. Results show that in most cases the two paradigms lead to comparable results. Notable differences are discussed with respect to their sources, and conclusions are drawn that establish practical guidelines for crowdsourcing-based speech quality assessment.
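The core of such a lab-versus-crowd comparison can be sketched in a few lines: the Absolute Category Rating responses (integers 1-5) for each condition are averaged into a Mean Opinion Score (MOS), and the per-condition MOS values from the two paradigms are then correlated. The sketch below uses invented ratings and condition names purely for illustration; it is not the paper's evaluation code.

```python
# Minimal sketch of comparing lab (P.800-style) and crowdsourcing (P.808-style)
# results: compute MOS per condition from 1-5 ACR ratings, then correlate the
# two sets of condition-level MOS values. All data here are hypothetical.
import statistics

lab_ratings = {          # condition -> ACR ratings from lab listeners
    "clean":   [5, 4, 5, 4, 5],
    "codec_a": [4, 3, 4, 4, 3],
    "codec_b": [2, 3, 2, 2, 3],
}
crowd_ratings = {        # same conditions rated by crowd workers
    "clean":   [4, 5, 5, 4, 4, 5],
    "codec_a": [3, 4, 3, 4, 4, 3],
    "codec_b": [2, 2, 3, 2, 2, 3],
}

conditions = sorted(lab_ratings)
lab_mos = [statistics.mean(lab_ratings[c]) for c in conditions]
crowd_mos = [statistics.mean(crowd_ratings[c]) for c in conditions]

# Pearson correlation between the per-condition MOS values of both paradigms
# (statistics.correlation requires Python 3.10+).
r = statistics.correlation(lab_mos, crowd_mos)
print(f"lab MOS:   {lab_mos}")
print(f"crowd MOS: {crowd_mos}")
print(f"Pearson r between paradigms: {r:.3f}")
```

A high correlation between the two MOS vectors is what "comparable results" means operationally here; systematic offsets or rank changes for particular conditions are the kind of notable differences the paper traces back to their sources.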


2011 ◽ Vol 18 (12) ◽ pp. 725-728
Author(s): Jae-Yul Yoon, Hochong Park

Author(s): Kenzo Itoh, Nobuhiko Kitawaki, Kazuhiko Kakehi, Shinji Hayashi

2017 ◽ Vol 2 (1)
Author(s): Sebastian Möller, Friedemann Köster
