Impacts of machine translation and speech synthesis on speech-to-speech translation

2012 · Vol 54 (7) · pp. 857-866
Author(s): Kei Hashimoto, Junichi Yamagishi, William Byrne, Simon King, Keiichi Tokuda

2013 · Vol 27 (2) · pp. 420-437
Author(s): John Dines, Hui Liang, Lakshmi Saheer, Matthew Gibson, William Byrne, ...

2017 · Vol 11 (4) · pp. 55
Author(s): Parnyan Bahrami Dashtaki

Speech-to-speech translation is a challenging problem, due both to the poor sentence planning typical of spontaneous speech and to errors introduced by automatic speech recognition. Building on a statistically trained speech translation system, this study investigates the methodologies and metrics employed to assess speech-to-speech translation systems. Translation is performed incrementally, based on partial hypotheses generated by the speech recognizer. Speech-input translation can be approached as a pattern recognition problem by means of statistical alignment models and stochastic finite-state transducers. Under this general framework, several specific models are presented; one feature of these models is their ability to learn automatically from training examples. The speech translation system consists of three modules: automatic speech recognition, machine translation, and text-to-speech synthesis. Many procedures for coupling speech recognition and machine translation have been proposed.
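The three-module cascade and the incremental use of partial recognition hypotheses described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' system: all three module functions are hypothetical stubs standing in for real ASR, MT, and TTS components.

```python
# Hypothetical sketch of the ASR -> MT -> TTS cascade, with translation
# triggered incrementally on partial recognition hypotheses. Every
# function below is an illustrative stub, not a real implementation.

def recognize_incrementally(audio_chunks):
    """Stub ASR: yields a growing partial hypothesis as audio arrives.
    Here each chunk is pretended to decode directly to one word."""
    words = []
    for chunk in audio_chunks:
        words.append(chunk)
        yield " ".join(words)

def translate(source_text):
    """Stub MT: a real module would apply statistical alignment models
    or stochastic finite-state transducers trained on parallel data."""
    return source_text.upper()  # placeholder "translation"

def synthesize(target_text):
    """Stub TTS: returns a placeholder waveform label."""
    return f"<speech:{target_text}>"

def speech_to_speech(audio_chunks):
    """Cascade the three modules; each partial hypothesis is
    re-translated and re-synthesized, so output can begin before
    the speaker has finished (incremental operation)."""
    outputs = []
    for partial in recognize_incrementally(audio_chunks):
        outputs.append(synthesize(translate(partial)))
    return outputs

result = speech_to_speech(["hello", "world"])
```

In a real system the later stages would revise rather than fully recompute their output for each partial hypothesis; the loop above only illustrates the data flow between the three modules.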

