Web-Based Framework for Assisting Users Using Speech Recognition

Author(s): Hassan Zahr ◽ Hussein Al Haj Hassan ◽ Jamal Haydar
Informatics ◽ 2019 ◽ Vol. 6 (1) ◽ p. 13
Author(s): Carlos Teixeira ◽ Joss Moorkens ◽ Daniel Turner ◽ Joris Vreeke ◽ Andy Way

Commercial software tools for translation have, until now, been based on the traditional input modes of keyboard and mouse, with speech recognition input gaining some popularity more recently. In order to test whether a greater variety of input modes might aid translation from scratch, translation using translation memories, or machine translation post-editing, we developed a web-based translation editing interface that permits multimodal input via touch-enabled screens and speech recognition in addition to keyboard and mouse. The tool also conforms to web accessibility standards. This article describes the tool and its development process over several iterations. Between these iterations we carried out two usability studies, also reported here. Findings were promising, albeit somewhat inconclusive. Participants liked the tool and the speech recognition functionality. Reactions to the touchscreen were mixed, and further research may be required to incorporate touch into a translation interface in a usable way.
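The article does not reproduce the interface code, but the kind of multimodal input it describes can be illustrated with a minimal sketch. The sketch below assumes the browser Web Speech API (vendor-prefixed as webkitSpeechRecognition in some browsers) for speech input and standard pointer events for touch, alongside ordinary keyboard entry in a textarea; the element IDs and the insertAtCursor helper are hypothetical and not taken from the paper.

```typescript
// Hypothetical multimodal target-text widget for a web-based translation editor.
// Keyboard input comes for free from the <textarea>; speech uses the Web Speech
// API; touch is handled via pointer events. IDs and helper names are illustrative.

const target = document.getElementById("target-segment") as HTMLTextAreaElement;
const micButton = document.getElementById("mic-button") as HTMLButtonElement;

// Insert recognised text at the current caret position (hypothetical helper).
function insertAtCursor(textarea: HTMLTextAreaElement, text: string): void {
  const start = textarea.selectionStart ?? textarea.value.length;
  const end = textarea.selectionEnd ?? start;
  textarea.value =
    textarea.value.slice(0, start) + text + textarea.value.slice(end);
  textarea.selectionStart = textarea.selectionEnd = start + text.length;
}

// Speech input: the constructor is vendor-prefixed in some browsers.
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

if (SpeechRecognitionCtor) {
  const recognition = new SpeechRecognitionCtor();
  recognition.lang = "en-US"; // would be set to the target language per project
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    const transcript =
      event.results[event.results.length - 1][0].transcript;
    insertAtCursor(target, transcript);
  };

  micButton.addEventListener("click", () => recognition.start());
}

// Touch input: pointer events cover touch, pen and mouse with one handler,
// e.g. letting the translator tap a segment to focus and edit it.
target.addEventListener("pointerdown", (event: PointerEvent) => {
  if (event.pointerType === "touch") {
    target.focus(); // brings up the on-screen keyboard on touch devices
  }
});
```

In this sketch each input mode writes into the same editing surface, which is one plausible way to combine keyboard, speech and touch without mode-specific editing logic.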

