Assessing the quality of TTS audio in the LARA learning-by-reading
platform
A popular idea in Computer Assisted Language Learning (CALL) is to use multimodal annotated texts, with annotations typically including embedded audio and translations, to support L2 learning through reading. An important question is how to create the audio, which can be done either through human recording or with a Text-To-Speech (TTS) synthesis engine. We may reasonably expect TTS to be quicker and easier, but human recordings to be of higher quality. Here, we report a study using the open-source LARA platform and ten languages. Samples of LARA audio totaling about three and a half minutes were provided for each language in both human and TTS form; subjects used a web form to compare different versions of the same item and to rate the voices as a whole. Although human voices were more often preferred, TTS achieved higher ratings in some languages and was close in others.