Language To Language Translation System

Author(s):  
Ms Pratheeksha ◽  
Pratheeksha Rai ◽  
Ms Vijetha

In a Language to Language Translation system, phrases spoken in one language are immediately rendered as speech in another language by the device. Language to Language Translation is a three-step software process comprising Automatic Speech Recognition, Machine Translation, and speech synthesis. This work surveys the major speech translation projects, the different approaches they take to speech recognition, translation, and text-to-speech synthesis, and the main pros and cons of each approach. Language translation takes a conversational phrase in one language as input and produces the translated speech phrase in another language as output. The three components of language-to-language translation are connected in sequence: Automatic Speech Recognition (ASR) converts the spoken phrases of the source language into text in the same language, machine translation then translates the source-language text into target-language text, and finally the speech synthesizer performs text-to-speech conversion in the target language.
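The pipeline described above is strictly sequential. A minimal sketch of that composition follows; the asr_model, mt_model, and tts_model objects are hypothetical stand-ins for whatever concrete ASR, MT, and TTS components a system would plug in.

```python
# Minimal sketch of the three-stage speech-to-speech pipeline described above.
# asr_model, mt_model, and tts_model are hypothetical stand-ins, not a real API.

def speech_to_speech(source_audio, asr_model, mt_model, tts_model):
    """Translate a spoken phrase by chaining ASR -> MT -> TTS."""
    # 1. Automatic Speech Recognition: source-language audio -> source-language text
    source_text = asr_model.transcribe(source_audio)

    # 2. Machine Translation: source-language text -> target-language text
    target_text = mt_model.translate(source_text)

    # 3. Speech synthesis: target-language text -> target-language audio
    target_audio = tts_model.synthesize(target_text)
    return target_audio
```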

2017 ◽  
Vol 11 (4) ◽  
pp. 55
Author(s):  
Parnyan Bahrami Dashtaki

Speech-to-speech translation is a challenging problem, due to the poor sentence planning typically associated with spontaneous speech as well as errors introduced by automatic speech recognition. Based on a statistically trained speech translation system, this study investigates the methodologies and metrics used to assess speech-to-speech translation systems. Translation is performed incrementally from partial hypotheses generated by the speech recognizer. Speech-input translation can be approached as a pattern recognition problem by means of statistical alignment models and stochastic finite-state transducers, and some specific models within this general framework are presented. One feature of such models is their ability to learn automatically from training examples. The speech translation system consists of three modules: automatic speech recognition, machine translation, and text-to-speech synthesis. Many procedures for coupling speech recognition and machine translation have been proposed; this research explores the methodologies and metrics used to evaluate them.
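The incremental scheme described above, in which translation proceeds from partial speech recognition hypotheses, can be sketched as follows. This is only an illustration of the idea; the mt_model interface and the example hypotheses are assumptions, not part of the paper.

```python
# Hedged sketch of incremental speech translation: partial ASR hypotheses are
# translated as they arrive, and the running translation may be revised whenever
# a longer hypothesis becomes available. mt_model is a hypothetical stand-in.

def incremental_translate(partial_hypotheses, mt_model):
    """Yield a (possibly revised) translation for each partial ASR hypothesis."""
    for hypothesis in partial_hypotheses:      # e.g. "how", "how are", "how are you"
        yield mt_model.translate(hypothesis)   # each output may revise the previous one
```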


2018 ◽  
Vol 6 (3) ◽  
pp. 79-92
Author(s):  
Sahar A. El-Rahman ◽  
Tarek A. El-Shishtawy ◽  
Raafat A. El-Kammar

This article presents a realistic technique for a machine-aided translation system. In this technique, the system dictionary is partitioned into a multi-module structure for fast retrieval of the Arabic features of English words. Each module is accessed through an interface that includes the necessary morphological rules, which direct the search toward the proper sub-dictionary. Retrieval is further accelerated by predicting each word's category and accessing the corresponding sub-dictionary to retrieve its attributes. The system consists of three main parts: analysis of the source language, transfer rules between the source language (English) and the target language (Arabic), and generation of the target language. The proposed system is able to translate some negative forms, demonstratives, and conjunctions, and to adjust nouns, verbs, and adjectives according to their attributes. It then adds the appropriate Arabic affixes to generate a correct sentence.
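A rough illustration of the multi-module dictionary idea, not the authors' implementation: the dictionary is split into category-specific sub-dictionaries, a predicted word category directs the search toward the proper module, and the remaining modules serve as a fallback. The toy entries and the category predictor below are placeholders.

```python
# Illustrative sketch of a dictionary partitioned into category-specific modules,
# with a predicted word category directing the lookup toward the proper module.
# Entries, features, and the predictor are invented placeholders.

SUB_DICTIONARIES = {
    "noun": {"book": {"arabic": "كتاب", "gender": "masculine"}},
    "verb": {"write": {"arabic": "كتب", "tense": "past"}},
    "adjective": {"big": {"arabic": "كبير"}},
}

def predict_category(word):
    # Placeholder for the category-prediction step described in the abstract,
    # using simple morphological cues such as suffixes.
    if word.endswith("ing") or word.endswith("ed"):
        return "verb"
    return "noun"

def lookup_arabic_features(word):
    """Search the sub-dictionary matching the predicted category first."""
    category = predict_category(word)
    entry = SUB_DICTIONARIES.get(category, {}).get(word)
    if entry is not None:
        return entry
    # Fall back to the remaining modules if the prediction misses.
    for cat, module in SUB_DICTIONARIES.items():
        if cat != category and word in module:
            return module[word]
    return None
```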


2021 ◽  
pp. 126-131
Author(s):  
Yue He ◽  
Walcir Cardoso

This study investigated whether a translation tool (Microsoft Translator – MT) and its built-in speech features (Text-To-Speech synthesis – TTS – and speech recognition) can promote learners' acquisition of the pronunciation of the English regular past tense -ed in a self-directed manner. Following a pretest/posttest design, we compared 29 participants' performance on past -ed allomorphy (/t/, /d/, and /id/) by assessing their pronunciation in terms of phonological awareness, phonemic discrimination, and oral production. The findings highlight the affordances of MT and its pedagogical use for helping English as a Foreign Language (EFL) learners improve their pronunciation.
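For reference, the regular past tense -ed allomorphy that the study targets follows a simple phonological rule, sketched below. The function and phoneme set are illustrative and not part of the study's materials.

```python
# Small sketch of regular past-tense -ed allomorphy: /id/ after /t/ or /d/,
# /t/ after other voiceless consonants, /d/ elsewhere. Input is the final
# phoneme of the verb stem (IPA), not its spelling.

VOICELESS = {"p", "k", "f", "s", "ʃ", "tʃ", "θ"}

def ed_allomorph(final_phoneme):
    if final_phoneme in {"t", "d"}:
        return "id"          # e.g. "wanted", "needed"
    if final_phoneme in VOICELESS:
        return "t"           # e.g. "walked", "missed"
    return "d"               # e.g. "played", "loved"
```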


2020 ◽  
Vol 2 (4) ◽  
pp. 28
Author(s):  
Zeeshan

Machine Translation (MT) is used to produce a translation from a source language to a target language. Machine translation simply translates text or speech from one language to another, but this process alone is not sufficient to give a perfect translation, because whole expressions and their closest counterparts must be identified. Neural Machine Translation (NMT) is one of the most standard machine translation methods and has made great progress in recent years, especially for non-universal languages. However, local translation software for other foreign languages is limited and needs improvement. In this paper, the Chinese language is translated into the Urdu language with the help of Open Neural Machine Translation (OpenNMT) and deep learning. First, a Chinese-to-Urdu sentence dataset of seven million sentences was established. These datasets were then used to train a model with OpenNMT. Finally, the resulting translations were compared to the desired translations using the BLEU score.
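A hedged sketch of the final evaluation step, comparing system output against reference translations with BLEU. sacrebleu is used here only as one common implementation of the metric; the paper does not name a specific toolkit, and the placeholder strings stand in for real Urdu hypotheses and references.

```python
# Illustrative BLEU comparison between system output and reference translations.
# sacrebleu is one common BLEU implementation; strings below are placeholders
# standing in for Urdu model output and reference sentences.

import sacrebleu

hypotheses = ["model output sentence one", "model output sentence two"]
references = [["reference sentence one", "reference sentence two"]]  # one reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```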


2020 ◽  
Vol 21 (3) ◽  
Author(s):  
Benyamin Ahmadnia ◽  
Bonnie J. Dorr ◽  
Parisa Kordjamshidi

Maintaining semantic relations between words during the translation process yields more accurate target-language output from Neural Machine Translation (NMT). Although this is difficult to achieve from training data alone, it is possible to leverage Knowledge Graphs (KGs) to retain source-language semantic relations in the corresponding target-language translation. The core idea is to use KG entity relations as embedding constraints to improve the mapping from source to target. This paper describes two embedding constraints, both of which employ Entity Linking (EL), the assignment of a unique identity to entities, to associate words in training sentences with those in the KG: (1) a monolingual embedding constraint that supports an enhanced semantic representation of the source words through access to relations between entities in a KG; and (2) a bilingual embedding constraint that forces entity relations in the source language to be carried over to the corresponding entities in the target-language translation. The method is evaluated for English-Spanish translation, exploiting Freebase as a source of knowledge. Our experimental results show that exploiting KG information not only decreases the number of unknown words in the translation but also improves translation quality.
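A conceptual sketch of the bilingual embedding constraint, not the authors' code: for word pairs that Entity Linking resolves to the same KG entity, an extra loss term pulls the source-side and target-side embeddings toward each other. The names linked_pairs and lambda_kg are assumptions made for illustration.

```python
# Conceptual sketch of a bilingual embedding constraint: penalize the distance
# between embeddings of entity-linked source/target words. Not the paper's code;
# linked_pairs and lambda_kg are illustrative assumptions.

import torch

def kg_constraint_loss(src_embeddings, tgt_embeddings, linked_pairs, lambda_kg=0.1):
    """Penalize distance between embeddings of entity-linked source/target words.

    linked_pairs: list of (src_index, tgt_index) pairs produced by Entity Linking,
    i.e. words in the training sentence that resolve to the same KG entity.
    """
    if not linked_pairs:
        return torch.tensor(0.0)
    src_idx = torch.tensor([s for s, _ in linked_pairs])
    tgt_idx = torch.tensor([t for _, t in linked_pairs])
    # Squared Euclidean distance between the paired embedding vectors.
    diff = src_embeddings[src_idx] - tgt_embeddings[tgt_idx]
    return lambda_kg * (diff ** 2).sum(dim=1).mean()

# During training, this term would be added to the usual NMT cross-entropy loss:
# total_loss = nmt_loss + kg_constraint_loss(src_emb, tgt_emb, linked_pairs)
```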

