Machine Translation: The Case of Arabic-English Translation of News Texts

2020
Vol 10 (4)
pp. 408
Author(s):  
Noureldin Mohamed Abdelaal ◽  
Abdulkhaliq Alazzawie

This study aims to identify the common types of errors made by Google Translate (GT) when translating informative news texts from Arabic to English, to measure the severity of those errors, and to assess the fluency and semantic adequacy of the translation output, and thereby to establish the extent to which a human translator is needed to rectify the machine output. For this purpose, examples were purposively selected from online newspapers. The collected data were analyzed using a mixed-methods approach: errors were identified qualitatively, guided by Hsu's (2014) classification of machine translation errors, while a quantitative descriptive approach, based on the Multidimensional Quality Metrics and Localization Quality Evaluation frameworks, was used to measure error severity. Semantic adequacy and fluency were assessed with a questionnaire adapted from Dorr, Snover, and Madnani (2011). The analysis shows that omission (a lexical error) and inappropriate lexical choice (a semantic error) are the most common error types. Inappropriate lexical choice sometimes results from the homophonic nature of some source-text words, which can be misinterpreted by the machine translation system. The study concludes that machine translation systems are useful for expediting the translation process, but that accuracy is sacrificed for ease (less work for the human) and speed of translation. If greater accuracy is required, or desired, a human translator must at least proofread and revise the output.
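The kind of MQM-style severity weighting mentioned above can be illustrated with a short sketch. The severity weights and error labels below are illustrative assumptions (typical MQM-like defaults), not the exact values used in the study:

```python
# Illustrative MQM-style scoring: penalise each annotated error by its
# severity weight, then normalise by segment length in words.
SEVERITY_WEIGHTS = {"minor": 1, "major": 5, "critical": 10}  # assumed weights

def mqm_score(errors, word_count):
    """errors: list of (category, severity) pairs annotated for one segment."""
    penalty = sum(SEVERITY_WEIGHTS[severity] for _, severity in errors)
    return max(0.0, 1.0 - penalty / word_count)  # clamp at zero

# A 20-word segment with one major omission and one minor lexical error.
errors = [("omission", "major"), ("lexical choice", "minor")]
print(mqm_score(errors, 20))  # 1 - 6/20 = 0.7
```

A score near 1.0 indicates few or mild errors; heavily penalised segments are the ones a human translator would need to rectify.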

Author(s):  
A.V. Kozina ◽  
Yu.S. Belov

Automatically assessing the quality of machine translation is an important yet challenging task for machine translation research. Translation quality estimation is understood here as predicting translation quality without access to reference translations. Translation quality depends on the specific machine translation system and often requires post-editing; since manual editing is a long and expensive process, and the need to determine translation quality quickly keeps growing, automation is required. In this paper, we propose a quality assessment method based on ensembles of supervised machine learning methods. The WMT 2019 bilingual corpus for the English-Russian language pair was used as data: 17,089 sentences in total, with 85% of the data used for training and 15% for testing the model. Linguistic features extracted from the text in the source and target languages were used to train the system, since these characteristics can most accurately characterize the translation in terms of quality. Two tools were used for feature extraction: the freely available SRILM language modeling toolkit and the Stanford POS Tagger. Before training, the text was preprocessed. The model was trained with three regression methods: Bagging, Extra Trees, and Random Forest, implemented in Python using the scikit-learn library. The parameters of the Random Forest method were optimized with a grid search. Model performance was assessed by the mean absolute error (MAE) and the root mean square error (RMSE), as well as by the Pearson coefficient, which measures correlation with human judgment. Testing was carried out on three machine translation systems: the Google and Bing neural systems and the Moses statistical machine translation system, in both its phrase-based and syntax-based variants. The Extra Trees method performed best overall, and for all categories of indicators under consideration the best results were achieved on the output of the Google machine translation system. The developed method showed good results, close to human judgment, and the system can be used for further research on translation quality estimation.
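The ensemble-regression setup described above can be sketched with scikit-learn. The feature matrix here is a random placeholder for the real linguistic features (language-model scores, POS-based counts), and the labels are synthetic, so the numbers produced are not the paper's results:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor, ExtraTreesRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
X = rng.random((1000, 17))                          # placeholder linguistic features
y = X @ rng.random(17) + rng.normal(0, 0.1, 1000)   # synthetic quality labels

split = int(0.85 * len(X))                          # 85% train / 15% test, as in the paper
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

results = {}
for model in (BaggingRegressor(random_state=0),
              ExtraTreesRegressor(random_state=0),
              RandomForestRegressor(random_state=0)):
    pred = model.fit(X_tr, y_tr).predict(X_te)
    results[type(model).__name__] = {
        "MAE": mean_absolute_error(y_te, pred),
        "RMSE": mean_squared_error(y_te, pred) ** 0.5,
        "Pearson": np.corrcoef(y_te, pred)[0, 1],   # correlation with the labels
    }

for name, metrics in results.items():
    print(f"{name}: MAE={metrics['MAE']:.3f} "
          f"RMSE={metrics['RMSE']:.3f} r={metrics['Pearson']:.3f}")
```

In a real quality-estimation pipeline, `y` would hold human-derived quality labels and each row of `X` the features extracted with the language model and the POS tagger.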


2016
Vol 5 (4)
pp. 51-66
Author(s):  
Krzysztof Wolk ◽  
Krzysztof P. Marasek

The quality of machine translation is evolving rapidly. Today one can find several machine translation systems on the web that provide reasonable translations, although the systems are not perfect, and in some specific domains the quality may decrease. A recently proposed approach is neural machine translation, which aims to build a single, jointly tuned neural network that maximizes translation performance, a very different approach from traditional statistical machine translation. Recently proposed neural machine translation models often belong to the encoder-decoder family, in which a source sentence is encoded into a fixed-length vector that is, in turn, decoded to generate a translation. The present research examines the effects of different training methods on a Polish-English machine translation system used for medical data. The European Medicines Agency parallel text corpus was used as the basis for training the neural network-based and statistical translation systems. The comparison and implementation of a medical-domain translator is the main focus of our experiments.
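The encoder-decoder idea mentioned above (a source sentence compressed into a fixed-length vector, which the decoder then unrolls into target tokens) can be sketched in a few lines of PyTorch; the vocabulary sizes and dimensions below are arbitrary stand-ins, not the configuration used in the paper:

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    def __init__(self, src_vocab=100, tgt_vocab=100, dim=32):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, tgt_vocab)

    def forward(self, src, tgt):
        # The encoder's final hidden state is the fixed-length summary vector.
        _, h = self.encoder(self.src_emb(src))
        # The decoder is initialised with that vector and unrolled over the
        # (shifted) target tokens to produce logits over the target vocabulary.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), h)
        return self.out(dec_out)

model = EncoderDecoder()
src = torch.randint(0, 100, (2, 7))   # batch of 2 source sentences, length 7
tgt = torch.randint(0, 100, (2, 5))   # shifted target inputs, length 5
logits = model(src, tgt)
print(logits.shape)  # torch.Size([2, 5, 100])
```

Training would add teacher forcing and a cross-entropy loss over these logits; the key point is that the source sentence, however long, is squeezed into the single vector `h`.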


2019
Vol 28 (3)
pp. 455-464
Author(s):  
M. Anand Kumar ◽  
B. Premjith ◽  
Shivkaran Singh ◽  
S. Rajendran ◽  
K. P. Soman

In recent years, multilingual content on the internet has grown exponentially together with the evolution of the internet itself, yet regional-language users are excluded from much of this content by the language barrier. Machine translation between languages is therefore the only feasible way to make such content available to regional-language users. Machine translation is the process of translating a text from one language to another; it has been investigated thoroughly for English and other European languages, but it is still at a nascent stage for Indian languages. This paper presents an overview of the Machine Translation in Indian Languages shared task conducted on September 7–8, 2017, at Amrita Vishwa Vidyapeetham, Coimbatore, India. The shared task focused mainly on the development of the English-Tamil, English-Hindi, English-Malayalam and English-Punjabi language pairs, with the following objectives: (a) to examine state-of-the-art machine translation systems when translating from English to Indian languages; (b) to investigate the challenges faced in translating from English to Indian languages; (c) to create an open-source parallel corpus for Indian languages, which is currently lacking. Evaluating machine translation output is another challenging task, especially for Indian languages. In this shared task, we evaluated the participants' outputs with the help of human annotators; as far as we know, this is the first shared task that depends entirely on human evaluation.


To bridge the language constraints of the people residing in the northeastern region of India, a machine translation system is a necessity. A large number of people in this region cannot access many services because of language incomprehensibility. Among the several languages spoken, Assamese is one of the major languages of northeast India, yet machine translation for Assamese is limited compared to other languages; as a result, many Assamese speakers cannot avail themselves of the benefits it brings. This paper focuses on the development of an English-to-Assamese translation system using an n-gram model, which works very well for language pairs with highly dissimilar syntax compared to other models. The value of n plays a very large role in the quality and efficiency of the system: the Bilingual Evaluation Understudy (BLEU) score differs significantly as the n-gram order changes. The model uses tuples to reduce excess memory consumption and to accelerate the translation process. A parallel corpus was used to train the n-gram-based decoder MARIE. The number of translation units extracted with the n-gram model is much smaller than the number extracted with a phrase-based model, which has a strong impact on system efficiency.
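The sensitivity of BLEU to the n-gram order noted above can be seen with a hand-rolled sketch of modified n-gram precision plus a brevity penalty. This deliberately omits the tokenisation and smoothing details of a production BLEU implementation:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n):
    """BLEU with uniform weights over orders 1..max_n and a brevity penalty."""
    cand_toks, ref_toks = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(cand_toks, n), ngrams(ref_toks, n)
        overlap = sum(min(cand[g], ref[g]) for g in cand)  # clipped counts
        total = max(1, sum(cand.values()))
        precisions.append(max(overlap, 1e-9) / total)      # floor zero matches
    bp = min(1.0, math.exp(1 - len(ref_toks) / len(cand_toks)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

hyp = "the cat sat on the mat"
ref = "the cat is on the mat"
for n in (1, 2, 3, 4):
    print(f"BLEU-{n}: {bleu(hyp, ref, n):.3f}")
```

For this pair the score drops from about 0.83 at n = 1 to nearly zero at n = 4, because no 4-gram matches; the same mechanism is why the reported scores shift significantly as the n-gram order of the model changes.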


2021
Vol 10 (1)
pp. 11
Author(s):  
Ardik Ardianto

This paper aims to contrast the output of two machine translation systems, Google Translate and Bing Translator, in translating the lexeme "that" in news articles. The approach used to scrutinize the lexeme's translation correspondence is systemic functional linguistics, covering both experiential and logical structures. The study was carried out through descriptive comparative analysis; its data were 40 constituents taken from six randomly selected BBC World news articles. A thorough analysis demonstrates that both machine translation systems can recognize the three functions of "that": Head, post-modifier, and conjunction. The most frequent function is post-modifier, occurring 19 times (47.5%), followed by the conjunction function, occurring 17 times (42.5%) in the first machine translation system and 18 times (45%) in the second. The least frequent function is Head, occurring four times (10%) in the first system and three times (7.5%) in the second. Furthermore, owing to the elliptical variation of "that" as a relative pronoun and its translation variation as a post-determiner, the study concludes that the translation outputs of Google Translate are more accurate, semantically acceptable, creative, and contextual than those of Bing Translator.
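The frequency tabulation above is straightforward to reproduce; the counts below are the figures reported for the first system (Google Translate), rebuilt as a list purely for illustration:

```python
from collections import Counter

# One label per constituent, matching the reported Google Translate counts.
functions = ["post-modifier"] * 19 + ["conjunction"] * 17 + ["Head"] * 4

counts = Counter(functions)
total = sum(counts.values())  # the 40 analysed constituents
for function, n in counts.most_common():
    print(f"{function}: {n} ({100 * n / total:.1f}%)")
```

Running this recovers the percentages quoted in the abstract (47.5%, 42.5% and 10% over the 40 constituents).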


Author(s):  
Anna Fernández Torné ◽  
Anna Matamala

This article aims to compare three machine translation systems with a focus on human evaluation. The systems under analysis are a domain-adapted statistical machine translation system, a domain-adapted neural machine translation system and a generic machine translation system. The comparison is carried out on translation from Spanish into German with industrial documentation of machine tool components and processes. The focus is on the human evaluation of the machine translation output, specifically on: fluency, adequacy and ranking at the segment level; fluency, adequacy, need for post-editing, ease of post-editing, and mental effort required in post-editing at the document level; productivity (post-editing speed and post-editing effort) and attitudes. Emphasis is placed on human factors in the evaluation process.


2013
Vol 1
pp. 429-440
Author(s):  
Ann Irvine ◽  
John Morgan ◽  
Marine Carpuat ◽  
Hal Daumé ◽  
Dragos Munteanu

We develop two techniques for analyzing the effect of porting a machine translation system to a new domain. One is a macro-level analysis that measures how domain shift affects corpus-level evaluation; the second is a micro-level analysis for word-level errors. We apply these methods to understand what happens when a Parliament-trained phrase-based machine translation system is applied in four very different domains: news, medical texts, scientific articles and movie subtitles. We present quantitative and qualitative experiments that highlight opportunities for future research in domain adaptation for machine translation.


2020
Vol 44 (1)
pp. 33-50
Author(s):  
Ivan Dunđer

Machine translation is becoming an increasingly active research topic in information and communication sciences, computer science and computational linguistics, because it enables communication and the transfer of meaning across different languages. As the Croatian language can be considered low-resourced in terms of available services and technology, the development of new domain-specific machine translation systems is important, especially given the raised interest and needs of industry, academia and everyday users. Machine translation is not perfect, but it is crucial to assure acceptable quality, which is purpose-dependent. In this research, different statistical machine translation systems were built; one system in particular employed domain adaptation, with the intention of boosting the machine translation output. Afterwards, extensive evaluation was performed, in the form of several automatic quality metrics and human evaluation focusing on various aspects, in order to assess the quality of the domain-specific machine-translated text.


Author(s):  
Ignatius Ikechukwu Ayogu ◽  
Adebayo Olusola Adetunmbi ◽  
Bolanle Adefowoke Ojokoh

The global demand for translation and translation tools currently surpasses the capacity of available solutions. Besides, there is no one-size-fits-all, off-the-shelf solution for all languages. Thus, the need and urgency to scale up research on the development of translation tools and devices continue to grow, especially for languages suffering under the pressure of globalisation. This paper discusses our experiments on translation systems between English and two Nigerian languages, Igbo and Yorùbá. The study was set up to build parallel corpora and to train and evaluate English-to-Igbo, English-to-Yorùbá and Igbo-to-Yorùbá phrase-based statistical machine translation systems. The systems were trained on parallel corpora created for each language pair in the course of this research, using text from the religious domain. BLEU scores of 30.04, 29.01 and 18.72 were recorded for the English-to-Igbo, English-to-Yorùbá and Igbo-to-Yorùbá MT systems, respectively. An error analysis of the systems' outputs, conducted using a linguistically motivated MT error analysis approach, showed that errors occurred mostly at the lexical, grammatical and semantic levels. While the study reveals the potential of our corpora, it also shows that corpus size remains an issue requiring further attention; an important target in the immediate future is therefore to increase both the quantity and the quality of the data.


2003
Vol 29 (3)
pp. 421-457
Author(s):  
Andy Way ◽  
Nano Gough

We have developed an example-based machine translation (EBMT) system that uses the World Wide Web for two different purposes. First, we populate the system's memory with translations gathered from rule-based MT systems located on the Web. The source strings input to these systems were extracted automatically from an extremely small subset of the rule types in the Penn-II Treebank. In subsequent stages, the source-target translation pairs obtained are automatically transformed into a series of resources that render the translation process more successful. Despite the fact that the output from on-line MT systems is often faulty, we demonstrate in a number of experiments that when used to seed the memories of an EBMT system, they can in fact prove useful in generating translations of high quality in a robust fashion. In addition, we demonstrate the relative gain of EBMT in comparison to on-line systems. Second, despite the perception that the documents available on the Web are of questionable quality, we demonstrate in contrast that such resources are extremely useful in automatically post-editing translation candidates proposed by our system.

