Providing Automatic Feedback to Trainees after Automatic Evaluation

Author(s):
Megane Millan ◽
Catherine Achard

2020 ◽
Vol 65 (1) ◽
pp. 181-205
Author(s):
Hye-Yeon Chung

Abstract
Human evaluation (HE) of translation is generally considered valid, but it requires considerable effort. Automatic evaluation (AE), which assesses the quality of machine translations, can be done easily, but it still requires validation. This study addresses the questions of whether and how AE can be used for human translations. For this purpose, AE formulas and HE criteria were compared in order to examine the validity of AE. In the empirical part of the study, 120 translations were evaluated by professional translators as well as by two representative AE systems, BLEU and METEOR. The correlations between AE and HE were relatively high in the overall analysis, at 0.849** (BLEU) and 0.862** (METEOR), but in the ratings of the individual texts, AE and HE diverged substantially: the AE-HE correlations were often below 0.3 or even negative. Ultimately, the results indicate that neither METEOR nor BLEU can be used to assess human translation at this stage. However, this paper suggests three ways of applying AE to compensate for the weaknesses of HE.
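The abstract reports correlations between automatic metric scores (BLEU, METEOR) and professional raters' judgments. As a rough illustration only, and not the study's code or data, the sketch below computes sentence-level BLEU with NLTK on placeholder sentence pairs and correlates it with invented human ratings using SciPy; the 0-100 rating scale, the sample sentences, and the choice of smoothing are all assumptions.

# Minimal sketch (assumed data): sentence-level BLEU vs. human ratings.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from scipy.stats import pearsonr

# Placeholder items: (reference tokens, candidate tokens, human score on 0-100).
samples = [
    (["the", "cat", "sat", "on", "the", "mat"],
     ["the", "cat", "is", "on", "the", "mat"], 80),
    (["he", "read", "the", "report", "carefully"],
     ["he", "carefully", "read", "the", "report"], 90),
    (["results", "were", "published", "last", "year"],
     ["the", "findings", "appeared", "last", "year"], 65),
]

smooth = SmoothingFunction().method1  # avoid zero n-gram counts on short sentences
bleu_scores = [sentence_bleu([ref], cand, smoothing_function=smooth)
               for ref, cand, _ in samples]
human_scores = [score for _, _, score in samples]

# Pearson correlation between the automatic metric and the human ratings.
r, p = pearsonr(bleu_scores, human_scores)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

A per-text analysis like the one described in the abstract would repeat this correlation separately for each source text rather than pooling all segments, which is where the low or negative correlations were observed.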


Author(s):  
Anderson Pinheiro Cavalcanti ◽  
Arthur Diego ◽  
Ruan Carvalho ◽  
Fred Freitas ◽  
Yi-Shan Tsai ◽  
...  

Measurement ◽  
2020 ◽  
pp. 108534
Author(s):  
Consolatina Liguori ◽  
Alessandro Ruggiero ◽  
Domenico Russo ◽  
Paolo Sommella ◽  
Jan Lundgren
