Idiap Research Institute
Translation Error Spotting from a User's Point of View
Type of publication: Idiap-RR
Citation: Meyer_Idiap-RR-31-2012
Number: Idiap-RR-31-2012
Year: 2012
Month: 11
Institution: Idiap
Note: EPFL course project paper
Abstract: Evaluating the errors made by Machine Translation (MT) systems still requires human effort, despite the availability of automated MT evaluation tools such as the BLEU metric. Moreover, even if tools existed that supported humans in this translation quality checking task, for example by automatically marking some of the errors found in the MT system output, there is no guarantee that such support actually leads to a more correct or faster human evaluation. This paper presents a user study which found statistically significant interaction effects for the task of finding MT errors under the conditions of non-annotated and automatically pre-annotated errors, in terms of the time needed to complete the task and the number of correctly found errors.
Keywords: Error Analysis, Linear Mixed-Effects Modeling, Machine Translation, user study
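As a rough illustration of the kind of analysis named in the keywords and abstract (linear mixed-effects modeling of interaction effects on task time and on correctly found errors), the sketch below fits such a model with Python's statsmodels. The data file, column names, and the second interaction factor (sentence_length) are illustrative assumptions only and are not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant x sentence trial.
# Assumed columns (not from the paper): subject, condition
# ("plain" vs. "pre-annotated"), sentence_length, time_seconds,
# and correct_errors (number of correctly spotted MT errors).
df = pd.read_csv("error_spotting_trials.csv")

# Mixed-effects model for completion time: condition and its
# interaction with sentence length as fixed effects, plus a
# random intercept per participant to absorb individual speed.
time_model = smf.mixedlm(
    "time_seconds ~ condition * sentence_length",
    data=df,
    groups=df["subject"],
).fit()
print(time_model.summary())

# Same model structure for the count of correctly found errors.
hits_model = smf.mixedlm(
    "correct_errors ~ condition * sentence_length",
    data=df,
    groups=df["subject"],
).fit()
print(hits_model.summary())
```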
Projects: Idiap
Authors: Meyer, Thomas
Attachments
  • Meyer_Idiap-RR-31-2012.pdf (MD5: 6d98fb6fe79fed102ec84e342d61ff40)