AM-FM: A Semantic Framework for Translation Quality Assessment

Rafael E. Banchs
Human Language Technology Department
Institute for Infocomm Research
1 Fusionopolis Way, Singapore 138632
rembanchs@i2r.a-star.edu.sg

Haizhou Li
Human Language Technology Department
Institute for Infocomm Research
1 Fusionopolis Way, Singapore 138632
hli@i2r.a-star.edu.sg

Abstract

This work introduces AM-FM, a semantic framework for machine translation evaluation. Based upon this framework, a new evaluation metric, which is able to operate without the need for reference translations, is implemented and evaluated. The metric is based on the concepts of adequacy and fluency, which are independently assessed by using a cross-language latent semantic indexing approach and an n-gram based language model approach, respectively. Comparative analyses with conventional evaluation metrics are conducted on two different evaluation tasks (overall quality assessment and comparative ranking) over a large collection of human evaluations involving five European languages. Finally, the main pros and cons of the proposed framework are discussed, along with future research directions.

1 Introduction

Evaluation has always been one of the major issues in Machine Translation research, as both human and automatic evaluation methods exhibit very important limitations. On the one hand, although highly reliable, human evaluation is expensive and time consuming, and it suffers from inconsistency problems due to inter- and intra-annotator agreement issues. On the other hand, while being consistent, fast, and cheap, automatic evaluation has the major disadvantage of requiring reference translations. This makes automatic evaluation unreliable in the sense that good translations not matching the available references are evaluated as poor or bad translations. The main objective of this work is to propose and evaluate AM-FM, a semantic framework for assessing translation quality without the need for reference translations.
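To make the two components concrete, the following Python sketch computes an adequacy score (AM) as the cosine similarity between a source sentence and a candidate translation represented in a shared cross-language latent semantic space, a fluency score (FM) as a length-normalized n-gram language-model probability of the candidate, and a simple weighted combination of the two. The latent-space vectors, the language-model interface, and the combination weight are illustrative assumptions standing in for the cross-language LSI projection and n-gram model described above; this is a minimal sketch, not the authors' implementation.

    # Illustrative sketch of an AM-FM-style score. The latent-space vectors,
    # the language-model callable, and the weight alpha are hypothetical
    # placeholders (assumptions), not the paper's released implementation.
    import numpy as np

    def adequacy_am(src_vec, tgt_vec):
        # AM (adequacy): cosine similarity between the source sentence and the
        # candidate translation, both assumed already projected into a shared
        # cross-language latent semantic space.
        denom = np.linalg.norm(src_vec) * np.linalg.norm(tgt_vec)
        return float(np.dot(src_vec, tgt_vec) / denom) if denom > 0.0 else 0.0

    def fluency_fm(tokens, ngram_logprob10):
        # FM (fluency): length-normalized n-gram language-model probability of
        # the candidate; ngram_logprob10 is any callable returning the total
        # log10 probability of a token sequence (hypothetical interface).
        if not tokens:
            return 0.0
        return 10.0 ** (ngram_logprob10(tokens) / len(tokens))

    def am_fm_score(am, fm, alpha=0.5):
        # One plausible way to combine the two components: a weighted
        # interpolation; the value of alpha here is purely illustrative.
        return alpha * am + (1.0 - alpha) * fm

    # Toy usage with random latent vectors and a constant-per-token dummy LM.
    rng = np.random.default_rng(0)
    src_vec, tgt_vec = rng.normal(size=50), rng.normal(size=50)
    dummy_lm = lambda tokens: -0.7 * len(tokens)
    print(am_fm_score(adequacy_am(src_vec, tgt_vec),
                      fluency_fm("a toy candidate translation".split(), dummy_lm)))

Because both components are computed directly from the source sentence, the candidate translation, and models trained beforehand, no reference translation enters the score, which is the property the framework is designed around.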