Beyond Log-Linear Models: Boosted Minimum Error Rate Training for N-best Re-ranking

Kevin Duh (kevinduh@)
Katrin Kirchhoff (katrin@)
Dept. of Electrical Engineering, University of Washington, Seattle, WA 98195

* Work supported by an NSF Graduate Research Fellowship.

Abstract

Current re-ranking algorithms for machine translation rely on log-linear models, which have the potential problem of underfitting the training data. We present BoostedMERT, a novel boosting algorithm that uses Minimum Error Rate Training (MERT) as a weak learner and builds a re-ranker far more expressive than log-linear models. BoostedMERT is easy to implement, inherits the efficient optimization properties of MERT, and can quickly boost the BLEU score on N-best re-ranking tasks. In this paper, we describe the general algorithm and present preliminary results on the IWSLT 2007 Arabic-English task.

1 Introduction

N-best list re-ranking is an important component in many complex natural language processing applications (e.g., machine translation, speech recognition, parsing). Re-ranking the N-best lists generated by a first-pass decoder can be an effective approach because (a) additional knowledge (features) can be incorporated, and (b) the search space is smaller (i.e., choose 1 out of N hypotheses).

Despite these theoretical advantages, re-ranking machine translation (MT) N-best lists often yields little gain in practice: N-best rescoring gives only a moderate improvement over the first-pass output, even though the potential improvement, as measured by the oracle-best hypothesis for each sentence, is much higher. This shows that the hypothesis features are either not discriminative enough, or that the re-ranking model is too weak. The performance gap can be attributed mainly to two problems: optimization error and modeling error (see Figure 1). Much work has focused on developing ...
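To make the setup concrete, here is a minimal Python sketch of the two ideas in play: a log-linear model re-ranks an N-best list by picking the hypothesis with the highest weighted feature score, and a boosting loop turns a MERT-like weak learner into an ensemble of such rankers. This is not the authors' implementation: the helper names (rerank, weak_mert, ensemble_rerank), the random-search stand-in for MERT, and the AdaBoost-style loss and update rule are all illustrative assumptions; the paper defines the actual algorithm later.

import numpy as np

def rerank(weights, feature_lists):
    """Log-linear re-ranking: for each sentence, pick the hypothesis
    whose feature vector has the highest dot product with the weights."""
    return [int(np.argmax(F @ weights)) for F in feature_lists]

def weak_mert(feature_lists, sent_bleu, dist, rng, trials=200):
    """Stand-in weak learner (hypothetical): random search for a weight
    vector maximizing distribution-weighted per-sentence BLEU. Real MERT
    instead performs exact line optimization along each coordinate."""
    dim = feature_lists[0].shape[1]
    best_w, best_score = None, -np.inf
    for _ in range(trials):
        w = rng.normal(size=dim)
        picks = rerank(w, feature_lists)
        score = sum(dist[i] * sent_bleu[i][picks[i]]
                    for i in range(len(feature_lists)))
        if score > best_score:
            best_w, best_score = w, score
    return best_w

def boosted_mert(feature_lists, sent_bleu, iterations=10):
    """Boosting loop: reweight sentences by how badly the current weak
    ranker performs on them, and collect (alpha, weights) pairs.
    feature_lists: one (N x d) feature matrix per sentence.
    sent_bleu:     one length-N array of hypothesis quality scores
                   (e.g., sentence-level BLEU) per sentence."""
    n = len(feature_lists)
    dist = np.full(n, 1.0 / n)        # distribution over sentences
    rng = np.random.default_rng(0)
    ensemble = []
    for _ in range(iterations):
        w = weak_mert(feature_lists, sent_bleu, dist, rng)
        picks = rerank(w, feature_lists)
        # Per-sentence loss: 0 if the pick matches the oracle-best,
        # approaching 1 as the picked hypothesis falls below it.
        loss = np.array([1.0 - sent_bleu[i][picks[i]]
                         / max(sent_bleu[i].max(), 1e-9)
                         for i in range(n)])
        err = np.clip(dist @ loss, 1e-6, 1.0 - 1e-6)
        alpha = 0.5 * np.log((1.0 - err) / err)
        dist *= np.exp(alpha * (2.0 * loss - 1.0))  # upweight hard sentences
        dist /= dist.sum()
        ensemble.append((alpha, w))
    return ensemble

def ensemble_rerank(ensemble, feature_lists):
    """Combine weak rankers through their rankings, not raw scores: each
    ranker contributes an alpha-weighted normalized rank per hypothesis."""
    picks = []
    for F in feature_lists:
        n = F.shape[0]
        total = np.zeros(n)
        for alpha, w in ensemble:
            order = np.argsort(F @ w)                 # ascending score
            ranks = np.empty(n)
            ranks[order] = np.arange(n) / max(n - 1, 1)
            total += alpha * ranks                    # 1.0 = ranker's top pick
        picks.append(int(np.argmax(total)))
    return picks

Note the design choice in ensemble_rerank: the ensemble combines the weak rankers' rankings rather than their raw scores, since an alpha-weighted sum of linear scores would collapse back into a single log-linear model, whereas a weighted sum of rankings can express decisions no single log-linear model can.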
