Modelling lexical redundancy for machine translation

David Talbot and Miles Osborne
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh, EH8 9LW, UK
d.r.talbot@sms.ed.ac.uk, miles@inf.ed.ac.uk

Abstract

Certain distinctions made in the lexicon of one language may be redundant when translating into another language. We quantify redundancy among source types by the similarity of their distributions over target types. We propose a language-independent framework for minimising lexical redundancy that can be optimised directly from parallel text. Optimisation of the source lexicon for a given target language is viewed as model selection over a set of cluster-based translation models. Redundant distinctions between types may exhibit monolingual regularities, for example, inflexion patterns. We define a prior over model structure using a Markov random field and learn features over sets of monolingual types that are predictive of bilingual redundancy. The prior makes model selection more robust without the need for language-specific assumptions regarding redundancy. Using these models in a phrase-based SMT system, we show significant improvements in translation quality for certain language pairs.

1 Introduction

Data-driven machine translation (MT) relies on models that can be efficiently estimated from parallel text.
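The abstract's central measure, redundancy between two source types as the similarity of their distributions over target types, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the toy co-occurrence counts, the choice of Jensen-Shannon divergence as the similarity measure, and all function names are assumptions for the example.

```python
import math

def kl(p, q):
    # Kullback-Leibler divergence D(p || q) in bits; assumes q > 0 wherever p > 0.
    return sum(pi * math.log(pi / qi, 2) for pi, qi in zip(p, q) if pi > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: symmetric and bounded in [0, 1] with log base 2.
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def translation_distribution(counts):
    # Normalise co-occurrence counts over target types into p(target | source).
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

# Hypothetical aligned-pair counts: two inflected source forms that
# translate to much the same set of target types.
targets = ["dog", "dogs", "hound"]
src_a = translation_distribution({"dog": 80, "dogs": 15, "hound": 5})
src_b = translation_distribution({"dog": 75, "dogs": 20, "hound": 5})

p = [src_a.get(t, 0.0) for t in targets]
q = [src_b.get(t, 0.0) for t in targets]
print(round(js_divergence(p, q), 4))  # a small value: the distinction is largely redundant
```

A low divergence suggests the two source types could be merged without losing information relevant to translation, which is the intuition behind the cluster-based models described next.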
Token-level independence assumptions based on word alignments can be used to decompose parallel corpora into manageable units for parameter estimation. However, if training data is scarce or if the language pair encodes significantly different information in the lexicon, as with Czech and English, additional independence assumptions may assist the model estimation process. Standard statistical translation models use separate parameters for each pair of source and target types. In these models, distinctions in either lexicon that are redundant to the translation process will result in unwarranted model complexity and make parameter estimation from limited …
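The parameter savings from tying redundant source types together can be illustrated with a small sketch. The counts, the cluster assignment, and the variable names here are all hypothetical; the point is only that a cluster-based model estimates p(target | cluster) with one tied parameter set where the baseline keeps one parameter per (source type, target type) pair.

```python
from collections import defaultdict

# Hypothetical word-aligned pair counts: three inflected source forms
# sharing essentially one translation distribution.
pair_counts = {
    ("pes", "dog"): 40, ("psa", "dog"): 30, ("psovi", "dog"): 10,
    ("pes", "hound"): 5, ("psa", "hound"): 4,
}

# Baseline model: one parameter per (source type, target type) pair.
baseline_params = len(pair_counts)

# Cluster-based model: tie the source types into a single cluster and
# pool their counts, estimating p(target | cluster) instead.
cluster_of = {"pes": "DOG", "psa": "DOG", "psovi": "DOG"}  # assumed clustering

cluster_counts = defaultdict(int)
for (s, t), c in pair_counts.items():
    cluster_counts[(cluster_of[s], t)] += c

clustered_params = len(cluster_counts)
print(baseline_params, clustered_params)  # 5 pair parameters collapse to 2
```

Pooling counts in this way also concentrates the evidence for each parameter, which is the benefit when training data is limited.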