Better Word Alignments with Supervised ITG Models

Aria Haghighi, John Blitzer, John DeNero, and Dan Klein
Computer Science Division, University of California at Berkeley
{aria42, blitzer, denero, klein}@

Abstract

This work investigates supervised word alignment methods that exploit inversion transduction grammar (ITG) constraints. We consider maximum margin and conditional likelihood objectives, including the presentation of a new normal form grammar for canonicalizing derivations. Even for non-ITG sentence pairs, we show that it is possible to learn ITG alignment models by simple relaxations of structured discriminative learning objectives. For efficiency, we describe a set of pruning techniques that together allow us to align sentences two orders of magnitude faster than naive bitext CKY parsing. Finally, we introduce many-to-one block alignment features, which significantly improve our ITG models. Altogether, our method results in the best reported AER numbers for Chinese-English and an improvement in BLEU over GIZA++ alignments.

1 Introduction

Inversion transduction grammar (ITG) constraints (Wu, 1997) provide coherent structural constraints on the relationship between a sentence and its translation. ITG has been extensively explored in unsupervised statistical word alignment (Zhang and Gildea, 2005; Cherry and Lin, 2007a; Zhang et al., 2008) and in machine translation decoding (Cherry and Lin, 2007b; Petrov et al., 2008). In this work, we investigate large-scale discriminative ITG word alignment. Past work on discriminative word alignment has focused on the family of at-most-one-to-one matchings (Melamed, 2000; Taskar et al., 2005; Moore et al., 2006). An exception is the work of Cherry and Lin (2006), who discriminatively trained one-to-one ITG models, albeit with limited feature sets. As they found, ITG approaches offer several advantages over general matchings. First, the additional structural constraint can result in superior alignments. We confirm and extend this ...
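The ITG constraint mentioned above restricts which word-order permutations a derivation can produce: every source span must decompose recursively into two sub-spans whose target images are contiguous and are concatenated either in order (straight rule) or swapped (inverted rule). As a minimal illustrative sketch (function and variable names are mine, not from the paper), the following check tests whether a one-to-one alignment, represented as a permutation, is reachable under these constraints:

```python
def is_itg(perm):
    """Return True if perm (perm[i] = target position of source word i)
    can be generated by recursive straight/inverted binary splits,
    i.e., it contains no forbidden 2413 or 3142 pattern."""

    def ok(lo, hi):
        # A single word is trivially an ITG constituent.
        if hi - lo <= 1:
            return True
        span = perm[lo:hi]
        # The source span must map onto a contiguous target span.
        if max(span) - min(span) != hi - lo - 1:
            return False
        # Try every binary split point; each half must itself cover a
        # contiguous target span (then the two halves are necessarily
        # either straight or inverted) and be recursively buildable.
        for k in range(lo + 1, hi):
            left, right = perm[lo:k], perm[k:hi]
            if (max(left) - min(left) == k - lo - 1
                    and max(right) - min(right) == hi - k - 1
                    and ok(lo, k) and ok(k, hi)):
                return True
        return False

    return ok(0, len(perm))
```

For example, the monotone permutation [0, 1, 2, 3] and the locally inverted [1, 0, 3, 2] are ITG-reachable, while the "inside-out" permutation [1, 3, 0, 2] is not. This sketch is exponential in the worst case; the paper instead parses with (pruned) bitext CKY.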
