Improving IBM Word-Alignment Model 1

Robert C. Moore
Microsoft Research
One Microsoft Way, Redmond, WA 98052, USA
bobmoore@

Abstract

We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate a reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.

1 Introduction

IBM Model 1 (Brown et al., 1993a) is a word-alignment model that is widely used in working with parallel bilingual corpora. It was originally developed to provide reasonable initial parameter estimates for more complex word-alignment models, but it has subsequently found a host of additional uses. Among the applications of Model 1 are segmenting long sentences into subsentential units for improved word alignment (Nevado et al., 2003), extracting parallel sentences from comparable corpora (Munteanu et al., 2004), bilingual sentence alignment (Moore, 2002), aligning syntactic-tree fragments (Ding et al., 2003), and estimating phrase translation probabilities (Venugopal et al., 2003). Furthermore, at the 2003 Johns Hopkins summer workshop on statistical machine translation, a large number of features were tested to discover which ones could improve a state-of-the-art translation system, and the only feature that produced a truly significant improvement was the Model 1 score (Och et al., 2004).

Despite the fact that IBM Model 1 is so widely used, essentially no attention seems to have been paid to whether it is possible to improve on the standard Expectation-Maximization (EM) procedure for estimating its parameters. This may be due in part to the fact that Brown et al. (1993a) proved that the log-likelihood objective function for Model 1 is a strictly concave function of the model parameters, so that it has a unique local maximum. This, in turn, …
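To make the baseline that the paper improves on concrete, the sketch below (not from the original paper; the function and variable names are illustrative) implements standard EM training of Model 1's translation probabilities t(f|e) in Python. Under Model 1, P(f|e) = epsilon/(l+1)^m * prod_j sum_i t(f_j|e_i), with e_0 the null word, so EM reduces to collecting expected alignment counts and renormalizing. None of the paper's proposed modifications (extra null-word weight, smoothing for rare words, heuristic initialization) are included here.

from collections import defaultdict

def train_model1(bitext, iterations=5):
    """Plain EM estimation of IBM Model 1 translation probabilities t(f|e).

    bitext: list of (e_tokens, f_tokens) sentence pairs; the null word is
    added implicitly to the e side as None.
    """
    NULL = None

    # Initialize t(f|e) uniformly over the source words each target word co-occurs with.
    cooc = defaultdict(set)
    for e_sent, f_sent in bitext:
        for f in f_sent:
            for e in [NULL] + e_sent:
                cooc[f].add(e)
    t = defaultdict(float)
    for f, es in cooc.items():
        for e in es:
            t[(f, e)] = 1.0 / len(es)

    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # expected counts c(e)
        # E-step: distribute each target word's probability mass over the
        # possible source words (including the null word) in its sentence pair.
        for e_sent, f_sent in bitext:
            e_words = [NULL] + e_sent
            for f in f_sent:
                z = sum(t[(f, e)] for e in e_words)
                for e in e_words:
                    p = t[(f, e)] / z
                    count[(f, e)] += p
                    total[e] += p
        # M-step: re-estimate t(f|e) from the expected counts.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

# Toy usage on a two-sentence-pair corpus.
bitext = [
    (["the", "house"], ["la", "maison"]),
    (["the", "book"], ["le", "livre"]),
]
t = train_model1(bitext, iterations=10)
for (f, e), p in sorted(t.items()):
    if e == "house":
        print(f, round(p, 3))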